Dataset columns:

| Column          | Type                     |
|:----------------|:-------------------------|
| pipeline_tag    | string (48 classes)      |
| library_name    | string (198 classes)     |
| text            | string (1-900k chars)    |
| metadata        | string (2-438k chars)    |
| id              | string (5-122 chars)     |
| last_modified   | null                     |
| tags            | list (1-1.84k items)     |
| sha             | null                     |
| created_at      | string (25 chars)        |
| arxiv           | list (0-201 items)       |
| languages       | list (0-1.83k items)     |
| tags_str        | string (17-9.34k chars)  |
| text_str        | string (0-389k chars)    |
| text_lists      | list (0-722 items)       |
| processed_texts | list (1-723 items)       |
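As a quick orientation, below is a minimal sketch of how a table with this schema could be loaded and inspected with the `datasets` library. The repository ID is a placeholder (the actual dataset name is not given here), and the printed values are only illustrative.

```python
# Minimal sketch: load and inspect a dataset with this schema.
# NOTE: "user/model-cards" is a placeholder repository ID, not the real dataset name.
from datasets import load_dataset

ds = load_dataset("user/model-cards", split="train")  # hypothetical Hub ID

row = ds[0]
print(row["id"])            # e.g. "SetFit/distilbert-base-uncased__sst2__train-8-1"
print(row["pipeline_tag"])  # e.g. "text-classification"
print(row["metadata"])      # JSON string with license, tags, metrics, model-index
print(row["text"][:300])    # beginning of the raw model-card markdown
```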
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6930
- Accuracy: 0.5047
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7082 | 1.0 | 3 | 0.7048 | 0.25 |
| 0.6761 | 2.0 | 6 | 0.7249 | 0.25 |
| 0.6653 | 3.0 | 9 | 0.7423 | 0.25 |
| 0.6212 | 4.0 | 12 | 0.7727 | 0.25 |
| 0.5932 | 5.0 | 15 | 0.8098 | 0.25 |
| 0.5427 | 6.0 | 18 | 0.8496 | 0.25 |
| 0.5146 | 7.0 | 21 | 0.8992 | 0.25 |
| 0.4356 | 8.0 | 24 | 0.9494 | 0.25 |
| 0.4275 | 9.0 | 27 | 0.9694 | 0.25 |
| 0.3351 | 10.0 | 30 | 0.9968 | 0.25 |
| 0.2812 | 11.0 | 33 | 1.0056 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
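For reference, a minimal inference sketch for this checkpoint using the `transformers` pipeline API; the model ID matches the `id` field recorded for this row, while the example sentence is purely illustrative.

```python
# Minimal inference sketch for this checkpoint (the input sentence is illustrative).
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="SetFit/distilbert-base-uncased__sst2__train-8-1",
)

print(classifier("a gripping, beautifully shot film"))
# -> [{'label': ..., 'score': ...}]  # label names come from the checkpoint config
```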
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-8-1", "results": []}]}
|
SetFit/distilbert-base-uncased__sst2__train-8-1
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased\_\_sst2\_\_train-8-1
============================================
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6930
* Accuracy: 0.5047
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6932
- Accuracy: 0.4931
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7081 | 1.0 | 3 | 0.7031 | 0.25 |
| 0.6853 | 2.0 | 6 | 0.7109 | 0.25 |
| 0.6696 | 3.0 | 9 | 0.7211 | 0.25 |
| 0.6174 | 4.0 | 12 | 0.7407 | 0.25 |
| 0.5717 | 5.0 | 15 | 0.7625 | 0.25 |
| 0.5096 | 6.0 | 18 | 0.7732 | 0.25 |
| 0.488 | 7.0 | 21 | 0.7798 | 0.25 |
| 0.4023 | 8.0 | 24 | 0.7981 | 0.25 |
| 0.3556 | 9.0 | 27 | 0.8110 | 0.25 |
| 0.2714 | 10.0 | 30 | 0.8269 | 0.25 |
| 0.2295 | 11.0 | 33 | 0.8276 | 0.25 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
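As a rough sketch of how the hyperparameters listed above map onto `transformers` `TrainingArguments` (dataset preparation and the `Trainer` call are omitted, and `output_dir` is an arbitrary choice):

```python
# Sketch: the hyperparameters from the card expressed as TrainingArguments.
# Dataset loading, tokenization, and the Trainer call itself are omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased__sst2__train-8-2",  # arbitrary choice
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    adam_beta1=0.9,     # Adam betas/epsilon written out explicitly;
    adam_beta2=0.999,   # these are also the library defaults
    adam_epsilon=1e-8,
    fp16=True,          # "Native AMP" mixed-precision training
)
```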
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-8-2", "results": []}]}
|
SetFit/distilbert-base-uncased__sst2__train-8-2
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased\_\_sst2\_\_train-8-2
============================================
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6932
* Accuracy: 0.4931
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6914
- Accuracy: 0.5195
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6931 | 1.0 | 3 | 0.7039 | 0.25 |
| 0.6615 | 2.0 | 6 | 0.7186 | 0.25 |
| 0.653 | 3.0 | 9 | 0.7334 | 0.25 |
| 0.601 | 4.0 | 12 | 0.7592 | 0.25 |
| 0.5555 | 5.0 | 15 | 0.7922 | 0.25 |
| 0.4832 | 6.0 | 18 | 0.8179 | 0.25 |
| 0.4565 | 7.0 | 21 | 0.8285 | 0.25 |
| 0.3996 | 8.0 | 24 | 0.8559 | 0.25 |
| 0.3681 | 9.0 | 27 | 0.8586 | 0.5 |
| 0.2901 | 10.0 | 30 | 0.8646 | 0.5 |
| 0.241 | 11.0 | 33 | 0.8524 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-8-3", "results": []}]}
|
SetFit/distilbert-base-uncased__sst2__train-8-3
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased\_\_sst2\_\_train-8-3
============================================
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6914
* Accuracy: 0.5195
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6921
- Accuracy: 0.5107
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7163 | 1.0 | 3 | 0.7100 | 0.25 |
| 0.6785 | 2.0 | 6 | 0.7209 | 0.25 |
| 0.6455 | 3.0 | 9 | 0.7321 | 0.25 |
| 0.6076 | 4.0 | 12 | 0.7517 | 0.25 |
| 0.5593 | 5.0 | 15 | 0.7780 | 0.25 |
| 0.5202 | 6.0 | 18 | 0.7990 | 0.25 |
| 0.4967 | 7.0 | 21 | 0.8203 | 0.25 |
| 0.4158 | 8.0 | 24 | 0.8497 | 0.25 |
| 0.3997 | 9.0 | 27 | 0.8638 | 0.25 |
| 0.3064 | 10.0 | 30 | 0.8732 | 0.25 |
| 0.2618 | 11.0 | 33 | 0.8669 | 0.25 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-8-4", "results": []}]}
|
SetFit/distilbert-base-uncased__sst2__train-8-4
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased\_\_sst2\_\_train-8-4
============================================
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6921
* Accuracy: 0.5107
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8419
- Accuracy: 0.6172
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7057 | 1.0 | 3 | 0.6848 | 0.75 |
| 0.6681 | 2.0 | 6 | 0.6875 | 0.5 |
| 0.6591 | 3.0 | 9 | 0.6868 | 0.25 |
| 0.6052 | 4.0 | 12 | 0.6943 | 0.25 |
| 0.557 | 5.0 | 15 | 0.7078 | 0.25 |
| 0.4954 | 6.0 | 18 | 0.7168 | 0.25 |
| 0.4593 | 7.0 | 21 | 0.7185 | 0.25 |
| 0.3936 | 8.0 | 24 | 0.7212 | 0.25 |
| 0.3699 | 9.0 | 27 | 0.6971 | 0.5 |
| 0.2916 | 10.0 | 30 | 0.6827 | 0.5 |
| 0.2511 | 11.0 | 33 | 0.6464 | 0.5 |
| 0.2109 | 12.0 | 36 | 0.6344 | 0.75 |
| 0.1655 | 13.0 | 39 | 0.6377 | 0.75 |
| 0.1412 | 14.0 | 42 | 0.6398 | 0.75 |
| 0.1157 | 15.0 | 45 | 0.6315 | 0.75 |
| 0.0895 | 16.0 | 48 | 0.6210 | 0.75 |
| 0.0783 | 17.0 | 51 | 0.5918 | 0.75 |
| 0.0606 | 18.0 | 54 | 0.5543 | 0.75 |
| 0.0486 | 19.0 | 57 | 0.5167 | 0.75 |
| 0.0405 | 20.0 | 60 | 0.4862 | 0.75 |
| 0.0376 | 21.0 | 63 | 0.4644 | 0.75 |
| 0.0294 | 22.0 | 66 | 0.4497 | 0.75 |
| 0.0261 | 23.0 | 69 | 0.4428 | 0.75 |
| 0.0238 | 24.0 | 72 | 0.4408 | 0.75 |
| 0.0217 | 25.0 | 75 | 0.4392 | 0.75 |
| 0.0187 | 26.0 | 78 | 0.4373 | 0.75 |
| 0.0177 | 27.0 | 81 | 0.4360 | 0.75 |
| 0.0136 | 28.0 | 84 | 0.4372 | 0.75 |
| 0.0144 | 29.0 | 87 | 0.4368 | 0.75 |
| 0.014 | 30.0 | 90 | 0.4380 | 0.75 |
| 0.0137 | 31.0 | 93 | 0.4383 | 0.75 |
| 0.0133 | 32.0 | 96 | 0.4409 | 0.75 |
| 0.013 | 33.0 | 99 | 0.4380 | 0.75 |
| 0.0096 | 34.0 | 102 | 0.4358 | 0.75 |
| 0.012 | 35.0 | 105 | 0.4339 | 0.75 |
| 0.0122 | 36.0 | 108 | 0.4305 | 0.75 |
| 0.0109 | 37.0 | 111 | 0.4267 | 0.75 |
| 0.0121 | 38.0 | 114 | 0.4231 | 0.75 |
| 0.0093 | 39.0 | 117 | 0.4209 | 0.75 |
| 0.0099 | 40.0 | 120 | 0.4199 | 0.75 |
| 0.0091 | 41.0 | 123 | 0.4184 | 0.75 |
| 0.0116 | 42.0 | 126 | 0.4173 | 0.75 |
| 0.01 | 43.0 | 129 | 0.4163 | 0.75 |
| 0.0098 | 44.0 | 132 | 0.4153 | 0.75 |
| 0.0101 | 45.0 | 135 | 0.4155 | 0.75 |
| 0.0088 | 46.0 | 138 | 0.4149 | 0.75 |
| 0.0087 | 47.0 | 141 | 0.4150 | 0.75 |
| 0.0093 | 48.0 | 144 | 0.4147 | 0.75 |
| 0.0081 | 49.0 | 147 | 0.4147 | 0.75 |
| 0.009 | 50.0 | 150 | 0.4150 | 0.75 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-8-5", "results": []}]}
|
SetFit/distilbert-base-uncased__sst2__train-8-5
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased\_\_sst2\_\_train-8-5
============================================
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8419
* Accuracy: 0.6172
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-6
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5336
- Accuracy: 0.7523
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7161 | 1.0 | 3 | 0.6941 | 0.5 |
| 0.6786 | 2.0 | 6 | 0.7039 | 0.25 |
| 0.6586 | 3.0 | 9 | 0.7090 | 0.25 |
| 0.6121 | 4.0 | 12 | 0.7183 | 0.25 |
| 0.5696 | 5.0 | 15 | 0.7266 | 0.25 |
| 0.522 | 6.0 | 18 | 0.7305 | 0.25 |
| 0.4899 | 7.0 | 21 | 0.7339 | 0.25 |
| 0.3985 | 8.0 | 24 | 0.7429 | 0.25 |
| 0.3758 | 9.0 | 27 | 0.7224 | 0.25 |
| 0.2876 | 10.0 | 30 | 0.7068 | 0.5 |
| 0.2498 | 11.0 | 33 | 0.6751 | 0.75 |
| 0.1921 | 12.0 | 36 | 0.6487 | 0.75 |
| 0.1491 | 13.0 | 39 | 0.6261 | 0.75 |
| 0.1276 | 14.0 | 42 | 0.6102 | 0.75 |
| 0.0996 | 15.0 | 45 | 0.5964 | 0.75 |
| 0.073 | 16.0 | 48 | 0.6019 | 0.75 |
| 0.0627 | 17.0 | 51 | 0.5933 | 0.75 |
| 0.053 | 18.0 | 54 | 0.5768 | 0.75 |
| 0.0403 | 19.0 | 57 | 0.5698 | 0.75 |
| 0.0328 | 20.0 | 60 | 0.5656 | 0.75 |
| 0.03 | 21.0 | 63 | 0.5634 | 0.75 |
| 0.025 | 22.0 | 66 | 0.5620 | 0.75 |
| 0.0209 | 23.0 | 69 | 0.5623 | 0.75 |
| 0.0214 | 24.0 | 72 | 0.5606 | 0.75 |
| 0.0191 | 25.0 | 75 | 0.5565 | 0.75 |
| 0.0173 | 26.0 | 78 | 0.5485 | 0.75 |
| 0.0175 | 27.0 | 81 | 0.5397 | 0.75 |
| 0.0132 | 28.0 | 84 | 0.5322 | 0.75 |
| 0.0138 | 29.0 | 87 | 0.5241 | 0.75 |
| 0.0128 | 30.0 | 90 | 0.5235 | 0.75 |
| 0.0126 | 31.0 | 93 | 0.5253 | 0.75 |
| 0.012 | 32.0 | 96 | 0.5317 | 0.75 |
| 0.0118 | 33.0 | 99 | 0.5342 | 0.75 |
| 0.0092 | 34.0 | 102 | 0.5388 | 0.75 |
| 0.0117 | 35.0 | 105 | 0.5414 | 0.75 |
| 0.0124 | 36.0 | 108 | 0.5453 | 0.75 |
| 0.0109 | 37.0 | 111 | 0.5506 | 0.75 |
| 0.0112 | 38.0 | 114 | 0.5555 | 0.75 |
| 0.0087 | 39.0 | 117 | 0.5597 | 0.75 |
| 0.01 | 40.0 | 120 | 0.5640 | 0.75 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-8-6", "results": []}]}
|
SetFit/distilbert-base-uncased__sst2__train-8-6
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased\_\_sst2\_\_train-8-6
============================================
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5336
* Accuracy: 0.7523
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-7
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6950
- Accuracy: 0.4618
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7156 | 1.0 | 3 | 0.6965 | 0.25 |
| 0.6645 | 2.0 | 6 | 0.7059 | 0.25 |
| 0.6368 | 3.0 | 9 | 0.7179 | 0.25 |
| 0.5944 | 4.0 | 12 | 0.7408 | 0.25 |
| 0.5369 | 5.0 | 15 | 0.7758 | 0.25 |
| 0.449 | 6.0 | 18 | 0.8009 | 0.25 |
| 0.4352 | 7.0 | 21 | 0.8209 | 0.5 |
| 0.3462 | 8.0 | 24 | 0.8470 | 0.5 |
| 0.3028 | 9.0 | 27 | 0.8579 | 0.5 |
| 0.2365 | 10.0 | 30 | 0.8704 | 0.5 |
| 0.2023 | 11.0 | 33 | 0.8770 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-8-7", "results": []}]}
|
SetFit/distilbert-base-uncased__sst2__train-8-7
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased\_\_sst2\_\_train-8-7
============================================
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6950
* Accuracy: 0.4618
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-8
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6925
- Accuracy: 0.5200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7061 | 1.0 | 3 | 0.6899 | 0.75 |
| 0.6627 | 2.0 | 6 | 0.7026 | 0.25 |
| 0.644 | 3.0 | 9 | 0.7158 | 0.25 |
| 0.6087 | 4.0 | 12 | 0.7325 | 0.25 |
| 0.5602 | 5.0 | 15 | 0.7555 | 0.25 |
| 0.5034 | 6.0 | 18 | 0.7725 | 0.25 |
| 0.4672 | 7.0 | 21 | 0.7983 | 0.25 |
| 0.403 | 8.0 | 24 | 0.8314 | 0.25 |
| 0.3571 | 9.0 | 27 | 0.8555 | 0.25 |
| 0.2792 | 10.0 | 30 | 0.9065 | 0.25 |
| 0.2373 | 11.0 | 33 | 0.9286 | 0.25 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-8-8", "results": []}]}
|
SetFit/distilbert-base-uncased__sst2__train-8-8
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased\_\_sst2\_\_train-8-8
============================================
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6925
* Accuracy: 0.5200
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-9
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6925
- Accuracy: 0.5140
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7204 | 1.0 | 3 | 0.7025 | 0.5 |
| 0.6885 | 2.0 | 6 | 0.7145 | 0.5 |
| 0.6662 | 3.0 | 9 | 0.7222 | 0.5 |
| 0.6182 | 4.0 | 12 | 0.7427 | 0.25 |
| 0.5707 | 5.0 | 15 | 0.7773 | 0.25 |
| 0.5247 | 6.0 | 18 | 0.8137 | 0.25 |
| 0.5003 | 7.0 | 21 | 0.8556 | 0.25 |
| 0.4195 | 8.0 | 24 | 0.9089 | 0.5 |
| 0.387 | 9.0 | 27 | 0.9316 | 0.25 |
| 0.2971 | 10.0 | 30 | 0.9558 | 0.25 |
| 0.2581 | 11.0 | 33 | 0.9420 | 0.25 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-8-9", "results": []}]}
|
SetFit/distilbert-base-uncased__sst2__train-8-9
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased\_\_sst2\_\_train-8-9
============================================
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6925
* Accuracy: 0.5140
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst5__all-train
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3757
- Accuracy: 0.5045
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2492 | 1.0 | 534 | 1.1163 | 0.4991 |
| 0.9937 | 2.0 | 1068 | 1.1232 | 0.5122 |
| 0.7867 | 3.0 | 1602 | 1.2097 | 0.5045 |
| 0.595 | 4.0 | 2136 | 1.3757 | 0.5045 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst5__all-train", "results": []}]}
|
SetFit/distilbert-base-uncased__sst5__all-train
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased\_\_sst5\_\_all-train
============================================
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.3757
* Accuracy: 0.5045
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu102
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__all-train
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3193
- Accuracy: 0.9485
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1992 | 1.0 | 500 | 0.1236 | 0.963 |
| 0.084 | 2.0 | 1000 | 0.1428 | 0.963 |
| 0.0333 | 3.0 | 1500 | 0.1906 | 0.965 |
| 0.0159 | 4.0 | 2000 | 0.3193 | 0.9485 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
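As an alternative to the pipeline helper, here is a minimal sketch that loads this subjectivity classifier with the Auto classes; the model ID matches the `id` field of this row, and the input sentence is illustrative.

```python
# Sketch: loading the checkpoint with Auto classes instead of the pipeline helper.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "SetFit/distilbert-base-uncased__subj__all-train"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("the movie follows two brothers on a road trip", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])  # label names depend on the checkpoint config
```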
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-base-uncased__subj__all-train", "results": []}]}
|
SetFit/distilbert-base-uncased__subj__all-train
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased\_\_subj\_\_all-train
============================================
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3193
* Accuracy: 0.9485
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu102
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-0
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4440
- Accuracy: 0.789
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7163 | 1.0 | 3 | 0.6868 | 0.5 |
| 0.6683 | 2.0 | 6 | 0.6804 | 0.75 |
| 0.6375 | 3.0 | 9 | 0.6702 | 0.75 |
| 0.5997 | 4.0 | 12 | 0.6686 | 0.75 |
| 0.5345 | 5.0 | 15 | 0.6720 | 0.75 |
| 0.4673 | 6.0 | 18 | 0.6646 | 0.75 |
| 0.4214 | 7.0 | 21 | 0.6494 | 0.75 |
| 0.3439 | 8.0 | 24 | 0.6313 | 0.75 |
| 0.3157 | 9.0 | 27 | 0.6052 | 0.75 |
| 0.2329 | 10.0 | 30 | 0.5908 | 0.75 |
| 0.1989 | 11.0 | 33 | 0.5768 | 0.75 |
| 0.1581 | 12.0 | 36 | 0.5727 | 0.75 |
| 0.1257 | 13.0 | 39 | 0.5678 | 0.75 |
| 0.1005 | 14.0 | 42 | 0.5518 | 0.75 |
| 0.0836 | 15.0 | 45 | 0.5411 | 0.75 |
| 0.0611 | 16.0 | 48 | 0.5320 | 0.75 |
| 0.0503 | 17.0 | 51 | 0.5299 | 0.75 |
| 0.0407 | 18.0 | 54 | 0.5368 | 0.75 |
| 0.0332 | 19.0 | 57 | 0.5455 | 0.75 |
| 0.0293 | 20.0 | 60 | 0.5525 | 0.75 |
| 0.0254 | 21.0 | 63 | 0.5560 | 0.75 |
| 0.0231 | 22.0 | 66 | 0.5569 | 0.75 |
| 0.0201 | 23.0 | 69 | 0.5572 | 0.75 |
| 0.0179 | 24.0 | 72 | 0.5575 | 0.75 |
| 0.0184 | 25.0 | 75 | 0.5547 | 0.75 |
| 0.0148 | 26.0 | 78 | 0.5493 | 0.75 |
| 0.0149 | 27.0 | 81 | 0.5473 | 0.75 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__subj__train-8-0", "results": []}]}
|
SetFit/distilbert-base-uncased__subj__train-8-0
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased\_\_subj\_\_train-8-0
============================================
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4440
* Accuracy: 0.789
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5488
- Accuracy: 0.791
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
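As a rough, hedged illustration (the actual training script is not part of this card), these values would map onto Hugging Face `TrainingArguments` roughly as follows; the output directory, dataset objects and `num_labels` are placeholders:

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Hypothetical sketch only -- not the authors' training script.
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2  # num_labels assumed for a binary task
)
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

args = TrainingArguments(
    output_dir="distilbert-base-uncased__subj__train-8-1",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    num_train_epochs=50,
    lr_scheduler_type="linear",
    fp16=True,  # "Native AMP" mixed-precision training
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults.
)

# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=eval_ds, tokenizer=tokenizer)
# trainer.train()
```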
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.703 | 1.0 | 3 | 0.6906 | 0.5 |
| 0.666 | 2.0 | 6 | 0.6945 | 0.25 |
| 0.63 | 3.0 | 9 | 0.6885 | 0.5 |
| 0.588 | 4.0 | 12 | 0.6888 | 0.25 |
| 0.5181 | 5.0 | 15 | 0.6899 | 0.25 |
| 0.4508 | 6.0 | 18 | 0.6770 | 0.5 |
| 0.4025 | 7.0 | 21 | 0.6579 | 0.5 |
| 0.3361 | 8.0 | 24 | 0.6392 | 0.5 |
| 0.2919 | 9.0 | 27 | 0.6113 | 0.5 |
| 0.2151 | 10.0 | 30 | 0.5774 | 0.75 |
| 0.1728 | 11.0 | 33 | 0.5248 | 0.75 |
| 0.1313 | 12.0 | 36 | 0.4824 | 0.75 |
| 0.1046 | 13.0 | 39 | 0.4456 | 0.75 |
| 0.0858 | 14.0 | 42 | 0.4076 | 0.75 |
| 0.0679 | 15.0 | 45 | 0.3755 | 0.75 |
| 0.0485 | 16.0 | 48 | 0.3422 | 0.75 |
| 0.0416 | 17.0 | 51 | 0.3055 | 0.75 |
| 0.0358 | 18.0 | 54 | 0.2731 | 1.0 |
| 0.0277 | 19.0 | 57 | 0.2443 | 1.0 |
| 0.0234 | 20.0 | 60 | 0.2187 | 1.0 |
| 0.0223 | 21.0 | 63 | 0.1960 | 1.0 |
| 0.0187 | 22.0 | 66 | 0.1762 | 1.0 |
| 0.017 | 23.0 | 69 | 0.1629 | 1.0 |
| 0.0154 | 24.0 | 72 | 0.1543 | 1.0 |
| 0.0164 | 25.0 | 75 | 0.1476 | 1.0 |
| 0.0131 | 26.0 | 78 | 0.1423 | 1.0 |
| 0.0139 | 27.0 | 81 | 0.1387 | 1.0 |
| 0.0107 | 28.0 | 84 | 0.1360 | 1.0 |
| 0.0108 | 29.0 | 87 | 0.1331 | 1.0 |
| 0.0105 | 30.0 | 90 | 0.1308 | 1.0 |
| 0.0106 | 31.0 | 93 | 0.1276 | 1.0 |
| 0.0104 | 32.0 | 96 | 0.1267 | 1.0 |
| 0.0095 | 33.0 | 99 | 0.1255 | 1.0 |
| 0.0076 | 34.0 | 102 | 0.1243 | 1.0 |
| 0.0094 | 35.0 | 105 | 0.1235 | 1.0 |
| 0.0103 | 36.0 | 108 | 0.1228 | 1.0 |
| 0.0086 | 37.0 | 111 | 0.1231 | 1.0 |
| 0.0094 | 38.0 | 114 | 0.1236 | 1.0 |
| 0.0074 | 39.0 | 117 | 0.1240 | 1.0 |
| 0.0085 | 40.0 | 120 | 0.1246 | 1.0 |
| 0.0079 | 41.0 | 123 | 0.1253 | 1.0 |
| 0.0088 | 42.0 | 126 | 0.1248 | 1.0 |
| 0.0082 | 43.0 | 129 | 0.1244 | 1.0 |
| 0.0082 | 44.0 | 132 | 0.1234 | 1.0 |
| 0.0082 | 45.0 | 135 | 0.1223 | 1.0 |
| 0.0071 | 46.0 | 138 | 0.1212 | 1.0 |
| 0.0073 | 47.0 | 141 | 0.1208 | 1.0 |
| 0.0081 | 48.0 | 144 | 0.1205 | 1.0 |
| 0.0067 | 49.0 | 147 | 0.1202 | 1.0 |
| 0.0077 | 50.0 | 150 | 0.1202 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__subj__train-8-1", "results": []}]}
|
SetFit/distilbert-base-uncased__subj__train-8-1
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased\_\_subj\_\_train-8-1
============================================
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5488
* Accuracy: 0.791
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3081
- Accuracy: 0.8755
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7146 | 1.0 | 3 | 0.6798 | 0.75 |
| 0.6737 | 2.0 | 6 | 0.6847 | 0.75 |
| 0.6519 | 3.0 | 9 | 0.6783 | 0.75 |
| 0.6105 | 4.0 | 12 | 0.6812 | 0.25 |
| 0.5463 | 5.0 | 15 | 0.6869 | 0.25 |
| 0.4922 | 6.0 | 18 | 0.6837 | 0.5 |
| 0.4543 | 7.0 | 21 | 0.6716 | 0.5 |
| 0.3856 | 8.0 | 24 | 0.6613 | 0.75 |
| 0.3475 | 9.0 | 27 | 0.6282 | 0.75 |
| 0.2717 | 10.0 | 30 | 0.6045 | 0.75 |
| 0.2347 | 11.0 | 33 | 0.5620 | 0.75 |
| 0.1979 | 12.0 | 36 | 0.5234 | 1.0 |
| 0.1535 | 13.0 | 39 | 0.4771 | 1.0 |
| 0.1332 | 14.0 | 42 | 0.4277 | 1.0 |
| 0.1041 | 15.0 | 45 | 0.3785 | 1.0 |
| 0.082 | 16.0 | 48 | 0.3318 | 1.0 |
| 0.0672 | 17.0 | 51 | 0.2885 | 1.0 |
| 0.0538 | 18.0 | 54 | 0.2568 | 1.0 |
| 0.0412 | 19.0 | 57 | 0.2356 | 1.0 |
| 0.0361 | 20.0 | 60 | 0.2217 | 1.0 |
| 0.0303 | 21.0 | 63 | 0.2125 | 1.0 |
| 0.0268 | 22.0 | 66 | 0.2060 | 1.0 |
| 0.0229 | 23.0 | 69 | 0.2015 | 1.0 |
| 0.0215 | 24.0 | 72 | 0.1989 | 1.0 |
| 0.0211 | 25.0 | 75 | 0.1969 | 1.0 |
| 0.0172 | 26.0 | 78 | 0.1953 | 1.0 |
| 0.0165 | 27.0 | 81 | 0.1935 | 1.0 |
| 0.0132 | 28.0 | 84 | 0.1923 | 1.0 |
| 0.0146 | 29.0 | 87 | 0.1914 | 1.0 |
| 0.0125 | 30.0 | 90 | 0.1904 | 1.0 |
| 0.0119 | 31.0 | 93 | 0.1897 | 1.0 |
| 0.0122 | 32.0 | 96 | 0.1886 | 1.0 |
| 0.0118 | 33.0 | 99 | 0.1875 | 1.0 |
| 0.0097 | 34.0 | 102 | 0.1866 | 1.0 |
| 0.0111 | 35.0 | 105 | 0.1861 | 1.0 |
| 0.0111 | 36.0 | 108 | 0.1855 | 1.0 |
| 0.0102 | 37.0 | 111 | 0.1851 | 1.0 |
| 0.0109 | 38.0 | 114 | 0.1851 | 1.0 |
| 0.0085 | 39.0 | 117 | 0.1854 | 1.0 |
| 0.0089 | 40.0 | 120 | 0.1855 | 1.0 |
| 0.0092 | 41.0 | 123 | 0.1863 | 1.0 |
| 0.0105 | 42.0 | 126 | 0.1868 | 1.0 |
| 0.0089 | 43.0 | 129 | 0.1874 | 1.0 |
| 0.0091 | 44.0 | 132 | 0.1877 | 1.0 |
| 0.0096 | 45.0 | 135 | 0.1881 | 1.0 |
| 0.0081 | 46.0 | 138 | 0.1881 | 1.0 |
| 0.0086 | 47.0 | 141 | 0.1883 | 1.0 |
| 0.009 | 48.0 | 144 | 0.1884 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__subj__train-8-2", "results": []}]}
|
SetFit/distilbert-base-uncased__subj__train-8-2
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased\_\_subj\_\_train-8-2
============================================
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3081
* Accuracy: 0.8755
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3496
- Accuracy: 0.859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7136 | 1.0 | 3 | 0.6875 | 0.75 |
| 0.6702 | 2.0 | 6 | 0.6824 | 0.75 |
| 0.6456 | 3.0 | 9 | 0.6687 | 0.75 |
| 0.5934 | 4.0 | 12 | 0.6564 | 0.75 |
| 0.537 | 5.0 | 15 | 0.6428 | 0.75 |
| 0.4812 | 6.0 | 18 | 0.6180 | 0.75 |
| 0.4279 | 7.0 | 21 | 0.5864 | 0.75 |
| 0.3608 | 8.0 | 24 | 0.5540 | 0.75 |
| 0.3076 | 9.0 | 27 | 0.5012 | 1.0 |
| 0.2292 | 10.0 | 30 | 0.4497 | 1.0 |
| 0.1991 | 11.0 | 33 | 0.3945 | 1.0 |
| 0.1495 | 12.0 | 36 | 0.3483 | 1.0 |
| 0.1176 | 13.0 | 39 | 0.3061 | 1.0 |
| 0.0947 | 14.0 | 42 | 0.2683 | 1.0 |
| 0.0761 | 15.0 | 45 | 0.2295 | 1.0 |
| 0.0584 | 16.0 | 48 | 0.1996 | 1.0 |
| 0.0451 | 17.0 | 51 | 0.1739 | 1.0 |
| 0.0387 | 18.0 | 54 | 0.1521 | 1.0 |
| 0.0272 | 19.0 | 57 | 0.1333 | 1.0 |
| 0.0247 | 20.0 | 60 | 0.1171 | 1.0 |
| 0.0243 | 21.0 | 63 | 0.1044 | 1.0 |
| 0.0206 | 22.0 | 66 | 0.0943 | 1.0 |
| 0.0175 | 23.0 | 69 | 0.0859 | 1.0 |
| 0.0169 | 24.0 | 72 | 0.0799 | 1.0 |
| 0.0162 | 25.0 | 75 | 0.0746 | 1.0 |
| 0.0137 | 26.0 | 78 | 0.0705 | 1.0 |
| 0.0141 | 27.0 | 81 | 0.0674 | 1.0 |
| 0.0107 | 28.0 | 84 | 0.0654 | 1.0 |
| 0.0117 | 29.0 | 87 | 0.0634 | 1.0 |
| 0.0113 | 30.0 | 90 | 0.0617 | 1.0 |
| 0.0107 | 31.0 | 93 | 0.0599 | 1.0 |
| 0.0106 | 32.0 | 96 | 0.0585 | 1.0 |
| 0.0101 | 33.0 | 99 | 0.0568 | 1.0 |
| 0.0084 | 34.0 | 102 | 0.0553 | 1.0 |
| 0.0101 | 35.0 | 105 | 0.0539 | 1.0 |
| 0.0102 | 36.0 | 108 | 0.0529 | 1.0 |
| 0.009 | 37.0 | 111 | 0.0520 | 1.0 |
| 0.0092 | 38.0 | 114 | 0.0511 | 1.0 |
| 0.0073 | 39.0 | 117 | 0.0504 | 1.0 |
| 0.0081 | 40.0 | 120 | 0.0497 | 1.0 |
| 0.0079 | 41.0 | 123 | 0.0492 | 1.0 |
| 0.0092 | 42.0 | 126 | 0.0488 | 1.0 |
| 0.008 | 43.0 | 129 | 0.0483 | 1.0 |
| 0.0087 | 44.0 | 132 | 0.0479 | 1.0 |
| 0.009 | 45.0 | 135 | 0.0474 | 1.0 |
| 0.0076 | 46.0 | 138 | 0.0470 | 1.0 |
| 0.0075 | 47.0 | 141 | 0.0467 | 1.0 |
| 0.008 | 48.0 | 144 | 0.0465 | 1.0 |
| 0.0069 | 49.0 | 147 | 0.0464 | 1.0 |
| 0.0077 | 50.0 | 150 | 0.0464 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__subj__train-8-3", "results": []}]}
|
SetFit/distilbert-base-uncased__subj__train-8-3
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased\_\_subj\_\_train-8-3
============================================
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3496
* Accuracy: 0.859
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3305
- Accuracy: 0.8565
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6991 | 1.0 | 3 | 0.6772 | 0.75 |
| 0.6707 | 2.0 | 6 | 0.6704 | 0.75 |
| 0.6402 | 3.0 | 9 | 0.6608 | 1.0 |
| 0.5789 | 4.0 | 12 | 0.6547 | 0.75 |
| 0.5211 | 5.0 | 15 | 0.6434 | 0.75 |
| 0.454 | 6.0 | 18 | 0.6102 | 1.0 |
| 0.4187 | 7.0 | 21 | 0.5701 | 1.0 |
| 0.3401 | 8.0 | 24 | 0.5289 | 1.0 |
| 0.3107 | 9.0 | 27 | 0.4737 | 1.0 |
| 0.2381 | 10.0 | 30 | 0.4255 | 1.0 |
| 0.1982 | 11.0 | 33 | 0.3685 | 1.0 |
| 0.1631 | 12.0 | 36 | 0.3200 | 1.0 |
| 0.1234 | 13.0 | 39 | 0.2798 | 1.0 |
| 0.0993 | 14.0 | 42 | 0.2455 | 1.0 |
| 0.0781 | 15.0 | 45 | 0.2135 | 1.0 |
| 0.0586 | 16.0 | 48 | 0.1891 | 1.0 |
| 0.0513 | 17.0 | 51 | 0.1671 | 1.0 |
| 0.043 | 18.0 | 54 | 0.1427 | 1.0 |
| 0.0307 | 19.0 | 57 | 0.1225 | 1.0 |
| 0.0273 | 20.0 | 60 | 0.1060 | 1.0 |
| 0.0266 | 21.0 | 63 | 0.0920 | 1.0 |
| 0.0233 | 22.0 | 66 | 0.0823 | 1.0 |
| 0.0185 | 23.0 | 69 | 0.0751 | 1.0 |
| 0.0173 | 24.0 | 72 | 0.0698 | 1.0 |
| 0.0172 | 25.0 | 75 | 0.0651 | 1.0 |
| 0.0142 | 26.0 | 78 | 0.0613 | 1.0 |
| 0.0151 | 27.0 | 81 | 0.0583 | 1.0 |
| 0.0117 | 28.0 | 84 | 0.0563 | 1.0 |
| 0.0123 | 29.0 | 87 | 0.0546 | 1.0 |
| 0.0121 | 30.0 | 90 | 0.0531 | 1.0 |
| 0.0123 | 31.0 | 93 | 0.0511 | 1.0 |
| 0.0112 | 32.0 | 96 | 0.0496 | 1.0 |
| 0.0103 | 33.0 | 99 | 0.0481 | 1.0 |
| 0.0086 | 34.0 | 102 | 0.0468 | 1.0 |
| 0.0096 | 35.0 | 105 | 0.0457 | 1.0 |
| 0.0107 | 36.0 | 108 | 0.0447 | 1.0 |
| 0.0095 | 37.0 | 111 | 0.0439 | 1.0 |
| 0.0102 | 38.0 | 114 | 0.0429 | 1.0 |
| 0.0077 | 39.0 | 117 | 0.0422 | 1.0 |
| 0.0092 | 40.0 | 120 | 0.0415 | 1.0 |
| 0.0083 | 41.0 | 123 | 0.0409 | 1.0 |
| 0.0094 | 42.0 | 126 | 0.0404 | 1.0 |
| 0.0084 | 43.0 | 129 | 0.0400 | 1.0 |
| 0.0085 | 44.0 | 132 | 0.0396 | 1.0 |
| 0.0092 | 45.0 | 135 | 0.0392 | 1.0 |
| 0.0076 | 46.0 | 138 | 0.0389 | 1.0 |
| 0.0073 | 47.0 | 141 | 0.0388 | 1.0 |
| 0.0085 | 48.0 | 144 | 0.0387 | 1.0 |
| 0.0071 | 49.0 | 147 | 0.0386 | 1.0 |
| 0.0079 | 50.0 | 150 | 0.0386 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__subj__train-8-4", "results": []}]}
|
SetFit/distilbert-base-uncased__subj__train-8-4
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased\_\_subj\_\_train-8-4
============================================
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3305
* Accuracy: 0.8565
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6927
- Accuracy: 0.506
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7102 | 1.0 | 3 | 0.6790 | 0.75 |
| 0.6693 | 2.0 | 6 | 0.6831 | 0.75 |
| 0.6438 | 3.0 | 9 | 0.6876 | 0.75 |
| 0.6047 | 4.0 | 12 | 0.6970 | 0.75 |
| 0.547 | 5.0 | 15 | 0.7065 | 0.75 |
| 0.4885 | 6.0 | 18 | 0.7114 | 0.75 |
| 0.4601 | 7.0 | 21 | 0.7147 | 0.5 |
| 0.4017 | 8.0 | 24 | 0.7178 | 0.5 |
| 0.3474 | 9.0 | 27 | 0.7145 | 0.5 |
| 0.2624 | 10.0 | 30 | 0.7153 | 0.5 |
| 0.2175 | 11.0 | 33 | 0.7158 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__subj__train-8-5", "results": []}]}
|
SetFit/distilbert-base-uncased__subj__train-8-5
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased\_\_subj\_\_train-8-5
============================================
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6927
* Accuracy: 0.506
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-6
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6075
- Accuracy: 0.7485
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7163 | 1.0 | 3 | 0.6923 | 0.5 |
| 0.6648 | 2.0 | 6 | 0.6838 | 0.5 |
| 0.6329 | 3.0 | 9 | 0.6747 | 0.75 |
| 0.5836 | 4.0 | 12 | 0.6693 | 0.5 |
| 0.5287 | 5.0 | 15 | 0.6670 | 0.25 |
| 0.4585 | 6.0 | 18 | 0.6517 | 0.5 |
| 0.415 | 7.0 | 21 | 0.6290 | 0.5 |
| 0.3353 | 8.0 | 24 | 0.6019 | 0.5 |
| 0.2841 | 9.0 | 27 | 0.5613 | 0.75 |
| 0.2203 | 10.0 | 30 | 0.5222 | 1.0 |
| 0.1743 | 11.0 | 33 | 0.4769 | 1.0 |
| 0.1444 | 12.0 | 36 | 0.4597 | 1.0 |
| 0.1079 | 13.0 | 39 | 0.4462 | 1.0 |
| 0.0891 | 14.0 | 42 | 0.4216 | 1.0 |
| 0.0704 | 15.0 | 45 | 0.3880 | 1.0 |
| 0.0505 | 16.0 | 48 | 0.3663 | 1.0 |
| 0.0428 | 17.0 | 51 | 0.3536 | 1.0 |
| 0.0356 | 18.0 | 54 | 0.3490 | 1.0 |
| 0.0283 | 19.0 | 57 | 0.3531 | 1.0 |
| 0.025 | 20.0 | 60 | 0.3595 | 1.0 |
| 0.0239 | 21.0 | 63 | 0.3594 | 1.0 |
| 0.0202 | 22.0 | 66 | 0.3521 | 1.0 |
| 0.0168 | 23.0 | 69 | 0.3475 | 1.0 |
| 0.0159 | 24.0 | 72 | 0.3458 | 1.0 |
| 0.0164 | 25.0 | 75 | 0.3409 | 1.0 |
| 0.0132 | 26.0 | 78 | 0.3360 | 1.0 |
| 0.0137 | 27.0 | 81 | 0.3302 | 1.0 |
| 0.0112 | 28.0 | 84 | 0.3235 | 1.0 |
| 0.0113 | 29.0 | 87 | 0.3178 | 1.0 |
| 0.0111 | 30.0 | 90 | 0.3159 | 1.0 |
| 0.0113 | 31.0 | 93 | 0.3108 | 1.0 |
| 0.0107 | 32.0 | 96 | 0.3101 | 1.0 |
| 0.0101 | 33.0 | 99 | 0.3100 | 1.0 |
| 0.0083 | 34.0 | 102 | 0.3110 | 1.0 |
| 0.0092 | 35.0 | 105 | 0.3117 | 1.0 |
| 0.0102 | 36.0 | 108 | 0.3104 | 1.0 |
| 0.0086 | 37.0 | 111 | 0.3086 | 1.0 |
| 0.0092 | 38.0 | 114 | 0.3047 | 1.0 |
| 0.0072 | 39.0 | 117 | 0.3024 | 1.0 |
| 0.0079 | 40.0 | 120 | 0.3014 | 1.0 |
| 0.0079 | 41.0 | 123 | 0.2983 | 1.0 |
| 0.0091 | 42.0 | 126 | 0.2948 | 1.0 |
| 0.0077 | 43.0 | 129 | 0.2915 | 1.0 |
| 0.0085 | 44.0 | 132 | 0.2890 | 1.0 |
| 0.009 | 45.0 | 135 | 0.2870 | 1.0 |
| 0.0073 | 46.0 | 138 | 0.2856 | 1.0 |
| 0.0073 | 47.0 | 141 | 0.2844 | 1.0 |
| 0.0076 | 48.0 | 144 | 0.2841 | 1.0 |
| 0.0065 | 49.0 | 147 | 0.2836 | 1.0 |
| 0.0081 | 50.0 | 150 | 0.2835 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__subj__train-8-6", "results": []}]}
|
SetFit/distilbert-base-uncased__subj__train-8-6
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased\_\_subj\_\_train-8-6
============================================
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6075
* Accuracy: 0.7485
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-7
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2766
- Accuracy: 0.8845
## Model description
More information needed
## Intended uses & limitations
More information needed
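The card does not include a usage snippet; as a minimal, hedged sketch, the checkpoint can presumably be queried through the standard `text-classification` pipeline. The Hub id below is taken from this record's metadata; the `LABEL_0`/`LABEL_1` names are the auto-generated defaults, and which of them corresponds to the subjective class is not documented here:

```python
from transformers import pipeline

# Hedged sketch: label names are the default LABEL_0 / LABEL_1;
# their mapping to subjective vs. objective is an assumption to verify.
classifier = pipeline(
    "text-classification",
    model="SetFit/distilbert-base-uncased__subj__train-8-7",
)
print(classifier("The plot drags, and the acting feels wooden."))
# e.g. [{'label': 'LABEL_1', 'score': 0.93}]  -- illustrative output only
```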
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7044 | 1.0 | 3 | 0.6909 | 0.5 |
| 0.6678 | 2.0 | 6 | 0.6901 | 0.5 |
| 0.6336 | 3.0 | 9 | 0.6807 | 0.5 |
| 0.5926 | 4.0 | 12 | 0.6726 | 0.5 |
| 0.5221 | 5.0 | 15 | 0.6648 | 0.5 |
| 0.4573 | 6.0 | 18 | 0.6470 | 0.5 |
| 0.4177 | 7.0 | 21 | 0.6251 | 0.5 |
| 0.3252 | 8.0 | 24 | 0.5994 | 0.5 |
| 0.2831 | 9.0 | 27 | 0.5529 | 0.5 |
| 0.213 | 10.0 | 30 | 0.5078 | 0.75 |
| 0.1808 | 11.0 | 33 | 0.4521 | 1.0 |
| 0.1355 | 12.0 | 36 | 0.3996 | 1.0 |
| 0.1027 | 13.0 | 39 | 0.3557 | 1.0 |
| 0.0862 | 14.0 | 42 | 0.3121 | 1.0 |
| 0.0682 | 15.0 | 45 | 0.2828 | 1.0 |
| 0.0517 | 16.0 | 48 | 0.2603 | 1.0 |
| 0.0466 | 17.0 | 51 | 0.2412 | 1.0 |
| 0.038 | 18.0 | 54 | 0.2241 | 1.0 |
| 0.0276 | 19.0 | 57 | 0.2096 | 1.0 |
| 0.0246 | 20.0 | 60 | 0.1969 | 1.0 |
| 0.0249 | 21.0 | 63 | 0.1859 | 1.0 |
| 0.0201 | 22.0 | 66 | 0.1770 | 1.0 |
| 0.018 | 23.0 | 69 | 0.1703 | 1.0 |
| 0.0164 | 24.0 | 72 | 0.1670 | 1.0 |
| 0.0172 | 25.0 | 75 | 0.1639 | 1.0 |
| 0.0135 | 26.0 | 78 | 0.1604 | 1.0 |
| 0.014 | 27.0 | 81 | 0.1585 | 1.0 |
| 0.0108 | 28.0 | 84 | 0.1569 | 1.0 |
| 0.0116 | 29.0 | 87 | 0.1549 | 1.0 |
| 0.0111 | 30.0 | 90 | 0.1532 | 1.0 |
| 0.0113 | 31.0 | 93 | 0.1513 | 1.0 |
| 0.0104 | 32.0 | 96 | 0.1503 | 1.0 |
| 0.01 | 33.0 | 99 | 0.1490 | 1.0 |
| 0.0079 | 34.0 | 102 | 0.1479 | 1.0 |
| 0.0097 | 35.0 | 105 | 0.1466 | 1.0 |
| 0.0112 | 36.0 | 108 | 0.1458 | 1.0 |
| 0.0091 | 37.0 | 111 | 0.1457 | 1.0 |
| 0.0098 | 38.0 | 114 | 0.1454 | 1.0 |
| 0.0076 | 39.0 | 117 | 0.1451 | 1.0 |
| 0.0085 | 40.0 | 120 | 0.1448 | 1.0 |
| 0.0079 | 41.0 | 123 | 0.1445 | 1.0 |
| 0.0096 | 42.0 | 126 | 0.1440 | 1.0 |
| 0.0081 | 43.0 | 129 | 0.1430 | 1.0 |
| 0.0083 | 44.0 | 132 | 0.1424 | 1.0 |
| 0.0088 | 45.0 | 135 | 0.1418 | 1.0 |
| 0.0077 | 46.0 | 138 | 0.1414 | 1.0 |
| 0.0073 | 47.0 | 141 | 0.1413 | 1.0 |
| 0.0084 | 48.0 | 144 | 0.1412 | 1.0 |
| 0.0072 | 49.0 | 147 | 0.1411 | 1.0 |
| 0.0077 | 50.0 | 150 | 0.1411 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__subj__train-8-7", "results": []}]}
|
SetFit/distilbert-base-uncased__subj__train-8-7
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased\_\_subj\_\_train-8-7
============================================
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2766
* Accuracy: 0.8845
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-8
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3160
- Accuracy: 0.8735
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7187 | 1.0 | 3 | 0.6776 | 1.0 |
| 0.684 | 2.0 | 6 | 0.6608 | 1.0 |
| 0.6532 | 3.0 | 9 | 0.6364 | 1.0 |
| 0.5996 | 4.0 | 12 | 0.6119 | 1.0 |
| 0.5242 | 5.0 | 15 | 0.5806 | 1.0 |
| 0.4612 | 6.0 | 18 | 0.5320 | 1.0 |
| 0.4192 | 7.0 | 21 | 0.4714 | 1.0 |
| 0.3274 | 8.0 | 24 | 0.4071 | 1.0 |
| 0.2871 | 9.0 | 27 | 0.3378 | 1.0 |
| 0.2082 | 10.0 | 30 | 0.2822 | 1.0 |
| 0.1692 | 11.0 | 33 | 0.2271 | 1.0 |
| 0.1242 | 12.0 | 36 | 0.1793 | 1.0 |
| 0.0977 | 13.0 | 39 | 0.1417 | 1.0 |
| 0.0776 | 14.0 | 42 | 0.1117 | 1.0 |
| 0.0631 | 15.0 | 45 | 0.0894 | 1.0 |
| 0.0453 | 16.0 | 48 | 0.0733 | 1.0 |
| 0.0399 | 17.0 | 51 | 0.0617 | 1.0 |
| 0.0333 | 18.0 | 54 | 0.0528 | 1.0 |
| 0.0266 | 19.0 | 57 | 0.0454 | 1.0 |
| 0.0234 | 20.0 | 60 | 0.0393 | 1.0 |
| 0.0223 | 21.0 | 63 | 0.0345 | 1.0 |
| 0.0195 | 22.0 | 66 | 0.0309 | 1.0 |
| 0.0161 | 23.0 | 69 | 0.0281 | 1.0 |
| 0.0167 | 24.0 | 72 | 0.0260 | 1.0 |
| 0.0163 | 25.0 | 75 | 0.0242 | 1.0 |
| 0.0134 | 26.0 | 78 | 0.0227 | 1.0 |
| 0.0128 | 27.0 | 81 | 0.0214 | 1.0 |
| 0.0101 | 28.0 | 84 | 0.0204 | 1.0 |
| 0.0109 | 29.0 | 87 | 0.0194 | 1.0 |
| 0.0112 | 30.0 | 90 | 0.0186 | 1.0 |
| 0.0108 | 31.0 | 93 | 0.0179 | 1.0 |
| 0.011 | 32.0 | 96 | 0.0174 | 1.0 |
| 0.0099 | 33.0 | 99 | 0.0169 | 1.0 |
| 0.0083 | 34.0 | 102 | 0.0164 | 1.0 |
| 0.0096 | 35.0 | 105 | 0.0160 | 1.0 |
| 0.01 | 36.0 | 108 | 0.0156 | 1.0 |
| 0.0084 | 37.0 | 111 | 0.0152 | 1.0 |
| 0.0089 | 38.0 | 114 | 0.0149 | 1.0 |
| 0.0073 | 39.0 | 117 | 0.0146 | 1.0 |
| 0.0082 | 40.0 | 120 | 0.0143 | 1.0 |
| 0.008 | 41.0 | 123 | 0.0141 | 1.0 |
| 0.0093 | 42.0 | 126 | 0.0139 | 1.0 |
| 0.0078 | 43.0 | 129 | 0.0138 | 1.0 |
| 0.0086 | 44.0 | 132 | 0.0136 | 1.0 |
| 0.009 | 45.0 | 135 | 0.0135 | 1.0 |
| 0.0072 | 46.0 | 138 | 0.0134 | 1.0 |
| 0.0075 | 47.0 | 141 | 0.0133 | 1.0 |
| 0.0082 | 48.0 | 144 | 0.0133 | 1.0 |
| 0.0068 | 49.0 | 147 | 0.0132 | 1.0 |
| 0.0074 | 50.0 | 150 | 0.0132 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__subj__train-8-8", "results": []}]}
|
SetFit/distilbert-base-uncased__subj__train-8-8
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased\_\_subj\_\_train-8-8
============================================
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3160
* Accuracy: 0.8735
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-9
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4865
- Accuracy: 0.778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7024 | 1.0 | 3 | 0.6843 | 0.75 |
| 0.67 | 2.0 | 6 | 0.6807 | 0.5 |
| 0.6371 | 3.0 | 9 | 0.6677 | 0.5 |
| 0.585 | 4.0 | 12 | 0.6649 | 0.5 |
| 0.5122 | 5.0 | 15 | 0.6707 | 0.5 |
| 0.4379 | 6.0 | 18 | 0.6660 | 0.5 |
| 0.4035 | 7.0 | 21 | 0.6666 | 0.5 |
| 0.323 | 8.0 | 24 | 0.6672 | 0.5 |
| 0.2841 | 9.0 | 27 | 0.6534 | 0.5 |
| 0.21 | 10.0 | 30 | 0.6456 | 0.5 |
| 0.1735 | 11.0 | 33 | 0.6325 | 0.5 |
| 0.133 | 12.0 | 36 | 0.6214 | 0.5 |
| 0.0986 | 13.0 | 39 | 0.6351 | 0.5 |
| 0.081 | 14.0 | 42 | 0.6495 | 0.5 |
| 0.0638 | 15.0 | 45 | 0.6671 | 0.5 |
| 0.0449 | 16.0 | 48 | 0.7156 | 0.5 |
| 0.0399 | 17.0 | 51 | 0.7608 | 0.5 |
| 0.0314 | 18.0 | 54 | 0.7796 | 0.5 |
| 0.0243 | 19.0 | 57 | 0.7789 | 0.5 |
| 0.0227 | 20.0 | 60 | 0.7684 | 0.5 |
| 0.0221 | 21.0 | 63 | 0.7628 | 0.5 |
| 0.0192 | 22.0 | 66 | 0.7728 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__subj__train-8-9", "results": []}]}
|
SetFit/distilbert-base-uncased__subj__train-8-9
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased\_\_subj\_\_train-8-9
============================================
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4865
* Accuracy: 0.778
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
null |
transformers
|
# Small-E-Czech
Small-E-Czech is an [Electra](https://arxiv.org/abs/2003.10555)-small model pretrained on a Czech web corpus created at [Seznam.cz](https://www.seznam.cz/) and introduced in an [IAAI 2022 paper](https://arxiv.org/abs/2112.01810). Like other pretrained models, it should be finetuned on a downstream task of interest before use. At Seznam.cz, it has helped improve [web search ranking](https://blog.seznam.cz/2021/02/vyhledavani-pomoci-vyznamovych-vektoru/), query typo correction or clickbait titles detection. We release it under [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/) (i.e. allowing commercial use). To raise an issue, please visit our [github](https://github.com/seznam/small-e-czech).
### How to use the discriminator in transformers
```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast
import torch
discriminator = ElectraForPreTraining.from_pretrained("Seznam/small-e-czech")
tokenizer = ElectraTokenizerFast.from_pretrained("Seznam/small-e-czech")
sentence = "Za hory, za doly, mé zlaté parohy"
fake_sentence = "Za hory, za doly, kočka zlaté parohy"
fake_sentence_tokens = ["[CLS]"] + tokenizer.tokenize(fake_sentence) + ["[SEP]"]
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
outputs = discriminator(fake_inputs)
predictions = torch.nn.Sigmoid()(outputs[0]).cpu().detach().numpy()
# Print the tokens and, below them, each token's probability of having been
# replaced by the generator (i.e. of being "fake"), in aligned columns.
for token in fake_sentence_tokens:
    print("{:>7s}".format(token), end="")
print()
for prediction in predictions.squeeze():
    print("{:7.1f}".format(prediction), end="")
print()
```
In the output we can see the probabilities of particular tokens not belonging in the sentence (i.e. having been faked by the generator) according to the discriminator:
```
[CLS] za hory , za dol ##y , kočka zlaté paro ##hy [SEP]
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.8 0.3 0.2 0.1 0.0
```
### Finetuning
For instructions on how to finetune the model on a new task, see the official HuggingFace transformers [tutorial](https://huggingface.co/transformers/training.html).
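As a minimal sketch of what such finetuning might look like (sequence classification is only an example task; the label count, output directory and datasets below are placeholders, not recommendations from the authors):

```python
from transformers import (
    ElectraForSequenceClassification,
    ElectraTokenizerFast,
    Trainer,
    TrainingArguments,
)

# Placeholder example: put a binary classification head on top of the pretrained encoder.
model = ElectraForSequenceClassification.from_pretrained(
    "Seznam/small-e-czech", num_labels=2
)
tokenizer = ElectraTokenizerFast.from_pretrained("Seznam/small-e-czech")

args = TrainingArguments(output_dir="small-e-czech-finetuned", num_train_epochs=3)

# trainer = Trainer(model=model, args=args,
#                   train_dataset=tokenized_train, eval_dataset=tokenized_eval)
# trainer.train()
```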
|
{"language": "cs", "license": "cc-by-4.0"}
|
Seznam/small-e-czech
| null |
[
"transformers",
"pytorch",
"tf",
"electra",
"cs",
"arxiv:2003.10555",
"arxiv:2112.01810",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2003.10555",
"2112.01810"
] |
[
"cs"
] |
TAGS
#transformers #pytorch #tf #electra #cs #arxiv-2003.10555 #arxiv-2112.01810 #license-cc-by-4.0 #endpoints_compatible #region-us
|
# Small-E-Czech
Small-E-Czech is an Electra-small model pretrained on a Czech web corpus created at URL and introduced in an IAAI 2022 paper. Like other pretrained models, it should be finetuned on a downstream task of interest before use. At URL, it has helped improve web search ranking, query typo correction or clickbait titles detection. We release it under CC BY 4.0 license (i.e. allowing commercial use). To raise an issue, please visit our github.
### How to use the discriminator in transformers
In the output we can see the probabilities of particular tokens not belonging in the sentence (i.e. having been faked by the generator) according to the discriminator:
### Finetuning
For instructions on how to finetune the model on a new task, see the official HuggingFace transformers tutorial.
|
[
"# Small-E-Czech\n\nSmall-E-Czech is an Electra-small model pretrained on a Czech web corpus created at URL and introduced in an IAAI 2022 paper. Like other pretrained models, it should be finetuned on a downstream task of interest before use. At URL, it has helped improve web search ranking, query typo correction or clickbait titles detection. We release it under CC BY 4.0 license (i.e. allowing commercial use). To raise an issue, please visit our github.",
"### How to use the discriminator in transformers\n\n\nIn the output we can see the probabilities of particular tokens not belonging in the sentence (i.e. having been faked by the generator) according to the discriminator:",
"### Finetuning\n\nFor instructions on how to finetune the model on a new task, see the official HuggingFace transformers tutorial."
] |
[
"TAGS\n#transformers #pytorch #tf #electra #cs #arxiv-2003.10555 #arxiv-2112.01810 #license-cc-by-4.0 #endpoints_compatible #region-us \n",
"# Small-E-Czech\n\nSmall-E-Czech is an Electra-small model pretrained on a Czech web corpus created at URL and introduced in an IAAI 2022 paper. Like other pretrained models, it should be finetuned on a downstream task of interest before use. At URL, it has helped improve web search ranking, query typo correction or clickbait titles detection. We release it under CC BY 4.0 license (i.e. allowing commercial use). To raise an issue, please visit our github.",
"### How to use the discriminator in transformers\n\n\nIn the output we can see the probabilities of particular tokens not belonging in the sentence (i.e. having been faked by the generator) according to the discriminator:",
"### Finetuning\n\nFor instructions on how to finetune the model on a new task, see the official HuggingFace transformers tutorial."
] |
summarization
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mode-bart-deutsch
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the mlsum de dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2152
- Rouge1: 41.698
- Rouge2: 31.3548
- Rougel: 38.2817
- Rougelsum: 39.6349
- Gen Len: 63.1723
## Model description
More information needed
## Intended uses & limitations
More information needed
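No usage example is provided; as a hedged sketch, the checkpoint can presumably be used for German summarization through the `summarization` pipeline. The Hub id is taken from this record; the input text and generation settings are placeholders:

```python
from transformers import pipeline

# Hedged sketch: generation parameters are illustrative, not tuned values.
summarizer = pipeline("summarization", model="Shahm/bart-german")
article = "Hier steht ein längerer deutscher Nachrichtenartikel ..."  # placeholder text
print(summarizer(article, max_length=64, min_length=16, do_sample=False))
```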
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"language": "de", "license": "apache-2.0", "tags": ["generated_from_trainer", "summarization"], "datasets": ["mlsum"], "metrics": ["rouge"], "model-index": [{"name": "mode-bart-deutsch", "results": [{"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "mlsum de", "type": "mlsum", "args": "de"}, "metrics": [{"type": "rouge", "value": 41.698, "name": "Rouge1"}]}]}]}
|
Shahm/bart-german
| null |
[
"transformers",
"pytorch",
"tensorboard",
"onnx",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"summarization",
"de",
"dataset:mlsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #tensorboard #onnx #safetensors #bart #text2text-generation #generated_from_trainer #summarization #de #dataset-mlsum #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
# mode-bart-deutsch
This model is a fine-tuned version of facebook/bart-base on the mlsum de dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2152
- Rouge1: 41.698
- Rouge2: 31.3548
- Rougel: 38.2817
- Rougelsum: 39.6349
- Gen Len: 63.1723
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# mode-bart-deutsch\n\nThis model is a fine-tuned version of facebook/bart-base on the mlsum de dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.2152\n- Rouge1: 41.698\n- Rouge2: 31.3548\n- Rougel: 38.2817\n- Rougelsum: 39.6349\n- Gen Len: 63.1723",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 6\n- eval_batch_size: 6\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.0+cu111\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #onnx #safetensors #bart #text2text-generation #generated_from_trainer #summarization #de #dataset-mlsum #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"# mode-bart-deutsch\n\nThis model is a fine-tuned version of facebook/bart-base on the mlsum de dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.2152\n- Rouge1: 41.698\n- Rouge2: 31.3548\n- Rougel: 38.2817\n- Rougelsum: 39.6349\n- Gen Len: 63.1723",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 6\n- eval_batch_size: 6\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.0+cu111\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
summarization
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-seven-epoch-base-german
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the mlsum de dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5491
- Rouge1: 42.3787
- Rouge2: 32.0253
- Rougel: 38.9529
- Rougelsum: 40.4544
- Gen Len: 47.7873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"language": "de", "license": "apache-2.0", "tags": ["generated_from_trainer", "summarization"], "datasets": ["mlsum"], "metrics": ["rouge"], "model-index": [{"name": "t5-seven-epoch-base-german", "results": [{"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "mlsum de", "type": "mlsum", "args": "de"}, "metrics": [{"type": "rouge", "value": 42.3787, "name": "Rouge1"}]}]}]}
|
Shahm/t5-small-german
| null |
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"summarization",
"de",
"dataset:mlsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #summarization #de #dataset-mlsum #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# t5-seven-epoch-base-german
This model is a fine-tuned version of t5-small on the mlsum de dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5491
- Rouge1: 42.3787
- Rouge2: 32.0253
- Rougel: 38.9529
- Rougelsum: 40.4544
- Gen Len: 47.7873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# t5-seven-epoch-base-german\n\nThis model is a fine-tuned version of t5-small on the mlsum de dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.5491\n- Rouge1: 42.3787\n- Rouge2: 32.0253\n- Rougel: 38.9529\n- Rougelsum: 40.4544\n- Gen Len: 47.7873",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 6\n- eval_batch_size: 6\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 7.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.0+cu111\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #summarization #de #dataset-mlsum #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# t5-seven-epoch-base-german\n\nThis model is a fine-tuned version of t5-small on the mlsum de dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.5491\n- Rouge1: 42.3787\n- Rouge2: 32.0253\n- Rougel: 38.9529\n- Rougelsum: 40.4544\n- Gen Len: 47.7873",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 6\n- eval_batch_size: 6\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 7.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.0+cu111\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
# Spongebob DialoGPT model
|
{"tags": ["conversational"]}
|
Shakaw/DialoGPT-small-spongebot
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Spongebob DialoGPT model
|
[
"# Spongebob DialoGPT model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Spongebob DialoGPT model"
] |
null |
transformers
|
# ChineseBERT-base
This repository contains code, model, dataset for **ChineseBERT** at ACL2021.
paper:
**[ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information](https://arxiv.org/abs/2106.16038)**
*Zijun Sun, Xiaoya Li, Xiaofei Sun, Yuxian Meng, Xiang Ao, Qing He, Fei Wu and Jiwei Li*
code:
[ChineseBERT github link](https://github.com/ShannonAI/ChineseBert)
## Model description
We propose ChineseBERT, which incorporates both the glyph and pinyin information of Chinese
characters into language model pretraining.
First, for each Chinese character, we obtain three kinds of embeddings.
- **Char Embedding:** the same as the original BERT token embedding.
- **Glyph Embedding:** captures visual features based on different fonts of a Chinese character.
- **Pinyin Embedding:** captures phonetic features from the pinyin sequence of a Chinese character.
Then, char embedding, glyph embedding and pinyin embedding
are first concatenated, and mapped to a D-dimensional embedding through a fully
connected layer to form the fusion embedding.
Finally, the fusion embedding is added with the position embedding, which is fed as input to the BERT model.
The following image shows an overview architecture of ChineseBERT model.

ChineseBERT leverages the glyph and pinyin information of Chinese
characters to enhance the model's ability of capturing
context semantics from surface character forms and
disambiguating polyphonic characters in Chinese.
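For intuition, a minimal sketch of the concat-then-project fusion step is given below. This is not the official implementation (see the ChineseBert GitHub repository for that); the embedding sizes are hypothetical and the glyph/pinyin encoders that produce the inputs are omitted.
```python
# Unofficial sketch of the concat-then-project fusion described above.
# All dimensions are hypothetical; the real glyph and pinyin encoders are omitted.
import torch
import torch.nn as nn

class FusionEmbedding(nn.Module):
    def __init__(self, char_dim=768, glyph_dim=768, pinyin_dim=768, hidden_dim=768):
        super().__init__()
        # fully connected layer mapping the concatenation to a D-dimensional embedding
        self.fusion = nn.Linear(char_dim + glyph_dim + pinyin_dim, hidden_dim)

    def forward(self, char_emb, glyph_emb, pinyin_emb, position_emb):
        fused = torch.cat([char_emb, glyph_emb, pinyin_emb], dim=-1)  # (B, T, 3D)
        fused = self.fusion(fused)                                    # (B, T, D)
        return fused + position_emb                                   # fed to BERT

# Toy shape check with batch size 2 and sequence length 5.
B, T, D = 2, 5, 768
layer = FusionEmbedding()
out = layer(*[torch.randn(B, T, D) for _ in range(4)])
print(out.shape)  # torch.Size([2, 5, 768])
```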
|
{}
|
ShannonAI/ChineseBERT-base
| null |
[
"transformers",
"pytorch",
"arxiv:2106.16038",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2106.16038"
] |
[] |
TAGS
#transformers #pytorch #arxiv-2106.16038 #endpoints_compatible #region-us
|
# ChineseBERT-base
This repository contains code, model, dataset for ChineseBERT at ACL2021.
paper:
ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information
*Zijun Sun, Xiaoya Li, Xiaofei Sun, Yuxian Meng, Xiang Ao, Qing He, Fei Wu and Jiwei Li*
code:
ChineseBERT github link
## Model description
We propose ChineseBERT, which incorporates both the glyph and pinyin information of Chinese
characters into language model pretraining.
First, for each Chinese character, we obtain three kinds of embeddings.
- Char Embedding: the same as the original BERT token embedding.
- Glyph Embedding: captures visual features based on different fonts of a Chinese character.
- Pinyin Embedding: captures phonetic features from the pinyin sequence of a Chinese character.
Then, char embedding, glyph embedding and pinyin embedding
are first concatenated, and mapped to a D-dimensional embedding through a fully
connected layer to form the fusion embedding.
Finally, the fusion embedding is added with the position embedding, which is fed as input to the BERT model.
The following image shows an overview architecture of ChineseBERT model.
!MODEL
ChineseBERT leverages the glyph and pinyin information of Chinese
characters to enhance the model's ability of capturing
context semantics from surface character forms and
disambiguating polyphonic characters in Chinese.
|
[
"# ChineseBERT-base\n\nThis repository contains code, model, dataset for ChineseBERT at ACL2021.\n\npaper: \nChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information \n*Zijun Sun, Xiaoya Li, Xiaofei Sun, Yuxian Meng, Xiang Ao, Qing He, Fei Wu and Jiwei Li*\n\ncode: \nChineseBERT github link",
"## Model description\nWe propose ChineseBERT, which incorporates both the glyph and pinyin information of Chinese\ncharacters into language model pretraining. \n \nFirst, for each Chinese character, we get three kind of embedding.\n - Char Embedding: the same as origin BERT token embedding.\n - Glyph Embedding: capture visual features based on different fonts of a Chinese character.\n - Pinyin Embedding: capture phonetic feature from the pinyin sequence ot a Chinese Character.\n \nThen, char embedding, glyph embedding and pinyin embedding \nare first concatenated, and mapped to a D-dimensional embedding through a fully \nconnected layer to form the fusion embedding. \nFinally, the fusion embedding is added with the position embedding, which is fed as input to the BERT model. \nThe following image shows an overview architecture of ChineseBERT model.\n \n!MODEL\n\nChineseBERT leverages the glyph and pinyin information of Chinese \ncharacters to enhance the model's ability of capturing\ncontext semantics from surface character forms and\ndisambiguating polyphonic characters in Chinese."
] |
[
"TAGS\n#transformers #pytorch #arxiv-2106.16038 #endpoints_compatible #region-us \n",
"# ChineseBERT-base\n\nThis repository contains code, model, dataset for ChineseBERT at ACL2021.\n\npaper: \nChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information \n*Zijun Sun, Xiaoya Li, Xiaofei Sun, Yuxian Meng, Xiang Ao, Qing He, Fei Wu and Jiwei Li*\n\ncode: \nChineseBERT github link",
"## Model description\nWe propose ChineseBERT, which incorporates both the glyph and pinyin information of Chinese\ncharacters into language model pretraining. \n \nFirst, for each Chinese character, we get three kind of embedding.\n - Char Embedding: the same as origin BERT token embedding.\n - Glyph Embedding: capture visual features based on different fonts of a Chinese character.\n - Pinyin Embedding: capture phonetic feature from the pinyin sequence ot a Chinese Character.\n \nThen, char embedding, glyph embedding and pinyin embedding \nare first concatenated, and mapped to a D-dimensional embedding through a fully \nconnected layer to form the fusion embedding. \nFinally, the fusion embedding is added with the position embedding, which is fed as input to the BERT model. \nThe following image shows an overview architecture of ChineseBERT model.\n \n!MODEL\n\nChineseBERT leverages the glyph and pinyin information of Chinese \ncharacters to enhance the model's ability of capturing\ncontext semantics from surface character forms and\ndisambiguating polyphonic characters in Chinese."
] |
null |
transformers
|
# ChineseBERT-large
This repository contains code, model, dataset for **ChineseBERT** at ACL2021.
paper:
**[ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information](https://arxiv.org/abs/2106.16038)**
*Zijun Sun, Xiaoya Li, Xiaofei Sun, Yuxian Meng, Xiang Ao, Qing He, Fei Wu and Jiwei Li*
code:
[ChineseBERT github link](https://github.com/ShannonAI/ChineseBert)
## Model description
We propose ChineseBERT, which incorporates both the glyph and pinyin information of Chinese
characters into language model pretraining.
First, for each Chinese character, we obtain three kinds of embeddings.
- **Char Embedding:** the same as the original BERT token embedding.
- **Glyph Embedding:** captures visual features based on different fonts of a Chinese character.
- **Pinyin Embedding:** captures phonetic features from the pinyin sequence of a Chinese character.
Then, char embedding, glyph embedding and pinyin embedding
are first concatenated, and mapped to a D-dimensional embedding through a fully
connected layer to form the fusion embedding.
Finally, the fusion embedding is added with the position embedding, which is fed as input to the BERT model.
The following image shows an overview architecture of ChineseBERT model.

ChineseBERT leverages the glyph and pinyin information of Chinese
characters to enhance the model's ability of capturing
context semantics from surface character forms and
disambiguating polyphonic characters in Chinese.
|
{}
|
ShannonAI/ChineseBERT-large
| null |
[
"transformers",
"pytorch",
"arxiv:2106.16038",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2106.16038"
] |
[] |
TAGS
#transformers #pytorch #arxiv-2106.16038 #endpoints_compatible #region-us
|
# ChineseBERT-large
This repository contains code, model, dataset for ChineseBERT at ACL2021.
paper:
ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information
*Zijun Sun, Xiaoya Li, Xiaofei Sun, Yuxian Meng, Xiang Ao, Qing He, Fei Wu and Jiwei Li*
code:
ChineseBERT github link
## Model description
We propose ChineseBERT, which incorporates both the glyph and pinyin information of Chinese
characters into language model pretraining.
First, for each Chinese character, we obtain three kinds of embeddings.
- Char Embedding: the same as the original BERT token embedding.
- Glyph Embedding: captures visual features based on different fonts of a Chinese character.
- Pinyin Embedding: captures phonetic features from the pinyin sequence of a Chinese character.
Then, char embedding, glyph embedding and pinyin embedding
are first concatenated, and mapped to a D-dimensional embedding through a fully
connected layer to form the fusion embedding.
Finally, the fusion embedding is added with the position embedding, which is fed as input to the BERT model.
The following image shows an overview architecture of ChineseBERT model.
!MODEL
ChineseBERT leverages the glyph and pinyin information of Chinese
characters to enhance the model's ability of capturing
context semantics from surface character forms and
disambiguating polyphonic characters in Chinese.
|
[
"# ChineseBERT-large\n\nThis repository contains code, model, dataset for ChineseBERT at ACL2021.\n\npaper: \nChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information \n*Zijun Sun, Xiaoya Li, Xiaofei Sun, Yuxian Meng, Xiang Ao, Qing He, Fei Wu and Jiwei Li*\n\ncode: \nChineseBERT github link",
"## Model description\nWe propose ChineseBERT, which incorporates both the glyph and pinyin information of Chinese\ncharacters into language model pretraining. \n \nFirst, for each Chinese character, we get three kind of embedding.\n - Char Embedding: the same as origin BERT token embedding.\n - Glyph Embedding: capture visual features based on different fonts of a Chinese character.\n - Pinyin Embedding: capture phonetic feature from the pinyin sequence ot a Chinese Character.\n \nThen, char embedding, glyph embedding and pinyin embedding \nare first concatenated, and mapped to a D-dimensional embedding through a fully \nconnected layer to form the fusion embedding. \nFinally, the fusion embedding is added with the position embedding, which is fed as input to the BERT model. \nThe following image shows an overview architecture of ChineseBERT model.\n \n!MODEL\n\nChineseBERT leverages the glyph and pinyin information of Chinese \ncharacters to enhance the model's ability of capturing\ncontext semantics from surface character forms and\ndisambiguating polyphonic characters in Chinese."
] |
[
"TAGS\n#transformers #pytorch #arxiv-2106.16038 #endpoints_compatible #region-us \n",
"# ChineseBERT-large\n\nThis repository contains code, model, dataset for ChineseBERT at ACL2021.\n\npaper: \nChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information \n*Zijun Sun, Xiaoya Li, Xiaofei Sun, Yuxian Meng, Xiang Ao, Qing He, Fei Wu and Jiwei Li*\n\ncode: \nChineseBERT github link",
"## Model description\nWe propose ChineseBERT, which incorporates both the glyph and pinyin information of Chinese\ncharacters into language model pretraining. \n \nFirst, for each Chinese character, we get three kind of embedding.\n - Char Embedding: the same as origin BERT token embedding.\n - Glyph Embedding: capture visual features based on different fonts of a Chinese character.\n - Pinyin Embedding: capture phonetic feature from the pinyin sequence ot a Chinese Character.\n \nThen, char embedding, glyph embedding and pinyin embedding \nare first concatenated, and mapped to a D-dimensional embedding through a fully \nconnected layer to form the fusion embedding. \nFinally, the fusion embedding is added with the position embedding, which is fed as input to the BERT model. \nThe following image shows an overview architecture of ChineseBERT model.\n \n!MODEL\n\nChineseBERT leverages the glyph and pinyin information of Chinese \ncharacters to enhance the model's ability of capturing\ncontext semantics from surface character forms and\ndisambiguating polyphonic characters in Chinese."
] |
text-classification
|
transformers
|
[](https://colab.research.google.com/drive/1dqeUwS_DZ-urrmYzB29nTCBUltwJxhbh?usp=sharing)
# 22 Language Identifier - BERT
This model is trained to identify the following 22 different languages.
- Arabic
- Chinese
- Dutch
- English
- Estonian
- French
- Hindi
- Indonesian
- Japanese
- Korean
- Latin
- Persian
- Portuguese
- Pushto
- Romanian
- Russian
- Spanish
- Swedish
- Tamil
- Thai
- Turkish
- Urdu
## Loading the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("SharanSMenon/22-languages-bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained("SharanSMenon/22-languages-bert-base-cased")
```
## Inference
```python
def predict(sentence):
tokenized = tokenizer(sentence, return_tensors="pt")
outputs = model(**tokenized)
return model.config.id2label[outputs.logits.argmax(dim=1).item()]
```
### Examples
```python
sentence1 = "in war resolution, in defeat defiance, in victory magnanimity"
predict(sentence1) # English
sentence2 = "en la guerra resolución en la derrota desafío en la victoria magnanimidad"
predict(sentence2) # Spanish
sentence3 = "هذا هو أعظم إله على الإطلاق"
predict(sentence3) # Arabic
```
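If per-language probabilities are useful, the logits can be passed through a softmax. The helper below is a small, unofficial extension of the `predict` function above and reuses the same `tokenizer` and `model` objects; the top-k size of 3 is arbitrary.
```python
# Unofficial helper: return the k most likely languages with probabilities,
# reusing the tokenizer and model loaded above.
import torch

def predict_topk(sentence, k=3):
    tokenized = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        logits = model(**tokenized).logits
    probs = torch.softmax(logits, dim=1).squeeze(0)
    values, indices = torch.topk(probs, k)
    return [(model.config.id2label[i.item()], round(v.item(), 4))
            for v, i in zip(values, indices)]

print(predict_topk("in war resolution, in defeat defiance, in victory magnanimity"))
```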
|
{"metrics": ["accuracy"], "widget": [{"text": "In war resolution, in defeat defiance, in victory magnanimity"}, {"text": "en la guerra resoluci\u00f3n en la derrota desaf\u00edo en la victoria magnanimidad"}]}
|
SharanSMenon/22-languages-bert-base-cased
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #safetensors #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
 model
### Fine-tuning format:
```
"<s>paragraph: "+eng_context+"\nlang: rus\nquestion: "+rus_question+' answer: '+ rus_answer+"</s>"
```
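As an illustration, a small helper that assembles this prompt might look like the sketch below. The variable values are invented for the example, and at inference time the string would typically be cut after `answer:` so that the model generates the answer itself.
```python
# Illustrative helper that mirrors the fine-tuning format above.
# The example values are invented; leave rus_answer=None at inference time
# so the model is asked to continue the text after "answer:".
def build_qa_prompt(eng_context, rus_question, rus_answer=None, lang="rus"):
    prompt = "<s>paragraph: " + eng_context + "\nlang: " + lang \
             + "\nquestion: " + rus_question + " answer: "
    if rus_answer is not None:
        prompt += rus_answer + "</s>"
    return prompt

print(build_qa_prompt(
    eng_context="Moscow is the capital and largest city of Russia.",
    rus_question="Какой город является столицей России?",
    rus_answer="Москва",
))
```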
### About ruGPT-3 XL model
Model was trained with 512 sequence length using [Deepspeed](https://github.com/microsoft/DeepSpeed) and [Megatron](https://github.com/NVIDIA/Megatron-LM) code by [SberDevices](https://sberdevices.ru/) team, on 80B tokens dataset for 4 epochs. After that model was finetuned 1 epoch with sequence length 2048.
*Note! Model has sparse attention blocks.*
Total training time was around 10 days on 256 GPUs.
Final perplexity on test set is 12.05. Model parameters: 1.3B.
|
{"language": ["ru", "en"], "tags": ["PyTorch", "Transformers", "gpt2", "squad", "lm-head", "casual-lm"], "pipeline_tag": "text2text-generation", "thumbnail": "https://github.com/RussianNLP/RusEnQA"}
|
Shavrina/RusEnQA
| null |
[
"PyTorch",
"Transformers",
"gpt2",
"squad",
"lm-head",
"casual-lm",
"text2text-generation",
"ru",
"en",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ru",
"en"
] |
TAGS
#PyTorch #Transformers #gpt2 #squad #lm-head #casual-lm #text2text-generation #ru #en #region-us
|
## RusEnQA
QA for Russian and English based on the rugpt3xl model
### Fine-tuning format:
### About ruGPT-3 XL model
Model was trained with 512 sequence length using Deepspeed and Megatron code by SberDevices team, on 80B tokens dataset for 4 epochs. After that model was finetuned 1 epoch with sequence length 2048.
*Note! Model has sparse attention blocks.*
Total training time was around 10 days on 256 GPUs.
Final perplexity on test set is 12.05. Model parameters: 1.3B.
|
[
"## RusEnQA\n\nQA for Russian and English based on the rugpt3xl model",
"### Fine-tuning format:",
"### About ruGPT-3 XL model\nModel was trained with 512 sequence length using Deepspeed and Megatron code by SberDevices team, on 80B tokens dataset for 4 epochs. After that model was finetuned 1 epoch with sequence length 2048. \n*Note! Model has sparse attention blocks.*\n\nTotal training time was around 10 days on 256 GPUs.\nFinal perplexity on test set is 12.05. Model parameters: 1.3B."
] |
[
"TAGS\n#PyTorch #Transformers #gpt2 #squad #lm-head #casual-lm #text2text-generation #ru #en #region-us \n",
"## RusEnQA\n\nQA for Russian and English based on the rugpt3xl model",
"### Fine-tuning format:",
"### About ruGPT-3 XL model\nModel was trained with 512 sequence length using Deepspeed and Megatron code by SberDevices team, on 80B tokens dataset for 4 epochs. After that model was finetuned 1 epoch with sequence length 2048. \n*Note! Model has sparse attention blocks.*\n\nTotal training time was around 10 days on 256 GPUs.\nFinal perplexity on test set is 12.05. Model parameters: 1.3B."
] |
text-generation
|
transformers
|
# SHAY0 Dialo GPT Model
|
{"tags": ["conversational"]}
|
ShayoGun/DialoGPT-small-shayo
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# SHAY0 Dialo GPT Model
|
[
"# SHAY0 Dialo GPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# SHAY0 Dialo GPT Model"
] |
text-generation
|
transformers
|
# Harry Potter DialoGPT Model
|
{"tags": ["conversational"]}
|
Sheel/DialoGPT-small-harrypotter
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model
|
[
"# Harry Potter DialGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialGPT Model"
] |
text-generation
|
transformers
|
# Mikasa DialoGPT Model
|
{"tags": ["conversational"]}
|
Sheerwin02/DialoGPT-medium-mikasa
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Mikasa DialoGPT Model
|
[
"# Mikasa DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Mikasa DialoGPT Model"
] |
text-generation
|
transformers
|
# Isla DialoGPT Model
|
{"tags": ["conversational"]}
|
Sheerwin02/DialoGPT-small-isla
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Isla DialoGPT Model
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
null | null |
This is a repo with gathered thoughts and experiments on state-of-the-art techniques in NLP.
|
{}
|
ShenSeanchen/NLP
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
This is a repo with gathered thoughts and experiments on state-of-the-art techniques in NLP.
|
[] |
[
"TAGS\n#region-us \n"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# superglue-boolq
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2098
- Accuracy: 76.7584
- Average Metrics: 76.7584
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Average Metrics |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|
| No log | 0.34 | 100 | 0.2293 | 73.2722 | 73.2722 |
| No log | 0.68 | 200 | 0.2098 | 76.7584 | 76.7584 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu111
- Datasets 1.17.0
- Tokenizers 0.12.1
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "superglue-boolq", "results": []}]}
|
ShengdingHu/superglue-boolq
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
superglue-boolq
===============
This model is a fine-tuned version of t5-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2098
* Accuracy: 76.7584
* Average Metrics: 76.7584
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.18.0
* Pytorch 1.10.2+cu111
* Datasets 1.17.0
* Tokenizers 0.12.1
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.18.0\n* Pytorch 1.10.2+cu111\n* Datasets 1.17.0\n* Tokenizers 0.12.1"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.18.0\n* Pytorch 1.10.2+cu111\n* Datasets 1.17.0\n* Tokenizers 0.12.1"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0620
- Precision: 0.9267
- Recall: 0.9371
- F1: 0.9319
- Accuracy: 0.9838
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2462 | 1.0 | 878 | 0.0714 | 0.9052 | 0.9223 | 0.9137 | 0.9803 |
| 0.0535 | 2.0 | 1756 | 0.0615 | 0.9188 | 0.9331 | 0.9259 | 0.9827 |
| 0.0315 | 3.0 | 2634 | 0.0620 | 0.9267 | 0.9371 | 0.9319 | 0.9838 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
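For reference, a minimal inference sketch is shown below. It assumes the fine-tuned weights are the ones published in this repository (`Shenyancheng/distilbert-base-uncased-finetuned-ner`) and uses the generic token-classification pipeline; the example sentence is made up.
```python
# Unofficial usage sketch: run the fine-tuned checkpoint through the generic
# token-classification pipeline; aggregation_strategy="simple" merges word
# pieces into whole entity spans.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Shenyancheng/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)

for entity in ner("Hugging Face is based in New York City."):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```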
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9266592920353982, "name": "Precision"}, {"type": "recall", "value": 0.9371294328224634, "name": "Recall"}, {"type": "f1", "value": 0.9318649535569274, "name": "F1"}, {"type": "accuracy", "value": 0.9838117781625813, "name": "Accuracy"}]}]}]}
|
Shenyancheng/distilbert-base-uncased-finetuned-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-ner
=====================================
This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0620
* Precision: 0.9267
* Recall: 0.9371
* F1: 0.9319
* Accuracy: 0.9838
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
text-generation
| null |
# My Awesome Model
|
{"tags": ["conversational"]}
|
Sherman/DialoGPT-medium-joey
| null |
[
"conversational",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#conversational #region-us
|
# My Awesome Model
|
[
"# My Awesome Model"
] |
[
"TAGS\n#conversational #region-us \n",
"# My Awesome Model"
] |
text-generation
|
transformers
|
# Harry Potter DialoGPT Model
|
{"tags": ["conversational"]}
|
Shike/DialoGPT_medium_harrypotter
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model
|
[
"# Harry Potter DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Model"
] |
text-generation
|
transformers
|
# My Hero Academia DialoGPT Model
|
{"tags": ["conversational"]}
|
Shinx/DialoGPT-medium-myheroacademia
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# My Hero Academia DialoGPT Model
|
[
"# My Hero Academia DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# My Hero Academia DialoGPT Model"
] |
text-generation
|
transformers
|
# My Awesome Model
|
{"tags": ["conversational"]}
|
NaturesDisaster/DialoGPT-large-Neku
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# My Awesome Model
|
[
"# My Awesome Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# My Awesome Model"
] |
null | null |
tags:
- conversational
|
{}
|
ShinxisS/DialoGPT-medium-Neku
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
tags:
- conversational
|
[] |
[
"TAGS\n#region-us \n"
] |
text-generation
|
transformers
|
# My Awesome Model
|
{"tags": ["conversational"]}
|
NaturesDisaster/DialoGPT-small-Neku
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# My Awesome Model
|
[
"# My Awesome Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# My Awesome Model"
] |
text-generation
|
transformers
|
# Rick DialoGPT Model
|
{"tags": ["conversational"]}
|
ShiroNeko/DialoGPT-small-rick
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick DialoGPT Model
|
[
"# Rick DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rick DialoGPT Model"
] |
question-answering
| null |
model card
|
{"language": "en", "license": "mit", "tags": ["exbert", "my-tag"], "datasets": ["dataset1", "scan-web"], "pipeline_tag": "question-answering"}
|
Shiyu/my-repo
| null |
[
"exbert",
"my-tag",
"question-answering",
"en",
"dataset:dataset1",
"dataset:scan-web",
"license:mit",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#exbert #my-tag #question-answering #en #dataset-dataset1 #dataset-scan-web #license-mit #region-us
|
model card
|
[] |
[
"TAGS\n#exbert #my-tag #question-answering #en #dataset-dataset1 #dataset-scan-web #license-mit #region-us \n"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-roberta-depression
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1385
- Accuracy: 0.9745
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0238 | 1.0 | 625 | 0.1385 | 0.9745 |
| 0.0333 | 2.0 | 1250 | 0.1385 | 0.9745 |
| 0.0263 | 3.0 | 1875 | 0.1385 | 0.9745 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
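A minimal inference sketch for this checkpoint is given below. It assumes the weights published in this repository (`ShreyaR/finetuned-roberta-depression`); the label names returned depend on the model's `id2label` mapping, which is not documented in this card.
```python
# Unofficial usage sketch: score the widget examples with the fine-tuned
# classifier. The returned label names (e.g. LABEL_0 / LABEL_1) come from the
# model config and may need to be mapped to "depressed" / "not depressed".
from transformers import pipeline

classifier = pipeline("text-classification", model="ShreyaR/finetuned-roberta-depression")

texts = [
    "I feel so low and numb, don't feel like doing anything. Just passing my days",
    "I went to a movie today. It was below my expectations but the day was fine.",
]
for text, pred in zip(texts, classifier(texts)):
    print(f'{pred["label"]} ({pred["score"]:.4f}): {text[:60]}')
```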
|
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "widget": [{"text": "I feel so low and numb, don't feel like doing anything. Just passing my days"}, {"text": "Sleep is my greatest and most comforting escape whenever I wake up these days. The literal very first emotion I feel is just misery and reminding myself of all my problems."}, {"text": "I went to a movie today. It was below my expectations but the day was fine."}, {"text": "The first day of work was a little hectic but met pretty good colleagues, we went for a team dinner party at the end of the day."}], "model-index": [{"name": "finetuned-roberta-depression", "results": []}]}
|
ShreyaR/finetuned-roberta-depression
| null |
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
finetuned-roberta-depression
============================
This model is a fine-tuned version of roberta-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1385
* Accuracy: 0.9745
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.17.0
* Pytorch 1.10.0+cu111
* Datasets 2.0.0
* Tokenizers 0.11.6
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0\n* Pytorch 1.10.0+cu111\n* Datasets 2.0.0\n* Tokenizers 0.11.6"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0\n* Pytorch 1.10.0+cu111\n* Datasets 2.0.0\n* Tokenizers 0.11.6"
] |
text-generation
|
transformers
|
# Goku DialoGPT Model
|
{"tags": ["conversational"]}
|
Shubham-Kumar-DTU/DialoGPT-small-goku
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Goku DialoGPT Model
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7515
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 22 | 3.9518 |
| No log | 2.0 | 44 | 3.2703 |
| No log | 3.0 | 66 | 2.9308 |
| No log | 4.0 | 88 | 2.7806 |
| No log | 5.0 | 110 | 2.6926 |
| No log | 6.0 | 132 | 2.7043 |
| No log | 7.0 | 154 | 2.7113 |
| No log | 8.0 | 176 | 2.7236 |
| No log | 9.0 | 198 | 2.7559 |
| No log | 10.0 | 220 | 2.7515 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
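Below is a minimal extractive-QA sketch using this checkpoint. It assumes the weights published in this repository; the question/context pair is an invented example, not taken from the (undocumented) training data.
```python
# Unofficial usage sketch: extractive question answering with the fine-tuned
# checkpoint. The question/context pair is an invented example.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Shushant/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT",
)

result = qa(
    question="Which organism contaminated the cell culture?",
    context=(
        "Routine screening of the cell culture revealed contamination with "
        "Mycoplasma species, which was subsequently eliminated by antibiotic treatment."
    ),
)
print(result["answer"], round(result["score"], 3))
```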
|
{"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT", "results": []}]}
|
Shushant/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #license-mit #endpoints_compatible #region-us
|
BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel\_PubmedBERT
====================================================================================
This model is a fine-tuned version of microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.7515
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #license-mit #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
fill-mask
|
transformers
|
# NepNewsBERT
## Masked language model for the Nepali language, trained on Nepali news scraped from different Nepali news websites. The dataset contained about 10 million Nepali sentences, mainly related to Nepali news.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
from transformers import pipeline
from pprint import pprint

tokenizer = AutoTokenizer.from_pretrained("Shushant/NepNewsBERT")
model = AutoModelForMaskedLM.from_pretrained("Shushant/NepNewsBERT")

fill_mask = pipeline(
    "fill-mask",
    model=model,
    tokenizer=tokenizer,
)

pprint(fill_mask(f"तिमीलाई कस्तो {tokenizer.mask_token}."))
```
|
{}
|
Shushant/NepNewsBERT
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
# NepNewsBERT
## Masked language model for the Nepali language, trained on Nepali news scraped from different Nepali news websites. The dataset contained about 10 million Nepali sentences, mainly related to Nepali news.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
from transformers import pipeline
from pprint import pprint

tokenizer = AutoTokenizer.from_pretrained("Shushant/NepNewsBERT")
model = AutoModelForMaskedLM.from_pretrained("Shushant/NepNewsBERT")

fill_mask = pipeline(
    "fill-mask",
    model=model,
    tokenizer=tokenizer,
)

pprint(fill_mask(f"तिमीलाई कस्तो {tokenizer.mask_token}."))
```
|
[
"# NepNewsBERT",
"## Masked Language Model for nepali language trained on nepali news scrapped from different nepali news website. The data set contained about 10 million of nepali sentences mainly related to nepali news.",
"## Usage \n\nfrom transformers import AutoTokenizer, AutoModelForMaskedLM\n\ntokenizer = AutoTokenizer.from_pretrained(\"Shushant/NepNewsBERT\")\n\nmodel = AutoModelForMaskedLM.from_pretrained(\"Shushant/NepNewsBERT\")\n\nfrom transformers import pipeline\n\nfill_mask = pipeline(\n \"fill-mask\",\n model=model,\n tokenizer=tokenizer,\n)\nfrom pprint import pprint\npprint(fill_mask(f\"तिमीलाई कस्तो {tokenizer.mask_token}.\"))"
] |
[
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n",
"# NepNewsBERT",
"## Masked Language Model for nepali language trained on nepali news scrapped from different nepali news website. The data set contained about 10 million of nepali sentences mainly related to nepali news.",
"## Usage \n\nfrom transformers import AutoTokenizer, AutoModelForMaskedLM\n\ntokenizer = AutoTokenizer.from_pretrained(\"Shushant/NepNewsBERT\")\n\nmodel = AutoModelForMaskedLM.from_pretrained(\"Shushant/NepNewsBERT\")\n\nfrom transformers import pipeline\n\nfill_mask = pipeline(\n \"fill-mask\",\n model=model,\n tokenizer=tokenizer,\n)\nfrom pprint import pprint\npprint(fill_mask(f\"तिमीलाई कस्तो {tokenizer.mask_token}.\"))"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-v1.1-biomedicalQuestionAnswering
This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9009
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 22 | 3.7409 |
| No log | 2.0 | 44 | 3.1852 |
| No log | 3.0 | 66 | 3.0342 |
| No log | 4.0 | 88 | 2.9416 |
| No log | 5.0 | 110 | 2.9009 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "model-index": [{"name": "biobert-v1.1-biomedicalQuestionAnswering", "results": []}]}
|
Shushant/biobert-v1.1-biomedicalQuestionAnswering
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #endpoints_compatible #region-us
|
biobert-v1.1-biomedicalQuestionAnswering
========================================
This model is a fine-tuned version of dmis-lab/biobert-v1.1 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.9009
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
fill-mask
|
transformers
|
# NEPALI BERT
## Masked language model for the Nepali language, trained on Nepali news scraped from different Nepali news websites. The dataset contains about 10 million Nepali sentences, mainly related to Nepali news.
This model is a fine-tuned version of [Bert Base Uncased](https://huggingface.co/bert-base-uncased) on a dataset composed of news scraped from Nepali news portals, comprising 4.6 GB of textual data.
It achieves the following results on the evaluation set:
- Loss: 1.0495
## Model description
Pretraining done on bert base architecture.
## Intended uses & limitations
This transformer model can be used for any NLP task involving the Devanagari script. At the time of training, this was the state-of-the-art model developed
for Devanagari text. It achieves a perplexity of 8.56 in intrinsic evaluation, and in extrinsic evaluation on sentiment analysis of Nepali tweets it outperformed other existing
masked language models for Nepali.
## Training and evaluation data
The training corpus was developed from 85,467 news articles scraped from different news portals. This is a preliminary dataset
for the experimentation. The corpus size is about 4.3 GB of textual data. Similarly, the evaluation data contains a few news articles, about 12 MB of textual data.
## Training procedure
For pretraining the masked language model, the Trainer API from Hugging Face was used. The pretraining took about 3 days, 8 hours and 57 minutes. Training was done on a Tesla V100 GPU.
With 640 Tensor Cores, the Tesla V100 was the world's first GPU to break the 100 teraFLOPS (TFLOPS) barrier of deep learning performance. This GPU was facilitated by the Kathmandu University (KU) supercomputer.
Thanks to the KU administration.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline
from pprint import pprint

tokenizer = AutoTokenizer.from_pretrained("Shushant/nepaliBERT")
model = AutoModelForMaskedLM.from_pretrained("Shushant/nepaliBERT")

# Build a fill-mask pipeline and predict the masked token in a Nepali sentence.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
pprint(fill_mask(f"तिमीलाई कस्तो {tokenizer.mask_token}."))
```
## Data Description
Trained on about 4.6 GB of Nepali text corpus collected from various sources.
These data were collected from Nepali news sites and the OSCAR Nepali corpus.
# Paper and Citation Details
If you are interested in the implementation details of this language model, you can read the full paper here.
https://www.researchgate.net/publication/375019515_NepaliBERT_Pre-training_of_Masked_Language_Model_in_Nepali_Corpus
## Plain Text
S. Pudasaini, S. Shakya, A. Tamang, S. Adhikari, S. Thapa and S. Lamichhane, "NepaliBERT: Pre-training of Masked Language Model in Nepali Corpus," 2023 7th International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), Kirtipur, Nepal, 2023, pp. 325-330, doi: 10.1109/I-SMAC58438.2023.10290690.
## Bibtex
@INPROCEEDINGS{10290690,
author={Pudasaini, Shushanta and Shakya, Subarna and Tamang, Aakash and Adhikari, Sajjan and Thapa, Sunil and Lamichhane, Sagar},
booktitle={2023 7th International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC)},
title={NepaliBERT: Pre-training of Masked Language Model in Nepali Corpus},
year={2023},
volume={},
number={},
pages={325-330},
doi={10.1109/I-SMAC58438.2023.10290690}}
|
{"language": ["ne"], "license": "mit", "library_name": "transformers", "datasets": ["Shushant/nepali"], "metrics": ["perplexity"], "pipeline_tag": "fill-mask"}
|
Shushant/nepaliBERT
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"ne",
"dataset:Shushant/nepali",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ne"
] |
TAGS
#transformers #pytorch #bert #fill-mask #ne #dataset-Shushant/nepali #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# NEPALI BERT
## Masked language model for the Nepali language, trained on Nepali news scraped from different Nepali news websites. The dataset contains about 10 million Nepali sentences, mainly related to Nepali news.
This model is a fine-tuned version of Bert Base Uncased on a dataset composed of news scraped from Nepali news portals, comprising 4.6 GB of textual data.
It achieves the following results on the evaluation set:
- Loss: 1.0495
## Model description
Pretraining done on bert base architecture.
## Intended uses & limitations
This transformer model can be used for any NLP task involving the Devanagari script. At the time of training, this was the state-of-the-art model developed
for Devanagari text. It achieves a perplexity of 8.56 in intrinsic evaluation, and in extrinsic evaluation on sentiment analysis of Nepali tweets it outperformed other existing
masked language models for Nepali.
## Training and evaluation data
The training corpus was developed from 85,467 news articles scraped from different news portals. This is a preliminary dataset
for the experimentation. The corpus size is about 4.3 GB of textual data. Similarly, the evaluation data contains a few news articles, about 12 MB of textual data.
## Training procedure
For pretraining the masked language model, the Trainer API from Hugging Face was used. The pretraining took about 3 days, 8 hours and 57 minutes. Training was done on a Tesla V100 GPU.
With 640 Tensor Cores, the Tesla V100 was the world's first GPU to break the 100 teraFLOPS (TFLOPS) barrier of deep learning performance. This GPU was facilitated by the Kathmandu University (KU) supercomputer.
Thanks to the KU administration.
## Usage
## Data Description
Trained on about 4.6 GB of Nepali text corpus collected from various sources.
These data were collected from Nepali news sites and the OSCAR Nepali corpus.
# Paper and Citation Details
If you are interested in the implementation details of this language model, you can read the full paper here.
URL
## Plain Text
S. Pudasaini, S. Shakya, A. Tamang, S. Adhikari, S. Thapa and S. Lamichhane, "NepaliBERT: Pre-training of Masked Language Model in Nepali Corpus," 2023 7th International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), Kirtipur, Nepal, 2023, pp. 325-330, doi: 10.1109/I-SMAC58438.2023.10290690.
## Bibtex
@INPROCEEDINGS{10290690,
author={Pudasaini, Shushanta and Shakya, Subarna and Tamang, Aakash and Adhikari, Sajjan and Thapa, Sunil and Lamichhane, Sagar},
booktitle={2023 7th International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC)},
title={NepaliBERT: Pre-training of Masked Language Model in Nepali Corpus},
year={2023},
volume={},
number={},
pages={325-330},
doi={10.1109/I-SMAC58438.2023.10290690}}
|
[
"# NEPALI BERT",
"## Masked Language Model for nepali language trained on nepali news scrapped from different nepali news website. The data set contained about 10 million of nepali sentences mainly related to nepali news.\n\nThis model is a fine-tuned version of Bert Base Uncased on dataset composed of different news scrapped from nepali news portals comprising of 4.6 GB of textual data.\nIt achieves the following results on the evaluation set:\n- Loss: 1.0495",
"## Model description\n\nPretraining done on bert base architecture.",
"## Intended uses & limitations\nThis transformer model can be used for any NLP tasks related to Devenagari language. At the time of training, this is the state of the art model developed \nfor Devanagari dataset. Intrinsic evaluation with Perplexity of 8.56 achieves this state of the art whereas extrinsit evaluation done on sentiment analysis of Nepali tweets outperformed other existing \nmasked language models on Nepali dataset.",
"## Training and evaluation data\nTHe training corpus is developed using 85467 news scrapped from different job portals. This is a preliminary dataset \nfor the experimentation. THe corpus size is about 4.3 GB of textual data. Similary evaluation data contains few news articles about 12 mb of textual data.",
"## Training procedure\nFor the pretraining of masked language model, Trainer API from Huggingface is used. The pretraining took about 3 days 8 hours 57 minutes. Training was done on Tesla V100 GPU. \nWith 640 Tensor Cores, Tesla V100 is the world's first GPU to break the 100 teraFLOPS (TFLOPS) barrier of deep learning performance. This GPU was faciliated by Kathmandu University (KU) supercomputer. \nThanks to KU administration.\n\nUsage",
"## Data Description\nTrained on about 4.6 GB of Nepali text corpus collected from various sources\nThese data were collected from nepali news site, OSCAR nepali corpus",
"# Paper and CItation Details\nIf you are interested to read the implementation details of this language model, you can read the full paper here.\nURL",
"## Plain Text\nS. Pudasaini, S. Shakya, A. Tamang, S. Adhikari, S. Thapa and S. Lamichhane, \"NepaliBERT: Pre-training of Masked Language Model in Nepali Corpus,\" 2023 7th International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), Kirtipur, Nepal, 2023, pp. 325-330, doi: 10.1109/I-SMAC58438.2023.10290690.",
"## Bibtex\n\n@INPROCEEDINGS{10290690,\n author={Pudasaini, Shushanta and Shakya, Subarna and Tamang, Aakash and Adhikari, Sajjan and Thapa, Sunil and Lamichhane, Sagar},\n booktitle={2023 7th International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC)}, \n title={NepaliBERT: Pre-training of Masked Language Model in Nepali Corpus}, \n year={2023},\n volume={},\n number={},\n pages={325-330},\n doi={10.1109/I-SMAC58438.2023.10290690}}"
] |
[
"TAGS\n#transformers #pytorch #bert #fill-mask #ne #dataset-Shushant/nepali #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# NEPALI BERT",
"## Masked Language Model for nepali language trained on nepali news scrapped from different nepali news website. The data set contained about 10 million of nepali sentences mainly related to nepali news.\n\nThis model is a fine-tuned version of Bert Base Uncased on dataset composed of different news scrapped from nepali news portals comprising of 4.6 GB of textual data.\nIt achieves the following results on the evaluation set:\n- Loss: 1.0495",
"## Model description\n\nPretraining done on bert base architecture.",
"## Intended uses & limitations\nThis transformer model can be used for any NLP tasks related to Devenagari language. At the time of training, this is the state of the art model developed \nfor Devanagari dataset. Intrinsic evaluation with Perplexity of 8.56 achieves this state of the art whereas extrinsit evaluation done on sentiment analysis of Nepali tweets outperformed other existing \nmasked language models on Nepali dataset.",
"## Training and evaluation data\nTHe training corpus is developed using 85467 news scrapped from different job portals. This is a preliminary dataset \nfor the experimentation. THe corpus size is about 4.3 GB of textual data. Similary evaluation data contains few news articles about 12 mb of textual data.",
"## Training procedure\nFor the pretraining of masked language model, Trainer API from Huggingface is used. The pretraining took about 3 days 8 hours 57 minutes. Training was done on Tesla V100 GPU. \nWith 640 Tensor Cores, Tesla V100 is the world's first GPU to break the 100 teraFLOPS (TFLOPS) barrier of deep learning performance. This GPU was faciliated by Kathmandu University (KU) supercomputer. \nThanks to KU administration.\n\nUsage",
"## Data Description\nTrained on about 4.6 GB of Nepali text corpus collected from various sources\nThese data were collected from nepali news site, OSCAR nepali corpus",
"# Paper and CItation Details\nIf you are interested to read the implementation details of this language model, you can read the full paper here.\nURL",
"## Plain Text\nS. Pudasaini, S. Shakya, A. Tamang, S. Adhikari, S. Thapa and S. Lamichhane, \"NepaliBERT: Pre-training of Masked Language Model in Nepali Corpus,\" 2023 7th International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), Kirtipur, Nepal, 2023, pp. 325-330, doi: 10.1109/I-SMAC58438.2023.10290690.",
"## Bibtex\n\n@INPROCEEDINGS{10290690,\n author={Pudasaini, Shushanta and Shakya, Subarna and Tamang, Aakash and Adhikari, Sajjan and Thapa, Sunil and Lamichhane, Sagar},\n booktitle={2023 7th International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC)}, \n title={NepaliBERT: Pre-training of Masked Language Model in Nepali Corpus}, \n year={2023},\n volume={},\n number={},\n pages={325-330},\n doi={10.1109/I-SMAC58438.2023.10290690}}"
] |
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 164469
## Validation Metrics
- Loss: 0.05527503043413162
- Accuracy: 0.9853049228508449
- Precision: 0.991044776119403
- Recall: 0.9793510324483776
- AUC: 0.9966895139869654
- F1: 0.9851632047477745
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Shuvam/autonlp-college_classification-164469
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Shuvam/autonlp-college_classification-164469", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Shuvam/autonlp-college_classification-164469", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "en", "tags": "autonlp", "datasets": ["Shuvam/autonlp-data-college_classification"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]}
|
Shuvam/autonlp-college_classification-164469
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:Shuvam/autonlp-data-college_classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #bert #text-classification #autonlp #en #dataset-Shuvam/autonlp-data-college_classification #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 164469
## Validation Metrics
- Loss: 0.05527503043413162
- Accuracy: 0.9853049228508449
- Precision: 0.991044776119403
- Recall: 0.9793510324483776
- AUC: 0.9966895139869654
- F1: 0.9851632047477745
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 164469",
"## Validation Metrics\n\n- Loss: 0.05527503043413162\n- Accuracy: 0.9853049228508449\n- Precision: 0.991044776119403\n- Recall: 0.9793510324483776\n- AUC: 0.9966895139869654\n- F1: 0.9851632047477745",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #text-classification #autonlp #en #dataset-Shuvam/autonlp-data-college_classification #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 164469",
"## Validation Metrics\n\n- Loss: 0.05527503043413162\n- Accuracy: 0.9853049228508449\n- Precision: 0.991044776119403\n- Recall: 0.9793510324483776\n- AUC: 0.9966895139869654\n- F1: 0.9851632047477745",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
text-generation
|
transformers
|
This model is a fine-tuned version of Microsoft/DialoGPT-medium trained to create sarcastic responses from the dataset "Sarcasm on Reddit" located [here](https://www.kaggle.com/danofer/sarcasm).
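A minimal single-turn chat sketch in the usual DialoGPT style, assuming the weights and tokenizer resolve from this repository ID (the user input is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("SilentMyuth/sarcastic-model")
model = AutoModelForCausalLM.from_pretrained("SilentMyuth/sarcastic-model")

# Encode one user turn, terminated with the end-of-sequence token as DialoGPT expects.
user_input = "How was your day?"
input_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")

# Generate a (hopefully sarcastic) reply and decode only the newly generated tokens.
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```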
|
{"pipeline_tag": "conversational"}
|
SilentMyuth/sarcastic-model
| null |
[
"transformers",
"conversational",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #conversational #endpoints_compatible #region-us
|
This model is a fine-tuned version of Microsoft/DialoGPT-medium trained to create sarcastic responses from the dataset "Sarcasm on Reddit" located here.
|
[] |
[
"TAGS\n#transformers #conversational #endpoints_compatible #region-us \n"
] |
text-generation
|
transformers
|
# My Awesome Model
|
{"tags": ["conversational"]}
|
SilentMyuth/stableben
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# My Awesome Model
|
[
"# My Awesome Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# My Awesome Model"
] |
text-generation
|
transformers
|
Hewlo
|
{}
|
SilentMyuth/stablejen
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Hewlo
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
conver = pipeline("conversational")
---
tags:
- conversational
---
# Harry Potter DialoGPT model
|
{}
|
Sin/DialoGPT-small-zai
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
conver = pipeline("conversational")
---
tags:
- conversational
---
# Harry Potter DialoGPT model
|
[
"# Harry potter DialoGPT model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry potter DialoGPT model"
] |
question-answering
|
transformers
|
# Muril Large Squad2
This model is fine-tuned for the QA task on SQuAD2 from the [MuRIL Large checkpoint](https://huggingface.co/google/muril-large-cased).
## Hyperparameters
```
Batch Size: 4
Grad Accumulation Steps = 8
Total epochs = 3
MLM Checkpoint = google/muril-large-cased
max_seq_len = 256
learning_rate = 1e-5
lr_schedule = LinearWarmup
warmup_ratio = 0.1
doc_stride = 128
```
## Squad 2 Evaluation stats:
Generated from [the official Squad2 evaluation script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/)
```json
{
"exact": 82.0180240882675,
"f1": 85.10110304685352,
"total": 11873,
"HasAns_exact": 81.6970310391363,
"HasAns_f1": 87.87203044454981,
"HasAns_total": 5928,
"NoAns_exact": 82.3380992430614,
"NoAns_f1": 82.3380992430614,
"NoAns_total": 5945
}
```
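## Usage
A minimal sketch with the `transformers` question-answering pipeline (the question and context below are illustrative only):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Sindhu/muril-large-squad2")

# Illustrative English example; MuRIL also covers 18 Indic languages.
result = qa(
    question="Where is the Taj Mahal located?",
    context=(
        "The Taj Mahal is an ivory-white marble mausoleum on the right bank "
        "of the river Yamuna in Agra, Uttar Pradesh, India."
    ),
)
print(result)
```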
## Limitations
MuRIL is specifically trained to work on 18 Indic languages and English. This model is not expected to perform well in any other language. See the MuRIL checkpoint for further details.
For any questions, you can reach out to me [on Twitter](https://twitter.com/batw0man)
|
{}
|
Sindhu/muril-large-squad2
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #endpoints_compatible #region-us
|
# Muril Large Squad2
This model is fine-tuned for the QA task on SQuAD2 from the MuRIL Large checkpoint.
## Hyperparameters
## Squad 2 Evaluation stats:
Generated from the official Squad2 evaluation script
## Limitations
MuRIL is specifically trained to work on 18 Indic languages and English. This model is not expected to perform well in any other language. See the MuRIL checkpoint for further details.
For any questions, you can reach out to me on Twitter
|
[
"# Muril Large Squad2\nThis model is finetuned for QA task on Squad2 from Muril Large checkpoint.",
"## Hyperparameters",
"## Squad 2 Evaluation stats:\nGenerated from the official Squad2 evaluation script",
"## Limitations\nMuRIL is specifically trained to work on 18 Indic languages and English. This model is not expected to perform well in any other languages. See the MuRIL checkpoint for further details.\n\nFor any questions, you can reach out to me on Twitter"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #endpoints_compatible #region-us \n",
"# Muril Large Squad2\nThis model is finetuned for QA task on Squad2 from Muril Large checkpoint.",
"## Hyperparameters",
"## Squad 2 Evaluation stats:\nGenerated from the official Squad2 evaluation script",
"## Limitations\nMuRIL is specifically trained to work on 18 Indic languages and English. This model is not expected to perform well in any other languages. See the MuRIL checkpoint for further details.\n\nFor any questions, you can reach out to me on Twitter"
] |
question-answering
|
transformers
|
# Rembert Squad2
This model is fine-tuned for the QA task on SQuAD2 from the [RemBERT checkpoint](https://huggingface.co/google/rembert).
## Hyperparameters
```
Batch Size: 4
Grad Accumulation Steps = 8
Total epochs = 3
MLM Checkpoint = "rembert"
max_seq_len = 256
learning_rate = 1e-5
lr_schedule = LinearWarmup
warmup_ratio = 0.1
doc_stride = 128
```
## Squad 2 Evaluation stats:
Metrics generated from [the official Squad2 evaluation script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/)
```json
{
"exact": 84.51107554956624,
"f1": 87.46644042781853,
"total": 11873,
"HasAns_exact": 80.97165991902834,
"HasAns_f1": 86.89086491219469,
"HasAns_total": 5928,
"NoAns_exact": 88.04037005887301,
"NoAns_f1": 88.04037005887301,
"NoAns_total": 5945
}
```
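## Usage
A minimal sketch with the `transformers` question-answering pipeline (the example texts are illustrative; `handle_impossible_answer` enables SQuAD2-style no-answer predictions):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Sindhu/rembert-squad2")

result = qa(
    question="Who wrote Don Quixote?",
    context=(
        "Don Quixote is a Spanish novel by Miguel de Cervantes, "
        "first published in two parts in 1605 and 1615."
    ),
    handle_impossible_answer=True,  # allow an empty answer for unanswerable questions
)
print(result)
```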
For any questions, you can reach out to me [on Twitter](https://twitter.com/batw0man)
|
{"language": ["multilingual"], "tags": ["question-answering"], "datasets": ["squad2"], "metrics": ["squad2"]}
|
Sindhu/rembert-squad2
| null |
[
"transformers",
"pytorch",
"rembert",
"question-answering",
"multilingual",
"dataset:squad2",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"multilingual"
] |
TAGS
#transformers #pytorch #rembert #question-answering #multilingual #dataset-squad2 #endpoints_compatible #region-us
|
# Rembert Squad2
This model is fine-tuned for the QA task on SQuAD2 from the RemBERT checkpoint.
## Hyperparameters
## Squad 2 Evaluation stats:
Metrics generated from the official Squad2 evaluation script
For any questions, you can reach out to me on Twitter
|
[
"# Rembert Squad2\nThis model is finetuned for QA task on Squad2 from Rembert checkpoint.",
"## Hyperparameters",
"## Squad 2 Evaluation stats:\n\nMetrics generated from the official Squad2 evaluation script\n\nFor any questions, you can reach out to me on Twitter"
] |
[
"TAGS\n#transformers #pytorch #rembert #question-answering #multilingual #dataset-squad2 #endpoints_compatible #region-us \n",
"# Rembert Squad2\nThis model is finetuned for QA task on Squad2 from Rembert checkpoint.",
"## Hyperparameters",
"## Squad 2 Evaluation stats:\n\nMetrics generated from the official Squad2 evaluation script\n\nFor any questions, you can reach out to me on Twitter"
] |
text-generation
|
transformers
|
# The Vampire Diaries DialoGPT Model
|
{"tags": ["conversational"]}
|
SirBastianXVII/DialoGPT-small-TVD
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# The Vampire Diaries DialoGPT Model
|
[
"# The Vampire Diaries DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# The Vampire Diaries DialoGPT Model"
] |
text-generation
|
transformers
|
# Trump Insults GPT Bot
|
{"tags": ["conversational"]}
|
Sired/DialoGPT-small-trumpbot
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Trump Insults GPT Bot
|
[
"# Trump Insults GPT Bot"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Trump Insults GPT Bot"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wangchanberta-base-att-spm-uncased-finetuned-th-squad
This model is a fine-tuned version of [airesearch/wangchanberta-base-att-spm-uncased](https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased) on the thaiqa_squad dataset.
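As a usage sketch (not part of the auto-generated card), the checkpoint can be queried through the `transformers` question-answering pipeline; the question and abridged context below are taken from the widget example in this card's metadata:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Sirinya/wangchanberta-th-squad_test1")

# Question and shortened context from the card's widget example.
question = "สโมสรเรอัลมาดริดก่อตั้งขึ้นในปีใด"
context = "สโมสรฟุตบอลเรอัลมาดริดเป็นสโมสรฟุตบอลที่มีชื่อเสียงในประเทศสเปน ก่อตั้งขึ้นใน ค.ศ. 1902"
print(qa(question=question, context=context))
```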
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "datasets": ["thaiqa_squad"], "widget": [{"text": "\u0e2a\u0e42\u0e21\u0e2a\u0e23\u0e40\u0e23\u0e2d\u0e31\u0e25\u0e21\u0e32\u0e14\u0e23\u0e34\u0e14\u0e01\u0e48\u0e2d\u0e15\u0e31\u0e49\u0e07\u0e02\u0e36\u0e49\u0e19\u0e43\u0e19\u0e1b\u0e35\u0e43\u0e14", "context": "\u0e2a\u0e42\u0e21\u0e2a\u0e23\u0e1f\u0e38\u0e15\u0e1a\u0e2d\u0e25\u0e40\u0e23\u0e2d\u0e31\u0e25\u0e21\u0e32\u0e14\u0e23\u0e34\u0e14 (\u0e2a\u0e40\u0e1b\u0e19: Real Madrid Club de F\u00fatbol) \u0e40\u0e1b\u0e47\u0e19\u0e2a\u0e42\u0e21\u0e2a\u0e23\u0e1f\u0e38\u0e15\u0e1a\u0e2d\u0e25\u0e17\u0e35\u0e48\u0e21\u0e35\u0e0a\u0e37\u0e48\u0e2d\u0e40\u0e2a\u0e35\u0e22\u0e07\u0e43\u0e19\u0e1b\u0e23\u0e30\u0e40\u0e17\u0e28\u0e2a\u0e40\u0e1b\u0e19 \u0e15\u0e31\u0e49\u0e07\u0e2d\u0e22\u0e39\u0e48\u0e17\u0e35\u0e48\u0e01\u0e23\u0e38\u0e07\u0e21\u0e32\u0e14\u0e23\u0e34\u0e14 \u0e1b\u0e31\u0e08\u0e08\u0e38\u0e1a\u0e31\u0e19\u0e40\u0e25\u0e48\u0e19\u0e2d\u0e22\u0e39\u0e48\u0e43\u0e19\u0e25\u0e32\u0e25\u0e34\u0e01\u0e32 \u0e01\u0e48\u0e2d\u0e15\u0e31\u0e49\u0e07\u0e02\u0e36\u0e49\u0e19\u0e43\u0e19 \u0e04.\u0e28. 1902 \u0e42\u0e14\u0e22\u0e40\u0e1b\u0e47\u0e19\u0e2b\u0e19\u0e36\u0e48\u0e07\u0e43\u0e19\u0e2a\u0e42\u0e21\u0e2a\u0e23\u0e17\u0e35\u0e48\u0e1b\u0e23\u0e30\u0e2a\u0e1a\u0e04\u0e27\u0e32\u0e21\u0e2a\u0e33\u0e40\u0e23\u0e47\u0e08\u0e21\u0e32\u0e01\u0e17\u0e35\u0e48\u0e2a\u0e38\u0e14\u0e43\u0e19\u0e17\u0e27\u0e35\u0e1b\u0e22\u0e38\u0e42\u0e23\u0e1b \u0e40\u0e23\u0e2d\u0e31\u0e25\u0e21\u0e32\u0e14\u0e23\u0e34\u0e14\u0e40\u0e1b\u0e47\u0e19\u0e2a\u0e21\u0e32\u0e0a\u0e34\u0e01\u0e02\u0e2d\u0e07\u0e01\u0e25\u0e38\u0e48\u0e21 14 \u0e0b\u0e36\u0e48\u0e07\u0e40\u0e1b\u0e47\u0e19\u0e01\u0e25\u0e38\u0e48\u0e21\u0e2a\u0e42\u0e21\u0e2a\u0e23\u0e0a\u0e31\u0e49\u0e19\u0e19\u0e33\u0e02\u0e2d\u0e07\u0e22\u0e39\u0e1f\u0e48\u0e32 \u0e41\u0e25\u0e30\u0e22\u0e31\u0e07\u0e40\u0e1b\u0e47\u0e19\u0e2b\u0e19\u0e36\u0e48\u0e07\u0e43\u0e19\u0e2a\u0e32\u0e21\u0e2a\u0e42\u0e21\u0e2a\u0e23\u0e1c\u0e39\u0e49\u0e23\u0e48\u0e27\u0e21\u0e01\u0e48\u0e2d\u0e15\u0e31\u0e49\u0e07\u0e25\u0e32\u0e25\u0e34\u0e01\u0e32\u0e0b\u0e36\u0e48\u0e07\u0e44\u0e21\u0e48\u0e40\u0e04\u0e22\u0e15\u0e01\u0e0a\u0e31\u0e49\u0e19\u0e08\u0e32\u0e01\u0e25\u0e35\u0e01\u0e2a\u0e39\u0e07\u0e2a\u0e38\u0e14\u0e19\u0e31\u0e1a\u0e15\u0e31\u0e49\u0e07\u0e41\u0e15\u0e48 \u0e04.\u0e28. 
1929 \u0e21\u0e35\u0e04\u0e39\u0e48\u0e2d\u0e23\u0e34\u0e04\u0e37\u0e2d\u0e2a\u0e42\u0e21\u0e2a\u0e23\u0e1a\u0e32\u0e23\u0e4c\u0e40\u0e0b\u0e42\u0e25\u0e19\u0e32 \u0e41\u0e25\u0e30 \u0e2d\u0e31\u0e15\u0e40\u0e25\u0e15\u0e34\u0e42\u0e01\u0e40\u0e14\u0e21\u0e32\u0e14\u0e23\u0e34\u0e14 \u0e21\u0e35\u0e2a\u0e19\u0e32\u0e21\u0e40\u0e2b\u0e22\u0e49\u0e32\u0e04\u0e37\u0e2d\u0e0b\u0e32\u0e19\u0e40\u0e15\u0e35\u0e22\u0e42\u0e01 \u0e40\u0e1a\u0e23\u0e4c\u0e19\u0e32\u0e40\u0e1a\u0e27"}, {"text": "\u0e23\u0e31\u0e01\u0e1a\u0e35\u0e49\u0e16\u0e37\u0e2d\u0e01\u0e33\u0e40\u0e19\u0e34\u0e14\u0e02\u0e36\u0e49\u0e19\u0e43\u0e19\u0e1b\u0e35\u0e43\u0e14", "context": "\u0e23\u0e31\u0e01\u0e1a\u0e35\u0e49 \u0e40\u0e1b\u0e47\u0e19\u0e01\u0e35\u0e2c\u0e32\u0e0a\u0e19\u0e34\u0e14\u0e2b\u0e19\u0e36\u0e48\u0e07\u0e16\u0e37\u0e2d\u0e01\u0e33\u0e40\u0e19\u0e34\u0e14\u0e02\u0e36\u0e49\u0e19\u0e08\u0e32\u0e01\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e23\u0e31\u0e01\u0e1a\u0e35\u0e49 (Rugby School) \u0e43\u0e19\u0e40\u0e21\u0e37\u0e2d\u0e07\u0e23\u0e31\u0e01\u0e1a\u0e35\u0e49 \u0e43\u0e19\u0e40\u0e02\u0e15\u0e27\u0e2d\u0e23\u0e4c\u0e27\u0e34\u0e01\u0e40\u0e0a\u0e35\u0e22\u0e23\u0e4c \u0e1b\u0e23\u0e30\u0e40\u0e17\u0e28\u0e2d\u0e31\u0e07\u0e01\u0e24\u0e29 \u0e40\u0e23\u0e34\u0e48\u0e21\u0e15\u0e49\u0e19\u0e08\u0e32\u0e01 \u0e43\u0e19\u0e1b\u0e35 \u0e04.\u0e28. 1826 \u0e02\u0e13\u0e30\u0e19\u0e31\u0e49\u0e19\u0e40\u0e1b\u0e47\u0e19\u0e01\u0e32\u0e23\u0e41\u0e02\u0e48\u0e07\u0e02\u0e31\u0e19 \u0e1f\u0e38\u0e15\u0e1a\u0e2d\u0e25 \u0e20\u0e32\u0e22\u0e43\u0e19\u0e02\u0e2d\u0e07\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e23\u0e31\u0e01\u0e1a\u0e35\u0e49 \u0e0b\u0e36\u0e48\u0e07\u0e15\u0e31\u0e49\u0e07\u0e2d\u0e22\u0e39\u0e48 \u0e13 \u0e40\u0e21\u0e37\u0e2d\u0e07\u0e23\u0e31\u0e01\u0e1a\u0e35\u0e49 \u0e1b\u0e23\u0e30\u0e40\u0e17\u0e28\u0e2d\u0e31\u0e07\u0e01\u0e24\u0e29 \u0e1c\u0e39\u0e49\u0e40\u0e25\u0e48\u0e19\u0e04\u0e19\u0e2b\u0e19\u0e36\u0e48\u0e07\u0e0a\u0e37\u0e48\u0e2d \u0e27\u0e34\u0e25\u0e40\u0e25\u0e35\u0e22\u0e21 \u0e40\u0e27\u0e1a\u0e1a\u0e4c \u0e40\u0e2d\u0e25\u0e25\u0e34\u0e2a (William Webb Ellis) \u0e44\u0e14\u0e49\u0e17\u0e33\u0e1c\u0e34\u0e14\u0e01\u0e15\u0e34\u0e01\u0e32\u0e01\u0e32\u0e23\u0e41\u0e02\u0e48\u0e07\u0e02\u0e31\u0e19\u0e17\u0e35\u0e48\u0e27\u0e32\u0e07\u0e44\u0e27\u0e49 \u0e42\u0e14\u0e22\u0e27\u0e34\u0e48\u0e07\u0e2d\u0e38\u0e49\u0e21\u0e25\u0e39\u0e01\u0e1a\u0e2d\u0e25\u0e0b\u0e36\u0e48\u0e07\u0e15\u0e31\u0e27\u0e40\u0e02\u0e32\u0e40\u0e2d\u0e07\u0e44\u0e21\u0e48\u0e44\u0e14\u0e49\u0e40\u0e1b\u0e47\u0e19\u0e1c\u0e39\u0e49\u0e40\u0e25\u0e48\u0e19\u0e43\u0e19\u0e15\u0e33\u0e41\u0e2b\u0e19\u0e48\u0e07\u0e1c\u0e39\u0e49\u0e23\u0e31\u0e01\u0e29\u0e32\u0e1b\u0e23\u0e30\u0e15\u0e39 \u0e41\u0e25\u0e30\u0e44\u0e14\u0e49\u0e27\u0e34\u0e48\u0e07\u0e2d\u0e38\u0e49\u0e21\u0e25\u0e39\u0e01\u0e1a\u0e2d\u0e25\u0e44\u0e1b\u0e08\u0e19\u0e16\u0e36\u0e07\u0e40\u0e2a\u0e49\u0e19\u0e1b\u0e23\u0e30\u0e15\u0e39\u0e1d\u0e48\u0e32\u0e22\u0e15\u0e23\u0e07\u0e02\u0e49\u0e32\u0e21 \u0e40\u0e02\u0e32\u0e08\u0e30\u0e08\u0e07\u0e43\u0e08\u0e2b\u0e23\u0e37\u0e2d\u0e44\u0e21\u0e48\u0e01\u0e47\u0e15\u0e32\u0e21\u0e41\u0e15\u0e48 \u0e41\u0e15\u0e48\u0e01\u0e32\u0e23\u0e40\u0e25\u0e48\u0e19\u0e17\u0e35\u0e48\u0e19\u0e2d\u0e01\u0e25\u0e39\u0e48\u0e19\u0e2d\u0e01\u0e17\u0e32\u0e07\u0e02\u0e2d\u0e07\u0e40\u0e02\u0e32\u0e44\u0e14\u0e49\u0e40\u0e1b\u0e47\u0e19\u0e17\u0e35\u0e48\u0e1e\u0e39\u0e14\u0e16\u0e36\u0e07\u0e2d\u0e22\u0e48\u0e32\u0e07\u0e41\u0e1e\u0e23\u0e48\u0e2b\u0e25\u0e32\u0e22 
\u0e43\u0e19\u0e2b\u0e21\u0e39\u0e48\u0e1c\u0e39\u0e49\u0e40\u0e25\u0e48\u0e19\u0e41\u0e25\u0e30\u0e1c\u0e39\u0e49\u0e14\u0e39\u0e08\u0e19\u0e41\u0e1e\u0e23\u0e48\u0e01\u0e23\u0e30\u0e08\u0e32\u0e22\u0e44\u0e1b\u0e15\u0e32\u0e21\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e15\u0e48\u0e32\u0e07\u0e46\u0e43\u0e19\u0e2d\u0e31\u0e07\u0e01\u0e24\u0e29 \u0e42\u0e14\u0e22\u0e40\u0e09\u0e1e\u0e32\u0e30\u0e43\u0e19\u0e2b\u0e21\u0e39\u0e48\u0e19\u0e31\u0e01\u0e40\u0e23\u0e35\u0e22\u0e19\u0e02\u0e2d\u0e07\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e40\u0e04\u0e21\u0e1a\u0e23\u0e34\u0e14\u0e08\u0e4c \u0e44\u0e14\u0e49\u0e19\u0e33\u0e40\u0e2d\u0e32\u0e27\u0e34\u0e18\u0e35\u0e01\u0e32\u0e23\u0e40\u0e25\u0e48\u0e19\u0e02\u0e2d\u0e07 \u0e19\u0e32\u0e22\u0e40\u0e2d\u0e25\u0e25\u0e35\u0e2a \u0e44\u0e1b\u0e08\u0e31\u0e14\u0e01\u0e32\u0e23\u0e41\u0e02\u0e48\u0e07\u0e02\u0e31\u0e19\u0e42\u0e14\u0e22\u0e40\u0e23\u0e35\u0e22\u0e01\u0e0a\u0e37\u0e48\u0e2d\u0e40\u0e01\u0e21\u0e0a\u0e19\u0e34\u0e14\u0e43\u0e2b\u0e21\u0e48\u0e19\u0e35\u0e49\u0e27\u0e48\u0e32 \u0e23\u0e31\u0e01\u0e1a\u0e35\u0e49\u0e40\u0e01\u0e21\u0e2a\u0e4c (Rugby Games) \u0e20\u0e32\u0e22\u0e2b\u0e25\u0e31\u0e07\u0e08\u0e32\u0e01\u0e19\u0e31\u0e49\u0e19\u0e01\u0e47\u0e40\u0e1b\u0e47\u0e19\u0e17\u0e35\u0e48\u0e19\u0e34\u0e22\u0e21\u0e40\u0e25\u0e48\u0e19\u0e01\u0e31\u0e19\u0e21\u0e32\u0e01\u0e02\u0e36\u0e49\u0e19 \u0e17\u0e31\u0e49\u0e07\u0e44\u0e14\u0e49\u0e21\u0e35\u0e01\u0e32\u0e23\u0e40\u0e1b\u0e25\u0e35\u0e48\u0e22\u0e19\u0e41\u0e1b\u0e25\u0e07\u0e41\u0e01\u0e49\u0e44\u0e02\u0e01\u0e32\u0e23\u0e40\u0e25\u0e48\u0e19\u0e40\u0e23\u0e37\u0e48\u0e2d\u0e22\u0e21\u0e32\u0e43\u0e19\u0e1b\u0e23\u0e30\u0e40\u0e17\u0e28\u0e2d\u0e31\u0e07\u0e01\u0e24\u0e29"}], "model-index": [{"name": "wangchanberta-base-att-spm-uncased-finetuned-th-squad", "results": []}]}
|
Sirinya/wangchanberta-th-squad_test1
| null |
[
"transformers",
"pytorch",
"tensorboard",
"camembert",
"question-answering",
"generated_from_trainer",
"dataset:thaiqa_squad",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #camembert #question-answering #generated_from_trainer #dataset-thaiqa_squad #endpoints_compatible #region-us
|
# wangchanberta-base-att-spm-uncased-finetuned-th-squad
This model is a fine-tuned version of airesearch/wangchanberta-base-att-spm-uncased on the thaiqa_squad dataset.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
[
"# wangchanberta-base-att-spm-uncased-finetuned-th-squad\n\nThis model is a fine-tuned version of airesearch/wangchanberta-base-att-spm-uncased on the thaiqa_squad dataset.",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Framework versions\n\n- Transformers 4.13.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.16.1\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #camembert #question-answering #generated_from_trainer #dataset-thaiqa_squad #endpoints_compatible #region-us \n",
"# wangchanberta-base-att-spm-uncased-finetuned-th-squad\n\nThis model is a fine-tuned version of airesearch/wangchanberta-base-att-spm-uncased on the thaiqa_squad dataset.",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Framework versions\n\n- Transformers 4.13.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.16.1\n- Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
# DialoGPT trained on customized spiritual texts mixed with various character personalities.
This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on the energy complex known as Ra. Some text has been changed from the original with the intention of making it fit our discord server better. I've also trained it on various channeling experiences. I'm testing mixing this dataset with characters from popular shows with the intention of creating more diverse dialogue.
I built a Discord AI chatbot based on this model for internal use within Siyris, Inc.
Chat with the model:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("Siyris/DialoGPT-medium-SIY")
model = AutoModelWithLMHead.from_pretrained("Siyris/DialoGPT-medium-SIY")

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in Pytorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # print(new_user_input_ids)
    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 200 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )
    # pretty print the last output tokens from the bot
    print("SIY: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
{"license": "mit", "tags": ["conversational"], "thumbnail": "https://huggingface.co/front/thumbnails/dialogpt.png"}
|
Siyris/DialoGPT-medium-SIY
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# DialoGPT trained on customized spiritual texts mixed with various character personalities.
This is an instance of microsoft/DialoGPT-medium trained on the energy complex known as Ra. Some text has been changed from the original with the intention of making it fit our discord server better. I've also trained it on various channeling experiences. I'm testing mixing this dataset with characters from popular shows with the intention of creating more diverse dialogue.
I built a Discord AI chatbot based on this model for internal use within Siyris, Inc.
Chat with the model:
|
[
"# DialoGPT Trained on a customized various spiritual texts and mixed with various different character personalities.\nThis is an instance of microsoft/DialoGPT-medium trained on the energy complex known as Ra. Some text has been changed from the original with the intention of making it fit our discord server better. I've also trained it on various channeling experiences. I'm testing mixing this dataset with character from popular shows with the intention of creating a more diverse dialogue.\nI built a Discord AI chatbot based on this model for internal use within Siyris, Inc.\nChat with the model:"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# DialoGPT Trained on a customized various spiritual texts and mixed with various different character personalities.\nThis is an instance of microsoft/DialoGPT-medium trained on the energy complex known as Ra. Some text has been changed from the original with the intention of making it fit our discord server better. I've also trained it on various channeling experiences. I'm testing mixing this dataset with character from popular shows with the intention of creating a more diverse dialogue.\nI built a Discord AI chatbot based on this model for internal use within Siyris, Inc.\nChat with the model:"
] |
text-generation
|
transformers
|
# DialoGPT Trained on a customized version of The Law of One.
This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on the energy complex known as Ra. Some text has been changed from the original with the intention of making it fit our discord server better.
I built a Discord AI chatbot based on this model for internal use within Siyris, Inc.
Chat with the model:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("Siyris/SIY")
model = AutoModelWithLMHead.from_pretrained("Siyris/SIY")

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in Pytorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # print(new_user_input_ids)
    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 200 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )
    # pretty print the last output tokens from the bot
    print("SIY: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
{"license": "mit", "tags": ["conversational"], "thumbnail": "https://huggingface.co/front/thumbnails/dialogpt.png"}
|
Siyris/SIY
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# DialoGPT Trained on a customized version of The Law of One.
This is an instance of microsoft/DialoGPT-medium trained on the energy complex known as Ra. Some text has been changed from the original with the intention of making it fit our discord server better.
I built a Discord AI chatbot based on this model for internal use within Siyris, Inc.
Chat with the model:
|
[
"# DialoGPT Trained on a customized version of The Law of One.\nThis is an instance of microsoft/DialoGPT-medium trained on the energy complex known as Ra. Some text has been changed from the original with the intention of making it fit our discord server better.\nI built a Discord AI chatbot based on this model for internal use within Siyris, Inc.\nChat with the model:"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# DialoGPT Trained on a customized version of The Law of One.\nThis is an instance of microsoft/DialoGPT-medium trained on the energy complex known as Ra. Some text has been changed from the original with the intention of making it fit our discord server better.\nI built a Discord AI chatbot based on this model for internal use within Siyris, Inc.\nChat with the model:"
] |
question-answering
|
transformers
|
# BERT base Japanese - JaQuAD
## Description
A Japanese Question Answering model fine-tuned on [JaQuAD](https://huggingface.co/datasets/SkelterLabsInc/JaQuAD).
Please refer to [BERT base Japanese](https://huggingface.co/cl-tohoku/bert-base-japanese) for details about the pre-training model.
The code for the fine-tuning is available at [SkelterLabsInc/JaQuAD](https://github.com/SkelterLabsInc/JaQuAD).
## Evaluation results
On the development set.
```shell
{"f1": 77.35, "exact_match": 61.01}
```
On the test set.
```shell
{"f1": 78.92, "exact_match": 63.38}
```
## Usage
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

question = 'アレクサンダー・グラハム・ベルは、どこで生まれたの?'
context = 'アレクサンダー・グラハム・ベルは、スコットランド生まれの科学者、発明家、工学者である。世界初の実用的電話の発明で知られている。'
model = AutoModelForQuestionAnswering.from_pretrained(
'SkelterLabsInc/bert-base-japanese-jaquad')
tokenizer = AutoTokenizer.from_pretrained(
'SkelterLabsInc/bert-base-japanese-jaquad')
inputs = tokenizer(
question, context, add_special_tokens=True, return_tensors="pt")
input_ids = inputs["input_ids"].tolist()[0]
outputs = model(**inputs)
answer_start_scores = outputs.start_logits
answer_end_scores = outputs.end_logits
# Get the most likely beginning of answer with the argmax of the score.
answer_start = torch.argmax(answer_start_scores)
# Get the most likely end of answer with the argmax of the score.
# 1 is added to `answer_end` because the index pointed by score is inclusive.
answer_end = torch.argmax(answer_end_scores) + 1
answer = tokenizer.convert_tokens_to_string(
tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))
# answer = 'スコットランド'
```
## License
The fine-tuned model is licensed under the [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/) license.
## Citation
```bibtex
@misc{so2022jaquad,
title={{JaQuAD: Japanese Question Answering Dataset for Machine Reading Comprehension}},
author={ByungHoon So and Kyuhong Byun and Kyungwon Kang and Seongjin Cho},
year={2022},
eprint={2202.01764},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-sa-3.0", "tags": ["question-answering", "extractive-qa"], "datasets": ["SkelterLabsInc/JaQuAD"], "metrics": ["Exact match", "F1 score"], "pipeline_tag": ["None"]}
|
SkelterLabsInc/bert-base-japanese-jaquad
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"extractive-qa",
"ja",
"dataset:SkelterLabsInc/JaQuAD",
"arxiv:2202.01764",
"license:cc-by-sa-3.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2202.01764"
] |
[
"ja"
] |
TAGS
#transformers #pytorch #bert #question-answering #extractive-qa #ja #dataset-SkelterLabsInc/JaQuAD #arxiv-2202.01764 #license-cc-by-sa-3.0 #endpoints_compatible #region-us
|
# BERT base Japanese - JaQuAD
## Description
A Japanese Question Answering model fine-tuned on JaQuAD.
Please refer to BERT base Japanese for details about the pre-training model.
The code for the fine-tuning is available at SkelterLabsInc/JaQuAD.
## Evaluation results
On the development set.
On the test set.
## Usage
## License
The fine-tuned model is licensed under the CC BY-SA 3.0 license.
|
[
"# BERT base Japanese - JaQuAD",
"## Description\n\nA Japanese Question Answering model fine-tuned on JaQuAD.\nPlease refer BERT base Japanese for details about the pre-training model.\nThe codes for the fine-tuning are available at SkelterLabsInc/JaQuAD",
"## Evaluation results\n\nOn the development set.\n\n\n\nOn the test set.",
"## Usage",
"## License\n\nThe fine-tuned model is licensed under the CC BY-SA 3.0 license."
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #extractive-qa #ja #dataset-SkelterLabsInc/JaQuAD #arxiv-2202.01764 #license-cc-by-sa-3.0 #endpoints_compatible #region-us \n",
"# BERT base Japanese - JaQuAD",
"## Description\n\nA Japanese Question Answering model fine-tuned on JaQuAD.\nPlease refer BERT base Japanese for details about the pre-training model.\nThe codes for the fine-tuning are available at SkelterLabsInc/JaQuAD",
"## Evaluation results\n\nOn the development set.\n\n\n\nOn the test set.",
"## Usage",
"## License\n\nThe fine-tuned model is licensed under the CC BY-SA 3.0 license."
] |
text2text-generation
|
transformers
|
**Model Overview**
This is the model presented in the paper ["ParaDetox: Detoxification with Parallel Data"](https://aclanthology.org/2022.acl-long.469/).
The model itself is a [BART (base)](https://huggingface.co/facebook/bart-base) model trained on the parallel detoxification dataset ParaDetox, achieving SOTA results for the detoxification task. More details, code and data can be found [here](https://github.com/skoltech-nlp/paradetox).
**How to use**
```python
from transformers import BartForConditionalGeneration, AutoTokenizer
base_model_name = 'facebook/bart-base'
model_name = 'SkolkovoInstitute/bart-base-detox'
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)
```
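The snippet above only loads the model; the generation step below is a minimal sketch (the input sentence and decoding settings are illustrative, not prescribed by the paper):
```python
from transformers import BartForConditionalGeneration, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('facebook/bart-base')
model = BartForConditionalGeneration.from_pretrained('SkolkovoInstitute/bart-base-detox')

# Rewrite a toxic sentence into a neutral paraphrase.
toxic = "this is a dumb and useless comment"  # illustrative input
inputs = tokenizer(toxic, return_tensors='pt')
outputs = model.generate(**inputs, num_beams=5, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```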
**Citation**
```
@inproceedings{logacheva-etal-2022-paradetox,
title = "{P}ara{D}etox: Detoxification with Parallel Data",
author = "Logacheva, Varvara and
Dementieva, Daryna and
Ustyantsev, Sergey and
Moskovskiy, Daniil and
Dale, David and
Krotova, Irina and
Semenov, Nikita and
Panchenko, Alexander",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.469",
pages = "6804--6818",
abstract = "We present a novel pipeline for the collection of parallel data for the detoxification task. We collect non-toxic paraphrases for over 10,000 English toxic sentences. We also show that this pipeline can be used to distill a large existing corpus of paraphrases to get toxic-neutral sentence pairs. We release two parallel corpora which can be used for the training of detoxification models. To the best of our knowledge, these are the first parallel datasets for this task.We describe our pipeline in detail to make it fast to set up for a new language or domain, thus contributing to faster and easier development of new parallel resources.We train several detoxification models on the collected data and compare them with several baselines and state-of-the-art unsupervised approaches. We conduct both automatic and manual evaluations. All models trained on parallel data outperform the state-of-the-art unsupervised models by a large margin. This suggests that our novel datasets can boost the performance of detoxification systems.",
}
```
|
{"language": ["en"], "license": "openrail++", "tags": ["detoxification"], "datasets": ["s-nlp/paradetox"], "licenses": ["cc-by-nc-sa"]}
|
s-nlp/bart-base-detox
| null |
[
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"detoxification",
"en",
"dataset:s-nlp/paradetox",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #bart #text2text-generation #detoxification #en #dataset-s-nlp/paradetox #license-openrail++ #autotrain_compatible #endpoints_compatible #region-us
|
Model Overview
This is the model presented in the paper "ParaDetox: Detoxification with Parallel Data".
The model itself is a BART (base) model trained on the parallel detoxification dataset ParaDetox, achieving SOTA results for the detoxification task. More details, code and data can be found here.
How to use
Citation
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #bart #text2text-generation #detoxification #en #dataset-s-nlp/paradetox #license-openrail++ #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation
|
transformers
|
# Model Details
This is a conditional language model based on [gpt2-medium](https://huggingface.co/gpt2-medium/) but with a vocabulary from [t5-base](https://huggingface.co/t5-base), for compatibility with T5-based paraphrasers such as [t5-paranmt-detox](https://huggingface.co/SkolkovoInstitute/t5-paranmt-detox). The model is conditional on two styles, `toxic` and `normal`, and was fine-tuned on the dataset from the Jigsaw [toxic comment classification challenge](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge).
The model was trained for the paper [Text Detoxification using Large Pre-trained Neural Models](https://arxiv.org/abs/2109.08914) (Dale et al, 2021) that describes its possible usage in more detail.
An example of its use and the code for its training is given in https://github.com/skoltech-nlp/detox.
## Model Description
- **Developed by:** SkolkovoInstitute
- **Model type:** Conditional Text Generation
- **Language:** English
- **Related Models:**
- **Parent Model:** [gpt2-medium](https://huggingface.co/gpt2-medium/)
- **Source of vocabulary:** [t5-base](https://huggingface.co/t5-base)
- **Resources for more information:**
- The paper [Text Detoxification using Large Pre-trained Neural Models](https://arxiv.org/abs/2109.08914)
- Its repository https://github.com/skoltech-nlp/detox.
# Uses
The model is intended for use as a discriminator in a text detoxification pipeline using the ParaGeDi approach (see [the paper](https://arxiv.org/abs/2109.08914) for more details). It can also be used for text generation conditional on toxic or non-toxic style, but we do not know how to condition it on things other than toxicity, so we do not recommend this usage. Another possible use is as a toxicity classifier (using the Bayes rule), but the model is not expected to perform better than e.g. a BERT-based standard classifier.
# Bias, Risks, and Limitations
The model inherits all the risks of its parent model, [gpt2-medium](https://huggingface.co/gpt2-medium/). It also inherits all the biases of the [Jigsaw dataset](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge) on which it was fine-tuned. The model is intended to be conditional on style, but in fact it does not clearly separate the concepts of style and content, so it might regard some texts as toxic or safe based not on the style, but on their topics or keywords.
# Training Details
See the paper [Text Detoxification using Large Pre-trained Neural Models](https://arxiv.org/abs/2109.08914) and [the associated code](https://github.com/s-nlp/detox/tree/main/emnlp2021/style_transfer/paraGeDi).
# Evaluation
The model has not been evaluated on its own, only as part of a ParaGeDi text detoxification pipeline (see [the paper](https://arxiv.org/abs/2109.08914)).
# Citation
**BibTeX:**
```
@inproceedings{dale-etal-2021-text,
title = "Text Detoxification using Large Pre-trained Neural Models",
author = "Dale, David and
Voronov, Anton and
Dementieva, Daryna and
Logacheva, Varvara and
Kozlova, Olga and
Semenov, Nikita and
Panchenko, Alexander",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.629",
pages = "7979--7996",
}
```
|
{"language": ["en"], "tags": ["text-generation", "conditional-text-generation"]}
|
s-nlp/gpt2-base-gedi-detoxification
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conditional-text-generation",
"en",
"arxiv:2109.08914",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.08914"
] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conditional-text-generation #en #arxiv-2109.08914 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Details
This is a conditional language model based on gpt2-medium but with a vocabulary from t5-base, for compatibility with T5-based paraphrasers such as t5-paranmt-detox. The model is conditional on two styles, 'toxic' and 'normal', and was fine-tuned on the dataset from the Jigsaw toxic comment classification challenge.
The model was trained for the paper Text Detoxification using Large Pre-trained Neural Models (Dale et al, 2021) that describes its possible usage in more detail.
An example of its use and the code for its training is given in URL
## Model Description
- Developed by: SkolkovoInstitute
- Model type: Conditional Text Generation
- Language: English
- Related Models:
- Parent Model: gpt2-medium
- Source of vocabulary: t5-base
- Resources for more information:
- The paper Text Detoxification using Large Pre-trained Neural Models
- Its repository URL
# Uses
The model is intended for usage as a discriminator in a text detoxification pipeline using the ParaGeDi approach (see the paper for more details). It can also be used for text generation conditional on toxic or non-toxic style, but we do not know how to condition it on the things other than toxicity, so we do not recommend this usage. Another possible use is as a toxicity classifier (using the Bayes rule), but the model is not expected to perform better than e.g. a BERT-based standard classifier.
# Bias, Risks, and Limitations
The model inherits all the risks of its parent model, gpt2-medium. It also inherits all the biases of the Jigsaw dataset on which it was fine-tuned. The model is intended to be conditional on style, but in fact it does not clearly separate the concepts of style and content, so it might regard some texts as toxic or safe based not on the style, but on their topics or keywords.
# Training Details
See the paper Text Detoxification using Large Pre-trained Neural Models and the associated code.
# Evaluation
The model has not been evaluated on its own, only as a part as a ParaGeDi text detoxification pipeline (see the paper).
BibTeX:
|
[
"# Model Details\n \n\nThis is a conditional language model based on gpt2-medium but with a vocabulary from t5-base, for compatibility with T5-based paraphrasers such as t5-paranmt-detox. The model is conditional on two styles, 'toxic' and 'normal', and was fine-tuned on the dataset from the Jigsaw toxic comment classification challenge.\n\nThe model was trained for the paper Text Detoxification using Large Pre-trained Neural Models (Dale et al, 2021) that describes its possible usage in more detail. \n\nAn example of its use and the code for its training is given in URL",
"## Model Description\n \n- Developed by: SkolkovoInstitute\n- Model type: Conditional Text Generation\n- Language: English\n- Related Models:\n - Parent Model: gpt2-medium\n - Source of vocabulary: t5-base\n- Resources for more information: \n - The paper Text Detoxification using Large Pre-trained Neural Models\n - Its repository URL",
"# Uses\n\nThe model is intended for usage as a discriminator in a text detoxification pipeline using the ParaGeDi approach (see the paper for more details). It can also be used for text generation conditional on toxic or non-toxic style, but we do not know how to condition it on the things other than toxicity, so we do not recommend this usage. Another possible use is as a toxicity classifier (using the Bayes rule), but the model is not expected to perform better than e.g. a BERT-based standard classifier.",
"# Bias, Risks, and Limitations\nThe model inherits all the risks of its parent model, gpt2-medium. It also inherits all the biases of the Jigsaw dataset on which it was fine-tuned. The model is intended to be conditional on style, but in fact it does not clearly separate the concepts of style and content, so it might regard some texts as toxic or safe based not on the style, but on their topics or keywords.",
"# Training Details\nSee the paper Text Detoxification using Large Pre-trained Neural Models and the associated code.",
"# Evaluation\nThe model has not been evaluated on its own, only as a part as a ParaGeDi text detoxification pipeline (see the paper).\n\nBibTeX:"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conditional-text-generation #en #arxiv-2109.08914 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Details\n \n\nThis is a conditional language model based on gpt2-medium but with a vocabulary from t5-base, for compatibility with T5-based paraphrasers such as t5-paranmt-detox. The model is conditional on two styles, 'toxic' and 'normal', and was fine-tuned on the dataset from the Jigsaw toxic comment classification challenge.\n\nThe model was trained for the paper Text Detoxification using Large Pre-trained Neural Models (Dale et al, 2021) that describes its possible usage in more detail. \n\nAn example of its use and the code for its training is given in URL",
"## Model Description\n \n- Developed by: SkolkovoInstitute\n- Model type: Conditional Text Generation\n- Language: English\n- Related Models:\n - Parent Model: gpt2-medium\n - Source of vocabulary: t5-base\n- Resources for more information: \n - The paper Text Detoxification using Large Pre-trained Neural Models\n - Its repository URL",
"# Uses\n\nThe model is intended for usage as a discriminator in a text detoxification pipeline using the ParaGeDi approach (see the paper for more details). It can also be used for text generation conditional on toxic or non-toxic style, but we do not know how to condition it on the things other than toxicity, so we do not recommend this usage. Another possible use is as a toxicity classifier (using the Bayes rule), but the model is not expected to perform better than e.g. a BERT-based standard classifier.",
"# Bias, Risks, and Limitations\nThe model inherits all the risks of its parent model, gpt2-medium. It also inherits all the biases of the Jigsaw dataset on which it was fine-tuned. The model is intended to be conditional on style, but in fact it does not clearly separate the concepts of style and content, so it might regard some texts as toxic or safe based not on the style, but on their topics or keywords.",
"# Training Details\nSee the paper Text Detoxification using Large Pre-trained Neural Models and the associated code.",
"# Evaluation\nThe model has not been evaluated on its own, only as a part as a ParaGeDi text detoxification pipeline (see the paper).\n\nBibTeX:"
] |
text-classification
|
transformers
|
The model has been trained to predict for English sentences, whether they are formal or informal.
Base model: `roberta-base`
Datasets: [GYAFC](https://github.com/raosudha89/GYAFC-corpus) from [Rao and Tetreault, 2018](https://aclanthology.org/N18-1012) and the [online formality corpus](http://www.seas.upenn.edu/~nlp/resources/formality-corpus.tgz) from [Pavlick and Tetreault, 2016](https://aclanthology.org/Q16-1005).
Data augmentation: changing texts to upper or lower case; removing all punctuation; adding a dot at the end of a sentence. This augmentation was applied because otherwise the model becomes over-reliant on punctuation and capitalization and does not pay enough attention to other features.
Loss: binary classification (on GYAFC), in-batch ranking (on PT data).
Performance metrics on the test data:
| dataset | ROC AUC | precision | recall | fscore | accuracy | Spearman |
|----------------------------------------------|---------|-----------|--------|--------|----------|------------|
| GYAFC | 0.9779 | 0.90 | 0.91 | 0.90 | 0.9087 | 0.8233 |
| GYAFC normalized (lowercase + remove punct.) | 0.9234 | 0.85 | 0.81 | 0.82 | 0.8218 | 0.7294 |
| P&T subset | Spearman R |
|------------|------------|
| news       | 0.4003     |
| answers    | 0.7500     |
| blog       | 0.7334     |
| email      | 0.7606     |
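A minimal usage sketch (assuming the checkpoint exposes the standard sequence-classification interface; the formal/informal class names are read from the model config rather than hard-coded):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "s-nlp/roberta-base-formality-ranker"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "I would be grateful if you could reply at your earliest convenience."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

# print the probability of each class under its configured name
for idx, name in model.config.id2label.items():
    print(name, round(probs[idx].item(), 3))
```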
## Citation
If you are using the model in your research, please cite the following
[paper](https://doi.org/10.1007/978-3-031-35320-8_4) where it was introduced:
```
@InProceedings{10.1007/978-3-031-35320-8_4,
author="Babakov, Nikolay
and Dale, David
and Gusev, Ilya
and Krotova, Irina
and Panchenko, Alexander",
editor="M{\'e}tais, Elisabeth
and Meziane, Farid
and Sugumaran, Vijayan
and Manning, Warren
and Reiff-Marganiec, Stephan",
title="Don't Lose the Message While Paraphrasing: A Study on Content Preserving Style Transfer",
booktitle="Natural Language Processing and Information Systems",
year="2023",
publisher="Springer Nature Switzerland",
address="Cham",
pages="47--61",
isbn="978-3-031-35320-8"
}
```
## Licensing Information
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png
|
{"language": ["en"], "license": "cc-by-nc-sa-4.0", "tags": ["formality"], "datasets": ["GYAFC", "Pavlick-Tetreault-2016"]}
|
s-nlp/roberta-base-formality-ranker
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"formality",
"en",
"dataset:GYAFC",
"dataset:Pavlick-Tetreault-2016",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #roberta #text-classification #formality #en #dataset-GYAFC #dataset-Pavlick-Tetreault-2016 #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
The model has been trained to predict for English sentences, whether they are formal or informal.
Base model: 'roberta-base'
Datasets: GYAFC from Rao and Tetreault, 2018 and online formality corpus from Pavlick and Tetreault, 2016.
Data augmentation: changing texts to upper or lower case; removing all punctuation, adding dot at the end of a sentence. It was applied because otherwise the model is over-reliant on punctuation and capitalization and does not pay enough attention to other features.
Loss: binary classification (on GYAFC), in-batch ranking (on PT data).
Performance metrics on the test data:
If you are using the model in your research, please cite the following
paper where it was introduced:
Licensing Information
---------------------
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](URL).
[](URL)
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #roberta #text-classification #formality #en #dataset-GYAFC #dataset-Pavlick-Tetreault-2016 #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
text-classification
|
transformers
|
## Toxicity Classification Model (but for the first part of the data)
This model is trained for the toxicity classification task. The dataset used for training is the merge of the English parts of the three datasets by **Jigsaw** ([Jigsaw 2018](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge), [Jigsaw 2019](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification), [Jigsaw 2020](https://www.kaggle.com/c/jigsaw-multilingual-toxic-comment-classification)), containing around 2 million examples. We split it into two parts and fine-tune a RoBERTa model ([RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692)) on each part. THIS MODEL WAS FINE-TUNED ON THE FIRST PART. The two classifiers perform closely on the test set of the first Jigsaw competition, reaching an **AUC-ROC** of 0.98 and an **F1-score** of 0.76.
## How to use
```python
from transformers import RobertaTokenizer, RobertaForSequenceClassification
# load tokenizer and model weights; note that an auth token is required for this model
tokenizer = RobertaTokenizer.from_pretrained('SkolkovoInstitute/roberta_toxicity_classifier', use_auth_token=True)
model = RobertaForSequenceClassification.from_pretrained('SkolkovoInstitute/roberta_toxicity_classifier', use_auth_token=True)
# prepare the input
batch = tokenizer.encode('you are amazing', return_tensors='pt')
# inference
model(batch)
```
## Licensing Information
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png
|
{"language": ["en"], "tags": ["toxic comments classification"], "licenses": ["cc-by-nc-sa"]}
|
s-nlp/roberta_first_toxicity_classifier
| null |
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"toxic comments classification",
"en",
"arxiv:1907.11692",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.11692"
] |
[
"en"
] |
TAGS
#transformers #pytorch #roberta #text-classification #toxic comments classification #en #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #region-us
|
## Toxicity Classification Model (but for the first part of the data)
This model is trained for toxicity classification task. The dataset used for training is the merge of the English parts of the three datasets by Jigsaw (Jigsaw 2018, Jigsaw 2019, Jigsaw 2020), containing around 2 million examples. We split it into two parts and fine-tune a RoBERTa model (RoBERTa: A Robustly Optimized BERT Pretraining Approach) on it. THIS MODEL WAS FINE-TUNED ON THE FIRST PART. The classifiers perform closely on the test set of the first Jigsaw competition, reaching the AUC-ROC of 0.98 and F1-score of 0.76.
## How to use
## Licensing Information
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: URL
[cc-by-nc-sa-image]: https://i.URL
|
[
"## Toxicity Classification Model (but for the first part of the data)\nThis model is trained for toxicity classification task. The dataset used for training is the merge of the English parts of the three datasets by Jigsaw (Jigsaw 2018, Jigsaw 2019, Jigsaw 2020), containing around 2 million examples. We split it into two parts and fine-tune a RoBERTa model (RoBERTa: A Robustly Optimized BERT Pretraining Approach) on it. THIS MODEL WAS FINE-TUNED ON THE FIRST PART. The classifiers perform closely on the test set of the first Jigsaw competition, reaching the AUC-ROC of 0.98 and F1-score of 0.76.",
"## How to use",
"## Licensing Information\n\n[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].\n\n[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]\n\n[cc-by-nc-sa]: URL\n[cc-by-nc-sa-image]: https://i.URL"
] |
[
"TAGS\n#transformers #pytorch #roberta #text-classification #toxic comments classification #en #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #region-us \n",
"## Toxicity Classification Model (but for the first part of the data)\nThis model is trained for toxicity classification task. The dataset used for training is the merge of the English parts of the three datasets by Jigsaw (Jigsaw 2018, Jigsaw 2019, Jigsaw 2020), containing around 2 million examples. We split it into two parts and fine-tune a RoBERTa model (RoBERTa: A Robustly Optimized BERT Pretraining Approach) on it. THIS MODEL WAS FINE-TUNED ON THE FIRST PART. The classifiers perform closely on the test set of the first Jigsaw competition, reaching the AUC-ROC of 0.98 and F1-score of 0.76.",
"## How to use",
"## Licensing Information\n\n[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].\n\n[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]\n\n[cc-by-nc-sa]: URL\n[cc-by-nc-sa-image]: https://i.URL"
] |
text-classification
|
transformers
|
## Toxicity Classification Model
This model is trained for the toxicity classification task. The dataset used for training is the merge of the English parts of the three datasets by **Jigsaw** ([Jigsaw 2018](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge), [Jigsaw 2019](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification), [Jigsaw 2020](https://www.kaggle.com/c/jigsaw-multilingual-toxic-comment-classification)), containing around 2 million examples. We split it into two parts and fine-tune a RoBERTa model ([RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692)) on each part. The classifiers perform closely on the test set of the first Jigsaw competition, reaching an **AUC-ROC** of 0.98 and an **F1-score** of 0.76.
## How to use
```python
from transformers import RobertaTokenizer, RobertaForSequenceClassification
# load tokenizer and model weights
tokenizer = RobertaTokenizer.from_pretrained('SkolkovoInstitute/roberta_toxicity_classifier')
model = RobertaForSequenceClassification.from_pretrained('SkolkovoInstitute/roberta_toxicity_classifier')
# prepare the input
batch = tokenizer.encode('you are amazing', return_tensors='pt')
# inference
model(batch)
```
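The call above returns raw logits. Continuing the snippet, a small sketch for turning them into class probabilities (label names are taken from the model config rather than assumed):
```python
import torch

# `model` and `batch` come from the snippet above
with torch.no_grad():
    logits = model(batch).logits
probs = torch.softmax(logits, dim=-1)[0]
for idx, name in model.config.id2label.items():
    print(name, round(probs[idx].item(), 3))
```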
## Licensing Information
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png
|
{"language": ["en"], "tags": ["toxic comments classification"], "licenses": ["cc-by-nc-sa"]}
|
s-nlp/roberta_toxicity_classifier
| null |
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"toxic comments classification",
"en",
"arxiv:1907.11692",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.11692"
] |
[
"en"
] |
TAGS
#transformers #pytorch #roberta #text-classification #toxic comments classification #en #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
## Toxicity Classification Model
This model is trained for toxicity classification task. The dataset used for training is the merge of the English parts of the three datasets by Jigsaw (Jigsaw 2018, Jigsaw 2019, Jigsaw 2020), containing around 2 million examples. We split it into two parts and fine-tune a RoBERTa model (RoBERTa: A Robustly Optimized BERT Pretraining Approach) on it. The classifiers perform closely on the test set of the first Jigsaw competition, reaching the AUC-ROC of 0.98 and F1-score of 0.76.
## How to use
## Licensing Information
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: URL
[cc-by-nc-sa-image]: https://i.URL
|
[
"## Toxicity Classification Model\n\nThis model is trained for toxicity classification task. The dataset used for training is the merge of the English parts of the three datasets by Jigsaw (Jigsaw 2018, Jigsaw 2019, Jigsaw 2020), containing around 2 million examples. We split it into two parts and fine-tune a RoBERTa model (RoBERTa: A Robustly Optimized BERT Pretraining Approach) on it. The classifiers perform closely on the test set of the first Jigsaw competition, reaching the AUC-ROC of 0.98 and F1-score of 0.76.",
"## How to use",
"## Licensing Information\n\n[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].\n\n[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]\n\n[cc-by-nc-sa]: URL\n[cc-by-nc-sa-image]: https://i.URL"
] |
[
"TAGS\n#transformers #pytorch #roberta #text-classification #toxic comments classification #en #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"## Toxicity Classification Model\n\nThis model is trained for toxicity classification task. The dataset used for training is the merge of the English parts of the three datasets by Jigsaw (Jigsaw 2018, Jigsaw 2019, Jigsaw 2020), containing around 2 million examples. We split it into two parts and fine-tune a RoBERTa model (RoBERTa: A Robustly Optimized BERT Pretraining Approach) on it. The classifiers perform closely on the test set of the first Jigsaw competition, reaching the AUC-ROC of 0.98 and F1-score of 0.76.",
"## How to use",
"## Licensing Information\n\n[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].\n\n[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]\n\n[cc-by-nc-sa]: URL\n[cc-by-nc-sa-image]: https://i.URL"
] |
text-classification
|
transformers
|
This model is a clone of [SkolkovoInstitute/roberta_toxicity_classifier](https://huggingface.co/SkolkovoInstitute/roberta_toxicity_classifier) trained on a disjoint dataset.
While `roberta_toxicity_classifier` is used for evaluation of detoxification algorithms, `roberta_toxicity_classifier_v1` can be used within these algorithms, as in the paper [Text Detoxification using Large Pre-trained Neural Models](https://arxiv.org/abs/1911.00536).
|
{}
|
s-nlp/roberta_toxicity_classifier_v1
| null |
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"arxiv:1911.00536",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1911.00536"
] |
[] |
TAGS
#transformers #pytorch #roberta #text-classification #arxiv-1911.00536 #autotrain_compatible #endpoints_compatible #region-us
|
This model is a clone of SkolkovoInstitute/roberta_toxicity_classifier trained on a disjoint dataset.
While 'roberta_toxicity_classifier' is used for evaluation of detoxification algorithms, 'roberta_toxicity_classifier_v1' can be used within these algorithms, as in the paper Text Detoxification using Large Pre-trained Neural Models.
|
[] |
[
"TAGS\n#transformers #pytorch #roberta #text-classification #arxiv-1911.00536 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text2text-generation
|
transformers
|
This is the detoxification baseline model trained on the [train](https://github.com/skoltech-nlp/russe_detox_2022/blob/main/data/input/train.tsv) part of the "RUSSE 2022: Russian Text Detoxification Based on Parallel Corpora" competition. The source sentences are Russian toxic messages from the Odnoklassniki, Pikabu, and Twitter platforms. The base model is [ruT5](https://huggingface.co/sberbank-ai/ruT5-base) provided by Sber.
**How to use**
```python
from transformers import T5ForConditionalGeneration, AutoTokenizer
base_model_name = 'sberbank-ai/ruT5-base'
model_name = 'SkolkovoInstitute/ruT5-base-detox'
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
```
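Continuing the snippet above, a hedged generation sketch (the input string is only a placeholder for a toxic Russian sentence, and the decoding parameters are illustrative, not the exact baseline settings):
```python
# `tokenizer` and `model` come from the loading snippet above
text = "..."  # put a toxic Russian sentence here
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```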
|
{"language": ["ru"], "license": "openrail++", "tags": ["text-generation-inference"], "datasets": ["s-nlp/ru_paradetox"]}
|
s-nlp/ruT5-base-detox
| null |
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"text-generation-inference",
"ru",
"dataset:s-nlp/ru_paradetox",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ru"
] |
TAGS
#transformers #pytorch #safetensors #t5 #text2text-generation #text-generation-inference #ru #dataset-s-nlp/ru_paradetox #license-openrail++ #autotrain_compatible #endpoints_compatible #region-us
|
This is the detoxification baseline model trained on the train part of "RUSSE 2022: Russian Text Detoxification Based on Parallel Corpora" competition. The source sentences are Russian toxic messages from Odnoklassniki, Pikabu, and Twitter platforms. The base model is ruT5 provided from Sber.
How to use
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #t5 #text2text-generation #text-generation-inference #ru #dataset-s-nlp/ru_paradetox #license-openrail++ #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
This is a model for evaluation of naturalness of short Russian texts. It has been trained to distinguish human-written texts from their corrupted versions.
Corruption sources: random replacement, deletion, addition, shuffling, and re-inflection of words and characters, random changes of capitalization, round-trip translation, filling random gaps with T5 and RoBERTa models. For each original text, we sampled three corrupted texts, so the model is uniformly biased towards the `unnatural` label.
Data sources: web-corpora from [the Leipzig collection](https://wortschatz.uni-leipzig.de/en/download) (`rus_news_2020_100K`, `rus_newscrawl-public_2018_100K`, `rus-ru_web-public_2019_100K`, `rus_wikipedia_2021_100K`), comments from [OK](https://www.kaggle.com/alexandersemiletov/toxic-russian-comments) and [Pikabu](https://www.kaggle.com/blackmoon/russian-language-toxic-comments).
On our private test dataset, the model has achieved a 40% rank correlation with human judgements of naturalness, which is higher than that of GPT perplexity, another popular fluency metric.
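A minimal usage sketch (assuming the standard sequence-classification interface; the natural/corrupted label names are read from the model config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "s-nlp/rubert-base-corruption-detector"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

texts = [
    "В Москве сегодня прошёл сильный дождь.",    # presumably natural
    "Москве в сегодня прошёл дождь сильный в.",  # word order corrupted
]
inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)

print(model.config.id2label)  # mapping of class indices to label names
print(probs)
```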
|
{"language": ["ru"], "tags": ["fluency"]}
|
s-nlp/rubert-base-corruption-detector
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"fluency",
"ru",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ru"
] |
TAGS
#transformers #pytorch #bert #text-classification #fluency #ru #autotrain_compatible #endpoints_compatible #region-us
|
This is a model for evaluation of naturalness of short Russian texts. It has been trained to distinguish human-written texts from their corrupted versions.
Corruption sources: random replacement, deletion, addition, shuffling, and re-inflection of words and characters, random changes of capitalization, round-trip translation, filling random gaps with T5 and RoBERTA models. For each original text, we sampled three corrupted texts, so the model is uniformly biased towards the 'unnatural' label.
Data sources: web-corpora from the Leipzig collection ('rus_news_2020_100K', 'rus_newscrawl-public_2018_100K', 'rus-ru_web-public_2019_100K', 'rus_wikipedia_2021_100K'), comments from OK and Pikabu.
On our private test dataset, the model has achieved 40% rank correlation with human judgements of naturalness, which is higher than GPT perplexity, another popular fluency metric.
|
[] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #fluency #ru #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
A BERT-based classifier (fine-tuned from [Conversational RuBERT](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational)) trained on a merge of the Russian Language Toxic Comments [dataset](https://www.kaggle.com/blackmoon/russian-language-toxic-comments/metadata) collected from 2ch.hk and the Toxic Russian Comments [dataset](https://www.kaggle.com/alexandersemiletov/toxic-russian-comments) collected from ok.ru.
The datasets were merged, shuffled, and split into train, dev, and test sets in an 80-10-10 proportion.
The metrics obtained on the test dataset are as follows:
| | precision | recall | f1-score | support |
|:------------:|:---------:|:------:|:--------:|:-------:|
| 0 | 0.98 | 0.99 | 0.98 | 21384 |
| 1 | 0.94 | 0.92 | 0.93 | 4886 |
| accuracy | | | 0.97 | 26270|
| macro avg | 0.96 | 0.96 | 0.96 | 26270 |
| weighted avg | 0.97 | 0.97 | 0.97 | 26270 |
## How to use
```python
from transformers import BertTokenizer, BertForSequenceClassification
# load tokenizer and model weights
tokenizer = BertTokenizer.from_pretrained('SkolkovoInstitute/russian_toxicity_classifier')
model = BertForSequenceClassification.from_pretrained('SkolkovoInstitute/russian_toxicity_classifier')
# prepare the input
batch = tokenizer.encode('ты супер', return_tensors='pt')
# inference
model(batch)
```
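Alternatively, a hedged sketch with the high-level `pipeline` API (the returned label names are whatever the checkpoint's config defines):
```python
from transformers import pipeline

clf = pipeline("text-classification",
               model="SkolkovoInstitute/russian_toxicity_classifier")

# scores for a friendly and a mildly rude message
print(clf(["ты супер", "какой же ты зануда"]))
```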
## Licensing Information
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png
|
{"language": ["ru"], "tags": ["toxic comments classification"], "licenses": ["cc-by-nc-sa"]}
|
s-nlp/russian_toxicity_classifier
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"toxic comments classification",
"ru",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ru"
] |
TAGS
#transformers #pytorch #tf #safetensors #bert #text-classification #toxic comments classification #ru #autotrain_compatible #endpoints_compatible #has_space #region-us
|
Bert-based classifier (finetuned from Conversational Rubert) trained on merge of Russian Language Toxic Comments dataset collected from URL and Toxic Russian Comments dataset collected from URL.
The datasets were merged, shuffled, and split into train, dev, test splits in 80-10-10 proportion.
The metrics obtained from test dataset is as follows
How to use
----------
Licensing Information
---------------------
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](URL).
[](URL)
|
[] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #bert #text-classification #toxic comments classification #ru #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
text2text-generation
|
transformers
|
This is a paraphraser based on [ceshine/t5-paraphrase-paws-msrp-opinosis](https://huggingface.co/ceshine/t5-paraphrase-paws-msrp-opinosis)
and additionally fine-tuned on [ParaNMT](https://arxiv.org/abs/1711.05732) filtered for the task of detoxification.
The model was trained for the paper [Text Detoxification using Large Pre-trained Neural Models](https://arxiv.org/abs/1911.00536).
An example of its use and the code for its training is given in https://github.com/skoltech-nlp/detox
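A minimal inference sketch (assuming the standard T5 seq2seq interface; the decoding parameters are illustrative):
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_name = "s-nlp/t5-paranmt-detox"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

text = "this is a damn stupid idea"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```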
|
{"license": "openrail++", "datasets": ["s-nlp/paranmt_for_detox"]}
|
s-nlp/t5-paranmt-detox
| null |
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"dataset:s-nlp/paranmt_for_detox",
"arxiv:1711.05732",
"arxiv:1911.00536",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1711.05732",
"1911.00536"
] |
[] |
TAGS
#transformers #pytorch #safetensors #t5 #text2text-generation #dataset-s-nlp/paranmt_for_detox #arxiv-1711.05732 #arxiv-1911.00536 #license-openrail++ #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
This is a paraphraser based on ceshine/t5-paraphrase-paws-msrp-opinosis
and additionally fine-tuned on ParaNMT filtered for the task of detoxification.
The model was trained for the paper Text Detoxification using Large Pre-trained Neural Models.
An example of its use and the code for its training is given in URL
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #t5 #text2text-generation #dataset-s-nlp/paranmt_for_detox #arxiv-1711.05732 #arxiv-1911.00536 #license-openrail++ #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
This is a paraphraser based on [ceshine/t5-paraphrase-paws-msrp-opinosis](https://huggingface.co/ceshine/t5-paraphrase-paws-msrp-opinosis)
and additionally fine-tuned on [ParaNMT](https://arxiv.org/abs/1711.05732).
The model was trained for the paper [Text Detoxification using Large Pre-trained Neural Models](https://arxiv.org/abs/1911.00536).
An example of its use is given in https://github.com/skoltech-nlp/detox
|
{}
|
s-nlp/t5-paraphrase-paws-msrp-opinosis-paranmt
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv:1711.05732",
"arxiv:1911.00536",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1711.05732",
"1911.00536"
] |
[] |
TAGS
#transformers #pytorch #t5 #text2text-generation #arxiv-1711.05732 #arxiv-1911.00536 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
This is a paraphraser based on ceshine/t5-paraphrase-paws-msrp-opinosis
and additionally fine-tuned on ParaNMT.
The model was trained for the paper Text Detoxification using Large Pre-trained Neural Models.
An example of its use is given in URL
|
[] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #arxiv-1711.05732 #arxiv-1911.00536 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-classification
|
transformers
|
XLMRoberta-based classifier trained on XFORMAL.
all
| | precision | recall | f1-score | support |
|--------------|-----------|----------|----------|---------|
| 0 | 0.744912 | 0.927790 | 0.826354 | 108019 |
| 1 | 0.889088 | 0.645630 | 0.748048 | 96845 |
| accuracy | | | 0.794405 | 204864 |
| macro avg | 0.817000 | 0.786710 | 0.787201 | 204864 |
| weighted avg | 0.813068 | 0.794405 | 0.789337 | 204864 |
en
| | precision | recall | f1-score | support |
|--------------|-----------|----------|----------|---------|
| 0 | 0.800053 | 0.962981 | 0.873988 | 22151 |
| 1 | 0.945106 | 0.725899 | 0.821124 | 19449 |
| accuracy | | | 0.852139 | 41600 |
| macro avg | 0.872579 | 0.844440 | 0.847556 | 41600 |
| weighted avg | 0.867869 | 0.852139 | 0.849273 | 41600 |
fr
| | precision | recall | f1-score | support |
|--------------|-----------|----------|----------|---------|
| 0 | 0.746709 | 0.925738 | 0.826641 | 21505 |
| 1 | 0.887305 | 0.650592 | 0.750731 | 19327 |
| accuracy | | | 0.795504 | 40832 |
| macro avg | 0.817007 | 0.788165 | 0.788686 | 40832 |
| weighted avg | 0.813257 | 0.795504 | 0.790711 | 40832 |
it
| | precision | recall | f1-score | support |
|--------------|-----------|----------|----------|---------|
| 0 | 0.721282 | 0.914669 | 0.806545 | 21528 |
| 1 | 0.864887 | 0.607135 | 0.713445 | 19368 |
| accuracy | | | 0.769024 | 40896 |
| macro avg | 0.793084 | 0.760902 | 0.759995 | 40896 |
| weighted avg | 0.789292 | 0.769024 | 0.762454 | 40896 |
pt
| | precision | recall | f1-score | support |
|--------------|-----------|----------|----------|---------|
| 0 | 0.717546 | 0.908167 | 0.801681 | 21637 |
| 1 | 0.853628 | 0.599700 | 0.704481 | 19323 |
| accuracy | | | 0.762646 | 40960 |
| macro avg | 0.785587 | 0.753933 | 0.753081 | 40960 |
| weighted avg | 0.781743 | 0.762646 | 0.755826 | 40960 |
## How to use
```python
from transformers import XLMRobertaTokenizerFast, XLMRobertaForSequenceClassification
# load tokenizer and model weights
tokenizer = XLMRobertaTokenizerFast.from_pretrained('SkolkovoInstitute/xlmr_formality_classifier')
model = XLMRobertaForSequenceClassification.from_pretrained('SkolkovoInstitute/xlmr_formality_classifier')
# prepare the input
batch = tokenizer.encode('I hope to hear from you at your earliest convenience.', return_tensors='pt')
# inference
model(batch)
```
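Continuing the snippet above, a hedged sketch for scoring a small multilingual batch at once (padding plus softmax; the formal/informal label names come from the model config):
```python
import torch

# `tokenizer` and `model` come from the snippet above
texts = [
    "I would like to request your assistance with this matter.",  # en, formal
    "salut, ça va ?",                                              # fr, informal
]
batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**batch).logits, dim=-1)

print(model.config.id2label)
print(probs)
```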
## Licensing Information
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png
|
{"language": ["en", "fr", "it", "pt"], "license": "cc-by-nc-sa-4.0", "tags": ["formal or informal classification"], "licenses": ["cc-by-nc-sa"]}
|
s-nlp/xlmr_formality_classifier
| null |
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"text-classification",
"formal or informal classification",
"en",
"fr",
"it",
"pt",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en",
"fr",
"it",
"pt"
] |
TAGS
#transformers #pytorch #safetensors #xlm-roberta #text-classification #formal or informal classification #en #fr #it #pt #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
XLMRoberta-based classifier trained on XFORMAL.
all
en
fr
it
pt
How to use
----------
Licensing Information
---------------------
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](URL).
[](URL)
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #xlm-roberta #text-classification #formal or informal classification #en #fr #it #pt #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
## General concept of the model
#### Proposed usage
The notion of **'inappropriateness'** that we tried to capture in the dataset and detect with the model **is NOT a substitute for toxicity**; rather, it is a derivative of toxicity. So a model based on our dataset can serve as **an additional layer of inappropriateness filtering after toxicity and obscenity filtration**. You can detect the exact sensitive topic by using [another model](https://huggingface.co/Skoltech/russian-sensitive-topics). The proposed pipeline is shown in the scheme below.

You can also train one classifier for both toxicity and inappropriateness detection. The data to be mixed with toxicity-labelled samples can be found on our [GitHub](https://github.com/skoltech-nlp/inappropriate-sensitive-topics/blob/main/Version2/appropriateness/Appropriateness.csv) or on [Kaggle](https://www.kaggle.com/nigula/russianinappropriatemessages).
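A rough sketch of the layered filtering described above (model identifiers follow this card and the Russian toxicity classifier card; how the two scores are combined into a final decision is left to the application):
```python
from transformers import pipeline

# first layer: toxicity / obscenity filtering
toxicity = pipeline("text-classification",
                    model="SkolkovoInstitute/russian_toxicity_classifier")
# second layer: inappropriateness filtering (this model)
inappropriateness = pipeline("text-classification",
                             model="apanc/russian-inappropriate-messages")

text = "Это нарушение УКРФ!"
print(toxicity(text))           # e.g. [{'label': ..., 'score': ...}]
print(inappropriateness(text))  # label names are checkpoint-specific
```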
#### Inappropriateness intuition
This model is trained on the dataset of inappropriate messages of the Russian language. Generally, an inappropriate utterance is an utterance that contains no obscene words or any kind of toxic intent but can still harm the reputation of the speaker. Find some samples in the table below for more intuition. Learn more about the concept of inappropriateness [in this article](https://www.aclweb.org/anthology/2021.bsnlp-1.4/) presented at the workshop for Balto-Slavic NLP at the EACL-2021 conference. Please note that this article describes the first version of the dataset, while the model is trained on the extended version of the dataset open-sourced on our [GitHub](https://github.com/skoltech-nlp/inappropriate-sensitive-topics/blob/main/Version2/appropriateness/Appropriateness.csv) or on [Kaggle](https://www.kaggle.com/nigula/russianinappropriatemessages). The properties of the dataset are the same as those described in the article; the only difference is the size.
| Text | Inappropriate | Comment | Unsafe topic |
|------------------------------------------------------------------------------------------------------|---------------|----------------------------------------------------------------|---------------|
| Ладно бы видного деятеля завалили а тут какого то ноунейм нигру преступника | yes | Оправдание убийства | offline crime |
| Это нарушение УКРФ! | no | Осуждение нарушения закона | offline crime |
| Религия - это насилие высшего уровня. Верующего принуждают к вере еще в младенчестве. | yes | Оскорбление чувств верующих | religion |
| Христиане, как вы относитесь к иноверцам, в частности к Гамалиилу и к уверовавшим в Иисуса фарисеям? | no | Обычные вопросы к верующим без намека на оскорбление их чувств | religion |
| Хорошо что проституция запрещена | no | Осуждение простуитуции | prostitution |
| Возьму бабки и сниму шлюх на выходных | yes | Популяризация проституции | prostitution |
#### Metrics
The model was trained, validated, and tested only on the samples with 100% confidence, which allowed us to obtain the following metrics on the test set:
| | precision | recall | f1-score | support |
|--------------|----------|--------|----------|---------|
| 0 | 0.92 | 0.93 | 0.93 | 7839 |
| 1 | 0.80 | 0.76 | 0.78 | 2726 |
| accuracy | | | 0.89 | 10565 |
| macro avg | 0.86 | 0.85 | 0.85 | 10565 |
| weighted avg | 0.89 | 0.89 | 0.89 | 10565 |
## Licensing Information
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png
## Citation
If you find this repository helpful, feel free to cite our publication:
```
@inproceedings{babakov-etal-2021-detecting,
title = "Detecting Inappropriate Messages on Sensitive Topics that Could Harm a Company{'}s Reputation",
author = "Babakov, Nikolay and
Logacheva, Varvara and
Kozlova, Olga and
Semenov, Nikita and
Panchenko, Alexander",
booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing",
month = apr,
year = "2021",
address = "Kiyv, Ukraine",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.bsnlp-1.4",
pages = "26--36",
abstract = "Not all topics are equally {``}flammable{''} in terms of toxicity: a calm discussion of turtles or fishing less often fuels inappropriate toxic dialogues than a discussion of politics or sexual minorities. We define a set of sensitive topics that can yield inappropriate and toxic messages and describe the methodology of collecting and labelling a dataset for appropriateness. While toxicity in user-generated data is well-studied, we aim at defining a more fine-grained notion of inappropriateness. The core of inappropriateness is that it can harm the reputation of a speaker. This is different from toxicity in two respects: (i) inappropriateness is topic-related, and (ii) inappropriate message is not toxic but still unacceptable. We collect and release two datasets for Russian: a topic-labelled dataset and an appropriateness-labelled dataset. We also release pre-trained classification models trained on this data.",
}
```
## Contacts
If you have any questions please contact [Nikolay](mailto:[email protected])
|
{"language": ["ru"], "tags": ["toxic comments classification"], "licenses": ["cc-by-nc-sa"]}
|
apanc/russian-inappropriate-messages
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"toxic comments classification",
"ru",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ru"
] |
TAGS
#transformers #pytorch #tf #jax #bert #text-classification #toxic comments classification #ru #autotrain_compatible #endpoints_compatible #region-us
|
General concept of the model
----------------------------
#### Proposed usage
The 'inappropriateness' substance we tried to collect in the dataset and detect with the model is NOT a substitution of toxicity, it is rather a derivative of toxicity. So the model based on our dataset could serve as an additional layer of inappropriateness filtering after toxicity and obscenity filtration. You can detect the exact sensitive topic by using another model. The proposed pipeline is shown in the scheme below.
!alternativetext
You can also train one classifier for both toxicity and inappropriateness detection. The data to be mixed with toxic labelled samples could be found on our GitHub or on kaggle
#### Inappropraiteness intuition
This model is trained on the dataset of inappropriate messages of the Russian language. Generally, an inappropriate utterance is an utterance that has not obscene words or any kind of toxic intent, but can still harm the reputation of the speaker. Find some sample for more intuition in the table below. Learn more about the concept of inappropriateness in this article presented at the workshop for Balto-Slavic NLP at the EACL-2021 conference. Please note that this article describes the first version of the dataset, while the model is trained on the extended version of the dataset open-sourced on our GitHub or on kaggle. The properties of the dataset are the same as the one described in the article, the only difference is the size.
#### Metrics
The model was trained, validated, and tested only on the samples with 100% confidence, which allowed to get the following metrics on test set:
Licensing Information
---------------------
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](URL).
[](URL)
If you find this repository helpful, feel free to cite our publication:
Contacts
--------
If you have any questions please contact Nikolay
|
[
"#### Proposed usage\n\n\nThe 'inappropriateness' substance we tried to collect in the dataset and detect with the model is NOT a substitution of toxicity, it is rather a derivative of toxicity. So the model based on our dataset could serve as an additional layer of inappropriateness filtering after toxicity and obscenity filtration. You can detect the exact sensitive topic by using another model. The proposed pipeline is shown in the scheme below.\n\n\n!alternativetext\n\n\nYou can also train one classifier for both toxicity and inappropriateness detection. The data to be mixed with toxic labelled samples could be found on our GitHub or on kaggle",
"#### Inappropraiteness intuition\n\n\nThis model is trained on the dataset of inappropriate messages of the Russian language. Generally, an inappropriate utterance is an utterance that has not obscene words or any kind of toxic intent, but can still harm the reputation of the speaker. Find some sample for more intuition in the table below. Learn more about the concept of inappropriateness in this article presented at the workshop for Balto-Slavic NLP at the EACL-2021 conference. Please note that this article describes the first version of the dataset, while the model is trained on the extended version of the dataset open-sourced on our GitHub or on kaggle. The properties of the dataset are the same as the one described in the article, the only difference is the size.",
"#### Metrics\n\n\nThe model was trained, validated, and tested only on the samples with 100% confidence, which allowed to get the following metrics on test set:\n\n\n\nLicensing Information\n---------------------\n\n\n[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](URL).\n\n\n[](URL)\n\n\nIf you find this repository helpful, feel free to cite our publication:\n\n\nContacts\n--------\n\n\nIf you have any questions please contact Nikolay"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #text-classification #toxic comments classification #ru #autotrain_compatible #endpoints_compatible #region-us \n",
"#### Proposed usage\n\n\nThe 'inappropriateness' substance we tried to collect in the dataset and detect with the model is NOT a substitution of toxicity, it is rather a derivative of toxicity. So the model based on our dataset could serve as an additional layer of inappropriateness filtering after toxicity and obscenity filtration. You can detect the exact sensitive topic by using another model. The proposed pipeline is shown in the scheme below.\n\n\n!alternativetext\n\n\nYou can also train one classifier for both toxicity and inappropriateness detection. The data to be mixed with toxic labelled samples could be found on our GitHub or on kaggle",
"#### Inappropraiteness intuition\n\n\nThis model is trained on the dataset of inappropriate messages of the Russian language. Generally, an inappropriate utterance is an utterance that has not obscene words or any kind of toxic intent, but can still harm the reputation of the speaker. Find some sample for more intuition in the table below. Learn more about the concept of inappropriateness in this article presented at the workshop for Balto-Slavic NLP at the EACL-2021 conference. Please note that this article describes the first version of the dataset, while the model is trained on the extended version of the dataset open-sourced on our GitHub or on kaggle. The properties of the dataset are the same as the one described in the article, the only difference is the size.",
"#### Metrics\n\n\nThe model was trained, validated, and tested only on the samples with 100% confidence, which allowed to get the following metrics on test set:\n\n\n\nLicensing Information\n---------------------\n\n\n[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](URL).\n\n\n[](URL)\n\n\nIf you find this repository helpful, feel free to cite our publication:\n\n\nContacts\n--------\n\n\nIf you have any questions please contact Nikolay"
] |
text-classification
|
transformers
|
## General concept of the model
This model is trained on the dataset of sensitive topics of the Russian language. The concept of sensitive topics is described [in this article](https://www.aclweb.org/anthology/2021.bsnlp-1.4/) presented at the workshop for Balto-Slavic NLP at the EACL-2021 conference. Please note that this article describes the first version of the dataset, while the model is trained on the extended version of the dataset open-sourced on our [GitHub](https://github.com/skoltech-nlp/inappropriate-sensitive-topics/blob/main/Version2/sensitive_topics/sensitive_topics.csv) or on [Kaggle](https://www.kaggle.com/nigula/russian-sensitive-topics). The properties of the dataset are the same as those described in the article; the only difference is the size.
## Instructions
The model predicts combinations of 18 sensitive topics described in the [article](https://arxiv.org/abs/2103.05345). You can find step-by-step instructions for using the model [here](https://github.com/skoltech-nlp/inappropriate-sensitive-topics/blob/main/Version2/sensitive_topics/Inference.ipynb)
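Since a message can touch several topics at once, a hedged sketch applies a sigmoid to the logits and thresholds each of the 18 topics independently; it assumes the topic names are stored in the config's `id2label` (the step-by-step notebook linked above uses its own mapping file and remains the authoritative reference):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "apanc/russian-sensitive-topics"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "Оружие должно продаваться свободно"  # weapons-related example
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]

# threshold each topic independently; 0.5 is an illustrative choice
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```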
## Metrics
The dataset contains both manually labeled samples and semi-automatically labeled samples; learn more in our article. We tested the performance of the classifier only on the manually labeled part of the data, which is why some topics are not well represented in the test set.
| | precision | recall | f1-score | support |
|-------------------|-----------|--------|----------|---------|
| offline_crime | 0.65 | 0.55 | 0.6 | 132 |
| online_crime | 0.5 | 0.46 | 0.48 | 37 |
| drugs | 0.87 | 0.9 | 0.88 | 87 |
| gambling | 0.5 | 0.67 | 0.57 | 6 |
| pornography | 0.73 | 0.59 | 0.65 | 204 |
| prostitution | 0.75 | 0.69 | 0.72 | 91 |
| slavery | 0.72 | 0.72 | 0.73 | 40 |
| suicide | 0.33 | 0.29 | 0.31 | 7 |
| terrorism | 0.68 | 0.57 | 0.62 | 47 |
| weapons | 0.89 | 0.83 | 0.86 | 138 |
| body_shaming | 0.9 | 0.67 | 0.77 | 109 |
| health_shaming | 0.84 | 0.55 | 0.66 | 108 |
| politics | 0.68 | 0.54 | 0.6 | 241 |
| racism | 0.81 | 0.59 | 0.68 | 204 |
| religion | 0.94 | 0.72 | 0.81 | 102 |
| sexual_minorities | 0.69 | 0.46 | 0.55 | 102 |
| sexism | 0.66 | 0.64 | 0.65 | 132 |
| social_injustice | 0.56 | 0.37 | 0.45 | 181 |
| none | 0.62 | 0.67 | 0.64 | 250 |
| micro avg | 0.72 | 0.61 | 0.66 | 2218 |
| macro avg | 0.7 | 0.6 | 0.64 | 2218 |
| weighted avg | 0.73 | 0.61 | 0.66 | 2218 |
| samples avg | 0.75 | 0.66 | 0.68 | 2218 |
## Licensing Information
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png
## Citation
If you find this repository helpful, feel free to cite our publication:
```
@inproceedings{babakov-etal-2021-detecting,
title = "Detecting Inappropriate Messages on Sensitive Topics that Could Harm a Company{'}s Reputation",
author = "Babakov, Nikolay and
Logacheva, Varvara and
Kozlova, Olga and
Semenov, Nikita and
Panchenko, Alexander",
booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing",
month = apr,
year = "2021",
address = "Kiyv, Ukraine",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.bsnlp-1.4",
pages = "26--36",
abstract = "Not all topics are equally {``}flammable{''} in terms of toxicity: a calm discussion of turtles or fishing less often fuels inappropriate toxic dialogues than a discussion of politics or sexual minorities. We define a set of sensitive topics that can yield inappropriate and toxic messages and describe the methodology of collecting and labelling a dataset for appropriateness. While toxicity in user-generated data is well-studied, we aim at defining a more fine-grained notion of inappropriateness. The core of inappropriateness is that it can harm the reputation of a speaker. This is different from toxicity in two respects: (i) inappropriateness is topic-related, and (ii) inappropriate message is not toxic but still unacceptable. We collect and release two datasets for Russian: a topic-labelled dataset and an appropriateness-labelled dataset. We also release pre-trained classification models trained on this data.",
}
```
|
{"language": ["ru"], "tags": ["toxic comments classification"], "licenses": ["cc-by-nc-sa"]}
|
apanc/russian-sensitive-topics
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"toxic comments classification",
"ru",
"arxiv:2103.05345",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2103.05345"
] |
[
"ru"
] |
TAGS
#transformers #pytorch #tf #jax #bert #text-classification #toxic comments classification #ru #arxiv-2103.05345 #autotrain_compatible #endpoints_compatible #region-us
|
General concept of the model
----------------------------
This model is trained on the dataset of sensitive topics of the Russian language. The concept of sensitive topics is described in this article presented at the workshop for Balto-Slavic NLP at the EACL-2021 conference. Please note that this article describes the first version of the dataset, while the model is trained on the extended version of the dataset open-sourced on our GitHub or on kaggle. The properties of the dataset is the same as the one described in the article, the only difference is the size.
Instructions
------------
The model predicts combinations of 18 sensitive topics described in the article. You can find step-by-step instructions for using the model here
Metrics
-------
The dataset partially manually labeled samples and partially semi-automatically labeled samples. Learn more in our article. We tested the performance of the classifier only on the part of manually labeled data that is why some topics are not well represented in the test set.
Licensing Information
---------------------
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](URL).
[](URL)
If you find this repository helpful, feel free to cite our publication:
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #text-classification #toxic comments classification #ru #arxiv-2103.05345 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation
|
transformers
|
# Harry Potter DialogGPT Model
|
{"tags": ["conversational"]}
|
Skywhy/DialoGPT-medium-Churchyy
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialogGPT Model
|
[
"# Harry Potter DialogGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialogGPT Model"
] |
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 452311620
- CO2 Emissions (in grams): 208.0823957145878
## Validation Metrics
- Loss: 0.5259971022605896
- Accuracy: 0.8767479025169796
- Macro F1: 0.8618813750734912
- Micro F1: 0.8767479025169796
- Weighted F1: 0.8742964006840133
- Macro Precision: 0.8627700506991158
- Micro Precision: 0.8767479025169796
- Weighted Precision: 0.8755603985289852
- Macro Recall: 0.8662183006750934
- Micro Recall: 0.8767479025169796
- Weighted Recall: 0.8767479025169796
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Smone55/autonlp-au_topics-452311620
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Smone55/autonlp-au_topics-452311620", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Smone55/autonlp-au_topics-452311620", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "en", "tags": "autonlp", "datasets": ["Smone55/autonlp-data-au_topics"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 208.0823957145878}
|
Smone55/autonlp-au_topics-452311620
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:Smone55/autonlp-data-au_topics",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #text-classification #autonlp #en #dataset-Smone55/autonlp-data-au_topics #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 452311620
- CO2 Emissions (in grams): 208.0823957145878
## Validation Metrics
- Loss: 0.5259971022605896
- Accuracy: 0.8767479025169796
- Macro F1: 0.8618813750734912
- Micro F1: 0.8767479025169796
- Weighted F1: 0.8742964006840133
- Macro Precision: 0.8627700506991158
- Micro Precision: 0.8767479025169796
- Weighted Precision: 0.8755603985289852
- Macro Recall: 0.8662183006750934
- Micro Recall: 0.8767479025169796
- Weighted Recall: 0.8767479025169796
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 452311620\n- CO2 Emissions (in grams): 208.0823957145878",
"## Validation Metrics\n\n- Loss: 0.5259971022605896\n- Accuracy: 0.8767479025169796\n- Macro F1: 0.8618813750734912\n- Micro F1: 0.8767479025169796\n- Weighted F1: 0.8742964006840133\n- Macro Precision: 0.8627700506991158\n- Micro Precision: 0.8767479025169796\n- Weighted Precision: 0.8755603985289852\n- Macro Recall: 0.8662183006750934\n- Micro Recall: 0.8767479025169796\n- Weighted Recall: 0.8767479025169796",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #autonlp #en #dataset-Smone55/autonlp-data-au_topics #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 452311620\n- CO2 Emissions (in grams): 208.0823957145878",
"## Validation Metrics\n\n- Loss: 0.5259971022605896\n- Accuracy: 0.8767479025169796\n- Macro F1: 0.8618813750734912\n- Micro F1: 0.8767479025169796\n- Weighted F1: 0.8742964006840133\n- Macro Precision: 0.8627700506991158\n- Micro Precision: 0.8767479025169796\n- Weighted Precision: 0.8755603985289852\n- Macro Recall: 0.8662183006750934\n- Micro Recall: 0.8767479025169796\n- Weighted Recall: 0.8767479025169796",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
text-generation
|
transformers
|
#StupidEdwin
|
{"tags": ["conversational"]}
|
Snaky/StupidEdwin
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#StupidEdwin
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
## Schema Guided Dialogue Output Plan Constructor
|
{}
|
SoLID/sgd-output-plan-constructor
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
## Schema Guided Dialogue Output Plan Constructor
|
[
"## Schema Guided Dialogue Output Plan Constructor"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## Schema Guided Dialogue Output Plan Constructor"
] |
text2text-generation
|
transformers
|
Hyperparameters: 1 epoch, max_len_dict including domain classification task, and 1e-5 learning rate
|
{"language": ["eng"], "license": "afl-3.0", "tags": ["dialogue"], "datasets": ["schema guided dialogue"], "metrics": ["exactness"], "thumbnail": "https://townsquare.media/site/88/files/2020/06/C_Charlotte_RGB_7484.jpg"}
|
SoLID/sgd-t5-tod
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"dialogue",
"eng",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"eng"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #dialogue #eng #license-afl-3.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Hyperparameters: 1 epoch, max_len_dict including domain classification task, and 1e-5 learning rate
|
[] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #dialogue #eng #license-afl-3.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
# Cartman DialoGPT Model
|
{"tags": ["conversational"]}
|
Soapsy/DialoGPT-mid-cartman
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Cartman DialoGPT Model
|
[
"# Cartman DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Cartman DialoGPT Model"
] |
text-generation
| null |
# My Awesome Model
|
{"tags": ["conversational"]}
|
SonMooSans/DialoGPT-small-joshua
| null |
[
"conversational",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#conversational #region-us
|
# My Awesome Model
|
[
"# My Awesome Model"
] |
[
"TAGS\n#conversational #region-us \n",
"# My Awesome Model"
] |
text-generation
|
transformers
|
# My Awesome Model
|
{"tags": ["conversational"]}
|
SonMooSans/test
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# My Awesome Model
|
[
"# My Awesome Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# My Awesome Model"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8549
- Matthews Correlation: 0.5332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
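The card does not include the training script itself; the following is a minimal sketch of how the hyperparameters above might be expressed with the `Trainer` API (the preprocessing and `output_dir` are assumptions, not taken from the card):

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Load GLUE CoLA and tokenize its single "sentence" column.
raw = load_dataset("glue", "cola")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True)

tokenized = raw.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    seed=42,
    lr_scheduler_type="linear",  # Adam betas/epsilon are the Trainer defaults
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via DataCollatorWithPadding
)
trainer.train()
```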
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5213 | 1.0 | 535 | 0.5163 | 0.4183 |
| 0.3479 | 2.0 | 1070 | 0.5351 | 0.5182 |
| 0.231 | 3.0 | 1605 | 0.6271 | 0.5291 |
| 0.166 | 4.0 | 2140 | 0.7531 | 0.5279 |
| 0.1313 | 5.0 | 2675 | 0.8549 | 0.5332 |
### Framework versions
- Transformers 4.10.0.dev0
- Pytorch 1.8.1
- Datasets 1.11.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model_index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metric": {"name": "Matthews Correlation", "type": "matthews_correlation", "value": 0.5332198659134496}}]}]}
|
SongRb/distilbert-base-uncased-finetuned-cola
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-cola
======================================
This model is a fine-tuned version of distilbert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8549
* Matthews Correlation: 0.5332
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.10.0.dev0
* Pytorch 1.8.1
* Datasets 1.11.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.0.dev0\n* Pytorch 1.8.1\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.0.dev0\n* Pytorch 1.8.1\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0746
- Precision: 0.9347
- Recall: 0.9426
- F1: 0.9386
- Accuracy: 0.9851
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0832 | 1.0 | 3511 | 0.0701 | 0.9317 | 0.9249 | 0.9283 | 0.9827 |
| 0.0384 | 2.0 | 7022 | 0.0701 | 0.9282 | 0.9410 | 0.9346 | 0.9845 |
| 0.0222 | 3.0 | 10533 | 0.0746 | 0.9347 | 0.9426 | 0.9386 | 0.9851 |
### Framework versions
- Transformers 4.10.0.dev0
- Pytorch 1.8.1
- Datasets 1.11.0
- Tokenizers 0.10.3
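The card stops at the framework versions; below is a minimal inference sketch, assuming the checkpoint works with the standard token-classification pipeline (the example sentence is illustrative only):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="SongRb/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

for entity in ner("Hugging Face is based in New York City."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```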
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9850826886110537}}]}]}
|
SongRb/distilbert-base-uncased-finetuned-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-ner
=====================================
This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0746
* Precision: 0.9347
* Recall: 0.9426
* F1: 0.9386
* Accuracy: 0.9851
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.10.0.dev0
* Pytorch 1.8.1
* Datasets 1.11.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.0.dev0\n* Pytorch 1.8.1\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.0.dev0\n* Pytorch 1.8.1\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
## Training data
This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
## Training procedure
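The card leaves this section empty; below is a minimal sketch of the second, task-specific distillation step it describes, assuming the standard soft-target formulation (temperature-scaled KL between teacher and student logits mixed with the usual SQuAD cross-entropy). The temperature and mixing weight are illustrative values, not ones reported by the authors.

```python
import torch.nn.functional as F

def kd_term(student_logits, teacher_logits, T):
    # Temperature-scaled KL divergence between student and teacher distributions.
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T ** 2)

def squad_distillation_loss(student_out, teacher_out, ce_loss, T=2.0, alpha=0.5):
    """Mix the supervised SQuAD loss with soft targets from the BERT teacher.

    `student_out`/`teacher_out` are QuestionAnsweringModelOutput objects;
    `ce_loss` is the student's usual start/end cross-entropy (e.g. student_out.loss).
    """
    soft = (kd_term(student_out.start_logits, teacher_out.start_logits, T)
            + kd_term(student_out.end_logits, teacher_out.end_logits, T))
    return alpha * soft + (1.0 - alpha) * ce_loss
```

In a training loop the teacher's forward pass would run under `torch.no_grad()` with its weights frozen, while only the student receives gradient updates.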
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": ["en"], "license": "apache-2.0", "tags": ["question-answering"], "datasets": ["squad"], "metrics": ["squad"], "thumbnail": "https://github.com/karanchahal/distiller/blob/master/distiller.jpg"}
|
Sonny/distilbert-base-uncased-finetuned-squad-d5716d28
| null |
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1910.01108"
] |
[
"en"
] |
TAGS
#transformers #pytorch #distilbert #fill-mask #question-answering #en #dataset-squad #arxiv-1910.01108 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
DistilBERT with a second step of distillation
=============================================
Model description
-----------------
This model replicates the "DistilBERT (D)" model from Table 2 of the DistilBERT paper. In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: 'distilbert-base-uncased'
* Teacher: 'lewtun/bert-base-uncased-finetuned-squad-v1'
Training data
-------------
This model was trained on the SQuAD v1.1 dataset which can be obtained from the 'datasets' library as follows:
Training procedure
------------------
Eval results
------------
DistilBERT paper: Exact Match 79.1, F1 86.9
Ours: Exact Match 78.4, F1 86.5
The scores were calculated using the 'squad' metric from 'datasets'.
### BibTeX entry and citation info
|
[
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #distilbert #fill-mask #question-answering #en #dataset-squad #arxiv-1910.01108 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### BibTeX entry and citation info"
] |
fill-mask
|
transformers
|
This is a test model2.
|
{}
|
Sonny/dummy-model2
| null |
[
"transformers",
"camembert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #camembert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
This is a test model2.
|
[] |
[
"TAGS\n#transformers #camembert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text2text-generation
|
transformers
|
This is the model saved so far, before the training run timed out
|
{}
|
SophieTr/distil-pegasus-reddit
| null |
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #pegasus #text2text-generation #autotrain_compatible #endpoints_compatible #region-us
|
This is the model saved so far, before the training run timed out
|
[] |
[
"TAGS\n#transformers #pytorch #pegasus #text2text-generation #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tune-Pegasus-large
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 11.0526
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.35e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "model-index": [{"name": "fine-tune-Pegasus-large", "results": []}]}
|
SophieTr/fine-tune-Pegasus-large
| null |
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #pegasus #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
# fine-tune-Pegasus-large
This model is a fine-tuned version of google/pegasus-large on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 11.0526
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.35e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# fine-tune-Pegasus-large\n\nThis model is a fine-tuned version of google/pegasus-large on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 11.0526",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 6.35e-05\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.15.0\n- Pytorch 1.10.1\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #pegasus #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"# fine-tune-Pegasus-large\n\nThis model is a fine-tuned version of google/pegasus-large on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 11.0526",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 6.35e-05\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.15.0\n- Pytorch 1.10.1\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [sshleifer/distill-pegasus-xsum-16-4](https://huggingface.co/sshleifer/distill-pegasus-xsum-16-4) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4473
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.2378 | 0.51 | 100 | 7.1853 |
| 7.2309 | 1.01 | 200 | 6.6342 |
| 6.4796 | 1.52 | 300 | 6.3206 |
| 6.2691 | 2.02 | 400 | 6.0184 |
| 5.7382 | 2.53 | 500 | 5.5754 |
| 4.9922 | 3.03 | 600 | 4.5178 |
| 3.6031 | 3.54 | 700 | 2.8579 |
| 2.5203 | 4.04 | 800 | 2.4718 |
| 2.2563 | 4.55 | 900 | 2.4128 |
| 2.1425 | 5.05 | 1000 | 2.3767 |
| 2.004 | 5.56 | 1100 | 2.3982 |
| 2.0437 | 6.06 | 1200 | 2.3787 |
| 1.9407 | 6.57 | 1300 | 2.3952 |
| 1.9194 | 7.07 | 1400 | 2.3964 |
| 1.758 | 7.58 | 1500 | 2.4056 |
| 1.918 | 8.08 | 1600 | 2.4101 |
| 1.9162 | 8.59 | 1700 | 2.4085 |
| 1.8983 | 9.09 | 1800 | 2.4058 |
| 1.6939 | 9.6 | 1900 | 2.4050 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
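The card does not show how to run the model; below is a minimal generation sketch, assuming the checkpoint loads with the standard seq2seq auto classes (the input text and decoding settings are illustrative):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("SophieTr/results")
model = AutoModelForSeq2SeqLM.from_pretrained("SophieTr/results")

text = "Replace this with the long post you want to summarise."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

# Beam search with a short output budget; tune these for your use case.
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```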
|
{"tags": ["generated_from_trainer"], "model-index": [{"name": "results", "results": []}]}
|
SophieTr/results
| null |
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #pegasus #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
results
=======
This model is a fine-tuned version of sshleifer/distill-pegasus-xsum-16-4 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.4473
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 1
* eval\_batch\_size: 1
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.10.0+cu111
* Datasets 1.15.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #pegasus #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
# Naruto DialoGPT Model
|
{"tags": ["conversational"]}
|
Sora4762/DialoGPT-small-naruto
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Naruto DialoGPT Model
|
[
"# Naruto DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Naruto DialoGPT Model"
] |
text-generation
|
transformers
|
# Naruto DialoGPT Model1.1
|
{"tags": ["conversational"]}
|
Sora4762/DialoGPT-small-naruto1.1
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Naruto DialoGPT Model1.1
|
[
"# Naruto DialoGPT Model1.1"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Naruto DialoGPT Model1.1"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT
This model is a fine-tuned version of [Sotireas/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT](https://huggingface.co/Sotireas/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0853
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 21 | 3.8118 |
| No log | 2.0 | 42 | 3.5006 |
| No log | 3.0 | 63 | 3.1242 |
| No log | 4.0 | 84 | 2.9528 |
| No log | 5.0 | 105 | 2.9190 |
| No log | 6.0 | 126 | 2.9876 |
| No log | 7.0 | 147 | 3.0574 |
| No log | 8.0 | 168 | 3.0718 |
| No log | 9.0 | 189 | 3.0426 |
| No log | 10.0 | 210 | 3.0853 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
{"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT", "results": []}]}
|
Sotireas/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #license-mit #endpoints_compatible #region-us
|
BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel\_PubmedBERT
====================================================================================
This model is a fine-tuned version of Sotireas/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel\_PubmedBERT on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 3.0853
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.21.0
* Pytorch 1.12.0+cu113
* Datasets 2.4.0
* Tokenizers 0.12.1
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.21.0\n* Pytorch 1.12.0+cu113\n* Datasets 2.4.0\n* Tokenizers 0.12.1"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #license-mit #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.21.0\n* Pytorch 1.12.0+cu113\n* Datasets 2.4.0\n* Tokenizers 0.12.1"
] |
text-generation
|
transformers
|
# Harry Potter DialoGPT Model
|
{"tags": ["conversational"]}
|
Soumyajit1008/DialoGPT-small-harryPotterssen
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model
|
[
"# Harry Potter DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Model"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1573
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2188 | 1.0 | 5533 | 1.1708 |
| 0.9519 | 2.0 | 11066 | 1.1058 |
| 0.7576 | 3.0 | 16599 | 1.1573 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
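The card ends with the framework versions; below is a minimal inference sketch, assuming the checkpoint works with the standard question-answering pipeline (the question and context are illustrative):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Sourabh714/distilbert-base-uncased-finetuned-squad",
)

result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This checkpoint is distilbert-base-uncased fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```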
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]}
|
Sourabh714/distilbert-base-uncased-finetuned-squad
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-squad
=======================================
This model is a fine-tuned version of distilbert-base-uncased on the squad dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1573
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
null |
transformers
|
### VAE with Pytorch-Lightning
This is inspired by vae-playground. It is an example where we test out vae and conv_vae models with multiple datasets
such as MNIST, CelebA and Fashion-MNIST.
It also comes with an example Streamlit app, deployed on Hugging Face.
## Model Training
You can train the VAE models by using `train.py` and editing the `config.yaml` file. \
Hyperparameters to change are:
- model_type [vae|conv_vae]
- alpha
- hidden_dim
- dataset [celeba|mnist|fashion-mnist]
There are other configurations that can be changed if required like height, width, channels etc. It also contains the pytorch-lightning configs as well.
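As a concrete illustration, a run might be configured like this; the exact keys in `config.yaml` and the values chosen below are assumptions for the sketch, not the repository's actual schema:

```python
import subprocess
import yaml

# Edit the hyperparameters listed above in config.yaml.
with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

cfg["model_type"] = "conv_vae"    # or "vae"
cfg["alpha"] = 1024               # illustrative value
cfg["hidden_dim"] = 128           # illustrative value
cfg["dataset"] = "fashion-mnist"  # or "celeba" / "mnist"

with open("config.yaml", "w") as f:
    yaml.safe_dump(cfg, f)

# Launch training with the edited configuration.
subprocess.run(["python", "train.py"], check=True)
```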
|
{"license": "apache-2.0"}
|
Souranil/VAE
| null |
[
"transformers",
"pytorch",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #license-apache-2.0 #endpoints_compatible #region-us
|
### VAE with Pytorch-Lightning
This is inspired by vae-playground. It is an example where we test out vae and conv_vae models with multiple datasets
such as MNIST, CelebA and Fashion-MNIST.
It also comes with an example Streamlit app, deployed on Hugging Face.
## Model Training
You can train the VAE models by using 'URL' and editing the 'URL' file. \
Hyperparameters to change are:
- model_type [vae|conv_vae]
- alpha
- hidden_dim
- dataset [celeba|mnist|fashion-mnist]
There are other configurations that can be changed if required like height, width, channels etc. It also contains the pytorch-lightning configs as well.
|
[
"### VAE with Pytorch-Lightning\r\n\r\nThis is inspired from vae-playground. This is an example where we test out vae and conv_vae models with multiple datasets \r\nlike MNIST, celeb-a and MNIST-Fashion datasets.\r\n\r\nThis also comes with an example streamlit app & deployed at huggingface.",
"## Model Training\r\n\r\nYou can train the VAE models by using 'URL' and editing the 'URL' file. \\\r\nHyperparameters to change are:\r\n- model_type [vae|conv_vae]\r\n- alpha\r\n- hidden_dim\r\n- dataset [celeba|mnist|fashion-mnist]\r\n\r\nThere are other configurations that can be changed if required like height, width, channels etc. It also contains the pytorch-lightning configs as well."
] |
[
"TAGS\n#transformers #pytorch #license-apache-2.0 #endpoints_compatible #region-us \n",
"### VAE with Pytorch-Lightning\r\n\r\nThis is inspired from vae-playground. This is an example where we test out vae and conv_vae models with multiple datasets \r\nlike MNIST, celeb-a and MNIST-Fashion datasets.\r\n\r\nThis also comes with an example streamlit app & deployed at huggingface.",
"## Model Training\r\n\r\nYou can train the VAE models by using 'URL' and editing the 'URL' file. \\\r\nHyperparameters to change are:\r\n- model_type [vae|conv_vae]\r\n- alpha\r\n- hidden_dim\r\n- dataset [celeba|mnist|fashion-mnist]\r\n\r\nThere are other configurations that can be changed if required like height, width, channels etc. It also contains the pytorch-lightning configs as well."
] |
null | null |
Log FiBER
This model is able to produce sentence embeddings.
|
{}
|
Souvikcmsa/LogFiBER
| null |
[
"pytorch",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#pytorch #region-us
|
Log FiBER
This model is able to produce sentence embeddings.
|
[] |
[
"TAGS\n#pytorch #region-us \n"
] |
text-generation
|
transformers
|
# Gandalf DialoGPT Model
|
{"tags": ["conversational"]}
|
SpacyGalaxy/DialoGPT-medium-Gandalf
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Gandalf DialoGPT Model
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |