modelId (string, 4-81 chars) | tags (list) | pipeline_tag (string, 17 classes) | config (dict) | downloads (int64, 0-59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (string, 51-438k chars)
---|---|---|---|---|---|---|
CLAck/indo-pure
|
[
"pytorch",
"marian",
"text2text-generation",
"en",
"id",
"dataset:ALT",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] |
translation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-bert-blame-concept
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-bert-blame-concept
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7359
- Rmse: 0.6962
- Rmse Blame::a Un concetto astratto o un'emozione: 0.6962
- Mae: 0.5010
- Mae Blame::a Un concetto astratto o un'emozione: 0.5010
- R2: 0.3974
- R2 Blame::a Un concetto astratto o un'emozione: 0.3974
- Cos: 0.3913
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.5507
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
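In the absence of usage details, here is a minimal inference sketch. It is not an official snippet: the repository path is a placeholder, and the single-output regression head is an assumption inferred from the RMSE/MAE/R2 metrics reported above.
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
# Hypothetical checkpoint path: replace with the actual namespace/name of this model.
checkpoint = "<namespace>/predict-perception-bert-blame-concept"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)  # regression head (num_labels=1) assumed
inputs = tokenizer("Testo di esempio da valutare.", return_tensors="pt")
score = model(**inputs).logits.squeeze().item()  # single perception score
print(score)
```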
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` equivalent is sketched after the list):
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
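The list above maps onto a `TrainingArguments` configuration roughly as follows. This is a sketch under stated assumptions, not the original training script: dataset loading is omitted, the regression head (`num_labels=1`) is inferred from the metrics above, and the Adam betas/epsilon and linear scheduler listed are `Trainer` defaults.
```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
base = "dbmdz/bert-base-italian-xxl-cased"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=1)  # assumption: regression head
args = TrainingArguments(
    output_dir="predict-perception-bert-blame-concept",
    learning_rate=1e-5,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=8,
    seed=1996,
    lr_scheduler_type="linear",   # Adam betas=(0.9, 0.999), epsilon=1e-8 are the defaults
    num_train_epochs=30,
    evaluation_strategy="epoch",  # assumption: the results table reports one validation row per epoch
)
# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
# trainer.train()
```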
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Blame::a Un concetto astratto o un'emozione | Mae | Mae Blame::a Un concetto astratto o un'emozione | R2 | R2 Blame::a Un concetto astratto o un'emozione | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------------------------------------------------:|:------:|:-----------------------------------------------:|:-------:|:----------------------------------------------:|:-------:|:----:|:----:|:---------:|:---:|
| 1.0979 | 1.0 | 15 | 1.2387 | 0.9033 | 0.9033 | 0.6603 | 0.6603 | -0.0144 | -0.0144 | 0.0435 | 0.0 | 0.5 | 0.3432 | nan |
| 1.0172 | 2.0 | 30 | 1.1498 | 0.8703 | 0.8703 | 0.5964 | 0.5964 | 0.0584 | 0.0584 | 0.0435 | 0.0 | 0.5 | 0.2935 | nan |
| 0.9879 | 3.0 | 45 | 1.2139 | 0.8942 | 0.8942 | 0.6197 | 0.6197 | 0.0060 | 0.0060 | 0.2174 | 0.0 | 0.5 | 0.4582 | nan |
| 0.9723 | 4.0 | 60 | 1.1152 | 0.8571 | 0.8571 | 0.5982 | 0.5982 | 0.0867 | 0.0867 | 0.2174 | 0.0 | 0.5 | 0.3921 | nan |
| 0.9584 | 5.0 | 75 | 1.0607 | 0.8358 | 0.8358 | 0.5959 | 0.5959 | 0.1314 | 0.1314 | 0.0435 | 0.0 | 0.5 | 0.4165 | nan |
| 0.9023 | 6.0 | 90 | 1.0031 | 0.8128 | 0.8128 | 0.5827 | 0.5827 | 0.1786 | 0.1786 | -0.0435 | 0.0 | 0.5 | 0.3862 | nan |
| 0.8745 | 7.0 | 105 | 0.9715 | 0.7999 | 0.7999 | 0.5796 | 0.5796 | 0.2044 | 0.2044 | 0.3043 | 0.0 | 0.5 | 0.3665 | nan |
| 0.8082 | 8.0 | 120 | 0.8984 | 0.7692 | 0.7692 | 0.5699 | 0.5699 | 0.2643 | 0.2643 | 0.1304 | 0.0 | 0.5 | 0.3390 | nan |
| 0.7475 | 9.0 | 135 | 0.8532 | 0.7497 | 0.7497 | 0.5849 | 0.5849 | 0.3013 | 0.3013 | 0.0435 | 0.0 | 0.5 | 0.3100 | nan |
| 0.6599 | 10.0 | 150 | 0.8737 | 0.7586 | 0.7586 | 0.5822 | 0.5822 | 0.2846 | 0.2846 | 0.3043 | 0.0 | 0.5 | 0.3830 | nan |
| 0.5867 | 11.0 | 165 | 0.8159 | 0.7331 | 0.7331 | 0.5752 | 0.5752 | 0.3318 | 0.3318 | 0.2174 | 0.0 | 0.5 | 0.4439 | nan |
| 0.5081 | 12.0 | 180 | 0.8367 | 0.7424 | 0.7424 | 0.6071 | 0.6071 | 0.3148 | 0.3148 | 0.0435 | 0.0 | 0.5 | 0.3561 | nan |
| 0.4801 | 13.0 | 195 | 0.8353 | 0.7417 | 0.7417 | 0.5567 | 0.5567 | 0.3160 | 0.3160 | 0.3913 | 0.0 | 0.5 | 0.5850 | nan |
| 0.3714 | 14.0 | 210 | 0.8050 | 0.7282 | 0.7282 | 0.5824 | 0.5824 | 0.3408 | 0.3408 | 0.1304 | 0.0 | 0.5 | 0.3975 | nan |
| 0.3306 | 15.0 | 225 | 0.7833 | 0.7183 | 0.7183 | 0.5570 | 0.5570 | 0.3585 | 0.3585 | 0.2174 | 0.0 | 0.5 | 0.4604 | nan |
| 0.2674 | 16.0 | 240 | 0.8148 | 0.7326 | 0.7326 | 0.5475 | 0.5475 | 0.3328 | 0.3328 | 0.3043 | 0.0 | 0.5 | 0.4891 | nan |
| 0.2129 | 17.0 | 255 | 0.8715 | 0.7576 | 0.7576 | 0.5537 | 0.5537 | 0.2863 | 0.2863 | 0.4783 | 0.0 | 0.5 | 0.5017 | nan |
| 0.1924 | 18.0 | 270 | 0.7944 | 0.7234 | 0.7234 | 0.5276 | 0.5276 | 0.3495 | 0.3495 | 0.4783 | 0.0 | 0.5 | 0.5797 | nan |
| 0.1984 | 19.0 | 285 | 0.7885 | 0.7207 | 0.7207 | 0.5208 | 0.5208 | 0.3543 | 0.3543 | 0.3913 | 0.0 | 0.5 | 0.5507 | nan |
| 0.1623 | 20.0 | 300 | 0.7682 | 0.7113 | 0.7113 | 0.5132 | 0.5132 | 0.3709 | 0.3709 | 0.4783 | 0.0 | 0.5 | 0.5797 | nan |
| 0.1409 | 21.0 | 315 | 0.7653 | 0.7100 | 0.7100 | 0.5215 | 0.5215 | 0.3733 | 0.3733 | 0.3043 | 0.0 | 0.5 | 0.5415 | nan |
| 0.1386 | 22.0 | 330 | 0.7688 | 0.7116 | 0.7116 | 0.5124 | 0.5124 | 0.3704 | 0.3704 | 0.3913 | 0.0 | 0.5 | 0.5507 | nan |
| 0.123 | 23.0 | 345 | 0.7756 | 0.7148 | 0.7148 | 0.5144 | 0.5144 | 0.3648 | 0.3648 | 0.3913 | 0.0 | 0.5 | 0.5507 | nan |
| 0.1175 | 24.0 | 360 | 0.7423 | 0.6993 | 0.6993 | 0.5015 | 0.5015 | 0.3921 | 0.3921 | 0.3913 | 0.0 | 0.5 | 0.5507 | nan |
| 0.1188 | 25.0 | 375 | 0.7255 | 0.6913 | 0.6913 | 0.5063 | 0.5063 | 0.4059 | 0.4059 | 0.2174 | 0.0 | 0.5 | 0.4604 | nan |
| 0.1155 | 26.0 | 390 | 0.7635 | 0.7091 | 0.7091 | 0.5083 | 0.5083 | 0.3748 | 0.3748 | 0.4783 | 0.0 | 0.5 | 0.5797 | nan |
| 0.0981 | 27.0 | 405 | 0.7128 | 0.6852 | 0.6852 | 0.5020 | 0.5020 | 0.4163 | 0.4163 | 0.3043 | 0.0 | 0.5 | 0.5415 | nan |
| 0.1109 | 28.0 | 420 | 0.7430 | 0.6996 | 0.6996 | 0.5023 | 0.5023 | 0.3915 | 0.3915 | 0.3913 | 0.0 | 0.5 | 0.5507 | nan |
| 0.1081 | 29.0 | 435 | 0.7367 | 0.6966 | 0.6966 | 0.5007 | 0.5007 | 0.3967 | 0.3967 | 0.3913 | 0.0 | 0.5 | 0.5507 | nan |
| 0.0953 | 30.0 | 450 | 0.7359 | 0.6962 | 0.6962 | 0.5010 | 0.5010 | 0.3974 | 0.3974 | 0.3913 | 0.0 | 0.5 | 0.5507 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
CLAck/vi-en
|
[
"pytorch",
"marian",
"text2text-generation",
"en",
"vi",
"dataset:ALT",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] |
translation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | null |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-bert-blame-none
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-bert-blame-none
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8646
- Rmse: 1.1072
- Rmse Blame::a Nessuno: 1.1072
- Mae: 0.8721
- Mae Blame::a Nessuno: 0.8721
- R2: 0.3083
- R2 Blame::a Nessuno: 0.3083
- Cos: 0.5652
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.5070
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Blame::a Nessuno | Mae | Mae Blame::a Nessuno | R2 | R2 Blame::a Nessuno | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------------------:|:------:|:--------------------:|:-------:|:-------------------:|:-------:|:----:|:----:|:---------:|:---:|
| 1.007 | 1.0 | 15 | 1.2585 | 1.3358 | 1.3358 | 1.1752 | 1.1752 | -0.0068 | -0.0068 | -0.0435 | 0.0 | 0.5 | 0.2970 | nan |
| 0.927 | 2.0 | 30 | 1.1310 | 1.2663 | 1.2663 | 1.0633 | 1.0633 | 0.0952 | 0.0952 | 0.4783 | 0.0 | 0.5 | 0.4012 | nan |
| 0.8376 | 3.0 | 45 | 1.0603 | 1.2261 | 1.2261 | 1.0574 | 1.0574 | 0.1518 | 0.1518 | 0.1304 | 0.0 | 0.5 | 0.2970 | nan |
| 0.7154 | 4.0 | 60 | 0.8347 | 1.0879 | 1.0879 | 0.8854 | 0.8854 | 0.3323 | 0.3323 | 0.6522 | 0.0 | 0.5 | 0.5209 | nan |
| 0.5766 | 5.0 | 75 | 0.7426 | 1.0261 | 1.0261 | 0.8340 | 0.8340 | 0.4059 | 0.4059 | 0.6522 | 0.0 | 0.5 | 0.5209 | nan |
| 0.4632 | 6.0 | 90 | 0.6671 | 0.9725 | 0.9725 | 0.7932 | 0.7932 | 0.4663 | 0.4663 | 0.6522 | 0.0 | 0.5 | 0.5209 | nan |
| 0.3854 | 7.0 | 105 | 0.6447 | 0.9561 | 0.9561 | 0.7424 | 0.7424 | 0.4842 | 0.4842 | 0.6522 | 0.0 | 0.5 | 0.4307 | nan |
| 0.3154 | 8.0 | 120 | 0.7198 | 1.0102 | 1.0102 | 0.8113 | 0.8113 | 0.4241 | 0.4241 | 0.6522 | 0.0 | 0.5 | 0.4307 | nan |
| 0.2637 | 9.0 | 135 | 0.7221 | 1.0118 | 1.0118 | 0.8319 | 0.8319 | 0.4223 | 0.4223 | 0.5652 | 0.0 | 0.5 | 0.4150 | nan |
| 0.1962 | 10.0 | 150 | 0.6999 | 0.9962 | 0.9962 | 0.7945 | 0.7945 | 0.4401 | 0.4401 | 0.4783 | 0.0 | 0.5 | 0.4056 | nan |
| 0.1784 | 11.0 | 165 | 0.7335 | 1.0198 | 1.0198 | 0.7969 | 0.7969 | 0.4132 | 0.4132 | 0.5652 | 0.0 | 0.5 | 0.4150 | nan |
| 0.1531 | 12.0 | 180 | 0.8277 | 1.0833 | 1.0833 | 0.8839 | 0.8839 | 0.3378 | 0.3378 | 0.4783 | 0.0 | 0.5 | 0.4440 | nan |
| 0.1425 | 13.0 | 195 | 0.8644 | 1.1070 | 1.1070 | 0.8726 | 0.8726 | 0.3085 | 0.3085 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.0921 | 14.0 | 210 | 0.8874 | 1.1217 | 1.1217 | 0.9024 | 0.9024 | 0.2900 | 0.2900 | 0.4783 | 0.0 | 0.5 | 0.4440 | nan |
| 0.0913 | 15.0 | 225 | 0.8663 | 1.1083 | 1.1083 | 0.8914 | 0.8914 | 0.3070 | 0.3070 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.08 | 16.0 | 240 | 0.8678 | 1.1093 | 1.1093 | 0.8762 | 0.8762 | 0.3057 | 0.3057 | 0.6522 | 0.0 | 0.5 | 0.5931 | nan |
| 0.0725 | 17.0 | 255 | 0.8497 | 1.0976 | 1.0976 | 0.8868 | 0.8868 | 0.3202 | 0.3202 | 0.4783 | 0.0 | 0.5 | 0.4440 | nan |
| 0.0696 | 18.0 | 270 | 0.8533 | 1.1000 | 1.1000 | 0.8796 | 0.8796 | 0.3173 | 0.3173 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.0632 | 19.0 | 285 | 0.8563 | 1.1018 | 1.1018 | 0.8768 | 0.8768 | 0.3150 | 0.3150 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.0511 | 20.0 | 300 | 0.8433 | 1.0935 | 1.0935 | 0.8684 | 0.8684 | 0.3254 | 0.3254 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.0517 | 21.0 | 315 | 0.8449 | 1.0945 | 1.0945 | 0.8758 | 0.8758 | 0.3240 | 0.3240 | 0.4783 | 0.0 | 0.5 | 0.4440 | nan |
| 0.0556 | 22.0 | 330 | 0.8305 | 1.0851 | 1.0851 | 0.8469 | 0.8469 | 0.3356 | 0.3356 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.0457 | 23.0 | 345 | 0.8369 | 1.0893 | 1.0893 | 0.8555 | 0.8555 | 0.3305 | 0.3305 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.0496 | 24.0 | 360 | 0.8441 | 1.0940 | 1.0940 | 0.8648 | 0.8648 | 0.3247 | 0.3247 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.0467 | 25.0 | 375 | 0.8470 | 1.0959 | 1.0959 | 0.8633 | 0.8633 | 0.3224 | 0.3224 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.0446 | 26.0 | 390 | 0.8562 | 1.1018 | 1.1018 | 0.8708 | 0.8708 | 0.3151 | 0.3151 | 0.4783 | 0.0 | 0.5 | 0.4440 | nan |
| 0.0476 | 27.0 | 405 | 0.8600 | 1.1042 | 1.1042 | 0.8714 | 0.8714 | 0.3120 | 0.3120 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.042 | 28.0 | 420 | 0.8657 | 1.1079 | 1.1079 | 0.8763 | 0.8763 | 0.3074 | 0.3074 | 0.4783 | 0.0 | 0.5 | 0.4440 | nan |
| 0.0431 | 29.0 | 435 | 0.8654 | 1.1077 | 1.1077 | 0.8734 | 0.8734 | 0.3077 | 0.3077 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.0423 | 30.0 | 450 | 0.8646 | 1.1072 | 1.1072 | 0.8721 | 0.8721 | 0.3083 | 0.3083 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
CLEE/CLEE
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-bert-cause-human
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-bert-cause-human
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7139
- Rmse: 1.2259
- Rmse Cause::a Causata da un essere umano: 1.2259
- Mae: 1.0480
- Mae Cause::a Causata da un essere umano: 1.0480
- R2: 0.4563
- R2 Cause::a Causata da un essere umano: 0.4563
- Cos: 0.4783
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.3953
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Cause::a Causata da un essere umano | Mae | Mae Cause::a Causata da un essere umano | R2 | R2 Cause::a Causata da un essere umano | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:----------------------------------------:|:------:|:---------------------------------------:|:------:|:--------------------------------------:|:------:|:----:|:----:|:---------:|:---:|
| 1.0874 | 1.0 | 15 | 1.2615 | 1.6296 | 1.6296 | 1.3836 | 1.3836 | 0.0393 | 0.0393 | 0.0435 | 0.0 | 0.5 | 0.2935 | nan |
| 0.9577 | 2.0 | 30 | 1.1988 | 1.5886 | 1.5886 | 1.3017 | 1.3017 | 0.0870 | 0.0870 | 0.4783 | 0.0 | 0.5 | 0.3944 | nan |
| 0.8414 | 3.0 | 45 | 0.9870 | 1.4414 | 1.4414 | 1.1963 | 1.1963 | 0.2483 | 0.2483 | 0.3913 | 0.0 | 0.5 | 0.3048 | nan |
| 0.7291 | 4.0 | 60 | 0.9098 | 1.3839 | 1.3839 | 1.1297 | 1.1297 | 0.3071 | 0.3071 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.5949 | 5.0 | 75 | 0.9207 | 1.3921 | 1.3921 | 1.2079 | 1.2079 | 0.2988 | 0.2988 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.4938 | 6.0 | 90 | 0.8591 | 1.3448 | 1.3448 | 1.1842 | 1.1842 | 0.3458 | 0.3458 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.3611 | 7.0 | 105 | 0.8176 | 1.3119 | 1.3119 | 1.1454 | 1.1454 | 0.3774 | 0.3774 | 0.5652 | 0.0 | 0.5 | 0.4091 | nan |
| 0.2663 | 8.0 | 120 | 0.6879 | 1.2034 | 1.2034 | 1.0300 | 1.0300 | 0.4761 | 0.4761 | 0.5652 | 0.0 | 0.5 | 0.4091 | nan |
| 0.1833 | 9.0 | 135 | 0.7704 | 1.2735 | 1.2735 | 1.1031 | 1.1031 | 0.4133 | 0.4133 | 0.5652 | 0.0 | 0.5 | 0.3152 | nan |
| 0.1704 | 10.0 | 150 | 0.7097 | 1.2222 | 1.2222 | 1.0382 | 1.0382 | 0.4596 | 0.4596 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.1219 | 11.0 | 165 | 0.6872 | 1.2027 | 1.2027 | 1.0198 | 1.0198 | 0.4767 | 0.4767 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.1011 | 12.0 | 180 | 0.7201 | 1.2312 | 1.2312 | 1.0466 | 1.0466 | 0.4516 | 0.4516 | 0.5652 | 0.0 | 0.5 | 0.3152 | nan |
| 0.0849 | 13.0 | 195 | 0.7267 | 1.2368 | 1.2368 | 1.0454 | 1.0454 | 0.4466 | 0.4466 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.0818 | 14.0 | 210 | 0.7361 | 1.2448 | 1.2448 | 1.0565 | 1.0565 | 0.4394 | 0.4394 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.0634 | 15.0 | 225 | 0.7158 | 1.2275 | 1.2275 | 1.0384 | 1.0384 | 0.4549 | 0.4549 | 0.3913 | 0.0 | 0.5 | 0.3306 | nan |
| 0.065 | 16.0 | 240 | 0.7394 | 1.2475 | 1.2475 | 1.0659 | 1.0659 | 0.4369 | 0.4369 | 0.3913 | 0.0 | 0.5 | 0.3306 | nan |
| 0.0541 | 17.0 | 255 | 0.7642 | 1.2683 | 1.2683 | 1.0496 | 1.0496 | 0.4181 | 0.4181 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.0577 | 18.0 | 270 | 0.7137 | 1.2257 | 1.2257 | 1.0303 | 1.0303 | 0.4565 | 0.4565 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.0474 | 19.0 | 285 | 0.7393 | 1.2475 | 1.2475 | 1.0447 | 1.0447 | 0.4370 | 0.4370 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.0494 | 20.0 | 300 | 0.7157 | 1.2274 | 1.2274 | 1.0453 | 1.0453 | 0.4550 | 0.4550 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.0434 | 21.0 | 315 | 0.7248 | 1.2352 | 1.2352 | 1.0462 | 1.0462 | 0.4480 | 0.4480 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.049 | 22.0 | 330 | 0.7384 | 1.2467 | 1.2467 | 1.0613 | 1.0613 | 0.4377 | 0.4377 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.0405 | 23.0 | 345 | 0.7420 | 1.2498 | 1.2498 | 1.0653 | 1.0653 | 0.4349 | 0.4349 | 0.3913 | 0.0 | 0.5 | 0.3306 | nan |
| 0.0398 | 24.0 | 360 | 0.7355 | 1.2442 | 1.2442 | 1.0620 | 1.0620 | 0.4399 | 0.4399 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.0398 | 25.0 | 375 | 0.7570 | 1.2623 | 1.2623 | 1.0698 | 1.0698 | 0.4235 | 0.4235 | 0.3913 | 0.0 | 0.5 | 0.3306 | nan |
| 0.0345 | 26.0 | 390 | 0.7359 | 1.2446 | 1.2446 | 1.0610 | 1.0610 | 0.4396 | 0.4396 | 0.5652 | 0.0 | 0.5 | 0.3152 | nan |
| 0.0345 | 27.0 | 405 | 0.7417 | 1.2495 | 1.2495 | 1.0660 | 1.0660 | 0.4352 | 0.4352 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.0386 | 28.0 | 420 | 0.7215 | 1.2323 | 1.2323 | 1.0514 | 1.0514 | 0.4506 | 0.4506 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.0372 | 29.0 | 435 | 0.7140 | 1.2260 | 1.2260 | 1.0477 | 1.0477 | 0.4562 | 0.4562 | 0.5652 | 0.0 | 0.5 | 0.4091 | nan |
| 0.0407 | 30.0 | 450 | 0.7139 | 1.2259 | 1.2259 | 1.0480 | 1.0480 | 0.4563 | 0.4563 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
CLTL/MedRoBERTa.nl
|
[
"pytorch",
"roberta",
"fill-mask",
"nl",
"transformers",
"license:mit",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2,988 | null |
---
license: mit
---
# PyAutoCode: GPT-2-based Python auto-code
PyAutoCode is a cut-down Python auto-suggestion model built on **GPT-2** *(motivation: GPyT)*. This baby model *(trained for only 3 epochs)* is not **"fine-tuned"** yet, so I highly recommend not using it in a production environment or incorporating PyAutoCode into any of your projects. It has been trained on **112GB** of Python data sourced from the best crowdsourcing platform ever -- **GitHub**.
*NOTE: Further training and fine-tuning would be highly appreciated, and I firmly believe it would significantly improve PyAutoCode's ability.*
## Some Model Features
- Built on *GPT-2*
- Tokenized with *ByteLevelBPETokenizer*
- Data Sourced from *GitHub (almost 5 consecutive days of latest Python repositories)*
- Makes use of *GPT2LMHeadModel* and *DataCollatorForLanguageModeling* for training
- Newline characters are custom coded as `<N>`
## Get a Glimpse of the Model
You can make use of the Hugging Face **Inference API** *(present in the right sidebar)* to load the model and check the result. Just enter any code snippet as input, something like:
```sh
for i in range(
```
## Usage
You can use my model too! Here's a quick tour of how to achieve this:
Install transformers
```sh
$ pip install transformers
```
Call the API and get it to work!
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("P0intMaN/PyAutoCode")
model = AutoModelForCausalLM.from_pretrained("P0intMaN/PyAutoCode")
# input: single line or multi-line. Highly recommended to use doc-strings.
inp = """import pandas"""
format_inp = inp.replace('\n', "<N>")
tokenize_inp = tokenizer.encode(format_inp, return_tensors='pt')
result = model.generate(tokenize_inp)
decode_result = tokenizer.decode(result[0])
format_result = decode_result.replace('<N>', "\n")
# printing the result
print(format_result)
```
Upon successful execution, the above should probably produce *(your results may vary when this model is fine-tuned)*
```sh
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
```
## Credits
##### *Developed as a part of a university project by [Pratheek U](https://www.github.com/P0intMaN) and [Sourav Singh](https://github.com/Sourav11902312lpu)*
|
CLTL/gm-ner-xlmrbase
|
[
"pytorch",
"tf",
"xlm-roberta",
"token-classification",
"nl",
"transformers",
"dighum",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | null |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-bert-cause-object
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-bert-cause-object
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4120
- Rmse: 1.0345
- Rmse Cause::a Causata da un oggetto (es. una pistola): 1.0345
- Mae: 0.6181
- Mae Cause::a Causata da un oggetto (es. una pistola): 0.6181
- R2: 0.3837
- R2 Cause::a Causata da un oggetto (es. una pistola): 0.3837
- Cos: 0.9130
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.8986
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Cause::a Causata da un oggetto (es. una pistola) | Mae | Mae Cause::a Causata da un oggetto (es. una pistola) | R2 | R2 Cause::a Causata da un oggetto (es. una pistola) | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-----------------------------------------------------:|:------:|:----------------------------------------------------:|:-------:|:---------------------------------------------------:|:------:|:----:|:----:|:---------:|:---:|
| 1.0824 | 1.0 | 15 | 0.6651 | 1.3143 | 1.3143 | 1.0930 | 1.0930 | 0.0052 | 0.0052 | 0.3043 | 0.0 | 0.5 | 0.4393 | nan |
| 0.9574 | 2.0 | 30 | 0.7088 | 1.3568 | 1.3568 | 1.1945 | 1.1945 | -0.0601 | -0.0601 | 0.0435 | 0.0 | 0.5 | 0.3380 | nan |
| 0.8151 | 3.0 | 45 | 0.6300 | 1.2791 | 1.2791 | 1.0206 | 1.0206 | 0.0577 | 0.0577 | 0.3043 | 0.0 | 0.5 | 0.3613 | nan |
| 0.6401 | 4.0 | 60 | 0.4871 | 1.1247 | 1.1247 | 0.7285 | 0.7285 | 0.2715 | 0.2715 | 0.5652 | 0.0 | 0.5 | 0.6424 | nan |
| 0.448 | 5.0 | 75 | 0.5005 | 1.1401 | 1.1401 | 0.7216 | 0.7216 | 0.2514 | 0.2514 | 0.4783 | 0.0 | 0.5 | 0.6077 | nan |
| 0.2893 | 6.0 | 90 | 0.4761 | 1.1119 | 1.1119 | 0.7237 | 0.7237 | 0.2879 | 0.2879 | 0.5652 | 0.0 | 0.5 | 0.6348 | nan |
| 0.174 | 7.0 | 105 | 0.4771 | 1.1131 | 1.1131 | 0.6836 | 0.6836 | 0.2865 | 0.2865 | 0.6522 | 0.0 | 0.5 | 0.6785 | nan |
| 0.1383 | 8.0 | 120 | 0.4313 | 1.0583 | 1.0583 | 0.6462 | 0.6462 | 0.3550 | 0.3550 | 0.8261 | 0.0 | 0.5 | 0.7586 | nan |
| 0.1105 | 9.0 | 135 | 0.4660 | 1.1001 | 1.1001 | 0.6737 | 0.6737 | 0.3030 | 0.3030 | 0.8261 | 0.0 | 0.5 | 0.7586 | nan |
| 0.0903 | 10.0 | 150 | 0.4866 | 1.1241 | 1.1241 | 0.7192 | 0.7192 | 0.2723 | 0.2723 | 0.7391 | 0.0 | 0.5 | 0.6833 | nan |
| 0.0571 | 11.0 | 165 | 0.4361 | 1.0642 | 1.0642 | 0.6130 | 0.6130 | 0.3478 | 0.3478 | 0.8261 | 0.0 | 0.5 | 0.7586 | nan |
| 0.0623 | 12.0 | 180 | 0.4578 | 1.0904 | 1.0904 | 0.6844 | 0.6844 | 0.3152 | 0.3152 | 0.6522 | 0.0 | 0.5 | 0.6785 | nan |
| 0.0526 | 13.0 | 195 | 0.4605 | 1.0936 | 1.0936 | 0.6697 | 0.6697 | 0.3112 | 0.3112 | 0.6522 | 0.0 | 0.5 | 0.6785 | nan |
| 0.0472 | 14.0 | 210 | 0.4440 | 1.0738 | 1.0738 | 0.6589 | 0.6589 | 0.3360 | 0.3360 | 0.7391 | 0.0 | 0.5 | 0.7327 | nan |
| 0.0492 | 15.0 | 225 | 0.4593 | 1.0922 | 1.0922 | 0.6812 | 0.6812 | 0.3130 | 0.3130 | 0.7391 | 0.0 | 0.5 | 0.6833 | nan |
| 0.0389 | 16.0 | 240 | 0.4195 | 1.0437 | 1.0437 | 0.6252 | 0.6252 | 0.3726 | 0.3726 | 0.8261 | 0.0 | 0.5 | 0.7586 | nan |
| 0.0396 | 17.0 | 255 | 0.4087 | 1.0302 | 1.0302 | 0.6119 | 0.6119 | 0.3888 | 0.3888 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan |
| 0.0328 | 18.0 | 270 | 0.4274 | 1.0535 | 1.0535 | 0.6457 | 0.6457 | 0.3608 | 0.3608 | 0.8261 | 0.0 | 0.5 | 0.7431 | nan |
| 0.0345 | 19.0 | 285 | 0.4306 | 1.0574 | 1.0574 | 0.6576 | 0.6576 | 0.3560 | 0.3560 | 0.8261 | 0.0 | 0.5 | 0.7431 | nan |
| 0.0328 | 20.0 | 300 | 0.4067 | 1.0277 | 1.0277 | 0.6160 | 0.6160 | 0.3918 | 0.3918 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan |
| 0.0344 | 21.0 | 315 | 0.4056 | 1.0263 | 1.0263 | 0.5948 | 0.5948 | 0.3934 | 0.3934 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan |
| 0.0312 | 22.0 | 330 | 0.4236 | 1.0488 | 1.0488 | 0.6277 | 0.6277 | 0.3665 | 0.3665 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan |
| 0.0241 | 23.0 | 345 | 0.4272 | 1.0533 | 1.0533 | 0.6444 | 0.6444 | 0.3610 | 0.3610 | 0.8261 | 0.0 | 0.5 | 0.7431 | nan |
| 0.0302 | 24.0 | 360 | 0.4046 | 1.0250 | 1.0250 | 0.6030 | 0.6030 | 0.3949 | 0.3949 | 0.8261 | 0.0 | 0.5 | 0.7586 | nan |
| 0.0244 | 25.0 | 375 | 0.4194 | 1.0436 | 1.0436 | 0.6320 | 0.6320 | 0.3728 | 0.3728 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan |
| 0.0259 | 26.0 | 390 | 0.4025 | 1.0224 | 1.0224 | 0.6009 | 0.6009 | 0.3980 | 0.3980 | 0.8261 | 0.0 | 0.5 | 0.7586 | nan |
| 0.0265 | 27.0 | 405 | 0.4103 | 1.0323 | 1.0323 | 0.6180 | 0.6180 | 0.3863 | 0.3863 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan |
| 0.0184 | 28.0 | 420 | 0.4059 | 1.0268 | 1.0268 | 0.6046 | 0.6046 | 0.3929 | 0.3929 | 0.8261 | 0.0 | 0.5 | 0.7586 | nan |
| 0.0257 | 29.0 | 435 | 0.4088 | 1.0304 | 1.0304 | 0.6122 | 0.6122 | 0.3885 | 0.3885 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan |
| 0.0262 | 30.0 | 450 | 0.4120 | 1.0345 | 1.0345 | 0.6181 | 0.6181 | 0.3837 | 0.3837 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
CLTL/icf-domains
|
[
"pytorch",
"roberta",
"nl",
"transformers",
"license:mit",
"text-classification"
] |
text-classification
|
{
"architectures": [
"RobertaForMultiLabelSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 35 | null |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-bert-cause-concept
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-bert-cause-concept
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4044
- Rmse: 0.6076
- Rmse Cause::a Causata da un concetto astratto (es. gelosia): 0.6076
- Mae: 0.4548
- Mae Cause::a Causata da un concetto astratto (es. gelosia): 0.4548
- R2: 0.5463
- R2 Cause::a Causata da un concetto astratto (es. gelosia): 0.5463
- Cos: 0.2174
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.3931
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Cause::a Causata da un concetto astratto (es. gelosia) | Mae | Mae Cause::a Causata da un concetto astratto (es. gelosia) | R2 | R2 Cause::a Causata da un concetto astratto (es. gelosia) | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-----------------------------------------------------------:|:------:|:----------------------------------------------------------:|:-------:|:---------------------------------------------------------:|:------:|:----:|:----:|:---------:|:---:|
| 1.08 | 1.0 | 15 | 0.9520 | 0.9323 | 0.9323 | 0.6560 | 0.6560 | -0.0680 | -0.0680 | 0.0435 | 0.0 | 0.5 | 0.3188 | nan |
| 0.9974 | 2.0 | 30 | 0.8621 | 0.8872 | 0.8872 | 0.5962 | 0.5962 | 0.0328 | 0.0328 | 0.1304 | 0.0 | 0.5 | 0.4066 | nan |
| 0.9337 | 3.0 | 45 | 0.9223 | 0.9176 | 0.9176 | 0.6608 | 0.6608 | -0.0347 | -0.0347 | 0.2174 | 0.0 | 0.5 | 0.3632 | nan |
| 0.966 | 4.0 | 60 | 0.8273 | 0.8691 | 0.8691 | 0.5874 | 0.5874 | 0.0719 | 0.0719 | 0.2174 | 0.0 | 0.5 | 0.3754 | nan |
| 0.8683 | 5.0 | 75 | 0.8741 | 0.8933 | 0.8933 | 0.6136 | 0.6136 | 0.0193 | 0.0193 | 0.2174 | 0.0 | 0.5 | 0.3529 | nan |
| 0.8522 | 6.0 | 90 | 0.7781 | 0.8428 | 0.8428 | 0.5732 | 0.5732 | 0.1271 | 0.1271 | 0.2174 | 0.0 | 0.5 | 0.4152 | nan |
| 0.7968 | 7.0 | 105 | 0.7257 | 0.8139 | 0.8139 | 0.5519 | 0.5519 | 0.1859 | 0.1859 | 0.2174 | 0.0 | 0.5 | 0.4152 | nan |
| 0.7166 | 8.0 | 120 | 0.7122 | 0.8064 | 0.8064 | 0.5792 | 0.5792 | 0.2010 | 0.2010 | 0.1304 | 0.0 | 0.5 | 0.3955 | nan |
| 0.6246 | 9.0 | 135 | 0.6771 | 0.7862 | 0.7862 | 0.5701 | 0.5701 | 0.2403 | 0.2403 | 0.0435 | 0.0 | 0.5 | 0.3955 | nan |
| 0.5205 | 10.0 | 150 | 0.6704 | 0.7823 | 0.7823 | 0.5735 | 0.5735 | 0.2479 | 0.2479 | 0.3913 | 0.0 | 0.5 | 0.4847 | nan |
| 0.4182 | 11.0 | 165 | 0.6852 | 0.7909 | 0.7909 | 0.5987 | 0.5987 | 0.2313 | 0.2313 | 0.3913 | 0.0 | 0.5 | 0.4847 | nan |
| 0.3984 | 12.0 | 180 | 0.6106 | 0.7466 | 0.7466 | 0.5696 | 0.5696 | 0.3150 | 0.3150 | 0.0435 | 0.0 | 0.5 | 0.2935 | nan |
| 0.3138 | 13.0 | 195 | 0.5867 | 0.7318 | 0.7318 | 0.5209 | 0.5209 | 0.3418 | 0.3418 | 0.2174 | 0.0 | 0.5 | 0.3119 | nan |
| 0.2323 | 14.0 | 210 | 0.5120 | 0.6837 | 0.6837 | 0.5007 | 0.5007 | 0.4256 | 0.4256 | 0.3043 | 0.0 | 0.5 | 0.3849 | nan |
| 0.2149 | 15.0 | 225 | 0.4789 | 0.6612 | 0.6612 | 0.4883 | 0.4883 | 0.4627 | 0.4627 | 0.3043 | 0.0 | 0.5 | 0.3849 | nan |
| 0.1753 | 16.0 | 240 | 0.4526 | 0.6428 | 0.6428 | 0.4775 | 0.4775 | 0.4922 | 0.4922 | 0.3043 | 0.0 | 0.5 | 0.3849 | nan |
| 0.1478 | 17.0 | 255 | 0.4383 | 0.6325 | 0.6325 | 0.4616 | 0.4616 | 0.5083 | 0.5083 | 0.2174 | 0.0 | 0.5 | 0.3931 | nan |
| 0.1289 | 18.0 | 270 | 0.4141 | 0.6148 | 0.6148 | 0.4478 | 0.4478 | 0.5355 | 0.5355 | 0.3043 | 0.0 | 0.5 | 0.3849 | nan |
| 0.1035 | 19.0 | 285 | 0.3952 | 0.6007 | 0.6007 | 0.4407 | 0.4407 | 0.5566 | 0.5566 | 0.3043 | 0.0 | 0.5 | 0.3849 | nan |
| 0.1087 | 20.0 | 300 | 0.4217 | 0.6205 | 0.6205 | 0.4505 | 0.4505 | 0.5269 | 0.5269 | 0.2174 | 0.0 | 0.5 | 0.3931 | nan |
| 0.1005 | 21.0 | 315 | 0.4065 | 0.6091 | 0.6091 | 0.4508 | 0.4508 | 0.5440 | 0.5440 | 0.2174 | 0.0 | 0.5 | 0.3931 | nan |
| 0.0868 | 22.0 | 330 | 0.3937 | 0.5995 | 0.5995 | 0.4470 | 0.4470 | 0.5584 | 0.5584 | 0.3043 | 0.0 | 0.5 | 0.3849 | nan |
| 0.0808 | 23.0 | 345 | 0.4132 | 0.6142 | 0.6142 | 0.4617 | 0.4617 | 0.5364 | 0.5364 | 0.2174 | 0.0 | 0.5 | 0.3931 | nan |
| 0.0737 | 24.0 | 360 | 0.4214 | 0.6203 | 0.6203 | 0.4659 | 0.4659 | 0.5272 | 0.5272 | 0.3043 | 0.0 | 0.5 | 0.4066 | nan |
| 0.0711 | 25.0 | 375 | 0.3863 | 0.5939 | 0.5939 | 0.4470 | 0.4470 | 0.5666 | 0.5666 | 0.3043 | 0.0 | 0.5 | 0.3849 | nan |
| 0.066 | 26.0 | 390 | 0.4353 | 0.6304 | 0.6304 | 0.4760 | 0.4760 | 0.5117 | 0.5117 | 0.2174 | 0.0 | 0.5 | 0.3931 | nan |
| 0.0681 | 27.0 | 405 | 0.4078 | 0.6101 | 0.6101 | 0.4612 | 0.4612 | 0.5426 | 0.5426 | 0.2174 | 0.0 | 0.5 | 0.3931 | nan |
| 0.0543 | 28.0 | 420 | 0.4118 | 0.6132 | 0.6132 | 0.4616 | 0.4616 | 0.5380 | 0.5380 | 0.2174 | 0.0 | 0.5 | 0.3931 | nan |
| 0.069 | 29.0 | 435 | 0.4041 | 0.6074 | 0.6074 | 0.4551 | 0.4551 | 0.5466 | 0.5466 | 0.2174 | 0.0 | 0.5 | 0.3931 | nan |
| 0.0604 | 30.0 | 450 | 0.4044 | 0.6076 | 0.6076 | 0.4548 | 0.4548 | 0.5463 | 0.5463 | 0.2174 | 0.0 | 0.5 | 0.3931 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
CLTL/icf-levels-adm
|
[
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 33 | null |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-bert-cause-none
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-bert-cause-none
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6269
- Rmse: 1.2763
- Rmse Cause::a Spontanea, priva di un agente scatenante: 1.2763
- Mae: 1.0431
- Mae Cause::a Spontanea, priva di un agente scatenante: 1.0431
- R2: -1.4329
- R2 Cause::a Spontanea, priva di un agente scatenante: -1.4329
- Cos: -0.3913
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.3371
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Cause::a Spontanea, priva di un agente scatenante | Mae | Mae Cause::a Spontanea, priva di un agente scatenante | R2 | R2 Cause::a Spontanea, priva di un agente scatenante | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------------------------------------------------------:|:------:|:-----------------------------------------------------:|:-------:|:----------------------------------------------------:|:-------:|:----:|:----:|:---------:|:---:|
| 0.994 | 1.0 | 15 | 0.7156 | 0.8465 | 0.8465 | 0.7809 | 0.7809 | -0.0701 | -0.0701 | -0.1304 | 0.0 | 0.5 | 0.2971 | nan |
| 0.9757 | 2.0 | 30 | 0.7096 | 0.8429 | 0.8429 | 0.7666 | 0.7666 | -0.0611 | -0.0611 | 0.0435 | 0.0 | 0.5 | 0.2515 | nan |
| 1.0086 | 3.0 | 45 | 0.7779 | 0.8825 | 0.8825 | 0.7981 | 0.7981 | -0.1632 | -0.1632 | -0.0435 | 0.0 | 0.5 | 0.2899 | nan |
| 0.9127 | 4.0 | 60 | 0.8158 | 0.9038 | 0.9038 | 0.8171 | 0.8171 | -0.2199 | -0.2199 | -0.2174 | 0.0 | 0.5 | 0.2975 | nan |
| 0.8555 | 5.0 | 75 | 0.7691 | 0.8775 | 0.8775 | 0.8121 | 0.8121 | -0.1501 | -0.1501 | -0.2174 | 0.0 | 0.5 | 0.3299 | nan |
| 0.8702 | 6.0 | 90 | 0.7818 | 0.8848 | 0.8848 | 0.7781 | 0.7781 | -0.1691 | -0.1691 | 0.0435 | 0.0 | 0.5 | 0.2515 | nan |
| 0.76 | 7.0 | 105 | 0.8377 | 0.9158 | 0.9158 | 0.7985 | 0.7985 | -0.2526 | -0.2526 | 0.0435 | 0.0 | 0.5 | 0.2515 | nan |
| 0.6997 | 8.0 | 120 | 0.9065 | 0.9527 | 0.9527 | 0.8370 | 0.8370 | -0.3555 | -0.3555 | -0.2174 | 0.0 | 0.5 | 0.3147 | nan |
| 0.5963 | 9.0 | 135 | 1.0611 | 1.0308 | 1.0308 | 0.8396 | 0.8396 | -0.5867 | -0.5867 | -0.0435 | 0.0 | 0.5 | 0.2645 | nan |
| 0.5413 | 10.0 | 150 | 1.1724 | 1.0835 | 1.0835 | 0.8649 | 0.8649 | -0.7532 | -0.7532 | -0.0435 | 0.0 | 0.5 | 0.2645 | nan |
| 0.4994 | 11.0 | 165 | 1.1471 | 1.0717 | 1.0717 | 0.8857 | 0.8857 | -0.7154 | -0.7154 | -0.2174 | 0.0 | 0.5 | 0.3271 | nan |
| 0.4208 | 12.0 | 180 | 1.2136 | 1.1024 | 1.1024 | 0.9392 | 0.9392 | -0.8148 | -0.8148 | -0.2174 | 0.0 | 0.5 | 0.3169 | nan |
| 0.316 | 13.0 | 195 | 1.3499 | 1.1626 | 1.1626 | 0.9395 | 0.9395 | -1.0187 | -1.0187 | -0.2174 | 0.0 | 0.5 | 0.3271 | nan |
| 0.2893 | 14.0 | 210 | 1.4229 | 1.1937 | 1.1937 | 0.9608 | 0.9608 | -1.1278 | -1.1278 | -0.3043 | 0.0 | 0.5 | 0.3269 | nan |
| 0.235 | 15.0 | 225 | 1.4699 | 1.2132 | 1.2132 | 0.9785 | 0.9785 | -1.1981 | -1.1981 | -0.0435 | 0.0 | 0.5 | 0.2865 | nan |
| 0.2397 | 16.0 | 240 | 1.5492 | 1.2455 | 1.2455 | 1.0005 | 1.0005 | -1.3167 | -1.3167 | -0.0435 | 0.0 | 0.5 | 0.2655 | nan |
| 0.1973 | 17.0 | 255 | 1.5541 | 1.2474 | 1.2474 | 1.0165 | 1.0165 | -1.3239 | -1.3239 | -0.0435 | 0.0 | 0.5 | 0.2655 | nan |
| 0.1793 | 18.0 | 270 | 1.4966 | 1.2242 | 1.2242 | 1.0058 | 1.0058 | -1.2380 | -1.2380 | -0.3043 | 0.0 | 0.5 | 0.3437 | nan |
| 0.16 | 19.0 | 285 | 1.4977 | 1.2246 | 1.2246 | 1.0140 | 1.0140 | -1.2396 | -1.2396 | -0.3913 | 0.0 | 0.5 | 0.3371 | nan |
| 0.1501 | 20.0 | 300 | 1.5751 | 1.2558 | 1.2558 | 1.0254 | 1.0254 | -1.3553 | -1.3553 | -0.3913 | 0.0 | 0.5 | 0.3371 | nan |
| 0.1342 | 21.0 | 315 | 1.7011 | 1.3051 | 1.3051 | 1.0681 | 1.0681 | -1.5438 | -1.5438 | -0.2174 | 0.0 | 0.5 | 0.2715 | nan |
| 0.137 | 22.0 | 330 | 1.5557 | 1.2481 | 1.2481 | 1.0393 | 1.0393 | -1.3263 | -1.3263 | -0.3043 | 0.0 | 0.5 | 0.3437 | nan |
| 0.11 | 23.0 | 345 | 1.5475 | 1.2448 | 1.2448 | 1.0320 | 1.0320 | -1.3141 | -1.3141 | -0.3913 | 0.0 | 0.5 | 0.3371 | nan |
| 0.1106 | 24.0 | 360 | 1.6006 | 1.2660 | 1.2660 | 1.0452 | 1.0452 | -1.3936 | -1.3936 | -0.3913 | 0.0 | 0.5 | 0.3297 | nan |
| 0.1013 | 25.0 | 375 | 1.5907 | 1.2621 | 1.2621 | 1.0368 | 1.0368 | -1.3787 | -1.3787 | -0.3043 | 0.0 | 0.5 | 0.2929 | nan |
| 0.0863 | 26.0 | 390 | 1.6436 | 1.2829 | 1.2829 | 1.0496 | 1.0496 | -1.4578 | -1.4578 | -0.3043 | 0.0 | 0.5 | 0.2929 | nan |
| 0.0929 | 27.0 | 405 | 1.6000 | 1.2658 | 1.2658 | 1.0341 | 1.0341 | -1.3927 | -1.3927 | -0.3043 | 0.0 | 0.5 | 0.3245 | nan |
| 0.0829 | 28.0 | 420 | 1.6277 | 1.2767 | 1.2767 | 1.0422 | 1.0422 | -1.4341 | -1.4341 | -0.3913 | 0.0 | 0.5 | 0.3371 | nan |
| 0.0884 | 29.0 | 435 | 1.6324 | 1.2785 | 1.2785 | 1.0436 | 1.0436 | -1.4411 | -1.4411 | -0.3913 | 0.0 | 0.5 | 0.3371 | nan |
| 0.0896 | 30.0 | 450 | 1.6269 | 1.2763 | 1.2763 | 1.0431 | 1.0431 | -1.4329 | -1.4329 | -0.3913 | 0.0 | 0.5 | 0.3371 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
CLTL/icf-levels-ber
|
[
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 33 | null |
---
tags:
- conversational
---
# Handsome Jack DialoGPT Model
|
CLTL/icf-levels-etn
|
[
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 31 | null |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-bert-focus-victim
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-bert-focus-victim
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2466
- Rmse: 0.6201
- Rmse Focus::a Sulla vittima: 0.6201
- Mae: 0.4936
- Mae Focus::a Sulla vittima: 0.4936
- R2: 0.7293
- R2 Focus::a Sulla vittima: 0.7293
- Cos: 0.8261
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.8155
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Focus::a Sulla vittima | Mae | Mae Focus::a Sulla vittima | R2 | R2 Focus::a Sulla vittima | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------------------------:|:------:|:--------------------------:|:-------:|:-------------------------:|:------:|:----:|:----:|:---------:|:---:|
| 1.0247 | 1.0 | 15 | 1.0286 | 1.2665 | 1.2665 | 1.0280 | 1.0280 | -0.1292 | -0.1292 | 0.1304 | 0.0 | 0.5 | 0.3685 | nan |
| 0.9912 | 2.0 | 30 | 1.0039 | 1.2512 | 1.2512 | 1.0347 | 1.0347 | -0.1020 | -0.1020 | 0.0435 | 0.0 | 0.5 | 0.3333 | nan |
| 0.9147 | 3.0 | 45 | 0.9338 | 1.2067 | 1.2067 | 0.9770 | 0.9770 | -0.0251 | -0.0251 | 0.1304 | 0.0 | 0.5 | 0.3685 | nan |
| 0.8194 | 4.0 | 60 | 0.7641 | 1.0916 | 1.0916 | 0.8476 | 0.8476 | 0.1612 | 0.1612 | 0.4783 | 0.0 | 0.5 | 0.5284 | nan |
| 0.6636 | 5.0 | 75 | 0.6618 | 1.0159 | 1.0159 | 0.8012 | 0.8012 | 0.2735 | 0.2735 | 0.6522 | 0.0 | 0.5 | 0.4741 | nan |
| 0.523 | 6.0 | 90 | 0.5176 | 0.8984 | 0.8984 | 0.7044 | 0.7044 | 0.4318 | 0.4318 | 0.6522 | 0.0 | 0.5 | 0.4741 | nan |
| 0.402 | 7.0 | 105 | 0.3804 | 0.7702 | 0.7702 | 0.6042 | 0.6042 | 0.5824 | 0.5824 | 0.6522 | 0.0 | 0.5 | 0.5395 | nan |
| 0.3401 | 8.0 | 120 | 0.3594 | 0.7487 | 0.7487 | 0.5703 | 0.5703 | 0.6054 | 0.6054 | 0.7391 | 0.0 | 0.5 | 0.6920 | nan |
| 0.2615 | 9.0 | 135 | 0.3429 | 0.7312 | 0.7312 | 0.6049 | 0.6049 | 0.6236 | 0.6236 | 0.7391 | 0.0 | 0.5 | 0.6920 | nan |
| 0.1928 | 10.0 | 150 | 0.2889 | 0.6712 | 0.6712 | 0.5487 | 0.5487 | 0.6828 | 0.6828 | 0.7391 | 0.0 | 0.5 | 0.6920 | nan |
| 0.1703 | 11.0 | 165 | 0.2675 | 0.6458 | 0.6458 | 0.5188 | 0.5188 | 0.7064 | 0.7064 | 0.7391 | 0.0 | 0.5 | 0.6920 | nan |
| 0.1209 | 12.0 | 180 | 0.2826 | 0.6639 | 0.6639 | 0.5475 | 0.5475 | 0.6897 | 0.6897 | 0.7391 | 0.0 | 0.5 | 0.6920 | nan |
| 0.1428 | 13.0 | 195 | 0.2978 | 0.6815 | 0.6815 | 0.5777 | 0.5777 | 0.6731 | 0.6731 | 0.7391 | 0.0 | 0.5 | 0.6920 | nan |
| 0.1038 | 14.0 | 210 | 0.2924 | 0.6753 | 0.6753 | 0.5865 | 0.5865 | 0.6790 | 0.6790 | 0.6522 | 0.0 | 0.5 | 0.2760 | nan |
| 0.0951 | 15.0 | 225 | 0.2905 | 0.6731 | 0.6731 | 0.5750 | 0.5750 | 0.6811 | 0.6811 | 0.7391 | 0.0 | 0.5 | 0.6920 | nan |
| 0.0809 | 16.0 | 240 | 0.2676 | 0.6460 | 0.6460 | 0.5552 | 0.5552 | 0.7062 | 0.7062 | 0.7391 | 0.0 | 0.5 | 0.6920 | nan |
| 0.0811 | 17.0 | 255 | 0.2770 | 0.6572 | 0.6572 | 0.5543 | 0.5543 | 0.6959 | 0.6959 | 0.7391 | 0.0 | 0.5 | 0.6920 | nan |
| 0.0703 | 18.0 | 270 | 0.2634 | 0.6409 | 0.6409 | 0.5251 | 0.5251 | 0.7108 | 0.7108 | 0.8261 | 0.0 | 0.5 | 0.8155 | nan |
| 0.0595 | 19.0 | 285 | 0.2638 | 0.6413 | 0.6413 | 0.5196 | 0.5196 | 0.7104 | 0.7104 | 0.8261 | 0.0 | 0.5 | 0.8155 | nan |
| 0.0651 | 20.0 | 300 | 0.2520 | 0.6268 | 0.6268 | 0.4970 | 0.4970 | 0.7234 | 0.7234 | 0.8261 | 0.0 | 0.5 | 0.8155 | nan |
| 0.0637 | 21.0 | 315 | 0.2668 | 0.6451 | 0.6451 | 0.4965 | 0.4965 | 0.7071 | 0.7071 | 0.8261 | 0.0 | 0.5 | 0.8155 | nan |
| 0.0582 | 22.0 | 330 | 0.2455 | 0.6188 | 0.6188 | 0.4759 | 0.4759 | 0.7305 | 0.7305 | 0.8261 | 0.0 | 0.5 | 0.8155 | nan |
| 0.0616 | 23.0 | 345 | 0.2509 | 0.6255 | 0.6255 | 0.5084 | 0.5084 | 0.7246 | 0.7246 | 0.8261 | 0.0 | 0.5 | 0.8155 | nan |
| 0.0492 | 24.0 | 360 | 0.2510 | 0.6256 | 0.6256 | 0.4985 | 0.4985 | 0.7244 | 0.7244 | 0.8261 | 0.0 | 0.5 | 0.8155 | nan |
| 0.0504 | 25.0 | 375 | 0.2512 | 0.6259 | 0.6259 | 0.4849 | 0.4849 | 0.7242 | 0.7242 | 0.8261 | 0.0 | 0.5 | 0.8155 | nan |
| 0.0501 | 26.0 | 390 | 0.2585 | 0.6350 | 0.6350 | 0.5140 | 0.5140 | 0.7162 | 0.7162 | 0.8261 | 0.0 | 0.5 | 0.8155 | nan |
| 0.0411 | 27.0 | 405 | 0.2544 | 0.6299 | 0.6299 | 0.5148 | 0.5148 | 0.7207 | 0.7207 | 0.8261 | 0.0 | 0.5 | 0.8155 | nan |
| 0.044 | 28.0 | 420 | 0.2466 | 0.6201 | 0.6201 | 0.4964 | 0.4964 | 0.7293 | 0.7293 | 0.8261 | 0.0 | 0.5 | 0.8155 | nan |
| 0.042 | 29.0 | 435 | 0.2466 | 0.6201 | 0.6201 | 0.4836 | 0.4836 | 0.7293 | 0.7293 | 0.8261 | 0.0 | 0.5 | 0.8155 | nan |
| 0.0446 | 30.0 | 450 | 0.2466 | 0.6201 | 0.6201 | 0.4936 | 0.4936 | 0.7293 | 0.7293 | 0.8261 | 0.0 | 0.5 | 0.8155 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
CLTL/icf-levels-ins
|
[
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 32 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9230800058707119
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2188
- Accuracy: 0.923
- F1: 0.9231
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8234 | 1.0 | 250 | 0.3123 | 0.9075 | 0.9058 |
| 0.2493 | 2.0 | 500 | 0.2188 | 0.923 | 0.9231 |
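The Accuracy and F1 columns above are the kind of values produced by a `compute_metrics` hook passed to the `Trainer`. A minimal sketch, assuming scikit-learn and a weighted F1 average (consistent with the reported 0.9230800058707119):
```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score
def compute_metrics(eval_pred):
    # The Trainer passes (logits, labels) for the evaluation set.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="weighted"),  # assumption: weighted average
    }
# trainer = Trainer(..., compute_metrics=compute_metrics)
```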
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
CLTL/icf-levels-stm
|
[
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 32 | null |
---
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-9-epoch-tweak
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-9-epoch-tweak
This model is a fine-tuned version of [Ameer05/model-token-repo](https://huggingface.co/Ameer05/model-token-repo) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4511
- Rouge1: 59.76
- Rouge2: 52.1999
- Rougel: 57.3631
- Rougelsum: 59.3075
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
- mixed_precision_training: Native AMP
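The gradient-accumulation and mixed-precision entries above translate roughly into the following `Seq2SeqTrainingArguments`. This is a sketch, not the original script; model and dataset wiring are omitted, and `predict_with_generate` is an assumption needed to compute the ROUGE scores reported below.
```python
from transformers import Seq2SeqTrainingArguments
args = Seq2SeqTrainingArguments(
    output_dir="bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-9-epoch-tweak",
    learning_rate=5e-5,
    per_device_train_batch_size=8,   # effective batch size 32 = 8 x 4 accumulation steps
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    num_train_epochs=9,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                       # "Native AMP" mixed-precision training
    predict_with_generate=True,      # assumption: needed for ROUGE evaluation
)
```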
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log | 0.91 | 5 | 2.0185 | 52.2186 | 45.4675 | 49.3152 | 51.9415 |
| No log | 1.91 | 10 | 1.6571 | 60.7728 | 52.8611 | 57.3487 | 60.1676 |
| No log | 2.91 | 15 | 1.5323 | 60.5674 | 52.2246 | 57.9846 | 60.073 |
| No log | 3.91 | 20 | 1.4556 | 61.2167 | 53.5087 | 58.9609 | 60.893 |
| 1.566 | 4.91 | 25 | 1.4632 | 62.918 | 55.4544 | 60.7116 | 62.6614 |
| 1.566 | 5.91 | 30 | 1.4360 | 60.4173 | 52.5859 | 57.8131 | 59.8864 |
| 1.566 | 6.91 | 35 | 1.4361 | 61.4273 | 53.9663 | 59.4445 | 60.9672 |
| 1.566 | 7.91 | 40 | 1.4477 | 60.3401 | 52.7276 | 57.7504 | 59.8209 |
| 0.6928 | 8.91 | 45 | 1.4511 | 59.76 | 52.1999 | 57.3631 | 59.3075 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.10.3
|
CNT-UPenn/Bio_ClinicalBERT_for_seizureFreedom_classification
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 28 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-300m-german-with-lm
results: []
---
# wav2vec2-large-xls-r-300m-german-with-lm
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the German set of the Common Voice dataset.
It achieves a word error rate (WER) of 8.8 percent on the evaluation set.
## Model description
German wav2vec2-xls-r-300m trained on the full training set of the Common Voice dataset, combined with an n-gram language model.
Full code available in [my Github repository](https://github.com/MichaelFleck92/asr-wav2vec)
## Citation
Feel free to cite this work as follows:
```
@misc{mfleck/wav2vec2-large-xls-r-300m-german-with-lm,
title={XLS-R-300 Wav2Vec2 German with language model},
author={Fleck, Michael},
publisher={Hugging Face},
journal={Hugging Face Hub},
howpublished={\url{https://huggingface.co/mfleck/wav2vec2-large-xls-r-300m-german-with-lm}},
year={2022}
}
```
## Intended uses & limitations
Inference Usage
```python
from transformers import pipeline
pipe = pipeline(model="mfleck/wav2vec2-large-xls-r-300m-german-with-lm")
output = pipe("/path/to/file.wav",chunk_length_s=5, stride_length_s=1)
print(output["text"])
```
## Training and evaluation data
Script used for training (takes about 80 hours on a single A100 40GB)
```python
import random
import re
import json
from typing import Any, Dict, List, Optional, Union
import pandas as pd
import numpy as np
import torch
# import soundfile
from datasets import load_dataset, load_metric, Audio
from dataclasses import dataclass, field
from transformers import Wav2Vec2CTCTokenizer, Wav2Vec2FeatureExtractor, Wav2Vec2Processor, TrainingArguments, Trainer, Wav2Vec2ForCTC
'''
Most parts of this script follow the tutorial: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2
'''
common_voice_train = load_dataset("common_voice", "de", split="train+validation")
# Use train dataset with less training data
#common_voice_train = load_dataset("common_voice", "de", split="train[:3%]")
common_voice_test = load_dataset("common_voice", "de", split="test")
# Remove unused columns
common_voice_train = common_voice_train.remove_columns(["accent", "age", "client_id", "down_votes", "gender", "locale", "segment", "up_votes"])
common_voice_test = common_voice_test.remove_columns(["accent", "age", "client_id", "down_votes", "gender", "locale", "segment", "up_votes"])
# Remove examples containing characters which do not exist in German
print(len(common_voice_train))
regex = "[^A-Za-zäöüÄÖÜß,?.! ]+"
common_voice_train = common_voice_train.filter(lambda example: bool(re.search(regex, example['sentence']))==False)
common_voice_test = common_voice_test.filter(lambda example: bool(re.search(regex, example['sentence']))==False)
print(len(common_voice_train))
# Remove special chars from transcripts
chars_to_remove_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\']'
def remove_special_characters(batch):
batch["sentence"] = re.sub(chars_to_remove_regex, '', batch["sentence"]).lower()
return batch
common_voice_train = common_voice_train.map(remove_special_characters, num_proc=10)
common_voice_test = common_voice_test.map(remove_special_characters, num_proc=10)
# Show some random transcripts to prove that preprocessing worked as expected
def show_random_elements(dataset, num_examples=10):
assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
picks = []
for _ in range(num_examples):
pick = random.randint(0, len(dataset)-1)
while pick in picks:
pick = random.randint(0, len(dataset)-1)
picks.append(pick)
print(str(dataset[picks]))
show_random_elements(common_voice_train.remove_columns(["path","audio"]))
# Extract all chars which exist in the datasets and add wav2vec2 special tokens
def extract_all_chars(batch):
all_text = " ".join(batch["sentence"])
vocab = list(set(all_text))
return {"vocab": [vocab], "all_text": [all_text]}
vocab_train = common_voice_train.map(extract_all_chars, batched=True, batch_size=-1, keep_in_memory=True, remove_columns=common_voice_train.column_names)
vocab_test = common_voice_test.map(extract_all_chars, batched=True, batch_size=-1, keep_in_memory=True, remove_columns=common_voice_test.column_names)
vocab_list = list(set(vocab_train["vocab"][0]) | set(vocab_test["vocab"][0]))
vocab_dict = {v: k for k, v in enumerate(sorted(vocab_list))}
vocab_dict
vocab_dict["|"] = vocab_dict[" "]
del vocab_dict[" "]
vocab_dict["[UNK]"] = len(vocab_dict)
vocab_dict["[PAD]"] = len(vocab_dict)
len(vocab_dict)
with open('vocab.json', 'w') as vocab_file:
json.dump(vocab_dict, vocab_file)
# Create tokenizer and repo at Huggingface
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("./", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|")
repo_name = "wav2vec2-large-xls-r-300m-german-with-lm"
tokenizer.push_to_hub(repo_name)
print("pushed to hub")
# Create feature extractor and processor
feature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True, return_attention_mask=True)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)
# Cast audio column
common_voice_train = common_voice_train.cast_column("audio", Audio(sampling_rate=16_000))
common_voice_test = common_voice_test.cast_column("audio", Audio(sampling_rate=16_000))
# Convert audio signal to array and 16khz sampling rate
def prepare_dataset(batch):
audio = batch["audio"]
# batched output is "un-batched"
batch["input_values"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]
# Save an audio file to check if it gets loaded correctly
# soundfile.write("/home/debian/trainnew/test.wav",batch["input_values"],audio["sampling_rate"])
batch["input_length"] = len(batch["input_values"])
with processor.as_target_processor():
batch["labels"] = processor(batch["sentence"]).input_ids
return batch
common_voice_train = common_voice_train.map(prepare_dataset, remove_columns=common_voice_train.column_names)
common_voice_test = common_voice_test.map(prepare_dataset, remove_columns=common_voice_test.column_names)
print("dataset prepared")
@dataclass
class DataCollatorCTCWithPadding:
"""
Data collator that will dynamically pad the inputs received.
Args:
processor (:class:`~transformers.Wav2Vec2Processor`)
The processor used for proccessing the data.
padding (:obj:`bool`, :obj:`str` or :class:`~transformers.tokenization_utils_base.PaddingStrategy`, `optional`, defaults to :obj:`True`):
Select a strategy to pad the returned sequences (according to the model's padding side and padding index)
among:
* :obj:`True` or :obj:`'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
* :obj:`'max_length'`: Pad to a maximum length specified with the argument :obj:`max_length` or to the
maximum acceptable input length for the model if that argument is not provided.
* :obj:`False` or :obj:`'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of
different lengths).
"""
processor: Wav2Vec2Processor
padding: Union[bool, str] = True
def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
# split inputs and labels since they have to be of different lengths and need
# different padding methods
input_features = [{"input_values": feature["input_values"]} for feature in features]
label_features = [{"input_ids": feature["labels"]} for feature in features]
batch = self.processor.pad(
input_features,
padding=self.padding,
return_tensors="pt",
)
with self.processor.as_target_processor():
labels_batch = self.processor.pad(
label_features,
padding=self.padding,
return_tensors="pt",
)
# replace padding with -100 to ignore loss correctly
labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
batch["labels"] = labels
return batch
data_collator = DataCollatorCTCWithPadding(processor=processor, padding=True)
# Use word error rate as metric
wer_metric = load_metric("wer")
def compute_metrics(pred):
pred_logits = pred.predictions
pred_ids = np.argmax(pred_logits, axis=-1)
pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id
pred_str = processor.batch_decode(pred_ids)
# we do not want to group tokens when computing the metrics
label_str = processor.batch_decode(pred.label_ids, group_tokens=False)
wer = wer_metric.compute(predictions=pred_str, references=label_str)
return {"wer": wer}
# Model and training parameters
model = Wav2Vec2ForCTC.from_pretrained(
"facebook/wav2vec2-xls-r-300m",
attention_dropout=0.094,
hidden_dropout=0.01,
feat_proj_dropout=0.04,
mask_time_prob=0.08,
layerdrop=0.04,
ctc_loss_reduction="mean",
pad_token_id=processor.tokenizer.pad_token_id,
vocab_size=len(processor.tokenizer),
)
model.freeze_feature_extractor()
training_args = TrainingArguments(
output_dir=repo_name,
group_by_length=True,
per_device_train_batch_size=32,
gradient_accumulation_steps=2,
evaluation_strategy="steps",
num_train_epochs=20,
gradient_checkpointing=True,
fp16=True,
save_steps=5000,
eval_steps=5000,
logging_steps=100,
learning_rate=1e-4,
warmup_steps=500,
save_total_limit=3,
push_to_hub=True,
)
trainer = Trainer(
model=model,
data_collator=data_collator,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=common_voice_train,
eval_dataset=common_voice_test,
tokenizer=processor.feature_extractor,
)
# Start fine tuning
trainer.train()
# When done push final model to Huggingface hub
trainer.push_to_hub()
```
The model achieves a Word Error Rate of 8.8% using the following script:
```python
import argparse
import re
from typing import Dict
import torch
from datasets import Audio, Dataset, load_dataset, load_metric
from transformers import AutoFeatureExtractor, pipeline
# load dataset
dataset = load_dataset("common_voice", "de", split="test")
# use only 1% of data
#dataset = load_dataset("common_voice", "de", split="test[:1%]")
# load processor
feature_extractor = AutoFeatureExtractor.from_pretrained("mfleck/wav2vec2-large-xls-r-300m-german-with-lm")
sampling_rate = feature_extractor.sampling_rate
dataset = dataset.cast_column("audio", Audio(sampling_rate=sampling_rate))
# load eval pipeline
# device=0 means GPU, use device=-1 for CPU
asr = pipeline("automatic-speech-recognition", model="mfleck/wav2vec2-large-xls-r-300m-german-with-lm", device=0)
# Remove examples containing characters which do not exist in German
regex = "[^A-Za-zäöüÄÖÜß,?.! ]+"
dataset = dataset.filter(lambda example: bool(re.search(regex, example['sentence']))==False)
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\']'
# map function to decode audio
def map_to_pred(batch):
prediction = asr(batch["audio"]["array"], chunk_length_s=5, stride_length_s=1)
# Print automatic generated transcript
#print(str(prediction))
batch["prediction"] = prediction["text"]
text = batch["sentence"]
batch["target"] = re.sub(chars_to_ignore_regex, "", text.lower()) + " "
return batch
# run inference on all examples
result = dataset.map(map_to_pred, remove_columns=dataset.column_names)
# load metric
wer = load_metric("wer")
cer = load_metric("cer")
# compute metrics
wer_result = wer.compute(references=result["target"], predictions=result["prediction"])
cer_result = cer.compute(references=result["target"], predictions=result["prediction"])
# print results
result_str = f"WER: {wer_result}\n" f"CER: {cer_result}"
print(result_str)
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1396 | 1.42 | 5000 | 0.1449 | 0.1479 |
| 0.1169 | 2.83 | 10000 | 0.1285 | 0.1286 |
| 0.0938 | 4.25 | 15000 | 0.1277 | 0.1230 |
| 0.0924 | 5.67 | 20000 | 0.1305 | 0.1191 |
| 0.0765 | 7.09 | 25000 | 0.1256 | 0.1158 |
| 0.0749 | 8.5 | 30000 | 0.1186 | 0.1092 |
| 0.066 | 9.92 | 35000 | 0.1173 | 0.1068 |
| 0.0581 | 11.34 | 40000 | 0.1225 | 0.1030 |
| 0.0582 | 12.75 | 45000 | 0.1153 | 0.0999 |
| 0.0507 | 14.17 | 50000 | 0.1182 | 0.0971 |
| 0.0491 | 15.59 | 55000 | 0.1136 | 0.0939 |
| 0.045 | 17.01 | 60000 | 0.1140 | 0.0914 |
| 0.0395 | 18.42 | 65000 | 0.1160 | 0.0902 |
| 0.037 | 19.84 | 70000 | 0.1148 | 0.0882 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.9.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Caddy/UD
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
language: en
thumbnail: http://www.huggingtweets.com/atarifounders/1648266306699/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1507523916981583875/6n7ng67H_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">koala/claw/soppy</div>
<div style="text-align: center; font-size: 14px;">@atarifounders</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from koala/claw/soppy.
| Data | koala/claw/soppy |
| --- | --- |
| Tweets downloaded | 3239 |
| Retweets | 129 |
| Short tweets | 883 |
| Tweets kept | 2227 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2gsc0jwi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @atarifounders's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/tl1eu60e) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/tl1eu60e/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/atarifounders')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Callidior/bert2bert-base-arxiv-titlegen
|
[
"pytorch",
"safetensors",
"encoder-decoder",
"text2text-generation",
"en",
"dataset:arxiv_dataset",
"transformers",
"summarization",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
summarization
|
{
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 145 | null |
---
tags:
- generated_from_trainer
datasets:
- pub_med_summarization_dataset
metrics:
- rouge
model-index:
- name: pegasus-cnn_dailymail-finetuned-pubmed
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: pub_med_summarization_dataset
type: pub_med_summarization_dataset
args: document
metrics:
- name: Rouge1
type: rouge
value: 37.2569
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-cnn_dailymail-finetuned-pubmed
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the pub_med_summarization_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8050
- Rouge1: 37.2569
- Rouge2: 15.8205
- Rougel: 24.1969
- Rougelsum: 34.0331
- Gen Len: 125.892
## Model description
More information needed
## Intended uses & limitations
More information needed
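As a hedged usage sketch, the checkpoint can be applied to PubMed-style articles with standard seq2seq generation; the Hub ID below is an assumption taken from the card title.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed checkpoint name taken from the card title; replace with the actual Hub ID.
ckpt = "pegasus-cnn_dailymail-finetuned-pubmed"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt)

article = "BACKGROUND: ... METHODS: ... RESULTS: ... CONCLUSIONS: ..."
inputs = tokenizer(article, truncation=True, max_length=1024, return_tensors="pt")
summary_ids = model.generate(**inputs, max_length=256, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```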
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.2449 | 1.0 | 1000 | 1.8942 | 36.4494 | 14.9948 | 23.8279 | 33.3081 | 124.482 |
| 2.0803 | 2.0 | 2000 | 1.8440 | 36.998 | 15.4992 | 24.091 | 33.6614 | 125.678 |
| 2.0166 | 3.0 | 3000 | 1.8176 | 37.4703 | 16.0358 | 24.5735 | 34.1789 | 125.094 |
| 1.9911 | 4.0 | 4000 | 1.8055 | 37.1338 | 15.7921 | 24.1412 | 33.8293 | 125.874 |
| 1.9419 | 5.0 | 5000 | 1.8050 | 37.2569 | 15.8205 | 24.1969 | 34.0331 | 125.892 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
CalvinHuang/mt5-small-finetuned-amazon-en-es
|
[
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] |
summarization
|
{
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 16 | null |
## bert-base-uncased finetuned on IMDB dataset
The evaluation set was created by taking 1000 samples from the test set:
```
DatasetDict({
train: Dataset({
features: ['text', 'label'],
num_rows: 25000
})
dev: Dataset({
features: ['text', 'label'],
num_rows: 1000
})
test: Dataset({
features: ['text', 'label'],
num_rows: 24000
})
})
```
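A sketch of how such a split could be rebuilt with 🤗 Datasets is shown below; the use of the standard `imdb` dataset and the sampling seed are assumptions, since the exact procedure is not documented here.
```python
from datasets import load_dataset, DatasetDict

# Assumption: the split is rebuilt from the standard "imdb" dataset;
# the original sampling seed is not documented in this card.
imdb = load_dataset("imdb")
held_out = imdb["test"].train_test_split(test_size=1000, seed=42)

dataset = DatasetDict({
    "train": imdb["train"],        # 25,000 examples
    "dev": held_out["test"],       # 1,000 examples used for evaluation
    "test": held_out["train"],     # remaining 24,000 examples
})
print(dataset)
```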
## Parameters
```
max_sequence_length = 128
batch_size = 32
eval_steps = 100
learning_rate=2e-05
num_train_epochs=5
early_stopping_patience = 10
```
## Training Run
```
[2700/3910 1:11:43 < 32:09, 0.63 it/s, Epoch 3/5]
Step Training Loss Validation Loss Accuracy Precision Recall F1 Runtime Samples Per Second
100 No log 0.371974 0.845000 0.798942 0.917004 0.853911 15.256900 65.544000
200 No log 0.349631 0.850000 0.873913 0.813765 0.842767 15.288600 65.408000
300 No log 0.359376 0.845000 0.869281 0.807692 0.837356 15.303900 65.343000
400 No log 0.307613 0.870000 0.851351 0.892713 0.871542 15.358400 65.111000
500 0.364500 0.309362 0.856000 0.807018 0.931174 0.864662 15.326100 65.248000
600 0.364500 0.302709 0.867000 0.881607 0.844130 0.862461 15.324400 65.255000
700 0.364500 0.300102 0.871000 0.894168 0.838057 0.865204 15.474900 64.621000
800 0.364500 0.383784 0.866000 0.833333 0.910931 0.870406 15.380100 65.019000
900 0.364500 0.309934 0.874000 0.881743 0.860324 0.870902 15.358900 65.109000
1000 0.254600 0.332236 0.872000 0.894397 0.840081 0.866388 15.442700 64.756000
1100 0.254600 0.330807 0.871000 0.877847 0.858300 0.867963 15.410900 64.889000
1200 0.254600 0.352724 0.872000 0.925581 0.805668 0.861472 15.272800 65.476000
1300 0.254600 0.278529 0.881000 0.891441 0.864372 0.877698 15.408200 64.900000
1400 0.254600 0.291371 0.878000 0.854962 0.906883 0.880157 15.427400 64.820000
1500 0.208400 0.324827 0.869000 0.904232 0.821862 0.861082 15.338600 65.195000
1600 0.208400 0.377024 0.884000 0.898734 0.862348 0.880165 15.414500 64.874000
1700 0.208400 0.375274 0.885000 0.881288 0.886640 0.883956 15.367200 65.073000
1800 0.208400 0.378904 0.880000 0.877016 0.880567 0.878788 15.363900 65.088000
1900 0.208400 0.410517 0.874000 0.866534 0.880567 0.873494 15.324700 65.254000
2000 0.130800 0.404030 0.876000 0.888655 0.856275 0.872165 15.414200 64.875000
2100 0.130800 0.390763 0.883000 0.882353 0.880567 0.881459 15.341500 65.183000
2200 0.130800 0.417967 0.880000 0.875502 0.882591 0.879032 15.351300 65.141000
2300 0.130800 0.390974 0.883000 0.898520 0.860324 0.879007 15.396100 64.952000
2400 0.130800 0.479739 0.874000 0.856589 0.894737 0.875248 15.460500 64.681000
2500 0.098400 0.473215 0.875000 0.883576 0.860324 0.871795 15.392200 64.968000
2600 0.098400 0.532294 0.872000 0.889362 0.846154 0.867220 15.364100 65.087000
2700 0.098400 0.536664 0.881000 0.880325 0.878543 0.879433 15.351100 65.142000
TrainOutput(global_step=2700, training_loss=0.2004435383832013, metrics={'train_runtime': 4304.5331, 'train_samples_per_second': 0.908, 'total_flos': 7258763970957312, 'epoch': 3.45})
```
## Classification Report
```
precision recall f1-score support
0 0.90 0.87 0.89 11994
1 0.87 0.90 0.89 12006
accuracy 0.89 24000
macro avg 0.89 0.89 0.89 24000
weighted avg 0.89 0.89 0.89 24000
```
|
Cameron/BERT-SBIC-offensive
|
[
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 31 | null |
---
tags:
- conversational
---
# My Awesome Model
|
Cameron/BERT-eec-emotion
|
[
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 36 | null |
---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 4.6263
- Wer: 0.8568
## Model description
More information needed
## Intended uses & limitations
More information needed
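A minimal, hedged inference sketch with the ASR pipeline is shown below; the checkpoint path is a placeholder, since the Hub ID of this model is not stated in the card.
```python
from transformers import pipeline

# Placeholder checkpoint path; the Hub ID of this model is not stated in the card.
asr = pipeline("automatic-speech-recognition", model="path/to/this-checkpoint")

# Assumption: the model expects 16 kHz mono audio, as in LibriSpeech.
print(asr("sample.wav")["text"])
```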
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.9849 | 1.68 | 1500 | 5.9623 | 1.1028 |
| 5.1696 | 3.36 | 3000 | 5.5504 | 1.6345 |
| 4.1412 | 5.04 | 4500 | 5.3853 | 1.3565 |
| 2.7226 | 6.73 | 6000 | 5.3072 | 0.9908 |
| 3.2607 | 8.41 | 7500 | 5.4121 | 1.2854 |
| 2.4017 | 10.09 | 9000 | 5.1094 | 1.0303 |
| 1.7361 | 11.77 | 10500 | 4.8928 | 0.9506 |
| 2.0638 | 13.45 | 12000 | 4.8352 | 0.9127 |
| 1.2832 | 15.13 | 13500 | 4.7271 | 0.9103 |
| 1.0439 | 16.82 | 15000 | 4.5980 | 0.8720 |
| 0.4112 | 18.5 | 16500 | 4.6263 | 0.8568 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Cameron/BERT-jigsaw-identityhate
|
[
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 37 | null |
{ 'max_seq_length': 384,
'batch_size': 24,
  'learning_rate': {'val': 3e-5, 'scheduler': 'Linear'},
'max_clip_norm': None,
'epochs': 2
}
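A minimal sketch of how these values could map onto 🤗 `TrainingArguments` is shown below; the output directory is an assumption, and `max_seq_length` is applied at tokenization time rather than in the arguments.
```python
from transformers import TrainingArguments

# Sketch only: output_dir is an assumption; max_clip_norm=None is approximated
# by leaving gradient clipping at the library default.
training_args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    learning_rate=3e-5,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
# max_seq_length=384 is applied when tokenizing, e.g.
# tokenizer(question, context, truncation=True, max_length=384)
```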
|
Cameron/BERT-jigsaw-severetoxic
|
[
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 30 | null |
---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- spy24/autonlp-data-parrot_paraphrasing
co2_eq_emissions: 0.8335491678002559
---
# Test
|
Cameron/BERT-mdgender-wizard
|
[
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 30 | 2022-03-11T00:09:35Z |
---
language: en
license: apache-2.0
---
HF-version model for PRIMERA: Pyramid-based Masked Sentence Pre-training for Multi-document Summarization (ACL 2022).
The original code can be found [here](https://github.com/allenai/PRIMER). You can find the script and notebook to train/evaluate the model in the original github repo.
* Note: due to the difference between the implementations of the original Longformer and the Huggingface LED model, the results of converted models are slightly different. We ran a sanity check on both fine-tuned and non-fine-tuned models, and show the results below:
| Model | Rouge-1 | Rouge-2 | Rouge-L |
| --- | ----------- |----------- |----------- |
| PRIMERA | 42.0 | 13.6 | 20.8|
| PRIMERA-hf | 41.7 |13.6 | 20.5|
| PRIMERA(finetuned) | 49.9 | 21.1 | 25.9|
| PRIMERA-hf(finetuned) | 49.9 | 20.9 | 25.8|
You can use it as follows:
```
from transformers import (
AutoTokenizer,
LEDConfig,
LEDForConditionalGeneration,
)
tokenizer = AutoTokenizer.from_pretrained('allenai/PRIMERA')
config=LEDConfig.from_pretrained('allenai/PRIMERA')
model = LEDForConditionalGeneration.from_pretrained('allenai/PRIMERA')
```
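A minimal generation sketch follows; the `<doc-sep>` separator between documents and the generation settings are assumptions; see the original repo for the exact preprocessing.
```
from transformers import AutoTokenizer, LEDForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained('allenai/PRIMERA')
model = LEDForConditionalGeneration.from_pretrained('allenai/PRIMERA')

# Assumption: documents are joined with the "<doc-sep>" token; see the
# original repo for the exact preprocessing and generation settings.
docs = ["First news article ...", "Second news article ..."]
inputs = tokenizer(" <doc-sep> ".join(docs), return_tensors="pt",
                   truncation=True, max_length=4096)
summary_ids = model.generate(**inputs, max_length=256, num_beams=5)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```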
|
Camzure/MaamiBot-test
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
---
language: en
license: apache-2.0
---
HF-version model for PRIMERA: Pyramid-based Masked Sentence Pre-training for Multi-document Summarization (ACL 2022).
The original code can be found [here](https://github.com/allenai/PRIMER). You can find the script and notebook to train/evaluate the model in the original github repo.
* Note: due to the difference between the implementations of the original Longformer and the Huggingface LED model, the results of converted models are slightly different. We ran a sanity check on both fine-tuned and non-fine-tuned models on the **MultiNews dataset**, and show the results below:
| Model | Rouge-1 | Rouge-2 | Rouge-L |
| --- | ----------- |----------- |----------- |
| PRIMERA | 42.0 | 13.6 | 20.8|
| PRIMERA-hf | 41.7 |13.6 | 20.5|
| PRIMERA(finetuned) | 49.9 | 21.1 | 25.9|
| PRIMERA-hf(finetuned) | 49.9 | 20.9 | 25.8|
You can use it as follows:
```
from transformers import (
AutoTokenizer,
LEDConfig,
LEDForConditionalGeneration,
)
tokenizer = AutoTokenizer.from_pretrained('allenai/PRIMERA')
config=LEDConfig.from_pretrained('allenai/PRIMERA')
model = LEDForConditionalGeneration.from_pretrained('allenai/PRIMERA')
```
|
Canadiancaleb/DialoGPT-small-jesse
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
---
license: apache-2.0
---
HF-version model for PRIMERA: Pyramid-based Masked Sentence Pre-training for Multi-document Summarization (ACL 2022).
The original code can be found [here](https://github.com/allenai/PRIMER). You can find the script and notebook to train/evaluate the model in the original github repo.
* Note: due to the difference between the implementations of the original Longformer and the Huggingface LED model, the results of converted models are slightly different. We ran a sanity check on both fine-tuned and non-fine-tuned models on the **MultiNews dataset**, and show the results below:
| Model | Rouge-1 | Rouge-2 | Rouge-L |
| --- | ----------- |----------- |----------- |
| PRIMERA | 42.0 | 13.6 | 20.8|
| PRIMERA-hf | 41.7 |13.6 | 20.5|
| PRIMERA(finetuned) | 49.9 | 21.1 | 25.9|
| PRIMERA-hf(finetuned) | 49.9 | 20.9 | 25.8|
You can use it as follows:
```
from transformers import (
AutoTokenizer,
LEDConfig,
LEDForConditionalGeneration,
)
tokenizer = AutoTokenizer.from_pretrained('allenai/PRIMERA')
config=LEDConfig.from_pretrained('allenai/PRIMERA')
model = LEDForConditionalGeneration.from_pretrained('allenai/PRIMERA')
```
|
Canadiancaleb/DialoGPT-small-walter
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 13 | null |
---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-fine-tuned-debiased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-fine-tuned-debiased
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1714
## Model description
More information needed
## Intended uses & limitations
More information needed
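A hedged usage sketch with the text-generation pipeline is shown below; the Hub ID is an assumption taken from the card title.
```python
from transformers import pipeline

# Assumed checkpoint name taken from the card title; replace with the actual Hub ID.
generator = pipeline("text-generation", model="gpt2-xl-fine-tuned-debiased")
outputs = generator("The nurse said that", max_length=40,
                    num_return_sequences=3, do_sample=True)
for out in outputs:
    print(out["generated_text"])
```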
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.91 | 10 | 1.9130 |
| No log | 1.91 | 20 | 1.7356 |
| No log | 2.91 | 30 | 1.9216 |
| No log | 3.91 | 40 | 2.1714 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.5.0
- Datasets 1.12.1
- Tokenizers 0.11.6
|
Canyonevo/DialoGPT-medium-KingHenry
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1664
- F1: 0.8556
## Model description
More information needed
## Intended uses & limitations
More information needed
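A hedged usage sketch with the token-classification pipeline is shown below; the Hub ID is an assumption taken from the card title.
```python
from transformers import pipeline

# Assumed checkpoint name taken from the card title; replace with the actual Hub ID.
ner = pipeline("token-classification",
               model="xlm-roberta-base-finetuned-panx-de-fr",
               aggregation_strategy="simple")
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```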
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2846 | 1.0 | 715 | 0.1837 | 0.8247 |
| 0.1446 | 2.0 | 1430 | 0.1617 | 0.8409 |
| 0.0923 | 3.0 | 2145 | 0.1664 | 0.8556 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
CapitainData/wav2vec2-large-xlsr-turkish-demo-colab
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.838255033557047
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2691
- F1: 0.8383
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5851 | 1.0 | 191 | 0.3202 | 0.8011 |
| 0.256 | 2.0 | 382 | 0.2862 | 0.8344 |
| 0.1725 | 3.0 | 573 | 0.2691 | 0.8383 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Capreolus/bert-base-msmarco
|
[
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"arxiv:2008.09093",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 238 | null |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.830592105263158
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2400
- F1: 0.8306
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8118 | 1.0 | 70 | 0.3471 | 0.7047 |
| 0.2869 | 2.0 | 140 | 0.2679 | 0.8043 |
| 0.1762 | 3.0 | 210 | 0.2400 | 0.8306 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Capreolus/electra-base-msmarco
|
[
"pytorch",
"tf",
"electra",
"text-classification",
"arxiv:2008.09093",
"transformers"
] |
text-classification
|
{
"architectures": [
"ElectraForSequenceClassification"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 110 | null |
---
language:
- "de"
tags:
- "german"
- "token-classification"
- "pos"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "mit"
pipeline_tag: "token-classification"
---
# bert-large-german-upos
## Model Description
This is a BERT model pre-trained with [UD_German-HDT](https://github.com/UniversalDependencies/UD_German-HDT) for POS-tagging and dependency-parsing, derived from [gbert-large](https://huggingface.co/deepset/gbert-large). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-large-german-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-large-german-upos")
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/bert-large-german-upos")
```
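A minimal tagging sketch with the `transformers` pipeline (the example sentence is arbitrary):
```py
from transformers import pipeline

upos = pipeline("token-classification",
                model="KoichiYasuoka/bert-large-german-upos")
print(upos("Ich liebe natürliche Sprachverarbeitung."))
```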
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
|
Captain-1337/CrudeBERT
|
[
"pytorch",
"bert",
"text-classification",
"arxiv:1908.10063",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 28 | null |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1748
- F1: 0.8555
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3036 | 1.0 | 835 | 0.1888 | 0.8068 |
| 0.1585 | 2.0 | 1670 | 0.1763 | 0.8415 |
| 0.1027 | 3.0 | 2505 | 0.1748 | 0.8555 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Captain272/lstm
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
language: Thai
task: extractive question answering
datasets: xquad.th
tags:
- bert-base
---
# Model Description
This model is for Thai extractive question answering. It is based on the multilingual BERT [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) model, and it is case-sensitive: it makes a difference between english and English.
# Training data
We split the original [xquad](https://github.com/deepmind/xquad) dataset into training/validation/testing sets. In total, there are 876/161/153 question-answer pairs from 34/7/7 articles in the training/validation/testing sets, respectively. You can find the details of the dataset here: [xquad_split](https://huggingface.co/datasets/zhufy/xquad_split).
# How to use
You can use it directly from the [🤗 Transformers](https://github.com/huggingface/transformers) library with a pipeline:
``` python
>>> from transformers.pipelines import pipeline
>>> from transformers import AutoTokenizer, AutoModelForQuestionAnswering
>>> tokenizer = AutoTokenizer.from_pretrained("zhufy/xquad-th-mbert-base")
>>> model = AutoModelForQuestionAnswering.from_pretrained("zhufy/xquad-th-mbert-base")
>>> nlp = pipeline("question-answering", model=model, tokenizer=tokenizer)
>>> context = "ดินดอนสามเหลี่ยม ไรน์-เมิส ซึ่งเป็นภูมิภาคทางธรรมชาติที่สำคัญของเนเธอร์แลนด์เริ่มต้น
ใกล้มิลลิงเงิน อาน เดอ เรน ใกล้ชายแดนเนเธอร์แลนด์ติดกับเยอรมัน
โดยมีสาขาของไรน์ไหลเข้าสู่แม่น้ำวาลและเนเดอร์เรน เนื่องจากน้ำส่วนใหญ่จากแม่น้ำไรน์
คำว่า ดินดอนสามเหลี่ยมไรน์ ซึ่งสั้นกว่าจึงเป็นคำที่ใช้เรียกกันทั่วไป อย่างไรก็ดี
ชื่อนี้ยังใช้เรียกดินดอนสามเหลี่ยมบริเวณแม่น้ำซึ่งแม่น้ำไรน์ไหลเข้าสู่ทะเลสาบคอนสแตนซ์อีกด้วย
ดังนั้นการเรียกดินดอนสามเหลี่ยมซึ่งใหญ่กว่าว่าไรน์-เมิส หรือแม้กระทั่งดินแดนสามเหลี่ยมไรน์
-เมิส-สเกลต์จึงชัดเจนกว่า เนื่องจากแม่น้ำสเกลต์สิ้นสุดที่ดินดอนสามเหลี่ยมเดียวกัน"
>>> question = "ดินดอนสามเหลี่ยมในเนเธอร์แลนด์มีชื่อว่าอะไร?"
>>> inputs = {"question": question,
"context":context }
>>> nlp(inputs)
{'score': 0.9426798224449158,
'start': 17,
'end': 84,
'answer': 'ไรน์-เมิส ซึ่งเป็นภูมิภาคทางธรรมชาติที่สำคัญของเนเธอร์แลนด์เริ่มต้น'}
```
|
Carlork314/Xd
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
language: Malay
task: extractive question answering
datasets: Malay SQuAD
tags:
- bert-base
---
# Model Description
This model is for Malay extractive question answering. It is based on the [malay-huggingface/bert-base-bahasa-cased](https://huggingface.co/malay-huggingface/bert-base-bahasa-cased/tree/main) model, and it is case-sensitive: it makes a difference between english and English.
# Training data
[Malay SQuAD v2.0](https://github.com/huseinzol05/malay-dataset/tree/master/question-answer/squad)
# How to use
You can use it directly from the [🤗 Transformers](https://github.com/huggingface/transformers) library with a pipeline:
``` python
>>> from transformers.pipelines import pipeline
>>> from transformers import AutoTokenizer, AutoModelForQuestionAnswering
>>> tokenizer = AutoTokenizer.from_pretrained("zhufy/squad-ms-bert-base")
>>> model = AutoModelForQuestionAnswering.from_pretrained("zhufy/squad-ms-bert-base")
>>> nlp = pipeline("question-answering", model=model, tokenizer=tokenizer)
>>> context = "Pada manusia, tindak balas ini diaktifkan dengan pelengkap
pengikatan kepada antibodi yang telah melekat pada mikrob ini
atau pengikatan protein pelengkap kepada karbohidrat pada permukaan
mikrob. Isyarat pengiktirafan ini mencetuskan tindak balas pembunuhan
yang pantas. Kelajuan tindak balas adalah hasil penguatan isyarat
yang berlaku berikutan pengaktifan proteolytik berturutan molekul
pelengkap, yang juga protease. Selepas protein pelengkap pada mulanya
mengikat kepada mikrob, mereka mengaktifkan aktiviti protease mereka,
yang seterusnya mengaktifkan protease pelengkap lain, dan sebagainya.
Ini menghasilkan cascade bermangkin yang menguatkan isyarat awal dengan
maklum balas positif terkawal. Kastil menghasilkan penghasilan peptida
yang menarik sel imun, meningkatkan kebolehtelapan vaskular, dan opsonize
(kot) permukaan patogen, menandakannya untuk kemusnahan. Pemendapan
pelengkap ini juga boleh membunuh sel secara terus dengan mengganggu
membran plasma mereka."
>>> question = "Protein pelengkap mengikat molekul apa yang berada di
permukaan mikrob untuk mendapatkan tindak balas imunWhat
are two basic primary resources used to guage complexity?"
>>> inputs = {"question": question,
"context":context }
>>> nlp(inputs)
{'score': 0.9848766922950745,
'start': 162,
'end': 173,
'answer': 'karbohidrat'}
```
|
CarlosTron/Yo
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- conversational
---
# willem DialoGPT Model
|
dccuchile/albert-base-spanish-finetuned-qa-mlqa
|
[
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"AlbertForQuestionAnswering"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: cc0-1.0
tags:
- automatic-speech-recognition
- NbAiLab/NPSC
- generated_from_trainer
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [KBLab/wav2vec2-large-voxrex](https://huggingface.co/KBLab/wav2vec2-large-voxrex) on the NBAILAB/NPSC - 16K_MP3_NYNORSK dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4929
- Wer: 0.1455
## Model description
More information needed
## Intended uses & limitations
More information needed
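A hedged inference sketch with the ASR pipeline is shown below; the checkpoint path is a placeholder, since the Hub ID of this model is not stated in the card.
```python
from transformers import pipeline

# Placeholder checkpoint path; the Hub ID of this model is not stated in the card.
asr = pipeline("automatic-speech-recognition", model="path/to/this-nynorsk-checkpoint")

# Assumption: 16 kHz mono audio, matching the 16K_MP3_NYNORSK training data.
print(asr("opptak.wav", chunk_length_s=30)["text"])
```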
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 80.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.0168 | 0.54 | 500 | 3.0478 | 1.0 |
| 2.8486 | 1.08 | 1000 | 2.7863 | 1.0 |
| 1.0509 | 1.62 | 1500 | 0.8737 | 0.5449 |
| 0.7873 | 2.16 | 2000 | 0.6718 | 0.4292 |
| 0.6987 | 2.7 | 2500 | 0.5497 | 0.3589 |
| 0.5548 | 3.24 | 3000 | 0.4841 | 0.3145 |
| 0.5421 | 3.78 | 3500 | 0.4569 | 0.2927 |
| 0.4416 | 4.31 | 4000 | 0.4702 | 0.2822 |
| 0.4388 | 4.85 | 4500 | 0.4145 | 0.2641 |
| 0.4011 | 5.39 | 5000 | 0.4033 | 0.2565 |
| 0.3959 | 5.93 | 5500 | 0.4127 | 0.2450 |
| 0.3643 | 6.47 | 6000 | 0.3972 | 0.2420 |
| 0.3594 | 7.01 | 6500 | 0.3882 | 0.2392 |
| 0.3315 | 7.55 | 7000 | 0.3714 | 0.2337 |
| 0.3131 | 8.09 | 7500 | 0.3964 | 0.2313 |
| 0.3192 | 8.63 | 8000 | 0.3711 | 0.2268 |
| 0.2855 | 9.17 | 8500 | 0.3815 | 0.2293 |
| 0.2756 | 9.71 | 9000 | 0.3653 | 0.2187 |
| 0.248 | 10.25 | 9500 | 0.3929 | 0.2093 |
| 0.2428 | 10.79 | 10000 | 0.3641 | 0.1986 |
| 0.2412 | 11.33 | 10500 | 0.3687 | 0.1978 |
| 0.2455 | 11.87 | 11000 | 0.3942 | 0.2005 |
| 0.2181 | 12.41 | 11500 | 0.3611 | 0.1876 |
| 0.2321 | 12.94 | 12000 | 0.3586 | 0.1940 |
| 0.2132 | 13.48 | 12500 | 0.3904 | 0.1892 |
| 0.2162 | 14.02 | 13000 | 0.3812 | 0.1867 |
| 0.205 | 14.56 | 13500 | 0.3751 | 0.1839 |
| 0.1757 | 15.1 | 14000 | 0.3722 | 0.1816 |
| 0.1722 | 15.64 | 14500 | 0.3873 | 0.1793 |
| 0.1862 | 16.18 | 15000 | 0.3924 | 0.1790 |
| 0.1549 | 16.72 | 15500 | 0.3719 | 0.1782 |
| 0.1616 | 17.26 | 16000 | 0.3570 | 0.1830 |
| 0.1646 | 17.8 | 16500 | 0.3867 | 0.1839 |
| 0.1541 | 18.34 | 17000 | 0.3944 | 0.1817 |
| 0.165 | 18.88 | 17500 | 0.3909 | 0.1806 |
| 0.152 | 19.42 | 18000 | 0.3883 | 0.1766 |
| 0.1532 | 19.96 | 18500 | 0.3732 | 0.1783 |
| 0.1498 | 20.5 | 19000 | 0.3931 | 0.1713 |
| 0.1424 | 21.04 | 19500 | 0.4205 | 0.1730 |
| 0.1394 | 21.57 | 20000 | 0.4291 | 0.1710 |
| 0.1407 | 22.11 | 20500 | 0.4239 | 0.1757 |
| 0.1275 | 22.65 | 21000 | 0.4171 | 0.1719 |
| 0.1262 | 23.19 | 21500 | 0.4346 | 0.1706 |
| 0.1301 | 23.73 | 22000 | 0.4281 | 0.1650 |
| 0.1342 | 24.27 | 22500 | 0.4469 | 0.1680 |
| 0.1249 | 24.81 | 23000 | 0.4297 | 0.1709 |
| 0.1143 | 25.35 | 23500 | 0.4130 | 0.1665 |
| 0.1121 | 25.89 | 24000 | 0.4458 | 0.1633 |
| 0.1206 | 26.43 | 24500 | 0.4597 | 0.1663 |
| 0.1142 | 26.97 | 25000 | 0.3961 | 0.1726 |
| 0.1025 | 27.51 | 25500 | 0.3985 | 0.1629 |
| 0.0961 | 28.05 | 26000 | 0.4002 | 0.1629 |
| 0.1253 | 28.59 | 26500 | 0.4256 | 0.1624 |
| 0.1228 | 29.13 | 27000 | 0.4308 | 0.1653 |
| 0.1034 | 29.67 | 27500 | 0.4354 | 0.1646 |
| 0.0853 | 30.2 | 28000 | 0.4200 | 0.1588 |
| 0.0936 | 30.74 | 28500 | 0.4748 | 0.1596 |
| 0.1015 | 31.28 | 29000 | 0.4383 | 0.1651 |
| 0.1 | 31.82 | 29500 | 0.4436 | 0.1659 |
| 0.1087 | 32.36 | 30000 | 0.4121 | 0.1596 |
| 0.1084 | 32.9 | 30500 | 0.4297 | 0.1602 |
| 0.0855 | 33.44 | 31000 | 0.4453 | 0.1645 |
| 0.0872 | 33.98 | 31500 | 0.4377 | 0.1605 |
| 0.0893 | 34.52 | 32000 | 0.4373 | 0.1556 |
| 0.0864 | 35.06 | 32500 | 0.4244 | 0.1607 |
| 0.08 | 35.6 | 33000 | 0.3972 | 0.1615 |
| 0.1025 | 36.14 | 33500 | 0.4481 | 0.1580 |
| 0.099 | 36.68 | 34000 | 0.4224 | 0.1613 |
| 0.083 | 37.22 | 34500 | 0.4499 | 0.1577 |
| 0.0783 | 37.76 | 35000 | 0.4649 | 0.1558 |
| 0.0856 | 38.3 | 35500 | 0.4493 | 0.1546 |
| 0.0888 | 38.83 | 36000 | 0.4313 | 0.1530 |
| 0.0752 | 39.37 | 36500 | 0.4737 | 0.1544 |
| 0.0723 | 39.91 | 37000 | 0.4539 | 0.1549 |
| 0.0785 | 40.45 | 37500 | 0.4585 | 0.1550 |
| 0.0686 | 40.99 | 38000 | 0.4489 | 0.1564 |
| 0.08 | 41.53 | 38500 | 0.4569 | 0.1553 |
| 0.0699 | 42.07 | 39000 | 0.4791 | 0.1551 |
| 0.066 | 42.61 | 39500 | 0.4807 | 0.1530 |
| 0.072 | 43.15 | 40000 | 0.4456 | 0.1570 |
| 0.0818 | 43.69 | 40500 | 0.4544 | 0.1582 |
| 0.0741 | 44.23 | 41000 | 0.4646 | 0.1573 |
| 0.0691 | 44.77 | 41500 | 0.4576 | 0.1531 |
| 0.0605 | 45.31 | 42000 | 0.4776 | 0.1558 |
| 0.0705 | 45.85 | 42500 | 0.4468 | 0.1562 |
| 0.0671 | 46.39 | 43000 | 0.4782 | 0.1563 |
| 0.0612 | 46.93 | 43500 | 0.4761 | 0.1542 |
| 0.0588 | 47.46 | 44000 | 0.4846 | 0.1534 |
| 0.0752 | 48.0 | 44500 | 0.4972 | 0.1554 |
| 0.0595 | 48.54 | 45000 | 0.4784 | 0.1546 |
| 0.0591 | 49.08 | 45500 | 0.4750 | 0.1609 |
| 0.0594 | 49.62 | 46000 | 0.4641 | 0.1593 |
| 0.0539 | 50.16 | 46500 | 0.4746 | 0.1545 |
| 0.0605 | 50.7 | 47000 | 0.4535 | 0.1586 |
| 0.0515 | 51.24 | 47500 | 0.4701 | 0.1577 |
| 0.058 | 51.78 | 48000 | 0.4667 | 0.1554 |
| 0.0503 | 52.32 | 48500 | 0.4747 | 0.1527 |
| 0.0536 | 52.86 | 49000 | 0.4914 | 0.1494 |
| 0.0569 | 53.4 | 49500 | 0.4869 | 0.1789 |
| 0.0711 | 53.94 | 50000 | 0.4863 | 0.1534 |
| 0.0605 | 54.48 | 50500 | 0.4533 | 0.1533 |
| 0.085 | 55.02 | 51000 | 0.4679 | 0.1545 |
| 0.05 | 55.56 | 51500 | 0.4699 | 0.1528 |
| 0.0577 | 56.09 | 52000 | 0.4865 | 0.1521 |
| 0.0494 | 56.63 | 52500 | 0.4852 | 0.1524 |
| 0.056 | 57.17 | 53000 | 0.4923 | 0.1508 |
| 0.056 | 57.71 | 53500 | 0.5102 | 0.1526 |
| 0.0515 | 58.25 | 54000 | 0.4989 | 0.1502 |
| 0.0465 | 58.79 | 54500 | 0.4852 | 0.1471 |
| 0.0537 | 59.33 | 55000 | 0.4716 | 0.1507 |
| 0.0494 | 59.87 | 55500 | 0.4852 | 0.1502 |
| 0.0482 | 60.41 | 56000 | 0.4887 | 0.1494 |
| 0.0574 | 60.95 | 56500 | 0.4689 | 0.1504 |
| 0.0558 | 61.49 | 57000 | 0.4683 | 0.1509 |
| 0.0509 | 62.03 | 57500 | 0.4923 | 0.1501 |
| 0.0484 | 62.57 | 58000 | 0.4871 | 0.1488 |
| 0.0512 | 63.11 | 58500 | 0.4751 | 0.1514 |
| 0.0502 | 63.65 | 59000 | 0.4805 | 0.1510 |
| 0.0466 | 64.19 | 59500 | 0.4939 | 0.1515 |
| 0.0518 | 64.72 | 60000 | 0.4840 | 0.1514 |
| 0.038 | 65.26 | 60500 | 0.4927 | 0.1511 |
| 0.0552 | 65.8 | 61000 | 0.4910 | 0.1490 |
| 0.0529 | 66.34 | 61500 | 0.4772 | 0.1484 |
| 0.0515 | 66.88 | 62000 | 0.4688 | 0.1482 |
| 0.0528 | 67.42 | 62500 | 0.4675 | 0.1472 |
| 0.0564 | 67.96 | 63000 | 0.4735 | 0.1483 |
| 0.0466 | 68.5 | 63500 | 0.4884 | 0.1460 |
| 0.0551 | 69.04 | 64000 | 0.4771 | 0.1479 |
| 0.0436 | 69.58 | 64500 | 0.4881 | 0.1489 |
| 0.043 | 70.12 | 65000 | 0.4847 | 0.1473 |
| 0.0529 | 70.66 | 65500 | 0.4846 | 0.1478 |
| 0.0434 | 71.2 | 66000 | 0.4921 | 0.1477 |
| 0.0395 | 71.74 | 66500 | 0.4961 | 0.1471 |
| 0.0398 | 72.28 | 67000 | 0.4940 | 0.1473 |
| 0.0405 | 72.82 | 67500 | 0.4891 | 0.1465 |
| 0.0404 | 73.35 | 68000 | 0.4880 | 0.1462 |
| 0.0478 | 73.89 | 68500 | 0.4937 | 0.1468 |
| 0.0388 | 74.43 | 69000 | 0.4868 | 0.1464 |
| 0.0426 | 74.97 | 69500 | 0.4965 | 0.1458 |
| 0.0382 | 75.51 | 70000 | 0.4999 | 0.1460 |
| 0.0426 | 76.05 | 70500 | 0.4944 | 0.1466 |
| 0.0459 | 76.59 | 71000 | 0.4978 | 0.1463 |
| 0.0366 | 77.13 | 71500 | 0.5010 | 0.1466 |
| 0.0511 | 77.67 | 72000 | 0.4920 | 0.1453 |
| 0.045 | 78.21 | 72500 | 0.4974 | 0.1461 |
| 0.0425 | 78.75 | 73000 | 0.4926 | 0.1453 |
| 0.0431 | 79.29 | 73500 | 0.4925 | 0.1456 |
| 0.0362 | 79.83 | 74000 | 0.4929 | 0.1455 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 1.18.5.dev0
- Tokenizers 0.11.6
|
dccuchile/albert-tiny-spanish-finetuned-mldoc
|
[
"pytorch",
"albert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 32 | null |
---
language:
- code
license: mit
datasets:
- anjandash/java-8m-methods-v1
---
|
dccuchile/albert-tiny-spanish-finetuned-ner
|
[
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
This repo contains model for [Data-to-text Generation with Variational Sequential Planning](https://arxiv.org/abs/2202.13756) (Ratish Puduppully and Yao Fu and Mirella Lapata; In Transactions of the Association for Computational Linguistics (TACL)). This model is trained on the [MLB dataset](https://huggingface.co/datasets/GEM/mlb_data_to_text). The code is available on github [repo](https://github.com/ratishsp/data2text-seq-plan-py).
## Citation
```
@article{puduppully-2021-seq-plan,
author = {Ratish Puduppully and Yao Fu and Mirella Lapata},
title = {Data-to-text Generation with Variational Sequential Planning},
journal = {Transactions of the Association for Computational Linguistics (to appear)},
url = {https://arxiv.org/abs/2202.13756},
year = {2022}
}
```
## License
The model is available under the MIT License.
|
dccuchile/albert-tiny-spanish-finetuned-pawsx
|
[
"pytorch",
"albert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 29 | null |
---
license: mit
---
CER: 0.0019
Training code: https://colab.research.google.com/drive/14MfFkhgPS63RJcP7rpBOK6OII_y34jx_?usp=sharing
|
dccuchile/albert-tiny-spanish-finetuned-pos
|
[
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
This repo contains model for [Data-to-text Generation with Variational Sequential Planning](https://arxiv.org/abs/2202.13756) (Ratish Puduppully and Yao Fu and Mirella Lapata; In Transactions of the Association for Computational Linguistics (TACL)). This model is trained on the [RotoWire dataset](https://github.com/harvardnlp/boxscore-data). The code is available on github [repo](https://github.com/ratishsp/data2text-seq-plan-py).
## Citation
```
@article{puduppully-2021-seq-plan,
author = {Ratish Puduppully and Yao Fu and Mirella Lapata},
title = {Data-to-text Generation with Variational Sequential Planning},
journal = {Transactions of the Association for Computational Linguistics (to appear)},
url = {https://arxiv.org/abs/2202.13756},
year = {2022}
}
```
## License
The model is available under the MIT License.
|
dccuchile/albert-tiny-spanish-finetuned-xnli
|
[
"pytorch",
"albert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 31 | null |
This repo contains model for [Data-to-text Generation with Variational Sequential Planning](https://arxiv.org/abs/2202.13756) (Ratish Puduppully and Yao Fu and Mirella Lapata; In Transactions of the Association for Computational Linguistics (TACL)). This model is trained on the [German RotoWire dataset](https://huggingface.co/datasets/GEM/RotoWire_English-German). The code is available on github [repo](https://github.com/ratishsp/data2text-seq-plan-py).
## Citation
```
@article{puduppully-2021-seq-plan,
author = {Ratish Puduppully and Yao Fu and Mirella Lapata},
title = {Data-to-text Generation with Variational Sequential Planning},
journal = {Transactions of the Association for Computational Linguistics (to appear)},
url = {https://arxiv.org/abs/2202.13756},
year = {2022}
}
```
## License
The model is available under the MIT License.
|
dccuchile/albert-xlarge-spanish-finetuned-ner
|
[
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
This model generates the LaTeX sequence of a math expression from an image of the handwritten expression.
On the CROHME 2014 test dataset, CER = 0.507772718700326.
|
dccuchile/albert-xlarge-spanish-finetuned-xnli
|
[
"pytorch",
"albert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 29 | null |
---
language:
- ar
tags:
- AraGPT2
- GPT-2
- MSA
- Arabic Text Summarization
- Arabic News Title Generation
- Arabic Paraphrasing
widget:
- text: ""
---
# An Arabic abstractive text summarization model
An AraGPT2 model fine-tuned on a dataset of 84,764 paragraph-summary pairs.
More details on the fine-tuning of this model will be released later.
The model can be used as follows:
```python
from transformers import GPT2TokenizerFast, AutoModelForCausalLM
from arabert.preprocess import ArabertPreprocessor
model_name="malmarjeh/gpt2"
preprocessor = ArabertPreprocessor(model_name="")
tokenizer = GPT2TokenizerFast.from_pretrained("aubmindlab/aragpt2-base")
model = AutoModelForCausalLM.from_pretrained(model_name)
text = "شهدت مدينة طرابلس، مساء أمس الأربعاء، احتجاجات شعبية وأعمال شغب لليوم الثالث على التوالي، وذلك بسبب تردي الوضع المعيشي والاقتصادي. واندلعت مواجهات عنيفة وعمليات كر وفر ما بين الجيش اللبناني والمحتجين استمرت لساعات، إثر محاولة فتح الطرقات المقطوعة، ما أدى إلى إصابة العشرات من الطرفين."
text = preprocessor.preprocess(text)
text = '\n النص: ' + text + ' \n الملخص: \n '
tokenizer.add_special_tokens({'pad_token': '<pad>'})
tokens = tokenizer.batch_encode_plus([text], return_tensors='pt', padding='max_length', max_length=150)
output = model.generate(input_ids=tokens['input_ids'],repetition_penalty=3.0, num_beams=3, max_length=240, pad_token_id=2, eos_token_id=0, bos_token_id=10611)
result = tokenizer.decode(output[0][150:], skip_special_tokens=True).strip()
result
>>> 'واحتجاجات في طرابلس لليوم الثالث على التوالي'
```
## Contact:
<[email protected]>
|
dccuchile/albert-xxlarge-spanish-finetuned-pos
|
[
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-biomedical-clinical-es-finetuned-ner-Concat_CRAFT_es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-biomedical-clinical-es-finetuned-ner-Concat_CRAFT_es
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1874
- Precision: 0.8559
- Recall: 0.8425
- F1: 0.8492
- Accuracy: 0.9696
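As a hedged usage sketch (the full hub repo id is not given in this card, so the one below is a placeholder, and the example sentence is illustrative only):
```python
from transformers import pipeline

# Placeholder hub id: only the fine-tuned model's name is given in this card.
MODEL_ID = "<user>/roberta-base-biomedical-clinical-es-finetuned-ner-Concat_CRAFT_es"

ner = pipeline("token-classification", model=MODEL_ID, aggregation_strategy="simple")
print(ner("La proteína p53 regula el ciclo celular."))  # illustrative Spanish biomedical sentence
```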
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.072 | 1.0 | 2719 | 0.1500 | 0.8138 | 0.8224 | 0.8181 | 0.9644 |
| 0.0305 | 2.0 | 5438 | 0.1555 | 0.8417 | 0.8253 | 0.8334 | 0.9674 |
| 0.014 | 3.0 | 8157 | 0.1743 | 0.8429 | 0.8412 | 0.8421 | 0.9685 |
| 0.0076 | 4.0 | 10876 | 0.1874 | 0.8559 | 0.8425 | 0.8492 | 0.9696 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
dccuchile/bert-base-spanish-wwm-cased-finetuned-pawsx
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 25 | null |
---
language: mr
license: cc-by-4.0
datasets:
- L3Cube-MahaHate
widget:
- text: "I like you. </s></s> I love you."
---
## MahaHate-multi-RoBERTa
MahaHate-multi-RoBERTa (Marathi hate speech identification) is a MahaRoBERTa (l3cube-pune/marathi-roberta) model fine-tuned on L3Cube-MahaHate, a Marathi tweet-based hate speech detection dataset. This is a four-class model with the labels hate, offensive, profane, and not. The 2-class model can be found <a href='https://huggingface.co/l3cube-pune/mahahate-bert'> here </a>
[dataset link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2203.13778)
```
@inproceedings{patil2022l3cube,
title={L3Cube-MahaHate: A Tweet-based Marathi Hate Speech Detection Dataset and BERT Models},
author={Patil, Hrushikesh and Velankar, Abhishek and Joshi, Raviraj},
booktitle={Proceedings of the Third Workshop on Threat, Aggression and Cyberbullying (TRAC 2022)},
pages={1--9},
year={2022}
}
```
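For quick experimentation, a minimal classification sketch follows; the hub id is an assumption mirroring the naming of the 2-class model linked above, so verify it before use.
```python
from transformers import pipeline

# Assumed repo id (not stated explicitly in this card); confirm on the hub before use.
classifier = pipeline("text-classification", model="l3cube-pune/mahahate-multi-roberta")
print(classifier("हा चित्रपट खूप छान आहे"))  # illustrative Marathi input
```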
|
dccuchile/bert-base-spanish-wwm-cased-finetuned-pos
|
[
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1 | null |
---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: albert_ernie_summarization_cnn_dailymail
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert_ernie_summarization_cnn_dailymail
This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.10.3
|
dccuchile/bert-base-spanish-wwm-cased-finetuned-qa-mlqa
|
[
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
AI4Bharat's IndicBERT fine-tuned for few-shot transfer learning on Hindi training data, with Urdu validation and test sets. Low accuracy is expected. This implementation uses mBERT's tokenizer.
---
language:
- ur
tags:
- named entity recognition
- ner
license: apache-2.0
datasets:
- wikiann
metrics:
- f1
- recall
- precision
- accuracy
---
|
dccuchile/bert-base-spanish-wwm-uncased-finetuned-ner
|
[
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
tags:
- object-detection
- COCO
- YOLO
- Darknet
model-index:
- name: moon
results:
- metrics:
- type: mAP
value: 1
name: mAP
task:
type: object-detection
name: object-detection
dataset:
name: COCO
type: COCO
---
|
dccuchile/bert-base-spanish-wwm-uncased-finetuned-pos
|
[
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
language: en
license: apache-2.0
---
## ELECTRA for IF
**ELECTRA** is a method for self-supervised language representation learning. They are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf).
For a detailed description and experimental results, please refer to the original paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).
This repository contains a small ELECTRA discriminator finetuned on a corpus of interactive fiction commands labelled with the WordNet synset offset of the verb in the sentence. The original dataset was collected from the lists of actions in the walkthroughs for the games included in the [Jericho](https://github.com/microsoft/jericho) framework and manually annotated. For more information visit https://github.com/aporporato/electra and https://github.com/aporporato/jericho-corpora.
## How to use the discriminator in `transformers`
(Heavily based on: https://github.com/huggingface/notebooks/blob/master/examples/text_classification-tf.ipynb)
```python
import math
import numpy as np
import tensorflow as tf
from datasets import load_metric, Dataset, DatasetDict
from transformers import TFAutoModelForSequenceClassification, AutoTokenizer, DataCollatorWithPadding, create_optimizer
from transformers.keras_callbacks import KerasMetricCallback
# This example shows how this model can be used:
# you should fine-tune the model on your own corpus of commands, larger than this one
dict_train = {
"idx": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18",
"19", "20"],
"sentence": ["e", "get pen", "drop book", "x paper", "i", "south", "get paper", "drop the pen", "x book",
"inventory", "n", "get the book", "drop paper", "look at Pen", "inv", "g", "s", "get sandwich",
"drop sandwich", "x sandwich", "agin"],
"label": ["travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "inventory.v.01", "travel.v.01", "take.v.04",
"drop.v.01", "examine.v.02", "inventory.v.01", "travel.v.01", "take.v.04", "drop.v.01", "examine.v.02",
"inventory.v.01", "repeat.v.01", "travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "repeat.v.01"]
}
dict_val = {
"idx": ["0", "1", "2", "3", "4", "5"],
"sentence": ["w", "get shield", "drop sword", "x spikes", "i", "repeat"],
"label": ["travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "inventory.v.01", "repeat.v.01"]
}
raw_train_dataset = Dataset.from_dict(dict_train)
raw_val_dataset = Dataset.from_dict(dict_val)
raw_dataset = DatasetDict()
raw_dataset["train"] = raw_train_dataset
raw_dataset["val"] = raw_val_dataset
raw_dataset = raw_dataset.class_encode_column("label")
print(raw_dataset)
print(raw_dataset["train"].features)
print(raw_dataset["val"].features)
print(raw_dataset["train"][1])
label2id = {}
id2label = {}
for i, l in enumerate(raw_dataset["train"].features["label"].names):
label2id[l] = i
id2label[i] = l
discriminator = TFAutoModelForSequenceClassification.from_pretrained("Aureliano/electra-if",
label2id=label2id,
id2label=id2label)
tokenizer = AutoTokenizer.from_pretrained("Aureliano/electra-if")
tokenize_function = lambda example: tokenizer(example["sentence"], truncation=True)
pre_tokenizer_columns = set(raw_dataset["train"].features)
encoded_dataset = raw_dataset.map(tokenize_function, batched=True)
tokenizer_columns = list(set(encoded_dataset["train"].features) - pre_tokenizer_columns)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
batch_size = len(encoded_dataset["train"])
tf_train_dataset = encoded_dataset["train"].to_tf_dataset(
columns=tokenizer_columns,
label_cols=["labels"],
shuffle=True,
batch_size=batch_size,
collate_fn=data_collator
)
tf_validation_dataset = encoded_dataset["val"].to_tf_dataset(
columns=tokenizer_columns,
label_cols=["labels"],
shuffle=False,
batch_size=batch_size,
collate_fn=data_collator
)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
num_epochs = 25
batches_per_epoch = math.ceil(len(encoded_dataset["train"]) / batch_size)
total_train_steps = int(batches_per_epoch * num_epochs)
optimizer, schedule = create_optimizer(
init_lr=5e-5, num_warmup_steps=total_train_steps // 5, num_train_steps=total_train_steps
)
metric = load_metric("accuracy")
def compute_metrics(eval_predictions):
logits, labels = eval_predictions
predictions = np.argmax(logits, axis=-1)
return metric.compute(predictions=predictions, references=labels)
metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_dataset)
callbacks = [metric_callback]
discriminator.compile(optimizer=optimizer, loss=loss, metrics=["sparse_categorical_accuracy"])
discriminator.fit(
tf_train_dataset,
epochs=num_epochs,
validation_data=tf_validation_dataset,
callbacks=callbacks
)
print("Evaluate on test data")
results = discriminator.evaluate(tf_validation_dataset)
print("test loss, test acc:", results)
text = "i"
encoded_input = tokenizer(text, return_tensors='tf')
output = discriminator(encoded_input)
prediction = tf.nn.softmax(output["logits"][0], -1)
label = id2label[tf.math.argmax(prediction).numpy()]
print("\n", text, ":", label,
"\n") # ideally 'inventory.v.01' (-> "make or include in an itemized record or report"), but probably only with a better finetuning dataset
text = "get lamp"
encoded_input = tokenizer(text, return_tensors='tf')
output = discriminator(encoded_input)
prediction = tf.nn.softmax(output["logits"][0], -1)
label = id2label[tf.math.argmax(prediction).numpy()]
print("\n", text, ":", label,
"\n") # ideally 'take.v.04' (-> "get into one's hands, take physically"), but probably only with a better finetuning dataset
text = "w"
encoded_input = tokenizer(text, return_tensors='tf')
output = discriminator(encoded_input)
prediction = tf.nn.softmax(output["logits"][0], -1)
label = id2label[tf.math.argmax(prediction).numpy()]
print("\n", text, ":", label,
"\n") # ideally 'travel.v.01' (-> "change location; move, travel, or proceed, also metaphorically"), but probably only with a better finetuning dataset
```
|
dccuchile/bert-base-spanish-wwm-uncased-finetuned-xnli
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 36 | null |
---
language: nl
tags:
- speech
---
# Wav2Vec2-Dutch-Base
A Dutch Wav2Vec2 model. This model is created by further pre-training the original English [`facebook/wav2vec2-base`](https://huggingface.co/facebook/wav2vec2-base) model on Dutch speech from [Het Corpus Gesproken Nederlands](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/).
This model is one of two Dutch Wav2Vec2 models:
- [`GroNLP/wav2vec2-dutch-base`](https://huggingface.co/GroNLP/wav2vec2-dutch-base) (this model)
- [`GroNLP/wav2vec2-dutch-large`](https://huggingface.co/GroNLP/wav2vec2-dutch-large)
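Since this checkpoint carries no task-specific head, a minimal sketch for extracting speech representations could look as follows (assuming the repo ships a feature-extractor config; otherwise the one from `facebook/wav2vec2-base` can be reused):
```python
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2Model

extractor = AutoFeatureExtractor.from_pretrained("GroNLP/wav2vec2-dutch-base")
model = Wav2Vec2Model.from_pretrained("GroNLP/wav2vec2-dutch-base")

# One second of silence at 16 kHz stands in for real Dutch speech.
speech = torch.zeros(16000).numpy()
inputs = extractor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # shape: (1, frames, 768)
print(hidden_states.shape)
```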
|
dccuchile/distilbert-base-spanish-uncased-finetuned-ner
|
[
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 28 | null |
---
pipeline_tag: sentence-similarity
language: fr
datasets:
- stsb_multi_mt
tags:
- Text
- Sentence Similarity
- Sentence-Embedding
- camembert-base
license: apache-2.0
model-index:
- name: sentence-camembert-base by Van Tuan DANG
results:
- task:
name: Sentence-Embedding
type: Text Similarity
dataset:
name: Text Similarity fr
type: stsb_multi_mt
args: fr
metrics:
- name: Test Pearson correlation coefficient
type: Pearson_correlation_coefficient
value: xx.xx
---
## Pre-trained sentence embedding model: state-of-the-art sentence embeddings for French
The model is fine-tuned from the pre-trained [facebook/camembert-base](https://huggingface.co/camembert/camembert-base) using
[Siamese BERT-Networks with 'sentence-transformers'](https://www.sbert.net/) on the [stsb](https://huggingface.co/datasets/stsb_multi_mt/viewer/fr/train) dataset
## Usage
The model can be used directly (without a language model) as follows:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("dangvantuan/sentence-camembert-base")
sentences = ["Un avion est en train de décoller.",
"Un homme joue d'une grande flûte.",
"Un homme étale du fromage râpé sur une pizza.",
"Une personne jette un chat au plafond.",
"Une personne est en train de plier un morceau de papier.",
]
embeddings = model.encode(sentences)
```
## Evaluation
The model can be evaluated as follows on the French test data of stsb.
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.readers import InputExample
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator
from datasets import load_dataset
def convert_dataset(dataset):
dataset_samples=[]
for df in dataset:
score = float(df['similarity_score'])/5.0 # Normalize score to range 0 ... 1
inp_example = InputExample(texts=[df['sentence1'],
df['sentence2']], label=score)
dataset_samples.append(inp_example)
return dataset_samples
# Loading the dataset for evaluation
df_dev = load_dataset("stsb_multi_mt", name="fr", split="dev")
df_test = load_dataset("stsb_multi_mt", name="fr", split="test")
# Convert the dataset for evaluation
# For Dev set:
dev_samples = convert_dataset(df_dev)
val_evaluator = EmbeddingSimilarityEvaluator.from_input_examples(dev_samples, name='sts-dev')
val_evaluator(model, output_path="./")
# For Test set:
test_samples = convert_dataset(df_test)
test_evaluator = EmbeddingSimilarityEvaluator.from_input_examples(test_samples, name='sts-test')
test_evaluator(model, output_path="./")
```
**Test Result**:
The performance is measured using Pearson and Spearman correlation:
- On dev
| Model | Pearson correlation | Spearman correlation | #params |
| ------------- | ------------- | ------------- |------------- |
| [dangvantuan/sentence-camembert-base](https://huggingface.co/dangvantuan/sentence-camembert-base)| 86.73 |86.54 | 110M |
| [distiluse-base-multilingual-cased](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased) | 79.22 | 79.16|135M |
- On test
| Model | Pearson correlation | Spearman correlation |
| ------------- | ------------- | ------------- |
| [dangvantuan/sentence-camembert-base](https://huggingface.co/dangvantuan/sentence-camembert-base)| 82.36 | 81.64|
| [distiluse-base-multilingual-cased](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased) | 78.62 | 77.48|
## Citation
@article{reimers2019sentence,
title={Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks},
author={Nils Reimers, Iryna Gurevych},
journal={https://arxiv.org/abs/1908.10084},
year={2019}
}
@article{martin2020camembert,
title={CamemBERT: a Tasty French Language Model},
author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
journal={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020}
}
|
dccuchile/distilbert-base-spanish-uncased-finetuned-pos
|
[
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
language: ja
license: cc-by-sa-4.0
tags:
- finance
widget:
- text: 流動[MASK]は、1億円となりました。
---
# Additional pretrained BERT base Japanese finance
This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language.
The codes for the pretraining are available at [retarfi/language-pretraining](https://github.com/retarfi/language-pretraining/tree/v1.0).
## Model architecture
The model architecture is the same as BERT base in the [original BERT paper](https://arxiv.org/abs/1810.04805): 12 layers, 768 dimensions of hidden states, and 12 attention heads.
## Training Data
The model is additionally pretrained on a financial corpus, starting from [Tohoku University's BERT base Japanese model (cl-tohoku/bert-base-japanese)](https://huggingface.co/cl-tohoku/bert-base-japanese).
The financial corpus consists of 2 corpora:
- Summaries of financial results from October 9, 2012, to December 31, 2020
- Securities reports from February 8, 2018, to December 31, 2020
The financial corpus file consists of approximately 27M sentences.
## Tokenization
You can use the tokenizer from [Tohoku University's BERT base Japanese model (cl-tohoku/bert-base-japanese)](https://huggingface.co/cl-tohoku/bert-base-japanese):
```
tokenizer = transformers.BertJapaneseTokenizer.from_pretrained('cl-tohoku/bert-base-japanese')
```
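For masked-token prediction, a minimal sketch could look like the following; the repo id of this checkpoint is a placeholder, since it is not stated in the card, and the Japanese tokenizer additionally requires `fugashi` and `ipadic`.
```python
from transformers import pipeline

# Placeholder repo id for this checkpoint; the tokenizer is the cl-tohoku one mentioned above.
fill_mask = pipeline(
    "fill-mask",
    model="<this-checkpoint-repo-id>",
    tokenizer="cl-tohoku/bert-base-japanese",
)
print(fill_mask("流動[MASK]は、1億円となりました。"))  # the widget example from this card
```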
## Training
The models are trained with the same configuration as BERT base in the [original BERT paper](https://arxiv.org/abs/1810.04805); 512 tokens per instance, 256 instances per batch, and 1M training steps.
## Citation
```
@article{Suzuki-etal-2023-ipm,
title = {Constructing and analyzing domain-specific language model for financial text mining},
author = {Masahiro Suzuki and Hiroki Sakaji and Masanori Hirano and Kiyoshi Izumi},
journal = {Information Processing & Management},
volume = {60},
number = {2},
pages = {103194},
year = {2023},
doi = {10.1016/j.ipm.2022.103194}
}
```
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 4.0](https://creativecommons.org/licenses/by-sa/4.0/).
## Acknowledgments
This work was supported by JSPS KAKENHI Grant Number JP21K12010 and JST-Mirai Program Grant Number JPMJMI20B1.
|
dccuchile/distilbert-base-spanish-uncased
|
[
"pytorch",
"distilbert",
"fill-mask",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 670 | null |
---
language: en
tags:
- question_answering
datasets:
- qasper
---
# led-base for QA with qasper
A 10-epoch training run of [Longformer Encoder Decoder Baselines for Qasper](https://github.com/allenai/qasper-led-baseline).
## How to use
```
git clone https://github.com/allenai/qasper-led-baseline.git
cd qasper-led-baseline
git clone https://huggingface.co/z-uo/led-base-qasper
pip install -r requirements.txt
# TODO test
python scripts/sample_qasper_answers.py --model led-base-qasper --data qasper-dev-v0.2.json --samples 10 --out test_only.log
```
|
CennetOguz/distilbert-base-uncased-finetuned-recipe-accelerate
|
[
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_en_es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_en_es
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es) on the CRAFT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1750
- Precision: 0.8664
- Recall: 0.8587
- F1: 0.8625
- Accuracy: 0.9727
## Model description
This model performs Named Entity Recognition for 6 entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical from the [CRAFT](https://github.com/UCDenver-ccp/CRAFT/releases) (Colorado Richly Annotated Full Text) Corpus in Spanish and English.
Entity tags have been normalized and replaced from the original three-letter codes with full names, e.g. B-Protein, I-Chemical.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0564 | 1.0 | 1360 | 0.1459 | 0.8296 | 0.8489 | 0.8392 | 0.9696 |
| 0.0222 | 2.0 | 2720 | 0.1554 | 0.8650 | 0.8320 | 0.8482 | 0.9702 |
| 0.0124 | 3.0 | 4080 | 0.1670 | 0.8588 | 0.8564 | 0.8576 | 0.9717 |
| 0.0052 | 4.0 | 5440 | 0.1750 | 0.8664 | 0.8587 | 0.8625 | 0.9727 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
CennetOguz/distilbert-base-uncased-finetuned-recipe
|
[
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | null |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Biobert-base-cased-v1.2-finetuned-ner-CRAFT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Biobert-base-cased-v1.2-finetuned-ner-CRAFT
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1878
- Precision: 0.8397
- Recall: 0.8366
- F1: 0.8382
- Accuracy: 0.9683
## Model description
This model performs Named Entity Recognition for 6 entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical from the [CRAFT](https://github.com/UCDenver-ccp/CRAFT/releases) (Colorado Richly Annotated Full Text) Corpus in English.
Entity tags have been normalized and replaced from the original three-letter codes with full names, e.g. B-Protein, I-Chemical.
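A hedged inference sketch with the Auto classes is given below; the full hub repo id is not stated in this card, so the id and the English example sentence are placeholders.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

# Placeholder hub id: the card gives only the fine-tuned model's name, not its full repo path.
MODEL_ID = "<user>/Biobert-base-cased-v1.2-finetuned-ner-CRAFT"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForTokenClassification.from_pretrained(MODEL_ID)
ner = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("The p53 protein regulates the cell cycle."))  # illustrative English sentence
```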
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.11 | 1.0 | 1360 | 0.1668 | 0.7952 | 0.7917 | 0.7934 | 0.9611 |
| 0.0484 | 2.0 | 2720 | 0.1640 | 0.8224 | 0.8371 | 0.8297 | 0.9661 |
| 0.0261 | 3.0 | 4080 | 0.1812 | 0.8143 | 0.8447 | 0.8292 | 0.9662 |
| 0.0112 | 4.0 | 5440 | 0.1878 | 0.8397 | 0.8366 | 0.8382 | 0.9683 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Chae/botman
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Biobert-base-cased-v1.2-finetuned-ner-CRAFT_es_en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Biobert-base-cased-v1.2-finetuned-ner-CRAFT_es_en
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the CRAFT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1811
- Precision: 0.8555
- Recall: 0.8539
- F1: 0.8547
- Accuracy: 0.9706
## Model description
This model performs Named Entity Recognition for 6 entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical from the [CRAFT](https://github.com/UCDenver-ccp/CRAFT/releases) (Colorado Richly Annotated Full Text) Corpus in Spanish and English.
Entity tags have been normalized and replaced from the original three-letter codes with full names, e.g. B-Protein, I-Chemical.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.052 | 1.0 | 1360 | 0.1413 | 0.8300 | 0.8442 | 0.8370 | 0.9677 |
| 0.0199 | 2.0 | 2720 | 0.1673 | 0.8461 | 0.8458 | 0.8459 | 0.9689 |
| 0.011 | 3.0 | 4080 | 0.1647 | 0.8588 | 0.8528 | 0.8558 | 0.9704 |
| 0.0031 | 4.0 | 5440 | 0.1811 | 0.8555 | 0.8539 | 0.8547 | 0.9706 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Chaewon/mmnt_decoder_en
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12 | 2022-03-11T22:57:19Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1502166273064517632/RdLwNuR6_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">rogue⛓🐕|| BIRFDAY BOY</div>
<div style="text-align: center; font-size: 14px;">@thed3linquent_</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from rogue⛓🐕|| BIRFDAY BOY.
| Data | rogue⛓🐕|| BIRFDAY BOY |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 334 |
| Short tweets | 710 |
| Tweets kept | 2202 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1tal3g38/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @thed3linquent_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1aw76tml) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1aw76tml/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/thed3linquent_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Chaima/TunBerto
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
language:
- multilingual
- af
- am
- ar
- ast
- az
- ba
- be
- bg
- bn
- br
- bs
- ca
- ceb
- cs
- cy
- da
- de
- el
- en
- es
- et
- fa
- ff
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- ht
- hu
- hy
- id
- ig
- ilo
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- lb
- lg
- ln
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- no
- ns
- oc
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- so
- sq
- sr
- ss
- su
- sv
- sw
- ta
- th
- tl
- tn
- tr
- uk
- ur
- uz
- vi
- wo
- xh
- yi
- yo
- zh
- zu
license: mit
tags:
- m2m100-12B
---
# M2M100 12B
M2M100 is a multilingual encoder-decoder (seq-to-seq) model trained for Many-to-Many multilingual translation.
It was introduced in this [paper](https://arxiv.org/abs/2010.11125) and first released in [this](https://github.com/pytorch/fairseq/tree/master/examples/m2m_100) repository.
The model can directly translate between the 9,900 directions of 100 languages.
To translate into a target language, the target language id is forced as the first generated token.
To force the target language id as the first generated token, pass the `forced_bos_token_id` parameter to the `generate` method.
*Note: `M2M100Tokenizer` depends on `sentencepiece`, so make sure to install it before running the example.*
To install `sentencepiece` run `pip install sentencepiece`
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।"
chinese_text = "生活就像一盒巧克力。"
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100-12B-last-ckpt")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100-12B-last-ckpt")
# translate Hindi to French
tokenizer.src_lang = "hi"
encoded_hi = tokenizer(hi_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.get_lang_id("fr"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "La vie est comme une boîte de chocolat."
# translate Chinese to English
tokenizer.src_lang = "zh"
encoded_zh = tokenizer(chinese_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "Life is like a box of chocolate."
```
See the [model hub](https://huggingface.co/models?filter=m2m_100) to look for more fine-tuned versions.
## Languages covered
Afrikaans (af), Amharic (am), Arabic (ar), Asturian (ast), Azerbaijani (az), Bashkir (ba), Belarusian (be), Bulgarian (bg), Bengali (bn), Breton (br), Bosnian (bs), Catalan; Valencian (ca), Cebuano (ceb), Czech (cs), Welsh (cy), Danish (da), German (de), Greek (el), English (en), Spanish (es), Estonian (et), Persian (fa), Fulah (ff), Finnish (fi), French (fr), Western Frisian (fy), Irish (ga), Gaelic; Scottish Gaelic (gd), Galician (gl), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Croatian (hr), Haitian; Haitian Creole (ht), Hungarian (hu), Armenian (hy), Indonesian (id), Igbo (ig), Iloko (ilo), Icelandic (is), Italian (it), Japanese (ja), Javanese (jv), Georgian (ka), Kazakh (kk), Central Khmer (km), Kannada (kn), Korean (ko), Luxembourgish; Letzeburgesch (lb), Ganda (lg), Lingala (ln), Lao (lo), Lithuanian (lt), Latvian (lv), Malagasy (mg), Macedonian (mk), Malayalam (ml), Mongolian (mn), Marathi (mr), Malay (ms), Burmese (my), Nepali (ne), Dutch; Flemish (nl), Norwegian (no), Northern Sotho (ns), Occitan (post 1500) (oc), Oriya (or), Panjabi; Punjabi (pa), Polish (pl), Pushto; Pashto (ps), Portuguese (pt), Romanian; Moldavian; Moldovan (ro), Russian (ru), Sindhi (sd), Sinhala; Sinhalese (si), Slovak (sk), Slovenian (sl), Somali (so), Albanian (sq), Serbian (sr), Swati (ss), Sundanese (su), Swedish (sv), Swahili (sw), Tamil (ta), Thai (th), Tagalog (tl), Tswana (tn), Turkish (tr), Ukrainian (uk), Urdu (ur), Uzbek (uz), Vietnamese (vi), Wolof (wo), Xhosa (xh), Yiddish (yi), Yoruba (yo), Chinese (zh), Zulu (zu)
## BibTeX entry and citation info
```
@misc{fan2020englishcentric,
title={Beyond English-Centric Multilingual Machine Translation},
author={Angela Fan and Shruti Bhosale and Holger Schwenk and Zhiyi Ma and Ahmed El-Kishky and Siddharth Goyal and Mandeep Baines and Onur Celebi and Guillaume Wenzek and Vishrav Chaudhary and Naman Goyal and Tom Birch and Vitaliy Liptchinsky and Sergey Edunov and Edouard Grave and Michael Auli and Armand Joulin},
year={2020},
eprint={2010.11125},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
chainyo/speaker-recognition-meetup
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1 | null |
---
language:
- es
tags:
- pytorch
- causal-lm
license: apache-2.0
datasets:
- bertin-project/mc4-es-sampled
---
- [✨Version v1✨](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1): August 25th, 2022 (*[full](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1) and [half-precision weights](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1-half)*, at step 1M)
- [Version v1beta3](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1beta3): July 22nd, 2022 (*[full](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1beta3) and [half-precision weights](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1beta3-half)*, at step 850k)
- [Version v1beta2](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1beta2): June 6th, 2022 (*[full](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1beta2) and [half-precision weights](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1beta2-half)*, at step 616k)
- [Version v1beta1](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1beta1-half): April 28th, 2022 (*half-precision weights only*, at step 408k)
- <details><summary>All checkpoints</summary>
- [Checkpoint 130k](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/6c116e533a00db027bf0a2e0b5e06d3e0772e2d0).
- [Checkpoint 275k](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/20f424ebcc7c500d5328ed45a8c911a2a75583f1).
- [Checkpoint 408k](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/c51db24abee958efe83e52fddf6d19e5f065b818).
- [Checkpoint 616k](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/abafe00bfb03330e72a67ed9fc0958c7399f2181).
- [Checkpoint 850k](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/59d5064b65043f2ff2b2549e4a076854eec22b2e).
- [Checkpoint 1M](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/153dab8ad6bc0bfe84346a58834992d46e83662a).
</details>
# BERTIN GPT-J-6B
<div align=center>
<img alt="BERTIN logo" src="https://huggingface.co/bertin-project/bertin-roberta-base-spanish/resolve/main/images/bertin.png" width="200px">
</div>
## Demo: https://huggingface.co/spaces/bertin-project/bertin-gpt-j-6B
## Model Description
BERTIN-GPT-J-6B is a Spanish finetuned version of GPT-J 6B, a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.
<figure>
| Hyperparameter | Value |
|----------------------|------------|
| \\(n_{parameters}\\) | 6053381344 |
| \\(n_{layers}\\) | 28* |
| \\(d_{model}\\) | 4096 |
| \\(d_{ff}\\) | 16384 |
| \\(n_{heads}\\) | 16 |
| \\(d_{head}\\) | 256 |
| \\(n_{ctx}\\) | 2048 |
| \\(n_{vocab}\\) | 50257/50400† (same tokenizer as GPT-2/3) |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
<figcaption><p><strong>*</strong> Each layer consists of one feedforward block and one self attention block.</p>
<p><strong>†</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure>
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
## Training data
BERTIN-GPT-J-6B was finetuned on [mC4-es-sampled (gaussian)](https://huggingface.co/datasets/bertin-project/mc4-es-sampled), a Spanish subset of mC4 sampled using perplexity values.
## Training procedure
This model was finetuned for ~65 billion tokens (65,536,000,000) over 1,000,000 steps on a single TPU v3-8 VM. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly. Training took roughly 6 months.
## Intended Use and Limitations
BERTIN-GPT-J-6B learns an inner representation of the Spanish language that can be used to extract features useful for downstream tasks. However, the model is best at what it was pretrained for, which is generating text from a prompt.
### How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("bertin-project/bertin-gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("bertin-project/bertin-gpt-j-6B")
```
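Continuing from the snippet above, generation follows the standard `generate` API. The sketch below is only illustrative: the Spanish prompt and the sampling settings are assumptions, not recommendations from the model authors.
```python
prompt = "La inteligencia artificial es"  # illustrative prompt, not from the card
inputs = tokenizer(prompt, return_tensors="pt")
# sample a short continuation; decoding settings are illustrative defaults
output_ids = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.95,
    temperature=0.8,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```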
### Limitations and Biases
As with the original GPT-J model, the core functionality of BERTIN-GPT-J-6B is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting BERTIN-GPT-J-6B, it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon BERTIN-GPT-J-6B to produce factually accurate output.
The original GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon the use case, GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile. A fine-grained analysis of the bias contained in the corpus used for fine-tuning is still pending, although some preliminary remarks are given in the [BERTIN paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/download/6403/3818).
As with all language models, it is hard to predict in advance how BERTIN-GPT-J-6B will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
## Evaluation results
We still have to find proper datasets to evaluate the model, so help is welcome!
## Citation and Related Information
### BibTeX entry
To cite this model:
```bibtex
@inproceedings{BERTIN-GPT,
author = {Javier De la Rosa and Andres Fernández},
editor = {Manuel Montes-y-Gómez and Julio Gonzalo and Francisco Rangel and Marco Casavantes and Miguel Ángel Álvarez-Carmona and Gemma Bel-Enguix and Hugo Jair Escalante and Larissa Freitas and Antonio Miranda-Escalada and Francisco Rodríguez-Sánchez and Aiala Rosá and Marco Antonio Sobrevilla-Cabezudo and Mariona Taulé and Rafael Valencia-García},
title = {Zero-shot Reading Comprehension and Reasoning for Spanish with {BERTIN} {GPT-J-6B}},
date = {2022-09},
booktitle = {Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2022)},
booktitleaddon = {Co-located with the Conference of the Spanish Society for Natural Language Processing (SEPLN 2022)},
eventdate = {2022-09-20/2022-09-25},
venue = {A Coru\~{n}a, Spain},
publisher = {CEUR Workshop Proceedings},
}
```
To cite the data used to train it:
```bibtex
@article{BERTIN,
author = {Javier De la Rosa y Eduardo G. Ponferrada y Manu Romero y Paulo Villegas y Pablo González de Prado Salas y María Grandury},
title = {{BERTIN}: Efficient Pre-Training of a Spanish Language Model using Perplexity Sampling},
journal = {Procesamiento del Lenguaje Natural},
volume = {68},
number = {0},
year = {2022},
keywords = {},
abstract = {The pre-training of large language models usually requires massive amounts of resources, both in terms of computation and data. Frequently used web sources such as Common Crawl might contain enough noise to make this pretraining sub-optimal. In this work, we experiment with different sampling methods from the Spanish version of mC4, and present a novel data-centric technique which we name perplexity sampling that enables the pre-training of language models in roughly half the amount of steps and using one fifth of the data. The resulting models are comparable to the current state-of-the-art, and even achieve better results for certain tasks. Our work is proof of the versatility of Transformers, and paves the way for small teams to train their models on a limited budget.},
issn = {1989-7553},
url = {http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6403},
pages = {13--23}
}
```
If you use this model, we would love to hear about it! Reach out on twitter, GitHub, Discord, or shoot us an email.
## Team
- Javier de la Rosa ([versae](https://huggingface.co/versae))
- Eduardo González ([edugp](https://huggingface.co/edugp))
- Paulo Villegas ([paulo](https://huggingface.co/paulo))
- Pablo González de Prado ([Pablogps](https://huggingface.co/Pablogps))
- Manu Romero ([mrm8488](https://huggingface.co/mrm8488))
- María Grandury ([mariagrandury](https://huggingface.co/mariagrandury))
## Acknowledgements
This project would not have been possible without compute generously provided by the National Library of Norway and Google through the
[TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha. And especially to [Stella Biderman](https://www.stellabiderman.com) for her general openness, and [Ben Wang](https://github.com/kingoflolz/mesh-transformer-jax) for the main codebase.
## Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models be liable for any results arising from the use made by third parties of these models.
|
Chalponkey/DialoGPT-small-Barry
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11 | null |
DeBERTa-large trained on SLUE transcriptions for 50 epochs with a learning rate of 5e-6.
|
CharlieChen/feedback-bigbird
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: reverse_text_generation_HarryPotter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reverse_text_generation_HarryPotter
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
ChauhanVipul/BERT
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
language: "en"
tags:
- icefall
- k2
- transducer
- librispeech
- ASR
- stateless transducer
- PyTorch
- RNN-T
- pruned RNN-T
- speech recognition
license: "apache-2.0"
datasets:
- librispeech
metrics:
- WER
---
# Introduction
This repo contains pre-trained model using
<https://github.com/k2-fsa/icefall/pull/248>.
It is trained on the full LibriSpeech dataset using the pruned RNN-T loss from [k2](https://github.com/k2-fsa/k2).
## How to clone this repo
```
sudo apt-get install git-lfs
git clone https://huggingface.co/csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless-2022-03-12
cd icefall-asr-librispeech-pruned-transducer-stateless-2022-03-12
git lfs pull
```
**Caution**: You have to run `git lfs pull`. Otherwise, you will be SAD later.
The model in this repo is trained using the commit `1603744469d167d848e074f2ea98c587153205fa`.
You can use
```
git clone https://github.com/k2-fsa/icefall
cd icefall
git checkout 1603744469d167d848e074f2ea98c587153205fa
```
to download `icefall`.
The decoder architecture is modified from
[Rnn-Transducer with Stateless Prediction Network](https://ieeexplore.ieee.org/document/9054419).
A Conv1d layer is placed right after the input embedding layer.
-----
## Description
This repo provides a pre-trained transducer Conformer model for the LibriSpeech dataset
using [icefall][icefall]. There are no RNNs in the decoder. The decoder is stateless
and contains only an embedding layer and a Conv1d.
The commands for training are:
```
cd egs/librispeech/ASR/
./prepare.sh
export CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7"
. path.sh
./pruned_transducer_stateless/train.py \
--world-size 8 \
--num-epochs 60 \
--start-epoch 0 \
--exp-dir pruned_transducer_stateless/exp \
--full-libri 1 \
--max-duration 300 \
--prune-range 5 \
--lr-factor 5 \
--lm-scale 0.25
```
The tensorboard training log can be found at
<https://tensorboard.dev/experiment/WKRFY5fYSzaVBHahenpNlA/>
The command for decoding is:
```bash
epoch=42
avg=11
sym=1
# greedy search
./pruned_transducer_stateless/decode.py \
--epoch $epoch \
--avg $avg \
--exp-dir ./pruned_transducer_stateless/exp \
--max-duration 100 \
--decoding-method greedy_search \
--beam-size 4 \
--max-sym-per-frame $sym
# modified beam search
./pruned_transducer_stateless/decode.py \
--epoch $epoch \
--avg $avg \
--exp-dir ./pruned_transducer_stateless/exp \
--max-duration 100 \
--decoding-method modified_beam_search \
--beam-size 4
# beam search
# (not recommended)
./pruned_transducer_stateless/decode.py \
--epoch $epoch \
--avg $avg \
--exp-dir ./pruned_transducer_stateless/exp \
--max-duration 100 \
--decoding-method beam_search \
--beam-size 4
```
You can find the decoding log for the above command in this
repo (in the folder `log`).
The WERs for the test datasets are
| | test-clean | test-other | comment |
|-------------------------------------|------------|------------|------------------------------------------|
| greedy search (max sym per frame 1) | 2.62 | 6.37 | --epoch 42, --avg 11, --max-duration 100 |
| greedy search (max sym per frame 2) | 2.62 | 6.37 | --epoch 42, --avg 11, --max-duration 100 |
| greedy search (max sym per frame 3) | 2.62 | 6.37 | --epoch 42, --avg 11, --max-duration 100 |
| modified beam search (beam size 4) | 2.56 | 6.27 | --epoch 42, --avg 11, --max-duration 100 |
| beam search (beam size 4) | 2.57 | 6.27 | --epoch 42, --avg 11, --max-duration 100 |
# File description
- [log][log], this directory contains the decoding log and decoding results
- [test_wavs][test_wavs], this directory contains wave files for testing the pre-trained model
- [data][data], this directory contains files generated by [prepare.sh][prepare]
- [exp][exp], this directory contains only one file: `pretrained.pt`
`exp/pretrained.pt` is generated by the following command:
```bash
epoch=42
avg=11
./pruned_transducer_stateless/export.py \
--exp-dir ./pruned_transducer_stateless/exp \
--bpe-model data/lang_bpe_500/bpe.model \
--epoch $epoch \
--avg $avg
```
**HINT**: To use `pretrained.pt` to compute the WER for test-clean and test-other,
just do the following:
```
cp icefall-asr-librispeech-pruned-transducer-stateless-2022-03-12/exp/pretrained.pt \
/path/to/icefall/egs/librispeech/ASR/pruned_transducer_stateless/exp/epoch-999.pt
```
and pass `--epoch 999 --avg 1` to `pruned_transducer_stateless/decode.py`.
[icefall]: https://github.com/k2-fsa/icefall
[prepare]: https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/prepare.sh
[exp]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless-2022-03-12/tree/main/exp
[data]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless-2022-03-12/tree/main/data
[test_wavs]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless-2022-03-12/tree/main/test_wavs
[log]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless-2022-03-12/tree/main/log
|
Cheapestmedsshop/Buymodafinilus
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
language:
- en
license: apache-2.0
tags:
- bart
- biobart
- biomedical
inference: true
widget:
- text: "Influenza is a <mask> disease."
- type: "text-generation"
---
Paper: [BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model](https://arxiv.org/pdf/2204.03905.pdf)
```
@misc{BioBART,
title={BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model},
author={Hongyi Yuan and Zheng Yuan and Ruyi Gan and Jiaxing Zhang and Yutao Xie and Sheng Yu},
year={2022},
eprint={2204.03905},
archivePrefix={arXiv}
}
```
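The `<mask>` widget example above hints at the intended use. A rough sketch with the fill-mask pipeline is given below; the model ID is a placeholder, since this card does not state the checkpoint's Hub path.
```python
from transformers import pipeline

# Placeholder model ID -- substitute the BioBART checkpoint this card accompanies.
fill_mask = pipeline("fill-mask", model="<namespace>/biobart-checkpoint")
print(fill_mask("Influenza is a <mask> disease."))  # returns top candidate fillers with scores
```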
|
Cheatham/xlm-roberta-base-finetuned
|
[
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"XLMRobertaForSequenceClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 20 | null |
---
language:
- en
license: apache-2.0
tags:
- bart
- biobart
- biomedical
inference: true
widget:
- text: "Influenza is a <mask> disease."
- type: "text-generation"
---
Paper: [BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model](https://arxiv.org/pdf/2204.03905.pdf)
```
@misc{BioBART,
title={BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model},
author={Hongyi Yuan and Zheng Yuan and Ruyi Gan and Jiaxing Zhang and Yutao Xie and Sheng Yu},
year={2022},
eprint={2204.03905},
archivePrefix={arXiv}
}
```
|
CleveGreen/JobClassifier_v2_gpt
|
[
"pytorch",
"gpt2",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"GPT2ForSequenceClassification"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 27 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xtreme_s_xlsr_minds14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtreme_s_xlsr_minds14
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2566
- F1: {'f1': 0.9460569664921582, 'accuracy': 0.9468540012217471}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------------------------------------------------------:|
| 2.551 | 2.7 | 200 | 2.5921 | {'f1': 0.03454307545755678, 'accuracy': 0.1148442272449603} |
| 1.6934 | 5.41 | 400 | 1.5353 | {'f1': 0.5831241711045994, 'accuracy': 0.6053756872327428} |
| 0.5914 | 8.11 | 600 | 0.7337 | {'f1': 0.7990425247664236, 'accuracy': 0.7947464874770922} |
| 0.3896 | 10.81 | 800 | 0.5076 | {'f1': 0.8738199236080776, 'accuracy': 0.872327428222358} |
| 0.5052 | 13.51 | 1000 | 0.4917 | {'f1': 0.8744760456867134, 'accuracy': 0.8747709224190593} |
| 0.4806 | 16.22 | 1200 | 0.4751 | {'f1': 0.8840798740258787, 'accuracy': 0.8845448992058644} |
| 0.2103 | 18.92 | 1400 | 0.5228 | {'f1': 0.8721632556623751, 'accuracy': 0.8729383017715333} |
| 0.4198 | 21.62 | 1600 | 0.5910 | {'f1': 0.8755207264572983, 'accuracy': 0.8766035430665852} |
| 0.11 | 24.32 | 1800 | 0.4464 | {'f1': 0.896423086249818, 'accuracy': 0.8955406230910201} |
| 0.1233 | 27.03 | 2000 | 0.3760 | {'f1': 0.9012283567348968, 'accuracy': 0.9016493585827734} |
| 0.1827 | 29.73 | 2200 | 0.4178 | {'f1': 0.9042381720184095, 'accuracy': 0.9059254734270006} |
| 0.1235 | 32.43 | 2400 | 0.4152 | {'f1': 0.9063257163259107, 'accuracy': 0.9071472205253512} |
| 0.1873 | 35.14 | 2600 | 0.2903 | {'f1': 0.9369340598806323, 'accuracy': 0.9376908979841173} |
| 0.017 | 37.84 | 2800 | 0.3046 | {'f1': 0.9300781160576355, 'accuracy': 0.9303604153940135} |
| 0.0436 | 40.54 | 3000 | 0.3111 | {'f1': 0.9315034391389341, 'accuracy': 0.9321930360415394} |
| 0.0455 | 43.24 | 3200 | 0.2748 | {'f1': 0.9417365311433034, 'accuracy': 0.9425778863775198} |
| 0.046 | 45.95 | 3400 | 0.2800 | {'f1': 0.9390712658440112, 'accuracy': 0.9395235186316433} |
| 0.0042 | 48.65 | 3600 | 0.2566 | {'f1': 0.9460569664921582, 'accuracy': 0.9468540012217471} |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4.dev0
- Tokenizers 0.11.6
|
CodeNinja1126/bert-q-encoder
|
[
"pytorch"
] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | 2022-03-13T03:42:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
- name: F1
type: f1
value: 0.921871942661868
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2294
- Accuracy: 0.9215
- F1: 0.9219
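A minimal inference sketch for a checkpoint like this one follows; the Hub path below is a placeholder, since the card does not state it, and the example sentence and label are only illustrative.
```python
from transformers import pipeline

# Placeholder model ID -- replace with the actual Hub path of this fine-tuned checkpoint.
classifier = pipeline("text-classification", model="<namespace>/distilbert-base-uncased-finetuned-emotion")
print(classifier("I am so happy this finally worked!"))  # e.g. [{'label': 'joy', 'score': ...}]
```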
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8304 | 1.0 | 250 | 0.3312 | 0.899 | 0.8962 |
| 0.2547 | 2.0 | 500 | 0.2294 | 0.9215 | 0.9219 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
CodeNinja1126/xlm-roberta-large-kor-mrc
|
[
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"XLMRobertaForQuestionAnswering"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-weaksup-100-NOpad-early
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-weaksup-100-NOpad-early
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0768
- Rouge1: 28.7908
- Rouge2: 10.6989
- Rougel: 20.534
- Rougelsum: 24.1294
- Gen Len: 68.5
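Since this card fine-tunes `facebook/bart-large-cnn` for summarization, inference follows the usual summarization pipeline. A minimal sketch is shown below; the model ID is a placeholder, as the card does not state the checkpoint's Hub path, and the input text and length limits are illustrative.
```python
from transformers import pipeline

# Placeholder model ID -- replace with the Hub path of this fine-tuned checkpoint.
summarizer = pipeline("summarization", model="<namespace>/bart-large-cnn-weaksup-100-NOpad-early")
article = (
    "The trial enrolled 1,200 participants over five years and found that regular "
    "moderate exercise was associated with measurably better cardiovascular outcomes."
)
print(summarizer(article, max_length=60, min_length=10, do_sample=False))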
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 100 | 1.8905 | 31.1534 | 13.7074 | 21.6489 | 27.0709 | 64.2 |
| No log | 2.0 | 200 | 2.0768 | 28.7908 | 10.6989 | 20.534 | 24.1294 | 68.5 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
CoderEFE/DialoGPT-marxbot
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"has_space"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11 | null |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-weaksup-1000-NOpad-early
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-weaksup-1000-NOpad-early
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9082
- Rouge1: 26.9663
- Rouge2: 11.3027
- Rougel: 20.7327
- Rougelsum: 23.5965
- Gen Len: 67.19
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.4775 | 1.0 | 1000 | 1.6796 | 27.208 | 12.01 | 20.8401 | 24.1333 | 66.06 |
| 0.6972 | 2.0 | 2000 | 1.9082 | 26.9663 | 11.3027 | 20.7327 | 23.5965 | 67.19 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
CoderEFE/DialoGPT-medium-marx
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-weaksup-10k-NOpad-early
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-weaksup-10k-NOpad-early
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7883
- Rouge1: 26.9755
- Rouge2: 12.4975
- Rougel: 21.0743
- Rougelsum: 23.9303
- Gen Len: 69.549
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.4657 | 1.0 | 10000 | 1.7295 | 27.973 | 13.2818 | 21.8493 | 25.0101 | 67.831 |
| 0.8522 | 2.0 | 20000 | 1.7883 | 26.9755 | 12.4975 | 21.0743 | 23.9303 | 69.549 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Venkatakrishnan-Ramesh/Text_gen
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-100-lit-evalMA-NOpad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-100-lit-evalMA-NOpad
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1514
- Rouge1: 27.5985
- Rouge2: 11.3869
- Rougel: 20.9359
- Rougelsum: 24.7113
- Gen Len: 62.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 100 | 1.7982 | 28.7996 | 11.2592 | 19.7524 | 25.2125 | 62.5 |
| No log | 2.0 | 200 | 2.1514 | 27.5985 | 11.3869 | 20.9359 | 24.7113 | 62.5 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
CoffeeAddict93/gpt2-medium-call-of-the-wild
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 14 | null |
---
language:
- "ru"
tags:
- "russian"
- "token-classification"
- "pos"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
---
# bert-base-russian-upos
## Model Description
This is a BERT model pre-trained with [UD_Russian](https://universaldependencies.org/ru/) for POS-tagging and dependency-parsing, derived from [rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-russian-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-russian-upos")
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/bert-base-russian-upos")
```
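For a quick sanity check with the plain Transformers route, the token-classification pipeline can also be used; the Russian example sentence below is only illustrative.
```py
from transformers import pipeline

upos = pipeline(
    "token-classification",
    model="KoichiYasuoka/bert-base-russian-upos",
    aggregation_strategy="simple",
)
print(upos("Мама мыла раму."))  # each word should come back with a UPOS tag such as NOUN or VERB
```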
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
|
CoffeeAddict93/gpt2-medium-modest-proposal
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- DrishtiSharma/autonlp-data-Text-Classification-Catalonia-Independence-AutoNLP
co2_eq_emissions: 3.622203603306694
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 633018323
- CO2 Emissions (in grams): 3.622203603306694
## Validation Metrics
- Loss: 0.681106686592102
- Accuracy: 0.709136109384711
- Macro F1: 0.6987186860138147
- Micro F1: 0.709136109384711
- Weighted F1: 0.7059639788836748
- Macro Precision: 0.7174345617951404
- Micro Precision: 0.709136109384711
- Weighted Precision: 0.712710833401347
- Macro Recall: 0.6912117894374218
- Micro Recall: 0.709136109384711
- Weighted Recall: 0.709136109384711
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/DrishtiSharma/autonlp-Text-Classification-Catalonia-Independence-AutoNLP-633018323
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("DrishtiSharma/autonlp-Text-Classification-Catalonia-Independence-AutoNLP-633018323", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("DrishtiSharma/autonlp-Text-Classification-Catalonia-Independence-AutoNLP-633018323", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
CoffeeAddict93/gpt2-modest-proposal
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12 | null |
---
language:
- hi
- en
- multilingual
license: cc-by-4.0
tags:
- hi
- en
- codemix
datasets:
- L3Cube-HingCorpus
- L3Cube-HingLID
---
## HingBERT-LID
HingBERT-LID is a Hindi-English code-mixed language identification (LID) BERT model. It is a HingBERT model fine-tuned on the L3Cube-HingLID dataset.
<br>
[Dataset link](https://github.com/l3cube-pune/code-mixed-nlp)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2204.08398).
```
@inproceedings{nayak-joshi-2022-l3cube,
title = "{L}3{C}ube-{H}ing{C}orpus and {H}ing{BERT}: A Code Mixed {H}indi-{E}nglish Dataset and {BERT} Language Models",
author = "Nayak, Ravindra and Joshi, Raviraj",
booktitle = "Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.wildre-1.2",
pages = "7--12",
}
```
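A rough usage sketch is given below. The Hub path is a placeholder (the card does not state it), and token-level tagging is assumed because code-mixed LID is typically word-level; a sentence-level checkpoint would use the `text-classification` pipeline instead.
```python
from transformers import pipeline

# Placeholder Hub path -- replace with the actual HingBERT-LID checkpoint.
# Token-level LID is assumed; adjust the task if the checkpoint is sentence-level.
lid = pipeline("token-classification", model="<namespace>/hing-bert-lid", aggregation_strategy="simple")
print(lid("yeh movie bahut awesome thi yaar"))  # expected: per-word Hindi/English language tags
```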
|
ComCom/gpt2-large
|
[
"pytorch",
"gpt2",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"GPT2Model"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1 | null |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-weaksup-100-NOpad-early1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-weaksup-100-NOpad-early1
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0768
- Rouge1: 28.7953
- Rouge2: 10.9535
- Rougel: 20.6447
- Rougelsum: 24.3516
- Gen Len: 68.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 100 | 1.8905 | 31.2906 | 13.5675 | 21.5533 | 27.2536 | 64.2 |
| No log | 2.0 | 200 | 2.0768 | 28.7953 | 10.9535 | 20.6447 | 24.3516 | 68.5 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
ComCom/gpt2-medium
|
[
"pytorch",
"gpt2",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"GPT2Model"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-weaksup-100-NOpad-early2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-weaksup-100-NOpad-early2
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0768
- Rouge1: 28.6914
- Rouge2: 11.1481
- Rougel: 20.6967
- Rougelsum: 24.2834
- Gen Len: 68.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 100 | 1.8905 | 31.4929 | 13.8614 | 21.6279 | 27.1315 | 64.2 |
| No log | 2.0 | 200 | 2.0768 | 28.6914 | 11.1481 | 20.6967 | 24.2834 | 68.5 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
cometrain/neurotitle-rugpt3-small
|
[
"pytorch",
"gpt2",
"text-generation",
"ru",
"en",
"dataset:All-NeurIPS-Papers-Scraper",
"transformers",
"Cometrain AutoCode",
"Cometrain AlphaML",
"license:mit"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 20 | null |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-1000-lit-evalMA-NOpad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-1000-lit-evalMA-NOpad
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9804
- Rouge1: 27.2698
- Rouge2: 11.8561
- Rougel: 20.5948
- Rougelsum: 23.5497
- Gen Len: 67.67
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.5372 | 1.0 | 1000 | 1.7499 | 27.7275 | 12.7894 | 21.1334 | 24.4929 | 66.31 |
| 0.7344 | 2.0 | 2000 | 1.9804 | 27.2698 | 11.8561 | 20.5948 | 23.5497 | 67.67 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Connorvr/BrightBot-small
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-100-lit-evalMA-NOpad2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-100-lit-evalMA-NOpad2
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2126
- Rouge1: 25.6196
- Rouge2: 7.2753
- Rougel: 18.0987
- Rougelsum: 20.8416
- Gen Len: 67.3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 100 | 1.0890 | 23.5493 | 8.9875 | 17.1471 | 20.1643 | 67.8 |
| No log | 2.0 | 200 | 1.2126 | 25.6196 | 7.2753 | 18.0987 | 20.8416 | 67.3 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Connorvr/TeachingGen
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-10k-lit-evalMA-NOpad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-10k-lit-evalMA-NOpad
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9464
- Rouge1: 28.6721
- Rouge2: 13.8303
- Rougel: 22.458
- Rougelsum: 25.668
- Gen Len: 66.893
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.535 | 1.0 | 10000 | 1.7501 | 28.519 | 13.967 | 22.4854 | 25.4511 | 66.555 |
| 0.8754 | 2.0 | 20000 | 1.9464 | 28.6721 | 13.8303 | 22.458 | 25.668 | 66.893 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
ConstellationBoi/Oop
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
language:
- "de"
tags:
- "qa"
widget:
- text: ""
context: ""
example_title: "Extractive QA"
---
# GELECTRA-large-LegalQuAD
## Overview
**Language model:** GELECTRA-large
**Language:** German
**Downstream-task:** Extractive QA
**Training data:** German-legal-SQuAD
**Eval data:** German-legal-SQuAD testset
## Hyperparameters
```
batch_size = 10
n_epochs = 2
max_seq_len = 256
learning_rate = 1e-5
```
## Eval results
Evaluated on German-legal-SQuAD testset
```
"exact": 40.912
"f1": 72.368
```
|
Contrastive-Tension/BERT-Base-NLI-CT
|
[
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
---
tags:
- generated_from_trainer
model-index:
- name: finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned
This model is a fine-tuned version of [sberbank-ai/rugpt3small_based_on_gpt2](https://huggingface.co/sberbank-ai/rugpt3small_based_on_gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 42
- eval_batch_size: 42
- seed: 42
- gradient_accumulation_steps: 20
- total_train_batch_size: 840
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 15
- num_epochs: 300
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.6
|
Contrastive-Tension/BERT-Distil-CT-STSb
|
[
"pytorch",
"tf",
"distilbert",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"DistilBertModel"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1 | 2022-03-13T12:22:58Z |
---
tags:
- generated_from_trainer
datasets:
- korquad
model-index:
- name: komrc_train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# komrc_train
This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the korquad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6544
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1234
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.8187 | 0.31 | 2000 | 0.7377 |
| 0.6947 | 0.63 | 4000 | 0.6934 |
| 0.6352 | 0.94 | 6000 | 0.6544 |
| 0.3869 | 1.25 | 8000 | 0.7633 |
| 0.3812 | 1.56 | 10000 | 0.7047 |
| 0.3579 | 1.88 | 12000 | 0.7097 |
| 0.2053 | 2.19 | 14000 | 0.8511 |
| 0.2173 | 2.5 | 16000 | 0.8457 |
| 0.2094 | 2.82 | 18000 | 0.8433 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.10.3
|
Contrastive-Tension/BERT-Large-CT-STSb
|
[
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- sem_eval2010_task8
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sem
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: sem_eval2010_task8
type: sem_eval2010_task8
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8314317261685683
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sem
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the sem_eval2010_task8 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6704
- Accuracy: 0.8314
## Model description
More information needed
## Intended uses & limitations
More information needed
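In the absence of documented usage, a minimal sketch with the `text-classification` pipeline follows; the repository id is a placeholder, and the entity-marking convention used during fine-tuning is not documented in this card, so the tagged input is only illustrative:
```python
from transformers import pipeline

# Placeholder repo id -- replace with this checkpoint's actual Hub id. The exact
# entity-marking convention used during fine-tuning is not documented in this card.
classifier = pipeline("text-classification", model="<user>/distilbert-base-uncased-finetuned-sem")

print(classifier("The <e1>microphone</e1> converts sound into an electrical <e2>signal</e2>."))
```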
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9556 | 1.0 | 800 | 0.7859 | 0.7814 |
| 0.6136 | 2.0 | 1600 | 0.6069 | 0.8193 |
| 0.4314 | 3.0 | 2400 | 0.6179 | 0.8211 |
| 0.2315 | 4.0 | 3200 | 0.6617 | 0.8281 |
| 0.1655 | 5.0 | 4000 | 0.6704 | 0.8314 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Contrastive-Tension/BERT-Large-CT
|
[
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-hindi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hindi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
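In the absence of documented usage, a minimal sketch with the `automatic-speech-recognition` pipeline follows; the repository id and the audio file are placeholders, and the audio is assumed to be 16 kHz mono Hindi speech:
```python
from transformers import pipeline

# Placeholder repo id -- replace with this checkpoint's actual Hub id.
asr = pipeline("automatic-speech-recognition", model="<user>/wav2vec2-large-xls-r-300m-hindi")

# "sample_hindi.wav" is a placeholder for any 16 kHz mono recording of Hindi speech.
print(asr("sample_hindi.wav")["text"])
```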
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Cool/Demo
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
language: en
thumbnail: http://www.huggingtweets.com/mikepompeo/1647181695747/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1498704685875744769/r3jThh-E_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Mike Pompeo</div>
<div style="text-align: center; font-size: 14px;">@mikepompeo</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Mike Pompeo.
| Data | Mike Pompeo |
| --- | --- |
| Tweets downloaded | 1899 |
| Retweets | 68 |
| Short tweets | 60 |
| Tweets kept | 1771 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2ll5re58/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mikepompeo's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/zi1wgzl5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/zi1wgzl5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mikepompeo')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Coolhand/Sentiment
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- masked-auto-encoding
- generated_from_trainer
datasets:
- image_folder
model-index:
- name: test_mae_flysheet
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_mae_flysheet
This model is a fine-tuned version of [facebook/vit-mae-base](https://huggingface.co/facebook/vit-mae-base) on the davanstrien/flysheet dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2675
## Model description
More information needed
## Intended uses & limitations
More information needed
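In the absence of documented usage, a minimal sketch of running the masked-autoencoder objective on a single image follows; the repository id and the image path are placeholders:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, ViTMAEForPreTraining

# Placeholder repo id -- replace with this checkpoint's actual Hub id.
processor = AutoImageProcessor.from_pretrained("<user>/test_mae_flysheet")
model = ViTMAEForPreTraining.from_pretrained("<user>/test_mae_flysheet")

image = Image.open("flysheet_example.jpg").convert("RGB")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# MAE masks a large fraction of the patches and reports the pixel reconstruction loss.
print(outputs.loss)
```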
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.75e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.284 | 1.0 | 28 | 2.2812 |
| 2.137 | 2.0 | 56 | 2.0288 |
| 1.6016 | 3.0 | 84 | 1.2437 |
| 0.8055 | 4.0 | 112 | 0.7419 |
| 0.5304 | 5.0 | 140 | 0.5151 |
| 0.4873 | 6.0 | 168 | 0.4884 |
| 0.442 | 7.0 | 196 | 0.4441 |
| 0.4039 | 8.0 | 224 | 0.4159 |
| 0.3866 | 9.0 | 252 | 0.3975 |
| 0.391 | 10.0 | 280 | 0.3869 |
| 0.3549 | 11.0 | 308 | 0.3801 |
| 0.3462 | 12.0 | 336 | 0.3577 |
| 0.3402 | 13.0 | 364 | 0.3519 |
| 0.3357 | 14.0 | 392 | 0.3447 |
| 0.3474 | 15.0 | 420 | 0.3369 |
| 0.3254 | 16.0 | 448 | 0.3386 |
| 0.3033 | 17.0 | 476 | 0.3294 |
| 0.3047 | 18.0 | 504 | 0.3274 |
| 0.3103 | 19.0 | 532 | 0.3209 |
| 0.3067 | 20.0 | 560 | 0.3186 |
| 0.2959 | 21.0 | 588 | 0.3190 |
| 0.2899 | 22.0 | 616 | 0.3147 |
| 0.2872 | 23.0 | 644 | 0.3082 |
| 0.2956 | 24.0 | 672 | 0.3070 |
| 0.2865 | 25.0 | 700 | 0.3072 |
| 0.2947 | 26.0 | 728 | 0.3072 |
| 0.2811 | 27.0 | 756 | 0.3131 |
| 0.2935 | 28.0 | 784 | 0.3069 |
| 0.2814 | 29.0 | 812 | 0.3043 |
| 0.2753 | 30.0 | 840 | 0.2984 |
| 0.2823 | 31.0 | 868 | 0.2995 |
| 0.2962 | 32.0 | 896 | 0.3012 |
| 0.2869 | 33.0 | 924 | 0.3050 |
| 0.2833 | 34.0 | 952 | 0.2960 |
| 0.2892 | 35.0 | 980 | 0.3039 |
| 0.2764 | 36.0 | 1008 | 0.3010 |
| 0.2807 | 37.0 | 1036 | 0.2998 |
| 0.2843 | 38.0 | 1064 | 0.2989 |
| 0.2808 | 39.0 | 1092 | 0.2970 |
| 0.2862 | 40.0 | 1120 | 0.2940 |
| 0.2601 | 41.0 | 1148 | 0.2952 |
| 0.2742 | 42.0 | 1176 | 0.2940 |
| 0.2791 | 43.0 | 1204 | 0.2997 |
| 0.2759 | 44.0 | 1232 | 0.2951 |
| 0.2819 | 45.0 | 1260 | 0.2896 |
| 0.287 | 46.0 | 1288 | 0.2938 |
| 0.2711 | 47.0 | 1316 | 0.2973 |
| 0.2782 | 48.0 | 1344 | 0.2946 |
| 0.2674 | 49.0 | 1372 | 0.2913 |
| 0.268 | 50.0 | 1400 | 0.2944 |
| 0.2624 | 51.0 | 1428 | 0.2940 |
| 0.2842 | 52.0 | 1456 | 0.2978 |
| 0.2753 | 53.0 | 1484 | 0.2951 |
| 0.2733 | 54.0 | 1512 | 0.2880 |
| 0.2782 | 55.0 | 1540 | 0.2969 |
| 0.2789 | 56.0 | 1568 | 0.2919 |
| 0.2815 | 57.0 | 1596 | 0.2916 |
| 0.2629 | 58.0 | 1624 | 0.2947 |
| 0.2716 | 59.0 | 1652 | 0.2828 |
| 0.2623 | 60.0 | 1680 | 0.2924 |
| 0.2773 | 61.0 | 1708 | 0.2765 |
| 0.268 | 62.0 | 1736 | 0.2754 |
| 0.2839 | 63.0 | 1764 | 0.2744 |
| 0.2684 | 64.0 | 1792 | 0.2744 |
| 0.2865 | 65.0 | 1820 | 0.2716 |
| 0.2845 | 66.0 | 1848 | 0.2769 |
| 0.2663 | 67.0 | 1876 | 0.2754 |
| 0.269 | 68.0 | 1904 | 0.2737 |
| 0.2681 | 69.0 | 1932 | 0.2697 |
| 0.2748 | 70.0 | 1960 | 0.2779 |
| 0.2769 | 71.0 | 1988 | 0.2728 |
| 0.2805 | 72.0 | 2016 | 0.2729 |
| 0.2771 | 73.0 | 2044 | 0.2728 |
| 0.2717 | 74.0 | 2072 | 0.2749 |
| 0.267 | 75.0 | 2100 | 0.2732 |
| 0.2812 | 76.0 | 2128 | 0.2743 |
| 0.2749 | 77.0 | 2156 | 0.2739 |
| 0.2746 | 78.0 | 2184 | 0.2730 |
| 0.2707 | 79.0 | 2212 | 0.2743 |
| 0.2644 | 80.0 | 2240 | 0.2740 |
| 0.2691 | 81.0 | 2268 | 0.2727 |
| 0.2679 | 82.0 | 2296 | 0.2771 |
| 0.2748 | 83.0 | 2324 | 0.2744 |
| 0.2744 | 84.0 | 2352 | 0.2703 |
| 0.2715 | 85.0 | 2380 | 0.2733 |
| 0.2682 | 86.0 | 2408 | 0.2715 |
| 0.2641 | 87.0 | 2436 | 0.2722 |
| 0.274 | 88.0 | 2464 | 0.2748 |
| 0.2669 | 89.0 | 2492 | 0.2753 |
| 0.2707 | 90.0 | 2520 | 0.2724 |
| 0.2755 | 91.0 | 2548 | 0.2703 |
| 0.2769 | 92.0 | 2576 | 0.2737 |
| 0.2659 | 93.0 | 2604 | 0.2721 |
| 0.2674 | 94.0 | 2632 | 0.2763 |
| 0.2723 | 95.0 | 2660 | 0.2723 |
| 0.2723 | 96.0 | 2688 | 0.2744 |
| 0.272 | 97.0 | 2716 | 0.2686 |
| 0.27 | 98.0 | 2744 | 0.2728 |
| 0.2721 | 99.0 | 2772 | 0.2743 |
| 0.2692 | 100.0 | 2800 | 0.2748 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Coverage/sakurajimamai
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- generated_from_trainer
model-index:
- name: NewModel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NewModel
This model is a fine-tuned version of [sberbank-ai/rugpt3small_based_on_gpt2](https://huggingface.co/sberbank-ai/rugpt3small_based_on_gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 42
- eval_batch_size: 42
- seed: 42
- gradient_accumulation_steps: 20
- total_train_batch_size: 840
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 15
- num_epochs: 200
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.6
|
Coyotl/DialoGPT-test2-arthurmorgan
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4101
## Model description
More information needed
## Intended uses & limitations
More information needed
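In the absence of documented usage, a minimal sketch with the `question-answering` pipeline follows; the repository id is a placeholder, and `handle_impossible_answer=True` is worth setting because SQuAD v2 contains unanswerable questions:
```python
from transformers import pipeline

# Placeholder repo id -- replace with this checkpoint's actual Hub id.
qa = pipeline("question-answering", model="<user>/distilbert-base-uncased-finetuned-squad")

result = qa(
    question="When was the treaty signed?",
    context="The treaty was signed in 1648 after years of negotiation.",
    handle_impossible_answer=True,  # SQuAD v2 includes unanswerable questions
)
print(result)
```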
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2109 | 1.0 | 8235 | 1.2303 |
| 0.9385 | 2.0 | 16470 | 1.2412 |
| 0.7448 | 3.0 | 24705 | 1.4101 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Coyotl/DialoGPT-test3-arthurmorgan
|
[
"conversational"
] |
conversational
|
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
language:
- en
tags:
- aspect-based-sentiment-analysis
- lcf-bert
license: mit
datasets:
- laptop14 (w/ augmentation)
- restaurant14 (w/ augmentation)
- restaurant16 (w/ augmentation)
- ACL-Twitter (w/ augmentation)
- MAMS (w/ augmentation)
- Television (w/ augmentation)
- TShirt (w/ augmentation)
- Yelp (w/ augmentation)
metrics:
- accuracy
- macro-f1
---
# Note
This model was trained with 180k+ ABSA samples; see [ABSADatasets](https://github.com/yangheng95/ABSADatasets). The test sets are not included in pre-training, so you can use this model for training and benchmarking on common ABSA datasets, e.g., the Laptop14 and Rest14 datasets (except for the Rest15 dataset!).
# DeBERTa for aspect-based sentiment analysis
This is the `deberta-v3-large-absa` model for aspect-based sentiment analysis, trained on English datasets from [ABSADatasets](https://github.com/yangheng95/ABSADatasets).
## Training Model
This model was trained based on the FAST-LSA-T model with `microsoft/deberta-v3-large`, which comes from [PyABSA](https://github.com/yangheng95/PyABSA).
To track state-of-the-art models, please see [PyABSA](https://github.com/yangheng95/PyABSA).
## Usage
```python3
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("yangheng/deberta-v3-large-absa")
model = AutoModel.from_pretrained("yangheng/deberta-v3-large-absa")
inputs = tokenizer("good product especially video and audio quality fantastic.", return_tensors="pt")
outputs = model(**inputs)
```
## Example in PyABSA
An [example](https://github.com/yangheng95/PyABSA/blob/release/demos/aspect_polarity_classification/train_apc_multilingual.py) of using FAST-LSA-T in PyABSA.
## Datasets
This model was fine-tuned with 180k+ examples drawn from the ABSA datasets listed below (including augmented data). Training dataset files:
```
loading: integrated_datasets/apc_datasets/SemEval/laptop14/Laptops_Train.xml.seg
loading: integrated_datasets/apc_datasets/SemEval/laptop14/0.cross_boost.fast_lcf_bert_Laptop14_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/laptop14/1.cross_boost.fast_lcf_bert_Laptop14_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/laptop14/2.cross_boost.fast_lcf_bert_Laptop14_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/laptop14/3.cross_boost.fast_lcf_bert_Laptop14_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/restaurant14/Restaurants_Train.xml.seg
loading: integrated_datasets/apc_datasets/SemEval/restaurant14/0.cross_boost.fast_lcf_bert_Restaurant14_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/restaurant14/1.cross_boost.fast_lcf_bert_Restaurant14_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/restaurant14/2.cross_boost.fast_lcf_bert_Restaurant14_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/restaurant14/3.cross_boost.fast_lcf_bert_Restaurant14_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/restaurant16/restaurant_train.raw
loading: integrated_datasets/apc_datasets/SemEval/restaurant16/0.cross_boost.fast_lcf_bert_Restaurant16_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/restaurant16/1.cross_boost.fast_lcf_bert_Restaurant16_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/restaurant16/2.cross_boost.fast_lcf_bert_Restaurant16_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/restaurant16/3.cross_boost.fast_lcf_bert_Restaurant16_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/ACL_Twitter/acl-14-short-data/train.raw
loading: integrated_datasets/apc_datasets/ACL_Twitter/acl-14-short-data/0.cross_boost.fast_lcf_bert_Twitter_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/ACL_Twitter/acl-14-short-data/1.cross_boost.fast_lcf_bert_Twitter_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/ACL_Twitter/acl-14-short-data/2.cross_boost.fast_lcf_bert_Twitter_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/ACL_Twitter/acl-14-short-data/3.cross_boost.fast_lcf_bert_Twitter_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/MAMS/train.xml.dat
loading: integrated_datasets/apc_datasets/MAMS/0.cross_boost.fast_lcf_bert_MAMS_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/MAMS/1.cross_boost.fast_lcf_bert_MAMS_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/MAMS/2.cross_boost.fast_lcf_bert_MAMS_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/MAMS/3.cross_boost.fast_lcf_bert_MAMS_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/Television/Television_Train.xml.seg
loading: integrated_datasets/apc_datasets/Television/0.cross_boost.fast_lcf_bert_Television_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/Television/1.cross_boost.fast_lcf_bert_Television_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/Television/2.cross_boost.fast_lcf_bert_Television_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/Television/3.cross_boost.fast_lcf_bert_Television_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/TShirt/Menstshirt_Train.xml.seg
loading: integrated_datasets/apc_datasets/TShirt/0.cross_boost.fast_lcf_bert_TShirt_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/TShirt/1.cross_boost.fast_lcf_bert_TShirt_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/TShirt/2.cross_boost.fast_lcf_bert_TShirt_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/TShirt/3.cross_boost.fast_lcf_bert_TShirt_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/Yelp/yelp.train.txt
loading: integrated_datasets/apc_datasets/Yelp/0.cross_boost.fast_lcf_bert_Yelp_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/Yelp/1.cross_boost.fast_lcf_bert_Yelp_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/Yelp/2.cross_boost.fast_lcf_bert_Yelp_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/Yelp/3.cross_boost.fast_lcf_bert_Yelp_deberta-v3-base.train.augment
```
If you use this model in your research, please cite our paper:
```
@article{YangZMT21,
author = {Heng Yang and
Biqing Zeng and
Mayi Xu and
Tianxing Wang},
title = {Back to Reality: Leveraging Pattern-driven Modeling to Enable Affordable
Sentiment Dependency Learning},
journal = {CoRR},
volume = {abs/2110.08604},
year = {2021},
url = {https://arxiv.org/abs/2110.08604},
eprinttype = {arXiv},
eprint = {2110.08604},
timestamp = {Fri, 22 Oct 2021 13:33:09 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2110-08604.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
CracklesCreeper/Piglin-Talks-Harry-Potter
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.927
- name: F1
type: f1
value: 0.927273630943427
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2106
- Accuracy: 0.927
- F1: 0.9273
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8007 | 1.0 | 250 | 0.2955 | 0.914 | 0.9117 |
| 0.2417 | 2.0 | 500 | 0.2106 | 0.927 | 0.9273 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.8.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Craftified/Bob
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-ft-with-non-challenging
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-ft-with-non-challenging
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9906
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 4.0984 |
| No log | 2.0 | 2 | 4.0802 |
| No log | 3.0 | 3 | 4.0443 |
| No log | 4.0 | 4 | 3.9906 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Craig/paraphrase-MiniLM-L6-v2
|
[
"pytorch",
"bert",
"arxiv:1908.10084",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1,026 | null |
---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-ft-with-non-challenging
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-ft-with-non-challenging
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4872
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 2020
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.99 | 31 | 1.5517 |
| No log | 1.99 | 62 | 1.3733 |
| No log | 2.99 | 93 | 1.4207 |
| No log | 3.99 | 124 | 1.4872 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
### Perplexity
Score: 28.26373863220215
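Perplexity is conventionally computed as the exponential of the average token-level cross-entropy on the evaluation set. A minimal sketch of that computation for a single example follows; the repository id is a placeholder for this checkpoint's Hub id:
```python
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- replace with this checkpoint's actual Hub id.
tokenizer = AutoTokenizer.from_pretrained("<user>/gpt2-xl-ft-with-non-challenging")
model = AutoModelForCausalLM.from_pretrained("<user>/gpt2-xl-ft-with-non-challenging")
model.eval()

text = "An example sentence drawn from the held-out evaluation split."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # With labels equal to the inputs, the model returns the mean cross-entropy per token.
    loss = model(**inputs, labels=inputs["input_ids"]).loss

print(math.exp(loss.item()))  # perplexity for this single example
```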
### Dataset Size
Size: 5000
|
Crasher222/kaggle-comp-test
|
[
"pytorch",
"bert",
"text-classification",
"en",
"dataset:Crasher222/autonlp-data-kaggle-test",
"transformers",
"autonlp",
"co2_eq_emissions"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 29 | null |
---
language:
- multilingual
- af
- am
- ar
- ast
- az
- ba
- be
- bg
- bn
- br
- bs
- ca
- ceb
- cs
- cy
- da
- de
- el
- en
- es
- et
- fa
- ff
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- ht
- hu
- hy
- id
- ig
- ilo
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- lb
- lg
- ln
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- no
- ns
- oc
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- so
- sq
- sr
- ss
- su
- sv
- sw
- ta
- th
- tl
- tn
- tr
- uk
- ur
- uz
- vi
- wo
- xh
- yi
- yo
- zh
- zu
license: mit
tags:
- m2m100-12B
---
# M2M100 12B (average of last 5 checkpoints)
M2M100 is a multilingual encoder-decoder (seq-to-seq) model trained for Many-to-Many multilingual translation.
It was introduced in this [paper](https://arxiv.org/abs/2010.11125) and first released in [this](https://github.com/pytorch/fairseq/tree/master/examples/m2m_100) repository.
The model can directly translate between the 9,900 directions of 100 languages.
To translate into a target language, the target language id is forced as the first generated token.
To force the target language id as the first generated token, pass the `forced_bos_token_id` parameter to the `generate` method.
*Note: `M2M100Tokenizer` depends on `sentencepiece`, so make sure to install it before running the example.*
To install `sentencepiece` run `pip install sentencepiece`
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।"
chinese_text = "生活就像一盒巧克力。"
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100-12B-avg-5-ckpt")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100-12B-avg-5-ckpt")
# translate Hindi to French
tokenizer.src_lang = "hi"
encoded_hi = tokenizer(hi_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.get_lang_id("fr"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "La vie est comme une boîte de chocolat."
# translate Chinese to English
tokenizer.src_lang = "zh"
encoded_zh = tokenizer(chinese_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "Life is like a box of chocolate."
```
See the [model hub](https://huggingface.co/models?filter=m2m_100) to look for more fine-tuned versions.
## Languages covered
Afrikaans (af), Amharic (am), Arabic (ar), Asturian (ast), Azerbaijani (az), Bashkir (ba), Belarusian (be), Bulgarian (bg), Bengali (bn), Breton (br), Bosnian (bs), Catalan; Valencian (ca), Cebuano (ceb), Czech (cs), Welsh (cy), Danish (da), German (de), Greek (el), English (en), Spanish (es), Estonian (et), Persian (fa), Fulah (ff), Finnish (fi), French (fr), Western Frisian (fy), Irish (ga), Gaelic; Scottish Gaelic (gd), Galician (gl), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Croatian (hr), Haitian; Haitian Creole (ht), Hungarian (hu), Armenian (hy), Indonesian (id), Igbo (ig), Iloko (ilo), Icelandic (is), Italian (it), Japanese (ja), Javanese (jv), Georgian (ka), Kazakh (kk), Central Khmer (km), Kannada (kn), Korean (ko), Luxembourgish; Letzeburgesch (lb), Ganda (lg), Lingala (ln), Lao (lo), Lithuanian (lt), Latvian (lv), Malagasy (mg), Macedonian (mk), Malayalam (ml), Mongolian (mn), Marathi (mr), Malay (ms), Burmese (my), Nepali (ne), Dutch; Flemish (nl), Norwegian (no), Northern Sotho (ns), Occitan (post 1500) (oc), Oriya (or), Panjabi; Punjabi (pa), Polish (pl), Pushto; Pashto (ps), Portuguese (pt), Romanian; Moldavian; Moldovan (ro), Russian (ru), Sindhi (sd), Sinhala; Sinhalese (si), Slovak (sk), Slovenian (sl), Somali (so), Albanian (sq), Serbian (sr), Swati (ss), Sundanese (su), Swedish (sv), Swahili (sw), Tamil (ta), Thai (th), Tagalog (tl), Tswana (tn), Turkish (tr), Ukrainian (uk), Urdu (ur), Uzbek (uz), Vietnamese (vi), Wolof (wo), Xhosa (xh), Yiddish (yi), Yoruba (yo), Chinese (zh), Zulu (zu)
## BibTeX entry and citation info
```
@misc{fan2020englishcentric,
title={Beyond English-Centric Multilingual Machine Translation},
author={Angela Fan and Shruti Bhosale and Holger Schwenk and Zhiyi Ma and Ahmed El-Kishky and Siddharth Goyal and Mandeep Baines and Onur Celebi and Guillaume Wenzek and Vishrav Chaudhary and Naman Goyal and Tom Birch and Vitaliy Liptchinsky and Sergey Edunov and Edouard Grave and Michael Auli and Armand Joulin},
year={2020},
eprint={2010.11125},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
CrayonShinchan/fine_tune_try_1
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
datasets:
- IteraTeR_full_sent
---
# IteraTeR PEGASUS model
This model was obtained by fine-tuning [google/pegasus-large](https://huggingface.co/google/pegasus-large) on [IteraTeR-full-sent](https://huggingface.co/datasets/wanyu/IteraTeR_full_sent) dataset.
Paper: [Understanding Iterative Revision from Human-Written Text](https://arxiv.org/abs/2203.03802) <br>
Authors: Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang
## Text Revision Task
Given an edit intention and an original sentence, our model can generate a revised sentence.<br>
The edit intentions are provided by [IteraTeR-full-sent](https://huggingface.co/datasets/wanyu/IteraTeR_full_sent) dataset, which are categorized as follows:
<table>
<tr>
<th>Edit Intention</th>
<th>Definition</th>
<th>Example</th>
</tr>
<tr>
<td>clarity</td>
<td>Make the text more formal, concise, readable and understandable.</td>
<td>
Original: It's like a house which anyone can enter in it. <br>
Revised: It's like a house which anyone can enter.
</td>
</tr>
<tr>
<td>fluency</td>
<td>Fix grammatical errors in the text.</td>
<td>
Original: In the same year he became the Fellow of the Royal Society. <br>
Revised: In the same year, he became the Fellow of the Royal Society.
</td>
</tr>
<tr>
<td>coherence</td>
<td>Make the text more cohesive, logically linked and consistent as a whole.</td>
<td>
Original: Achievements and awards Among his other activities, he founded the Karachi Film Guild and Pakistan Film and TV Academy. <br>
Revised: Among his other activities, he founded the Karachi Film Guild and Pakistan Film and TV Academy.
</td>
</tr>
<tr>
<td>style</td>
<td>Convey the writer’s writing preferences, including emotions, tone, voice, etc.</td>
<td>
Original: She was last seen on 2005-10-22. <br>
Revised: She was last seen on October 22, 2005.
</td>
</tr>
</table>
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("wanyu/IteraTeR-PEGASUS-Revision-Generator")
model = AutoModelForSeq2SeqLM.from_pretrained("wanyu/IteraTeR-PEGASUS-Revision-Generator")
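# The leading '<fluency>' tag selects the edit intention (see the intentions table above).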
before_input = '<fluency> I likes coffee.'
model_input = tokenizer(before_input, return_tensors='pt')
model_outputs = model.generate(**model_input, num_beams=8, max_length=1024)
after_text = tokenizer.batch_decode(model_outputs, skip_special_tokens=True)[0]
```
|
CrisLeaf/generador-de-historias-de-tolkien
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
datasets:
- IteraTeR_full_sent
---
# IteraTeR RoBERTa model
This model was obtained by fine-tuning [roberta-large](https://huggingface.co/roberta-large) on [IteraTeR-human-sent](https://huggingface.co/datasets/wanyu/IteraTeR_human_sent) dataset.
Paper: [Understanding Iterative Revision from Human-Written Text](https://arxiv.org/abs/2203.03802) <br>
Authors: Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang
## Edit Intention Prediction Task
Given a pair of original sentence and revised sentence, our model can predict the edit intention for this revision pair.<br>
More specifically, the model will predict the probability of the following edit intentions:
<table>
<tr>
<th>Edit Intention</th>
<th>Definition</th>
<th>Example</th>
</tr>
<tr>
<td>clarity</td>
<td>Make the text more formal, concise, readable and understandable.</td>
<td>
Original: It's like a house which anyone can enter in it. <br>
Revised: It's like a house which anyone can enter.
</td>
</tr>
<tr>
<td>fluency</td>
<td>Fix grammatical errors in the text.</td>
<td>
Original: In the same year he became the Fellow of the Royal Society. <br>
Revised: In the same year, he became the Fellow of the Royal Society.
</td>
</tr>
<tr>
<td>coherence</td>
<td>Make the text more cohesive, logically linked and consistent as a whole.</td>
<td>
Original: Achievements and awards Among his other activities, he founded the Karachi Film Guild and Pakistan Film and TV Academy. <br>
Revised: Among his other activities, he founded the Karachi Film Guild and Pakistan Film and TV Academy.
</td>
</tr>
<tr>
<td>style</td>
<td>Convey the writer’s writing preferences, including emotions, tone, voice, etc.</td>
<td>
Original: She was last seen on 2005-10-22. <br>
Revised: She was last seen on October 22, 2005.
</td>
</tr>
<tr>
<td>meaning-changed</td>
<td>Update or add new information to the text.</td>
<td>
Original: This method improves the model accuracy from 64% to 78%. <br>
Revised: This method improves the model accuracy from 64% to 83%.
</td>
</tr>
</table>
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("wanyu/IteraTeR-ROBERTA-Intention-Classifier")
model = AutoModelForSequenceClassification.from_pretrained("wanyu/IteraTeR-ROBERTA-Intention-Classifier")
id2label = {0: "clarity", 1: "fluency", 2: "coherence", 3: "style", 4: "meaning-changed"}
before_text = 'I likes coffee.'
after_text = 'I like coffee.'
model_input = tokenizer(before_text, after_text, return_tensors='pt')
model_output = model(**model_input)
softmax_scores = torch.softmax(model_output.logits, dim=-1)
pred_id = torch.argmax(softmax_scores)
pred_label = id2label[pred_id.item()]  # .item() converts the 0-dim tensor to a Python int for the dict lookup
```
|
Cryptikdw/DialoGPT-small-rick
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
language:
- multilingual
- af
- am
- ar
- ast
- az
- ba
- be
- bg
- bn
- br
- bs
- ca
- ceb
- cs
- cy
- da
- de
- el
- en
- es
- et
- fa
- ff
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- ht
- hu
- hy
- id
- ig
- ilo
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- lb
- lg
- ln
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- no
- ns
- oc
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- so
- sq
- sr
- ss
- su
- sv
- sw
- ta
- th
- tl
- tn
- tr
- uk
- ur
- uz
- vi
- wo
- xh
- yi
- yo
- zh
- zu
license: mit
tags:
- m2m100-12B
---
# M2M100 12B (average of last 10 checkpoints)
M2M100 is a multilingual encoder-decoder (seq-to-seq) model trained for Many-to-Many multilingual translation.
It was introduced in this [paper](https://arxiv.org/abs/2010.11125) and first released in [this](https://github.com/pytorch/fairseq/tree/master/examples/m2m_100) repository.
The model can directly translate between the 9,900 directions of 100 languages.
To translate into a target language, the target language id is forced as the first generated token.
To force the target language id as the first generated token, pass the `forced_bos_token_id` parameter to the `generate` method.
*Note: `M2M100Tokenizer` depends on `sentencepiece`, so make sure to install it before running the example.*
To install `sentencepiece` run `pip install sentencepiece`
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।"
chinese_text = "生活就像一盒巧克力。"
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100-12B-avg-10-ckpt")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100-12B-avg-10-ckpt")
# translate Hindi to French
tokenizer.src_lang = "hi"
encoded_hi = tokenizer(hi_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.get_lang_id("fr"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "La vie est comme une boîte de chocolat."
# translate Chinese to English
tokenizer.src_lang = "zh"
encoded_zh = tokenizer(chinese_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "Life is like a box of chocolate."
```
See the [model hub](https://huggingface.co/models?filter=m2m_100) to look for more fine-tuned versions.
## Languages covered
Afrikaans (af), Amharic (am), Arabic (ar), Asturian (ast), Azerbaijani (az), Bashkir (ba), Belarusian (be), Bulgarian (bg), Bengali (bn), Breton (br), Bosnian (bs), Catalan; Valencian (ca), Cebuano (ceb), Czech (cs), Welsh (cy), Danish (da), German (de), Greek (el), English (en), Spanish (es), Estonian (et), Persian (fa), Fulah (ff), Finnish (fi), French (fr), Western Frisian (fy), Irish (ga), Gaelic; Scottish Gaelic (gd), Galician (gl), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Croatian (hr), Haitian; Haitian Creole (ht), Hungarian (hu), Armenian (hy), Indonesian (id), Igbo (ig), Iloko (ilo), Icelandic (is), Italian (it), Japanese (ja), Javanese (jv), Georgian (ka), Kazakh (kk), Central Khmer (km), Kannada (kn), Korean (ko), Luxembourgish; Letzeburgesch (lb), Ganda (lg), Lingala (ln), Lao (lo), Lithuanian (lt), Latvian (lv), Malagasy (mg), Macedonian (mk), Malayalam (ml), Mongolian (mn), Marathi (mr), Malay (ms), Burmese (my), Nepali (ne), Dutch; Flemish (nl), Norwegian (no), Northern Sotho (ns), Occitan (post 1500) (oc), Oriya (or), Panjabi; Punjabi (pa), Polish (pl), Pushto; Pashto (ps), Portuguese (pt), Romanian; Moldavian; Moldovan (ro), Russian (ru), Sindhi (sd), Sinhala; Sinhalese (si), Slovak (sk), Slovenian (sl), Somali (so), Albanian (sq), Serbian (sr), Swati (ss), Sundanese (su), Swedish (sv), Swahili (sw), Tamil (ta), Thai (th), Tagalog (tl), Tswana (tn), Turkish (tr), Ukrainian (uk), Urdu (ur), Uzbek (uz), Vietnamese (vi), Wolof (wo), Xhosa (xh), Yiddish (yi), Yoruba (yo), Chinese (zh), Zulu (zu)
## BibTeX entry and citation info
```
@misc{fan2020englishcentric,
title={Beyond English-Centric Multilingual Machine Translation},
author={Angela Fan and Shruti Bhosale and Holger Schwenk and Zhiyi Ma and Ahmed El-Kishky and Siddharth Goyal and Mandeep Baines and Onur Celebi and Guillaume Wenzek and Vishrav Chaudhary and Naman Goyal and Tom Birch and Vitaliy Liptchinsky and Sergey Edunov and Edouard Grave and Michael Auli and Armand Joulin},
year={2020},
eprint={2010.11125},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Crystal/distilbert-base-uncased-finetuned-squad
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2022-03-13T21:11:41Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: noinfo
datasets:
- swbd
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/roshansh_asr_base_sp_conformer_swbd`
This model was trained by roshansh-cmu using the swbd recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout a04a98c98797b314f2425082bc40261757fd47de
pip install -e .
cd egs2/swbd/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/roshansh_asr_base_sp_conformer_swbd
```
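Outside the recipe, the checkpoint can also be loaded through ESPnet's Python inference API. This is a hedged sketch: it assumes `espnet_model_zoo` and `soundfile` are installed, the audio filename is a placeholder, and, since this recipe was trained on pre-extracted 83-dimensional fbank+pitch features (`frontend: null` in the config below), raw-waveform input may additionally require matching Kaldi-style feature extraction.
```python
# Hedged sketch; requires `pip install espnet espnet_model_zoo soundfile`.
import soundfile
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "espnet/roshansh_asr_base_sp_conformer_swbd",
    ctc_weight=0.3,
    beam_size=10,
)

speech, rate = soundfile.read("utterance.wav")  # placeholder Switchboard-style audio file
text, tokens, token_ids, hyp = speech2text(speech)[0]
print(text)
```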
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sun Mar 13 17:23:58 EDT 2022`
- python version: `3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.10.1`
- Git hash: `a04a98c98797b314f2425082bc40261757fd47de`
- Commit date: `Thu Mar 3 16:09:41 2022 -0500`
## roshansh_asr_base_sp_conformer_swbd
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave_10best/eval2000/hyp.callhm.ctm.filt.sys|2628|21594|87.4|9.6|3.0|2.0|14.6|49.7|
|decode_asr_asr_model_valid.acc.ave_10best/eval2000/hyp.ctm.filt.sys|4459|42989|90.5|7.0|2.5|1.5|10.9|44.7|
|decode_asr_asr_model_valid.acc.ave_10best/eval2000/hyp.swbd.ctm.filt.sys|1831|21395|93.7|4.3|2.0|0.9|7.2|37.7|
|decode_lm_lm_lm_base_lm_transformer_valid.loss.ave_asr_model_valid.acc.ave_10best/eval2000/hyp.callhm.ctm.filt.sys|2628|21594|88.0|8.9|3.1|2.0|14.0|48.0|
|decode_lm_lm_lm_base_lm_transformer_valid.loss.ave_asr_model_valid.acc.ave_10best/eval2000/hyp.ctm.filt.sys|4459|42989|91.0|6.5|2.5|1.4|10.4|43.0|
|decode_lm_lm_lm_base_lm_transformer_valid.loss.ave_asr_model_valid.acc.ave_10best/eval2000/hyp.swbd.ctm.filt.sys|1831|21395|94.0|4.0|2.0|0.9|6.8|35.9|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave_10best/eval2000|4458|181952|92.3|3.7|4.0|11.9|19.5|69.9|
|decode_lm_lm_lm_base_lm_transformer_valid.loss.ave_asr_model_valid.acc.ave_10best/eval2000|4458|181952|92.3|3.7|4.1|11.6|19.3|69.1|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave_10best/eval2000|4458|47302|81.7|13.5|4.8|16.7|34.9|69.9|
|decode_lm_lm_lm_base_lm_transformer_valid.loss.ave_asr_model_valid.acc.ave_10best/eval2000|4458|47302|81.9|13.1|5.0|16.4|34.5|69.1|
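For reference, the `Err` column in the WER/CER/TER tables above is the standard error rate: substitutions, deletions, and insertions from a Levenshtein alignment, divided by the number of reference tokens (`Wrd`). A minimal pure-Python sketch of that computation (the tables themselves come from the recipe's scoring scripts, not this code):
```python
def wer(ref, hyp):
    """Word error rate via Levenshtein distance over word sequences."""
    r, h = ref.split(), hyp.split()
    # d[i][j] = edit distance between the first i reference words and first j hypothesis words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / max(len(r), 1)

print(wer("he became a fellow", "he became the fellow"))  # 0.25 (one substitution over four words)
```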
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_confformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_base_sp_conformer
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 8
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 52583
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 150
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: 3000
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
detect_anomaly: false
pretrain_path: null
init_param: []
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 75000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_base_sp/train/speech_shape
- exp/asr_stats_base_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_base_sp/valid/speech_shape
- exp/asr_stats_base_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
- 800
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/fbank_pitch/train_nodup_sp/feats.scp
- speech
- kaldi_ark
- - dump/fbank_pitch/train_nodup_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/fbank_pitch/train_dev/feats.scp
- speech
- kaldi_ark
- - dump/fbank_pitch/train_dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.006
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- ▁i
- s
- ''''
- ▁and
- ▁the
- ▁you
- ▁that
- ▁a
- ▁it
- ▁uh
- ▁to
- t
- ▁of
- ▁know
- ▁they
- '-'
- ▁in
- ▁we
- ']'
- ▁[
- ▁yeah
- ▁have
- ▁but
- ▁so
- ▁was
- ▁like
- re
- m
- ▁um
- ▁just
- ▁well
- ▁do
- ▁for
- d
- ▁think
- ing
- ▁don
- ▁is
- ▁there
- ▁or
- ▁on
- ▁be
- noise
- ▁what
- laughter
- ▁oh
- ▁my
- ed
- ve
- ▁not
- ▁really
- ▁with
- n
- ▁he
- ▁one
- ▁if
- ▁are
- ▁all
- ▁get
- ▁right
- ▁about
- ▁can
- ▁because
- ▁out
- ▁had
- ▁up
- ▁them
- ▁lot
- a
- v
- ▁at
- ▁this
- ▁would
- ▁when
- ▁go
- ▁some
- p
- i
- r
- er
- ▁people
- ▁no
- ▁mean
- ▁kind
- ▁then
- o
- ▁good
- ▁now
- ▁me
- ▁got
- e
- ▁time
- ll
- ▁as
- ▁she
- ▁going
- y
- ▁see
- ▁more
- ▁were
- ▁been
- ▁from
- ▁too
- ▁an
- ▁things
- ly
- ▁how
- ▁something
- c
- ▁your
- b
- ▁where
- ▁much
- u
- '._'
- ▁guess
- ▁little
- g
- ▁here
- .
- ▁thing
- ▁our
- le
- ocalized
- ▁very
- ▁did
- ▁their
- ▁other
- ▁work
- ▁could
- ▁okay
- l
- in
- ▁even
- ▁t
- al
- ▁two
- huh
- ▁way
- or
- ▁say
- f
- ▁has
- ▁any
- ▁s
- ▁years
- ▁want
- ▁back
- ▁down
- ▁those
- ▁who
- ▁pretty
- ▁probably
- ▁home
- ▁didn
- ▁real
- ▁year
- ▁take
- ▁over
- ▁yes
- ▁than
- ▁re
- ▁sure
- ▁into
- ar
- hum
- ▁school
- ▁put
- ▁stuff
- an
- ▁make
- ▁kids
- ▁her
- ▁by
- ▁said
- ▁never
- w
- ▁c
- ▁which
- ▁off
- k
- ▁went
- ic
- ▁f
- ▁only
- ▁big
- ▁car
- ▁always
- ▁these
- ▁around
- ▁money
- ▁day
- ▁anything
- ▁three
- ▁nice
- ▁doing
- ▁need
- ▁come
- ▁actually
- ▁will
- ▁maybe
- ▁care
- ▁him
- h
- ent
- en
- ▁still
- ▁should
- ▁new
- ▁used
- ▁five
- 'on'
- ch
- ion
- ▁long
- ▁sort
- ▁his
- th
- ter
- ▁old
- ▁most
- ▁house
- ▁bit
- ▁e
- ▁every
- ▁different
- ▁last
- ▁use
- ▁let
- il
- es
- it
- ▁many
- ▁us
- ▁look
- ▁course
- ▁getting
- ▁true
- ▁everything
- ▁feel
- ▁first
- ck
- ▁part
- ▁does
- ▁pay
- ▁great
- ▁hard
- ▁same
- ▁thought
- ▁de
- ▁problem
- ▁also
- ▁keep
- ers
- at
- ▁through
- ▁doesn
- ▁children
- ▁four
- ▁find
- ▁done
- ment
- ▁th
- ies
- ur
- ▁before
- ▁far
- ▁though
- ▁w
- ▁area
- ate
- ▁haven
- ▁o
- ▁ever
- ▁p
- ▁
- ▁being
- ▁family
- ▁bad
- ▁seems
- ation
- ▁d
- ▁live
- ▁whole
- ▁fact
- ▁own
- se
- ▁why
- ▁b
- ▁play
- ▁talking
- ▁tell
- ▁better
- ▁interesting
- ▁another
- ▁place
- ▁try
- ▁trying
- ▁huh
- ▁ten
- te
- ▁twenty
- ▁else
- ol
- ▁watch
- ▁read
- ▁type
- ro
- ▁quite
- ▁job
- ▁hundred
- ▁high
- ▁call
- ▁con
- ▁ago
- ▁after
- ▁give
- ▁couple
- ▁enough
- ▁whatever
- ke
- is
- id
- ▁either
- ▁start
- ▁having
- ▁texas
- el
- ▁somebody
- ▁husband
- ▁sometimes
- ▁dollars
- ir
- ow
- ▁usually
- ▁show
- ▁help
- ▁while
- ▁few
- ▁away
- ive
- ▁se
- ▁college
- ▁y
- ▁system
- ▁might
- ▁mo
- ▁co
- ▁heard
- ▁ma
- us
- ▁person
- ▁once
- ▁made
- ▁point
- ▁six
- ce
- ▁n
- ▁fun
- ra
- ▁week
- ▁pa
- ▁buy
- ▁seen
- ▁state
- ▁anyway
- ▁again
- ▁love
- ▁gonna
- ▁dallas
- ne
- ▁started
- ▁exactly
- ▁pro
- ▁country
- ▁life
- ▁enjoy
- ▁everybody
- ▁ha
- ▁talk
- ▁lo
- ▁v
- ▁night
- ▁able
- ▁may
- ▁stay
- ▁remember
- est
- ▁news
- ▁sa
- ▁k
- ▁came
- ▁hear
- ▁end
- able
- ▁least
- ▁working
- et
- ▁un
- ry
- ▁fl
- ▁po
- ▁g
- ▁since
- ▁ra
- ▁change
- ul
- ▁idea
- ▁both
- ▁h
- ▁boy
- ▁agree
- age
- ▁program
- un
- ▁pre
- ▁st
- ▁almost
- ▁dis
- ▁someone
- ▁run
- ▁di
- um
- z
- ▁ba
- ▁ho
- ist
- ▁la
- ▁dog
- ▁m
- ▁reason
- ▁took
- ▁believe
- ant
- ▁bye
- ▁company
- ▁eight
- ▁times
- ▁half
- ▁wife
- ▁isn
- ▁paper
- ▁deal
- ▁goes
- ▁hand
- ▁guy
- ▁called
- ▁next
- ▁close
- ▁month
- ▁thirty
- ▁wanted
- ▁thousand
- ▁yet
- ▁mi
- ▁understand
- ▁bu
- tion
- ▁cost
- ▁pick
- ge
- am
- ▁drive
- ▁sp
- ▁looking
- ▁government
- ▁child
- ▁crime
- ac
- ▁tax
- ▁li
- ▁spend
- lo
- ee
- ▁women
- ▁parents
- ▁bo
- ▁days
- ▁especially
- ▁wow
- ▁saying
- ▁cut
- ▁name
- ▁eat
- ▁gone
- ▁whether
- ▁happen
- ity
- ▁less
- ated
- ▁small
- ▁saw
- ▁sounds
- ▁supposed
- ▁number
- ▁world
- ▁mother
- ▁music
- ▁set
- ▁such
- ▁until
- ▁hi
- ▁movie
- ru
- ▁credit
- ▁bought
- ▁turn
- ▁city
- ▁myself
- ▁ga
- ▁walk
- ▁food
- if
- ▁le
- ▁seem
- ▁problems
- ting
- ▁computer
- ▁makes
- ▁am
- ▁man
- ▁found
- ▁percent
- ▁together
- ▁sit
- ▁ro
- ▁coming
- ure
- ▁basically
- ▁young
- ▁best
- ▁sc
- ▁listen
- ▁hum
- ▁water
- ▁check
- ance
- ▁son
- ▁business
- ▁u
- co
- ▁comp
- ▁seven
- ▁summer
- ▁each
- ▁situation
- ie
- ian
- ▁war
- ▁j
- ▁worked
- x
- ward
- ▁side
- ▁definitely
- ▁certain
- ▁game
- ▁wh
- ▁won
- ▁cl
- ia
- ▁wonderful
- ▁wonder
- ▁matter
- ▁public
- ▁ex
- op
- ▁lived
- ▁fifty
- ▁certainly
- ▁cat
- ▁cook
- ▁funny
- ▁air
- ty
- ▁age
- ▁room
- ▁nothing
- ▁class
- ▁health
- ▁ch
- ▁sh
- ▁large
- ig
- na
- ▁r
- ▁fa
- ▁gotten
- ▁ju
- ▁mine
- ▁town
- ▁per
- ▁months
- ma
- ▁ti
- ide
- ▁test
- ▁places
- ▁yep
- ▁comes
- ▁anymore
- ▁ca
- ▁under
- he
- ▁plan
- ▁vote
- ▁fi
- ▁important
- ▁taking
- ▁da
- ▁daughter
- ▁thinking
- ▁team
- port
- ▁learn
- ▁budget
- ▁american
- ful
- ▁taxes
- de
- ▁hm
- ▁gun
- ▁str
- ▁eighty
- ▁control
- ▁service
- ▁today
- ▁drug
- ▁cars
- ▁paying
- ally
- ▁rather
- ▁neat
- ▁line
- ▁tend
- ▁law
- ▁fr
- tic
- rs
- time
- ▁insurance
- man
- ▁wear
- ▁friends
- ▁outside
- ▁easy
- ▁north
- ▁friend
- ▁during
- und
- ▁l
- ▁card
- ▁nine
- me
- bye
- ▁living
- ▁mind
- ▁involved
- ▁gosh
- ▁moved
- ight
- ▁camping
- ▁several
- ence
- ical
- ▁bring
- ice
- ▁tried
- ▁major
- ▁newspaper
- ▁favorite
- ▁en
- ▁student
- ▁consider
- ▁making
- la
- ▁morning
- ous
- ▁dr
- ph
- ▁question
- ▁between
- ▁jury
- ▁amount
- ▁mar
- ▁ones
- ▁older
- ▁case
- ▁education
- ▁wa
- ▁paid
- ▁ri
- ▁depend
- ish
- ▁bill
- ▁must
- ine
- gg
- ▁happened
- ▁hour
- ▁difference
- ▁du
- ▁hope
- ▁experience
- ▁absolutely
- ▁group
- ▁figure
- ▁anybody
- ▁miles
- ▁aren
- ating
- ▁although
- ▁worth
- ▁su
- ▁ta
- ▁interest
- ▁book
- ▁sha
- ▁forty
- ▁expensive
- ▁second
- ▁without
- up
- ▁gets
- ▁full
- ▁app
- ex
- ▁along
- ▁recently
- ▁paint
- ▁leave
- ▁ru
- all
- ▁weather
- ▁miss
- ▁free
- ▁com
- ▁often
- ▁gra
- ▁minutes
- ition
- ill
- ▁magazine
- ▁wait
- ca
- ▁ahead
- ▁wrong
- ▁hours
- ▁already
- ▁married
- ▁left
- ▁hit
- ▁camp
- ▁fifteen
- ▁pr
- ▁men
- ▁drugs
- ▁rain
- ▁schools
- ious
- ▁fish
- ▁girl
- ick
- ▁office
- ▁weeks
- ▁ski
- ▁middle
- ▁knew
- ▁al
- ▁store
- ▁watching
- ▁cha
- ▁sl
- ▁hot
- ▁running
- ▁yourself
- ▁act
- ▁cold
- ▁price
- ▁lake
- ▁death
- ▁dad
- ▁enjoyed
- ▁benefits
- ▁word
- ▁main
- ▁grow
- ▁recycling
- ▁past
- ▁weekend
- ▁break
- 'no'
- ber
- ▁against
- ▁base
- ▁movies
- ▁mostly
- ial
- ▁guys
- ▁san
- ▁pi
- ay
- ▁sense
- ▁sell
- ▁sister
- ▁thank
- ▁issue
- way
- ▁pet
- ▁throw
- ▁cover
- ary
- ▁baby
- ▁doctor
- ▁local
- ▁difficult
- ▁nursing
- ▁wi
- ▁wanna
- ▁open
- ▁head
- ought
- ▁vacation
- ▁-
- ▁brother
- ▁instead
- ▁kid
- ▁reading
- ▁add
- ▁rest
- ▁qu
- ▁interested
- ▁short
- ▁degree
- ▁charge
- ▁rec
- ▁topic
- ha
- ▁talked
- ▁move
- land
- cy
- ▁trouble
- ▁told
- ▁fairly
- ▁hate
- ▁stand
- do
- ▁unless
- ▁winter
- ▁sta
- ▁twelve
- ▁plano
- ▁wish
- ▁yard
- ▁exercise
- ▁front
- ▁somewhere
- ▁east
- ▁everyone
- ▁regular
- ▁restaurant
- ▁gre
- ▁plant
- ▁catch
- ▁states
- ▁near
- ▁decided
- ▁imagine
- ▁except
- ▁chance
- ▁says
- ▁kill
- ▁california
- ▁looked
- ▁pe
- ling
- ▁ask
- ▁punishment
- ▁pull
- ▁fan
- ▁south
- ▁fine
- ▁hold
- ▁taken
- ▁tra
- ▁garden
- ▁park
- ▁late
- ▁ja
- ▁takes
- ▁street
- ▁door
- ▁fall
- ▁clean
- ▁dress
- ▁mom
- ▁income
- ▁teach
- ▁companies
- ▁works
- ▁ready
- ▁capital
- ▁spent
- ▁recycle
- ▁york
- ▁using
- ▁gu
- ▁tough
- ▁social
- ▁raise
- ▁father
- ▁seventy
- ▁ne
- ▁gr
- ▁realize
- ▁early
- ▁send
- ▁terms
- ▁become
- ▁sixty
- ▁themselves
- ▁level
- ▁phone
- ▁god
- ▁woman
- ▁oil
- ▁rent
- ▁exp
- ▁changed
- ▁felt
- ▁particular
- ▁radio
- ▁christmas
- ▁station
- ▁top
- ▁goodness
- ▁save
- ▁power
- ▁pass
- ▁bar
- ▁die
- ▁society
- ▁choice
- ▁bra
- ▁ge
- ▁personal
- ▁na
- ▁dollar
- ▁playing
- ▁tha
- ▁rate
- ard
- ▁national
- ▁special
- ▁general
- ▁awful
- ible
- ▁cards
- ▁plastic
- ▁visit
- ▁fix
- ▁train
- ▁rid
- ▁dec
- ▁lives
- ▁expect
- ▁support
- ▁wood
- ▁books
- ▁feeling
- ▁pu
- ▁acc
- line
- ▁center
- ized
- ▁putting
- ▁bag
- ness
- ▁growing
- ▁later
- ▁guns
- ton
- ▁land
- ▁travel
- der
- ▁subject
- ▁period
- ▁dinner
- ▁judge
- ▁season
- ▁happens
- ▁machine
- ▁extra
- ▁manage
- ▁gave
- ▁vi
- ▁force
- ▁ph
- ▁lately
- ▁effect
- ner
- ▁starting
- ▁saving
- one
- ▁building
- ▁trip
- ▁sitting
- ▁cases
- ▁bri
- ▁kept
- ▁finally
- ▁fast
- ▁red
- ▁forth
- ▁mu
- ▁stop
- ▁testing
- less
- ▁spring
- ▁cause
- ▁require
- ▁built
- ▁kn
- ▁sw
- ▁murder
- ▁black
- ▁quick
- ▁community
- ▁record
- ▁snow
- gra
- j
- ▁cra
- ▁plus
- ▁bank
- ▁bi
- ▁beautiful
- ▁grade
- ran
- ▁afford
- ▁graduate
- ▁space
- ▁countries
- ▁cats
- ▁fire
- ▁process
- ▁sound
- ▁played
- ▁limit
- ▁white
- ny
- ▁sad
- que
- ▁university
- ▁trans
- ▁mess
- ▁nineteen
- ▁shoot
- ▁nobody
- ▁football
- ▁speak
- ▁story
- ▁light
- ▁longer
- ▁jo
- king
- ▁ninety
- ▁road
- ▁totally
- ▁fishing
- ▁order
- ▁information
- ▁sign
- ▁worry
- ▁spending
- ▁product
- ▁soon
- ▁bother
- ▁across
- ▁write
- ▁bl
- ▁bunch
- ▁pen
- ▁carry
- ▁truck
- ▁hey
- ▁ball
- be
- ▁driving
- ▁needed
- ▁church
- ▁teachers
- ▁low
- ▁amazing
- ▁decision
- ▁hurt
- ▁golf
- ▁sorry
- ite
- ▁younger
- ities
- ▁account
- ▁terrible
- ▁wind
- ▁report
- ▁suppose
- ▁wor
- ▁color
- ▁hunt
- ▁teacher
- ▁concerned
- ▁easier
- ▁strange
- ▁sub
- ▁size
- ▁strong
- ▁safe
- ▁turned
- ▁given
- ▁lost
- ▁families
- ▁happy
- ▁follow
- ▁view
- ▁market
- ▁handle
- ▁ye
- ▁single
- ▁shop
- ▁si
- ▁within
- ze
- ▁television
- ▁cheap
- vis
- ▁rock
- ▁engineer
- ▁individual
- ▁shot
- ▁tri
- ▁criminal
- ▁united
- ▁worse
- ▁trial
- out
- ▁serious
- ▁neighborhood
- ▁brought
- ▁answer
- ▁trees
- mon
- ▁build
- ▁example
- ▁fair
- ▁buying
- ▁caught
- ▁military
- ▁private
- ▁field
- ▁weight
- ▁che
- ship
- ▁crazy
- law
- ▁serve
- ▁decide
- ▁opinion
- ▁medical
- ▁push
- ▁step
- ▁meet
- ▁stick
- clock
- ▁boat
- ▁quality
- ▁win
- ▁green
- ▁term
- ▁lose
- ▁fo
- ▁scary
- ▁ended
- ▁cu
- ▁hospital
- ▁police
- ▁biggest
- ▁apartment
- ▁repair
- ▁finish
- ▁glad
- ▁inside
- ▁learned
- ▁prison
- ▁cri
- ▁familiar
- ▁third
- ▁seemed
- uh
- ▁pan
- ▁mountain
- ▁whenever
- ▁range
- ▁watched
- ▁necessarily
- ▁piece
- ook
- lie
- ▁noticed
- ▁president
- ▁collect
- ▁twice
- ative
- ▁glass
- ▁super
- ▁ran
- ▁fund
- ▁sleep
- ▁lawn
- ▁chi
- ▁behind
- ▁guilty
- ▁drop
- ▁mix
- ▁killed
- ▁court
- ▁completely
- ▁party
- ▁current
- ▁tape
- ▁commit
- ▁benefit
- ▁wall
- ▁particularly
- ▁personally
- ▁anywhere
- ▁project
- ▁clothes
- ▁eighteen
- ▁bigger
- ▁arm
- ▁list
- ▁hang
- ▁warm
- ▁eleven
- ▁research
- uff
- ▁gee
- ▁grand
- ron
- ▁fight
- ▁grass
- ▁teaching
- ▁million
- istic
- ▁trash
- ▁cash
- ▁waiting
- ▁neighbor
- ▁club
- ability
- ▁develop
- ▁unfortunately
- ▁loan
- ▁picked
- ▁star
- ▁generally
- ▁cur
- ▁environment
- ▁minute
- ▁obviously
- ▁protect
- ▁opera
- ize
- ▁anyone
- ▁employee
- ▁houston
- ▁fill
- ▁treat
- ▁baseball
- ▁ground
- ▁video
- ▁pollution
- ▁higher
- ▁available
- ▁generation
- ▁luck
- ▁excuse
- ▁pound
- ▁picture
- ▁roll
- ▁america
- ade
- ▁eventually
- ▁itself
- ▁ooh
- ▁asked
- ▁forget
- ▁surprised
- ▁sun
- ▁federal
- ▁jail
- qui
- ▁pla
- ome
- ▁basic
- ▁extreme
- ▁washington
- ▁attention
- ▁penalty
- ▁sentence
- ▁poor
- ▁mail
- ▁cool
- ▁florida
- ▁clear
- ▁fortunate
- ▁huge
- ▁aware
- ▁lay
- ▁civil
- ▁value
- ▁band
- ▁lead
- ▁parent
- ▁giving
- ▁bottle
- ▁blue
- ▁standard
- ▁rob
- ▁afraid
- ▁bedroom
- ▁comfortable
- ▁separate
- ▁position
- ▁foot
- ▁eye
- ▁art
- ▁europe
- ▁sunday
- ▁cap
- ▁discuss
- ▁provide
- ▁lucky
- ▁sick
- ▁excellent
- ▁utah
- ▁classes
- ▁el
- ▁apparently
- ▁condition
- ▁perhaps
- ▁weapon
- ▁burn
- ▁originally
- q
- ▁self
- ▁beginning
- ▁prefer
- ▁cou
- ▁count
- ▁quit
- ▁typical
- 'off'
- ▁economic
- ▁broke
- ▁average
- ▁smaller
- ▁security
- ▁virginia
- ▁weird
- ▁future
- ▁similar
- ▁hopefully
- ▁economy
- ▁political
- ▁relative
- ▁master
- ▁slow
- ▁financial
- ▁respect
- ▁expense
- ▁accept
- ▁appeal
- ▁normally
- ▁channel
- ▁alone
- ▁human
- ▁union
- ▁privacy
- ▁science
- ▁lawyer
- ▁busy
- ▁window
- ▁automatic
- ▁sold
- ▁county
- ▁advantage
- ▁bush
- ▁direct
- ▁affect
- ▁drink
- ▁van
- ▁entire
- ▁lunch
- ▁switch
- ▁role
- ▁basis
- ▁z
- ▁table
- ▁animal
- ▁basketball
- ▁industry
- ▁peace
- ▁reunion
- ▁blow
- ▁department
- ▁present
- ▁relate
- ▁positive
- ▁article
- ▁heavy
- ▁return
- place
- ▁chicken
- ▁stories
- ▁honest
- ▁somehow
- ▁ride
- ▁history
- ▁saturday
- ▁salary
- ▁member
- ▁payment
- ▁moving
- ▁port
- ▁professional
- ▁mexico
- ▁normal
- ▁lower
- ▁jump
- ▁mow
- ▁rich
- ▁organization
- ▁design
- ▁straight
- ▁draw
- ▁smoke
- ▁possible
- ▁bucks
- ▁debt
- work
- ▁property
- ▁rough
- ▁teenage
- ▁garage
- ▁wild
- ▁scout
- ▁touch
- ash
- ▁suit
- ▁purchase
- ▁retirement
- ▁election
- over
- ▁carolina
- ▁recipe
- ▁track
- ▁entertain
- ▁changing
- ▁grandmother
- ▁thirteen
- ▁instance
- ▁coverage
- ▁attitude
- ▁box
- ▁face
- ▁background
- ▁study
- ▁kidding
- ▁english
- ▁ridiculous
- ▁legal
- ▁tonight
- ▁trade
- ▁random
- ▁john
- ▁coast
- ▁cable
- ▁aluminum
- ▁choose
- ▁cowboy
- ▁colorado
- ▁lu
- ▁continue
- ▁contract
- ▁england
- ▁ticket
- ▁board
- ▁replace
- ▁join
- ▁folks
- ▁sudden
- ▁garbage
- ▁engine
- ▁himself
- ▁instrument
- ▁row
- ▁spot
- ▁activities
- ▁cross
- ▁shape
- ▁scare
- ▁mini
- ▁district
- ▁floor
- ▁taste
- ▁corn
- ▁correct
- ▁opportunity
- ified
- ▁threat
- ▁concern
- ▁popular
- ▁everyday
- ▁adult
- ▁terr
- ▁doubt
- ▁brand
- ▁dead
- ▁defense
- ▁worst
- ▁mexican
- ▁policy
- ▁taught
- ▁vietnam
- ▁pressure
- ▁balance
- ▁body
- ▁cities
- ▁accident
- ▁afternoon
- ▁horrible
- ▁german
- ▁electric
- ▁tired
- ▁everywhere
- ▁opposed
- ▁squa
- ▁bike
- ▁hair
- ▁congress
- ▁foreign
- ▁physical
- ▁yesterday
- ▁increase
- ▁metric
- ▁style
- ▁minor
- ▁majority
- ▁perfect
- ▁responsibility
- ▁common
- ▁central
- ▁improve
- ▁kitchen
- ▁vegetable
- ▁sixteen
- ▁forever
- ▁nurse
- ▁stopped
- ▁tech
- ▁bird
- ▁born
- ▁jeez
- ▁mistake
- ▁richardson
- ▁express
- ▁lady
- ▁russia
- ▁print
- ▁hook
- ▁bottom
- ▁easily
- ▁select
- ▁option
- ▁coach
- ▁direction
- ville
- ▁favor
- ▁pennsylvania
- ▁key
- ject
- ▁effort
- ▁schedule
- ▁execut
- ▁spread
- ▁hobby
- ▁immediate
- ▁simple
- ▁somewhat
- ▁however
- ▁natural
- ▁fourteen
- ▁block
- ▁dump
- ▁perform
- ▁equipment
- ▁complain
- ▁planning
- ▁river
- ▁occasionally
- ▁conversation
- ▁grocery
- ▁fresh
- ▁besides
- ▁friday
- ▁result
- ▁smart
- ▁various
- ▁discover
- ▁storm
- ▁appreciate
- ▁equal
- ▁nowadays
- ▁brown
- ▁elderly
- ▁invasion
- ▁oklahoma
- ▁politics
- ▁maryland
- ▁regard
- ▁upset
- ▁commercial
- ▁incredible
- ▁french
- ▁trust
- ▁seventies
- ▁league
- ▁ourselves
- ▁possibly
- ▁purpose
- ▁network
- ▁stuck
- ▁admit
- ▁sweat
- ▁cousin
- ▁begin
- ably
- ▁elect
- board
- ▁alcohol
- ▁contribut
- ▁solution
- ▁material
- ▁supp
- ▁deep
- ▁specific
- ▁convict
- ▁motor
- ▁tree
- ▁junior
- ▁nature
- ▁oak
- ▁restrict
- ▁mentioned
- ▁shoes
- ▁laugh
- ▁volunteer
- ▁temp
- ▁austin
- ▁prior
- ▁extent
- ▁otherwise
- ▁blood
- ▁deduct
- ▁hobbies
- ▁influence
- ▁writing
- ▁abuse
- ▁soviet
- ▁mental
- ▁awhile
- ▁connect
- ▁western
- ▁italian
- ▁convenient
- ▁language
- ▁recommend
- ▁downtown
- ▁border
- ▁character
- ▁politician
- ▁truth
- ▁pitch
- ▁sixties
- ▁strict
- ▁hello
- ▁chinese
- ▁relax
- ▁wheel
- ▁drove
- ▁access
- ▁cannot
- ▁plenty
- ▁pardon
- ▁model
- ▁visa
- ▁section
- ▁boston
- ▁dirt
- ▁aspect
- ▁electronic
- ▁responsible
- ▁participate
- ▁steak
- ▁profit
- ▁roof
- ▁cabin
- ▁bowl
- ▁japanese
- ▁telephone
- ▁variety
- ▁piano
- ▁broad
- ▁chicago
- ▁citizen
- ▁corps
- ▁assume
- ▁automobile
- ▁crowd
- ▁simply
- ▁technical
- ▁quarter
- ▁wrote
- ▁damage
- ▁dental
- ▁corporation
- ▁honda
- ▁necessary
- ▁traffic
- ▁vehicle
- ▁salad
- ▁southern
- ▁unusual
- '0'
- ▁voting
- ▁screen
- ▁stress
- ▁mandatory
- ▁monday
- ▁secret
- ▁above
- ▁source
- ▁load
- ▁suspect
- ▁license
- ▁population
- ▁subscribe
- ▁atlanta
- ▁draft
- ▁tremendous
- ▁knowledge
- ▁earth
- ▁match
- ▁atmosphere
- ▁democrat
- ▁habit
- ▁edge
- ▁film
- ▁auto
- ▁earlier
- ▁encourage
- ▁exciting
- ▁fellow
- ▁suburb
- ▁became
- ▁shut
- ▁ceiling
- ▁disease
- ▁cheese
- ▁actual
- ▁bathroom
- ▁divorce
- ▁further
- ▁pattern
- ▁practical
- ▁technology
- ▁becoming
- ▁double
- ▁investment
- ▁trend
- ▁dark
- ▁discipline
- ▁occur
- ▁christian
- ▁liberal
- ▁senior
- ▁israel
- ▁scene
- ▁deterrent
- ▁jazz
- ▁suggest
- ▁beyond
- ▁seventeen
- ▁sauce
- ▁interview
- ▁swimming
- ▁stupid
- ▁voice
- ▁pump
- ▁consumer
- ▁independent
- ▁practice
- ▁tomatoes
- ▁outdoor
- ▁blame
- ▁northern
- ▁craft
- ▁republic
- ▁antonio
- ▁written
- ▁tennis
- ▁tune
- ology
- ▁legislat
- ▁finance
- ipped
- ▁adjust
- ▁massachusetts
- ▁successful
- ▁repeat
- ▁versus
- ▁chemical
- ▁milk
- ▁carpet
- ▁horse
- ▁address
- ▁speed
- ▁apart
- ▁occasion
- ▁belong
- ▁francisco
- ▁grandchildren
- ▁quiet
- ▁holiday
- ▁register
- ▁resource
- ▁mechanic
- ▁staff
- ▁steal
- ▁maintain
- ▁toyota
- ▁psych
- ▁casual
- ▁backyard
- ▁receive
- ▁chose
- ▁energy
- ▁author
- ▁bread
- ▁focus
- ▁iraq
- ▁journal
- ▁professor
- ▁sentencing
- ▁explain
- ▁knock
- ▁series
- ficial
- ▁amazed
- ▁baltimore
- ▁facilities
- ▁neither
- ▁potato
- ▁advance
- ▁gulf
- ▁sweet
- hold
- ▁candidate
- ▁pittsburgh
- ▁garland
- ▁hung
- ▁babies
- ▁involve
- ▁spec
- ▁concept
- ▁convince
- ▁impressed
- ▁leaving
- ▁primarily
- ▁produce
- ▁victim
- ▁herself
- ▁shock
- ▁convert
- ▁juries
- ▁loose
- wood
- ▁represent
- ▁georgia
- ▁kindergarten
- ▁progress
- ▁yellow
- ▁stock
- ▁junk
- ▁surprise
- ▁circumstances
- ▁dangerous
- ▁illegal
- ▁concert
- ▁shift
- ▁gang
- ▁advertise
- ▁disappoint
- ▁educate
- ▁female
- ▁minimum
- ▁establish
- ▁fantastic
- ▁welfare
- house
- ▁extend
- ▁birthday
- ▁cruise
- ▁culture
- ▁elementary
- ▁employer
- ▁incentive
- ▁relationship
- ▁speech
- ▁reduce
- ▁smell
- ▁carrie
- ▁original
- ▁august
- ▁grandparents
- ▁preschool
- ▁quarterback
- ▁violent
- ▁barbecue
- ▁fifties
- ▁rabbit
- ▁freedom
- ▁parole
- ▁fascinat
- ▁emotion
- ▁innocent
- ▁perspective
- ▁temperature
- ▁attract
- apped
- ▁pollut
- ▁negative
- ▁wisconsin
- ▁contact
- ▁impact
- ▁jersey
- ▁recognize
- ▁conscious
- ▁detail
- ▁complete
- ▁claim
- ▁creek
- ▁attack
- ▁continu
- ▁enforce
- '1'
- ▁attorney
- ▁campaign
- ▁conservative
- ▁excited
- ▁canada
- ▁split
- ▁multi
- ▁challenge
- ▁evidence
- ▁maintenance
- ▁pepper
- ▁release
- ▁frame
- employed
- ▁include
- ▁paycheck
- ▁raleigh
- '4'
- '2'
- '&'
- '6'
- '8'
- '9'
- '7'
- '5'
- '3'
- /
- '['
- _
- <sos/eos>
init: null
input_size: 83
ctc_conf:
ignore_nan_grad: true
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram2000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: null
frontend_conf: {}
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_base_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
distributed: true
```
</details>
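The `specaug` block in the config above enables SpecAugment-style time warping plus frequency and time masking (up to 2 masks of width 0-30 bins and 2 masks of width 0-40 frames). As a rough illustration of the masking step only, not the ESPnet implementation, here is a minimal NumPy sketch:
```python
import numpy as np

def mask_spectrogram(feats, num_freq_mask=2, freq_width=30, num_time_mask=2, time_width=40, rng=None):
    """Zero out random frequency bands and time spans, as in SpecAugment masking."""
    rng = rng or np.random.default_rng()
    feats = feats.copy()               # (time, freq) fbank+pitch features
    n_frames, n_bins = feats.shape
    for _ in range(num_freq_mask):     # frequency masking
        w = rng.integers(0, freq_width + 1)
        f0 = rng.integers(0, max(n_bins - w, 1))
        feats[:, f0:f0 + w] = 0.0
    for _ in range(num_time_mask):     # time masking
        w = rng.integers(0, time_width + 1)
        t0 = rng.integers(0, max(n_frames - w, 1))
        feats[t0:t0 + w, :] = 0.0
    return feats

# Example: an 83-dim fbank+pitch utterance of 500 frames
augmented = mask_spectrogram(np.random.randn(500, 83))
```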
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Cthyllax/DialoGPT-medium-PaladinDanse
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- adversarial_qa
model-index:
- name: distilbert-base-uncased-finetuned-advers
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-advers
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the adversarial_qa dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6462
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3000
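The hyperparameters listed above map roughly onto the following `TrainingArguments`. This is a hedged reconstruction: the output directory, dataset preprocessing, and the `Trainer` call itself are not given in the card.
```python
# Hedged reconstruction of the listed run configuration; output_dir is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-advers",
    learning_rate=9e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=3000,
)
```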
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6424 | 0.18 | 3000 | 3.6462 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Culmenus/checkpoint-168500-finetuned-de-to-is_nr2
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
language: en
thumbnail: http://www.huggingtweets.com/ayurastro/1647214031676/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/493786234221641730/OFQm2K8M_400x400.jpeg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">AyurAstro®</div>
<div style="text-align: center; font-size: 14px;">@ayurastro</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from AyurAstro®.
| Data | AyurAstro® |
| --- | --- |
| Tweets downloaded | 1437 |
| Retweets | 112 |
| Short tweets | 65 |
| Tweets kept | 1260 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/36zw53cv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ayurastro's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/nhbmyyli) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/nhbmyyli/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ayurastro')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Culmenus/opus-mt-de-is-finetuned-de-to-is_35g65cc_2
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2022-03-14T00:27:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.919
- name: F1
type: f1
value: 0.919270748741723
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2243
- Accuracy: 0.919
- F1: 0.9193
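The Accuracy and F1 values above (and in the per-epoch table below) are presumably produced by a `compute_metrics` callback along these lines; the weighted-average F1 is an assumption based on the single reported F1 value, not something stated in the card.
```python
# Hedged sketch of the metric computation; the actual compute_metrics is not shown in the card.
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(pred):
    labels = pred.label_ids
    preds = pred.predictions.argmax(-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="weighted"),
    }
```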
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.833 | 1.0 | 250 | 0.3188 | 0.9015 | 0.8975 |
| 0.2513 | 2.0 | 500 | 0.2243 | 0.919 | 0.9193 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Culmenus/opus-mt-de-is-finetuned-de-to-is_ekkicc
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- conversational
---
# Peter from Your Boyfriend Game.
|
CurtisASmith/GPT-JRT
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: mit
---
# GPT2-Chinese-Gulong
## Description
自[GPT2-Chinese](https://github.com/Morizeyao/GPT2-Chinese)开源模型涌现了很多有趣的模型。本模型受到LEE Meng的[直觀理解 GPT-2 語言模型並生成金庸武俠小說](https://leemeng.tw/gpt2-language-model-generate-chinese-jing-yong-novels.html)一文启发,在文中GPT2被证明能够较好地学习到金庸的风格并能较为通顺地续写。金古二人并为当代武侠巨擘,但两人的写作风格大相径庭。金庸重形,古龙重意。本模型旨在尝试GPT2是否能用于生成古龙式武侠小说。
Many interesting models have emerged based on [GPT2-Chinese](https://github.com/Morizeyao/GPT2-Chinese). This model is inspired by LEE Meng's [article](https://leemeng.tw/gpt2-language-model-generate-chinese-jing-yong-novels.html), in which GPT2 is shown to be able to learn Jin Yong's writing style and to generate fairly coherent continuations of a given prefix. Jin Yong and Gu Long are both contemporary Wuxia novel giants, but their writing styles are very different. Jin Yong emphasized the storyline, while Gu Long emphasized the spirit. This model aims to examine whether GPT2 can be used to generate Wuxia content in Gu Long's style.
## Data&Training
训练数据是共计9.4MB的十本古龙小说,包含绝代双骄、武林外史、小李飞刀系列(共五本)以及楚留香系列(共三本)。一共有461个章节,约378万字被提取用于训练。训练在Google Colab上完成,参数如下:batch_size=3, gradient_accumulation=2, lr=1e-4, epochs=40.
The training data are ten novels (about 9.4MB) written by Gu Long. There are 461 chapters and around 3.78 million Chinese characters in total. The training was done on Google Colab with batch_size=3, gradient_accumulation=2, lr=1e-4, epochs=40.
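Below is a hedged generation sketch for reproducing samples like the ones in the next section. It assumes the fine-tuned checkpoint has been exported in Hugging Face format to a hypothetical local directory `./gpt2-gulong`; GPT2-Chinese checkpoints conventionally pair a BERT-style tokenizer with `GPT2LMHeadModel`.
```python
# Hedged sketch; "./gpt2-gulong" is a hypothetical local export of this model.
from transformers import BertTokenizerFast, GPT2LMHeadModel, TextGenerationPipeline

tokenizer = BertTokenizerFast.from_pretrained("./gpt2-gulong")
model = GPT2LMHeadModel.from_pretrained("./gpt2-gulong")
generator = TextGenerationPipeline(model, tokenizer)

prefix = "她的匕首此刻就架在他的咽喉上,只要稍一用力气"
out = generator(prefix, max_length=200, do_sample=True, top_k=50, top_p=0.95)
print(out[0]["generated_text"])
```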
## Results&Discussion
给定上文是“*她的匕首此刻就架在他的咽喉上,只要稍一用力气*”,生成的样例如下。
* 例1:*她的匕首此刻就架在他的咽喉上,只要稍一用力气,他就会倒下去。他的手已握住了他的脉,眼睛里充满了惊讶和怀疑和怀疑。他的确已无法再戴上这种感觉,他实在不忍看。“你不该放下她?”“我还是很感激你?”“我不能。”“那么你就算杀了我,也不会放过你,但我们是如此,还要看你一眼的男人,我就算杀了你也没关系。”“你现在还不配?”“我迟早总是要看你的,你已是个老人,你难道还不满意?”“不错,我的确看错了,可是我对你也看不出。”他的确像是个完全看不出来的人,可以感觉到他的女人在笑,他的笑容忽然变得说不出的诡秘。这种感觉就像是冰山般消失在他身上,他的心一直都*
* 例2:*她的匕首此刻就架在他的咽喉上,只要稍一用力气,就可以解脱,你的本事就非但不能反抗,也不能还手。”楚留香道:“为什麽?”南宫灵道:“你要我放了她。”楚留香讶然道:“那只因你已没有力气去对她了。”南宫灵大笑道:“我知道你是个很会平凡的人,我若是你,她也会这么样做的,那就是你,我的人难道从来没有把握能杀死你么?”楚留香长长叹息了一声,道:“我有时真的,希望你不会杀她的,你只要能杀得死我,就不会杀我十分她的。”南宫灵扭转了头,目中忽然露出幽怨之色,缓缓道:“我难道真的没有杀死你?”楚留香握紧双拳,指节都已停止,他既没有说一句话,也没有阻拦,因为*
* 例3:*她的匕首此刻就在他的咽喉上,只要稍一用力气,就休想再活下去。谁知就在这时,突听“砰”的一响,竟又是“嗖”的一响,接着“咔嚓”一声,火星四射了进来。这一片火光,正是一片黑暗,天地间的荒坟墓。小鱼儿不觉奇怪,暗道:“这究竟是怎么回事?”这人的语声,竟似真的是慕容九妹。小鱼儿不禁暗暗叹息:“他为什么要救他?为什么不说?”这人的语声,竟似有许多人难测的,而小鱼儿的手,竟和慕容九妹在一起。小鱼儿暗笑忖道:“不错,这人是谁?”这人不但慕容九妹与慕容九妹见面面面,却是慕容九妹的。小鱼儿也瞧不见她说的是什么人,心念一转,转头去瞧那“小鱼儿”就走,慕容*
可以看出,模型(有时)能够推断出匕首刺进咽喉的结果是“倒下去”、‘解脱’或者“休想再活下去”。例1和例2显示了模型一定程度上学到了古龙式的人物对话,尤其是男女之间的恩怨情仇,例3里模型则学会了在生死攸关的情节给出了一个转折。但是大部分时间模型还是在胡扯。
生成文本的质量总的来说差强人意。考虑这次训练集的规模不算很大,如果将古龙全集用于训练,应该可以有更好的结果。
Given the prefix "*Her dagger was at his throat at the moment, and with the slightest effort*", the resulting samples are as follows.
* Sample 1: *Her dagger was at his throat at the moment, and with the slightest effort he would fall. His hand had taken hold of his pulse, and his eyes were filled with surprise and doubt, and suspicion. It was true that he could no longer wear the feeling, and he could not bear to look at it. "You should not give up on her?" "I'm still grateful to you?" "I can't." "Then you will not be spared even if you kill me, but we are like so, a man who still looks at you, I do not care if I kill you." "You are not worthy now?" "Sooner or later I always have to look at you, you are already an old man, are you still not satisfied?" "Sure, I did look at you wrong, but I can't see anything in you either." He did seem like a man who could not be seen through at all, could feel his woman smiling, and his smile suddenly became indescribably secretive. This feeling disappeared from him like an iceberg, and his heart kept*
* Sample 2: *Her dagger was at his throat at the moment, and with the slightest effort, there would be a relief, your skills will not only be unable to resist but also can not fight back." Chu Liuxiang said: "Why?" Nangong Ling said: "You want me to let her go." Chu Liuxiang was surprised: "That's only because you have no more strength for dealing with her." Nangong Ling laughed and said: "I know you are a very ordinary person, if I were you, she would also do the same, that is you, aren't my people sure they can kill you?" Chu Liuxiang a long sigh, said: "Sometimes I really hope you will not kill her if you could just kill me, will not kill me very her." Nangong Ling twisted his head, her eyes suddenly showed a sorrowful color, slowly said: "Didn't I really kill you?" Chu Liuxiang clenched both fists, knuckles stopped, he neither said a word nor blocked, because*
* Sample 3: *Her dagger was at his throat at the moment, and with the slightest effort, he would not be alive anymore. No one could have anticipated that at this moment, there was suddenly a "bang", but also a "whoosh", then a "click", sparks shot in. This piece of fire, is darkness, the desolate grave between heaven and earth. Xiaoyu'er felt strange, thinking in mind: "What the hell is going on here?" This person's voice sounds like Murong Jiu, Xiaoyu'er couldn't help but secretly sigh: "Why did he want to save him? Why didn't he say so?" This person's voice actually seems to have many people difficult to predict, and Xiaoyu'er's hand and Murong Jiumei's hand were actually holding together. Xiaoyu'er secretly smiled and thought: "Good, who is this person?" This person not only Murong Jiumei and Murong Jiumei meeting face to face, but Murong Jiu. Xiaoyu'er also can not see the person she was talking about, he thought about it again, turn his head to see that "Xiaoyu'er" and go, Murong*
It can be seen that the model is (sometimes) able to infer that a dagger in the throat results in a "fall", "relief", or "would not be alive anymore". Examples 1 and 2 show that the model has learned a certain degree of Gu Long-style character dialogue, especially when it comes to the feud between a man and a woman, while in example 3 the model has learned to give a twist in a life-and-death situation. Most of the time, however, the model still produces nonsense.
The quality of the generated text is barely satisfactory. Considering the size of the training set is relatively small this time, we can expect better results if the complete works of Gu Long are used for training.
## Conclusion
GPT2在长文本生成方面确实表现不俗。通过用古龙的小说训练GPT2,我们得到了可以以古龙风格进行仿写的模型。但是生成的文本还是较为生硬,不能以假乱真。下一步是用更大的数据集和更先进的模型进行测试,比如[CPM](https://github.com/TsinghuaAI/CPM-1-Generate)。
GPT2 does have a notable performance in long text generation. By training GPT2 on Gu Long's novels, we obtained a model that can imitate Gu Long's writing style. However, the generated text is still relatively rigid and cannot fool humans. Future work includes further testing with a larger dataset and a more advanced model, such as [CPM](https://github.com/TsinghuaAI/CPM-1-Generate).
## GPT3.5&GPT4
离训练这个模型已经过去了一年,最近ChatGPT和GPT4让一切都像昨日黄花。我也用他们试着做了同一个任务,下面是结果。
GPT3.5生成的文本:
她的匕首此刻就架在他的咽喉上,只要稍一用力气,他就会死在这里。他看着她,眼中充满了恐惧与绝望。他知道自己没有任何机会挣扎,只能任由命运降临。
她深深地看了他一眼,轻轻抖了抖匕首。
"我不想杀你。"她说道,声音冷冽而不失温柔。"但是你必须要记住,我们也有我们的武器。别再欺负我们这些女孩子了,否则的话……"
她没有继续说下去,但他明白她的意思。他点了点头,轻声细语地保证他再也不会对她或者其他女孩子动手。她视线转移到了他的眼睛,观察了一会儿,然后把匕首收回到了身体里。她转身离开了这里,留下他愧疚与畏惧的背影。
GPT4生成的文本:
她的匕首此刻就架在他的咽喉上,只要稍一用力气,他便会送命于这狭小的密室之中。
她的眼神冷漠,似乎对他的生死毫不关心。她的声音低沉而冰冷:“你知道,我只要轻轻一动手指,你就会死得无法再死。告诉我,为什么我不应该杀了你?”
他的眼神却没有任何恐惧,反而显得从容淡定。他微微一笑,嘴角挂着一丝嘲讽:“你要杀我当然可以,但你杀了我,这世上就再也没有人能解开那个秘密了。”
她的眉头微微一皱,显然心中波动。她知道他所说的秘密对自己来说意义重大,足以改变她的命运。她咬了咬下唇,犹豫了片刻,终究放松了手中的匕首,离他的咽喉稍稍远了些。
“好吧,我给你一个机会。你有什么条件?”她问道,声音依然冷漠。
他轻笑一声,缓缓站起身来,整理了一下散乱的衣襟。“我需要你帮我完成一件事,事成之后,我便将那个秘密告诉你。”
不管是GPT3.5还是GPT4生成的文本质量都很高。GPT3.5仍有一处不自然的表述——“把匕首收回到了身体里”,但是GPT4文法上没有任何问题,几乎以假乱真到让武侠小说爱好者也分辨不出。
说点题外话。我一直觉得自己是一个对强人工智能,基因编辑和人体增强这些事充满期待的人,但是面临这疑似奇点到来的时刻我还是很忐忑。我相信很多人也和我一样。但是无论如何,历史的潮流是没法抗拒的,我们只能顺流而下。
It's been about a year since I trained this model, and in that time disruptive LLMs have emerged. Yes, I mean GPT3.5 (ChatGPT) and GPT4. I asked GPT3.5 and GPT4 to do the same task, and here are the results.
Text generated by GPT3.5:
Her dagger was at his throat at the moment, and with the slightest effort, he would be dead. He looked at her, fear and despair in his eyes. He knew he had no chance to fight back, and could only await his fate.
She looked at him deeply, gently shaking the dagger.
"I don't want to kill you," she said, her voice cold yet gentle. "But you must remember that we also have our weapons. Don't bully our girls anymore, or else..."
She didn't finish her sentence, but he understood her meaning. He nodded silently, quietly promising that he would never lay a hand on her or any other girl again. She looked into his eyes for a moment, studying them, then withdrew the dagger into her body. She turned and left, leaving behind his remorseful and fearful figure.
Text generated by GPT4:
Her dagger is at his throat right now, and with a little effort, he will die in this narrow secret chamber."
Her eyes were cold, seemingly indifferent to his fate. Her voice was deep and freezing: "You know, I only have to lift a finger and you will be too dead to die. So tell me, why shouldn't I kill you?"
His eyes, however, did not show any fear but rather looked calm and light-hearted. He smiled slightly, with a hint of sarcasm on the edge of his lips, "Of course, you can kill me if you want to, but if you do so, no one else in this world will be able to unlock that secret."
Her brow furrowed slightly, and it was obvious that her mind had fluctuated. She knew that the secret he was talking about meant a lot to her, enough to change her destiny. She bit her lower lip and hesitated for a moment before finally relaxing the dagger in her hand and moving it a little further away from his throat.
"All right, I'll give you a chance. What is your deal?" She asked, voice still cold.
He gave a slight chuckle and slowly stood up, straightening his scattered lapels. "I will need you to help me accomplish one thing, and after it is done, I will tell you that secret."
The text generated by both GPT3.5 and GPT4 is of high quality. GPT3.5 still has an unnatural expression "withdrew the dagger into her body", but GPT4 has no grammatical problems and could possibly fool even a wuxia novel fan.
What will GPT5, 6, 7, or 8 be capable of? How far are we from strong AI? Part of me wishes I had been born in the 50s, so that I would have gone through life before all this happens. But the other part of me is excited about this unpredictable future. Five years or ten years, it doesn't matter. That day will come. I guess what is important is to figure out what to do until then.
|
CurtisBowser/DialoGPT-medium-sora-two
|
[
"pytorch",
"conversational"
] |
conversational
|
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: efl-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.6097804486545971
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# efl-finetuned-cola
This model is a fine-tuned version of [nghuyong/ernie-2.0-en](https://huggingface.co/nghuyong/ernie-2.0-en) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4688
- Matthews Correlation: 0.6098
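The Matthews correlation reported above can be computed with scikit-learn; a small self-contained sketch with hypothetical labels:
```python
# Hedged sketch with made-up labels; matthews_corrcoef is the metric reported above.
from sklearn.metrics import matthews_corrcoef

y_true = [1, 1, 0, 1, 0, 0, 1, 0]  # hypothetical CoLA acceptability labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(matthews_corrcoef(y_true, y_pred))  # 0.5 for this toy example; the score ranges from -1 to 1
```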
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 134 | 0.4795 | 0.5403 |
| No log | 2.0 | 268 | 0.4061 | 0.6082 |
| No log | 3.0 | 402 | 0.4688 | 0.6098 |
| 0.2693 | 4.0 | 536 | 0.5332 | 0.6050 |
| 0.2693 | 5.0 | 670 | 0.6316 | 0.6098 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
CurtisBowser/DialoGPT-medium-sora
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9346330275229358
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2745
- Accuracy: 0.9346
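A hedged usage sketch for this checkpoint. The card does not state the full repository id, so the model string below is a placeholder to be replaced with the actual repo id:
```python
# Hedged usage sketch; "<namespace>/bert-base-uncased-finetuned-sst2" is a placeholder.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="<namespace>/bert-base-uncased-finetuned-sst2",
)
print(classifier("A gorgeous, witty, seductive movie."))
# Label names (e.g. LABEL_0/LABEL_1) depend on the saved config.
```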
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1778 | 1.0 | 4210 | 0.3553 | 0.9060 |
| 0.1257 | 2.0 | 8420 | 0.2745 | 0.9346 |
| 0.0779 | 3.0 | 12630 | 0.3272 | 0.9300 |
| 0.0655 | 4.0 | 16840 | 0.3412 | 0.9323 |
| 0.0338 | 5.0 | 21050 | 0.3994 | 0.9300 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Cyrell/Cyrell
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2022-03-14T06:28:29Z |
---
language: ko
tags:
- gpt2
license: cc-by-nc-sa-4.0
---
- This model is forked from [skt/kogpt2-base-v2](https://huggingface.co/skt/kogpt2-base-v2).
- You can use this model in [Teachable-NLP](https://ainize.ai/teachable-nlp).
For more details: https://github.com/SKT-AI/KoGPT2
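A hedged usage sketch following the upstream KoGPT2 loading pattern; swap in this fork's repo id once it is published. The special-token arguments mirror the upstream README and are an assumption here.
```python
# Hedged sketch based on the upstream skt/kogpt2-base-v2 usage; the prompt is illustrative.
from transformers import GPT2LMHeadModel, PreTrainedTokenizerFast

tokenizer = PreTrainedTokenizerFast.from_pretrained(
    "skt/kogpt2-base-v2",
    bos_token="</s>", eos_token="</s>", unk_token="<unk>",
    pad_token="<pad>", mask_token="<mask>",
)
model = GPT2LMHeadModel.from_pretrained("skt/kogpt2-base-v2")

input_ids = tokenizer.encode("근육이 커지기 위해서는", return_tensors="pt")
output = model.generate(input_ids, max_length=64, do_sample=True, top_k=50, repetition_penalty=2.0)
print(tokenizer.decode(output[0]))
```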
|