modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-02 00:39:05) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 532 classes) | tags (list, 1 to 4.05k entries) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-02 00:38:59) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
felipetanios/opus-mt-de-en-finetuned-de-to-en-second | felipetanios | 2021-12-04T18:48:17Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "generated_from_trainer", "dataset:wmt16", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: opus-mt-de-en-finetuned-de-to-en-second
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: de-en
metrics:
- name: Bleu
type: bleu
value: 37.9762
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-de-en-finetuned-de-to-en-second
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-de-en](https://huggingface.co/Helsinki-NLP/opus-mt-de-en) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2282
- Bleu: 37.9762
- Gen Len: 25.3696
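Since the usage sections below are still placeholders, here is a minimal, hedged sketch of how a fine-tuned Marian checkpoint like this one is typically loaded for inference with the `transformers` pipeline API; the example sentence and generation length are illustrative assumptions, not taken from the card.
```python
# Minimal inference sketch (assumptions: transformers installed, model id as published on the Hub).
from transformers import pipeline

translator = pipeline(
    "translation",
    model="felipetanios/opus-mt-de-en-finetuned-de-to-en-second",
)
# Illustrative input; any German sentence works the same way.
result = translator("Das Wetter ist heute schön.", max_length=128)
print(result[0]["translation_text"])
```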
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 157 | 1.1837 | 38.8278 | 25.22 |
| No log | 2.0 | 314 | 1.2057 | 38.3047 | 25.2908 |
| No log | 3.0 | 471 | 1.2167 | 38.231 | 25.316 |
| 1.4808 | 4.0 | 628 | 1.2256 | 37.9871 | 25.3556 |
| 1.4808 | 5.0 | 785 | 1.2282 | 37.9762 | 25.3696 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
dee4hf/deeBERT | dee4hf | 2021-12-04T18:44:11Z | 0 | 0 | null | ["region:us"] | null | 2022-03-02T23:29:05Z |
Trying to create my first BERT model.
|
Mirelle/t5-small-finetuned-ro-to-en | Mirelle | 2021-12-04T18:09:52Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:wmt16", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-ro-to-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 13.4499
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-ro-to-en
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5877
- Bleu: 13.4499
- Gen Len: 17.5073
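As with the other placeholder sections, a short, hedged inference sketch may help: T5 checkpoints are normally queried with a task prefix, and the prefix shown here ("translate Romanian to English: ") is an assumption based on the conventional T5 setup rather than something stated in this card.
```python
# Hedged inference sketch; the task prefix is assumed, not documented in the card.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Mirelle/t5-small-finetuned-ro-to-en"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("translate Romanian to English: Mulțumesc foarte mult.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```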
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.6167 | 0.05 | 2000 | 1.8649 | 9.7029 | 17.5753 |
| 1.4551 | 0.1 | 4000 | 1.7810 | 10.6382 | 17.5358 |
| 1.3723 | 0.16 | 6000 | 1.7369 | 11.1285 | 17.5158 |
| 1.3373 | 0.21 | 8000 | 1.7086 | 11.6173 | 17.5013 |
| 1.2935 | 0.26 | 10000 | 1.6890 | 12.0641 | 17.5038 |
| 1.2632 | 0.31 | 12000 | 1.6670 | 12.3012 | 17.5253 |
| 1.2463 | 0.37 | 14000 | 1.6556 | 12.3991 | 17.5153 |
| 1.2272 | 0.42 | 16000 | 1.6442 | 12.7392 | 17.4732 |
| 1.2052 | 0.47 | 18000 | 1.6328 | 12.8446 | 17.5143 |
| 1.1985 | 0.52 | 20000 | 1.6233 | 13.0892 | 17.4807 |
| 1.1821 | 0.58 | 22000 | 1.6153 | 13.1529 | 17.4952 |
| 1.1791 | 0.63 | 24000 | 1.6079 | 13.2964 | 17.5088 |
| 1.1698 | 0.68 | 26000 | 1.6038 | 13.3548 | 17.4842 |
| 1.154 | 0.73 | 28000 | 1.5957 | 13.3012 | 17.5053 |
| 1.1634 | 0.79 | 30000 | 1.5931 | 13.4203 | 17.5083 |
| 1.1487 | 0.84 | 32000 | 1.5893 | 13.3959 | 17.5123 |
| 1.1495 | 0.89 | 34000 | 1.5875 | 13.3745 | 17.4902 |
| 1.1458 | 0.94 | 36000 | 1.5877 | 13.4129 | 17.5043 |
| 1.1465 | 1.0 | 38000 | 1.5877 | 13.4499 | 17.5073 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
afreireosorio/opus-mt-en-de-finetuned-en-to-de | afreireosorio | 2021-12-04T17:43:39Z | 148 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "generated_from_trainer", "dataset:wmt16", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: opus-mt-en-de-finetuned-en-to-de
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: de-en
metrics:
- name: Bleu
type: bleu
value: 26.4396
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-de-finetuned-en-to-de
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-de](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6798
- Bleu: 26.4396
- Gen Len: 24.8156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|
| 2.0864 | 1.0 | 568611 | 1.6798 | 26.4396 | 24.8156 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.0.dev20210415+cu101
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rossanez/t5-small-finetuned-de-en-final | rossanez | 2021-12-04T14:59:44Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:wmt14", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt14
metrics:
- bleu
model-index:
- name: t5-small-finetuned-de-en-final
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt14
type: wmt14
args: de-en
metrics:
- name: Bleu
type: bleu
value: 9.8394
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-en-final
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3285
- Bleu: 9.8394
- Gen Len: 17.325
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 188 | 2.3867 | 9.7928 | 17.2581 |
| No log | 2.0 | 376 | 2.3942 | 9.7222 | 17.4186 |
| 0.7948 | 3.0 | 564 | 2.3909 | 9.6495 | 17.3513 |
| 0.7948 | 4.0 | 752 | 2.3496 | 9.7376 | 17.3417 |
| 0.7948 | 5.0 | 940 | 2.3285 | 9.8394 | 17.325 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
NbAiLabArchive/test_NCC_small_pytorch | NbAiLabArchive | 2021-12-04T12:45:02Z | 6 | 0 | transformers | ["transformers", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-03-02T23:29:04Z |
Just for performing some experiments. Do not use.
|
chandank/bart-base-finetuned-kaggglenews-batch8-LR1 | chandank | 2021-12-04T11:37:31Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-base-finetuned-kaggglenews-batch8-LR1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-kaggglenews-batch8-LR1
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 495 | 1.6826 | 27.5191 | 15.0672 | 23.3065 | 24.7163 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Eyvaz/wav2vec2-base-russian-demo-kaggle | Eyvaz | 2021-12-04T11:00:23Z | 33 | 1 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-russian-demo-kaggle
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-russian-demo-kaggle
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 0.9997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.0102 | 1.03 | 500 | inf | 0.9997 |
| 0.0068 | 2.06 | 1000 | inf | 0.9997 |
| 0.0 | 3.09 | 1500 | inf | 0.9997 |
| 0.0313 | 4.12 | 2000 | inf | 0.9997 |
| 0.0 | 5.15 | 2500 | inf | 0.9997 |
| 0.0052 | 6.19 | 3000 | inf | 0.9997 |
| 0.0287 | 7.22 | 3500 | inf | 0.9997 |
| 0.0 | 8.25 | 4000 | inf | 0.9997 |
| 0.01 | 9.28 | 4500 | inf | 0.9997 |
| 0.0 | 10.31 | 5000 | inf | 0.9997 |
| 0.3919 | 11.34 | 5500 | inf | 0.9997 |
| 0.0 | 12.37 | 6000 | inf | 0.9997 |
| 0.0 | 13.4 | 6500 | inf | 0.9997 |
| 0.0 | 14.43 | 7000 | inf | 0.9997 |
| 0.6422 | 15.46 | 7500 | inf | 0.9997 |
| 0.0 | 16.49 | 8000 | inf | 0.9997 |
| 0.0 | 17.53 | 8500 | inf | 0.9997 |
| 0.0 | 18.56 | 9000 | inf | 0.9997 |
| 0.0 | 19.59 | 9500 | inf | 0.9997 |
| 0.0 | 20.62 | 10000 | inf | 0.9997 |
| 0.0427 | 21.65 | 10500 | inf | 0.9997 |
| 0.0 | 22.68 | 11000 | inf | 0.9997 |
| 0.0 | 23.71 | 11500 | inf | 0.9997 |
| 0.0 | 24.74 | 12000 | inf | 0.9997 |
| 0.0091 | 25.77 | 12500 | inf | 0.9997 |
| 0.1243 | 26.8 | 13000 | inf | 0.9997 |
| 0.0 | 27.83 | 13500 | inf | 0.9997 |
| 0.0 | 28.87 | 14000 | inf | 0.9997 |
| 0.0 | 29.9 | 14500 | inf | 0.9997 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Edomonndo/opus-mt-ja-en-finetuned-ja-to-en_xml | Edomonndo | 2021-12-04T10:23:03Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model_index:
- name: opus-mt-ja-en-finetuned-ja-to-en_xml
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
metric:
name: Bleu
type: bleu
value: 73.8646
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ja-en-finetuned-ja-to-en_xml
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7520
- Bleu: 73.8646
- Gen Len: 27.0884
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 1.0512 | 1.0 | 748 | 0.8333 | 59.8234 | 27.905 |
| 0.6076 | 2.0 | 1496 | 0.7817 | 62.5606 | 26.1834 |
| 0.4174 | 3.0 | 2244 | 0.7817 | 64.8346 | 28.2918 |
| 0.2971 | 4.0 | 2992 | 0.7653 | 67.6013 | 27.2222 |
| 0.2172 | 5.0 | 3740 | 0.7295 | 69.4017 | 27.0174 |
| 0.1447 | 6.0 | 4488 | 0.7522 | 68.8355 | 28.2865 |
| 0.0953 | 7.0 | 5236 | 0.7596 | 71.4743 | 27.1861 |
| 0.0577 | 8.0 | 5984 | 0.7469 | 72.0684 | 26.921 |
| 0.04 | 9.0 | 6732 | 0.7526 | 73.2821 | 27.1365 |
| 0.0213 | 10.0 | 7480 | 0.7520 | 73.8646 | 27.0884 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.10.0+cu111
- Datasets 1.10.2
- Tokenizers 0.10.3
|
AlexMaclean/sentence-compression | AlexMaclean | 2021-12-04T08:10:24Z | 69 | 2 | transformers | ["transformers", "pytorch", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: sentence-compression
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentence-compression
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2973
- Accuracy: 0.8912
- F1: 0.8367
- Precision: 0.8495
- Recall: 0.8243
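Since the card does not document the label set, the following hedged sketch only shows how the checkpoint would typically be queried through the `transformers` token-classification pipeline; the input sentence is an illustrative assumption, and the returned labels have to be interpreted against the model's own config.
```python
# Hedged usage sketch; label names come from the model config and are not documented here.
from transformers import pipeline

tagger = pipeline("token-classification", model="AlexMaclean/sentence-compression")
# Illustrative sentence: the per-token labels (whatever the config calls them) indicate
# which tokens to keep for the compressed sentence.
for prediction in tagger("The quick brown fox, which was very tired, jumped over the lazy dog."):
    print(prediction["word"], prediction["entity"], round(prediction["score"], 3))
```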
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.2686 | 1.0 | 10000 | 0.2667 | 0.8894 | 0.8283 | 0.8725 | 0.7884 |
| 0.2205 | 2.0 | 20000 | 0.2704 | 0.8925 | 0.8372 | 0.8579 | 0.8175 |
| 0.1476 | 3.0 | 30000 | 0.2973 | 0.8912 | 0.8367 | 0.8495 | 0.8243 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
eli/zero-shot-absa | eli | 2021-12-04T06:02:33Z | 0 | 0 | null | ["region:us"] | null | 2022-03-02T23:29:05Z |
# zero-shot-absa
## About
The goal of this project is to accomplish aspect-based sentiment analysis without depending on the severely limited training data available; that is, the task is not explicitly supervised, an approach known as "zero-shot learning". Sentiment analysis is already used extensively in industry, for example on customer feedback. A model such as the one I am proposing, however, would identify the topics in a document and the sentiment of the author toward (or associated with) each topic, which allows much more specific feedback or commentary to be detected than simple sentiment analysis can.
## Details
There will be three models in the project. The first, m1, uses Latent Dirichlet Allocation (implemented through gensim) to find topics in documents. The second, m2, is a zero-shot text classification model, available at Hugging Face, which I plan to fine-tune on the output of the LDA model over various tweets and reviews. The final piece, m3, is the sentiment intensity analyzer from NLTK's vader module. The architecture is as follows: m1 generates a list of topics for each document in the dataset, and I then create a mapping T from each document to its list of topics. It would be nice to have labeled data that, given the output T(doc), supplies a human-generated topic name; since that isn't available, the zero-shot classifier from Hugging Face is used to generate a topic name, which exists only to make the output interpretable. Then, for each topic t in T, we search the document for all sentences containing at least one word in t and use NLTK to compute the average sentiment score of these sentences. The model output is a dictionary whose keys are the topic names found in the document and whose values are the corresponding average NLTK sentiment scores.
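To make the architecture above concrete, here is a rough, hedged sketch of how m1, m2, and m3 could be wired together; the zero-shot checkpoint (`facebook/bart-large-mnli`), the candidate label set, and the keyword-overlap matching of topics to sentences are illustrative assumptions, not fixed choices of the project.
```python
# Rough sketch of the proposed pipeline; model choice, labels, and matching rule are assumptions.
import nltk
from gensim import corpora
from gensim.models import LdaModel
from nltk.sentiment import SentimentIntensityAnalyzer
from nltk.tokenize import sent_tokenize, word_tokenize
from transformers import pipeline

nltk.download("punkt")
nltk.download("vader_lexicon")

# Hypothetical label set, used only to give topics readable names.
CANDIDATE_NAMES = ["food", "service", "price", "atmosphere", "delivery"]

def analyze(document: str, num_topics: int = 3) -> dict:
    sentences = sent_tokenize(document)
    tokenized = [word_tokenize(s.lower()) for s in sentences]

    # m1: topic discovery with LDA (gensim)
    dictionary = corpora.Dictionary(tokenized)
    corpus = [dictionary.doc2bow(tokens) for tokens in tokenized]
    lda = LdaModel(corpus, id2word=dictionary, num_topics=num_topics)

    # m2: zero-shot classifier used to name each topic
    namer = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
    # m3: VADER sentiment intensity analyzer
    sia = SentimentIntensityAnalyzer()

    results = {}
    for topic_id in range(num_topics):
        topic_words = [word for word, _ in lda.show_topic(topic_id, topn=5)]
        name = namer(" ".join(topic_words), candidate_labels=CANDIDATE_NAMES)["labels"][0]
        # T(doc): sentences containing at least one word of this topic
        hits = [s for s, tokens in zip(sentences, tokenized) if set(topic_words) & set(tokens)]
        if hits:
            results[name] = sum(sia.polarity_scores(s)["compound"] for s in hits) / len(hits)
    return results
```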
## Dependencies
- `scikit-learn`
- `gensim`
- `NLTK`
- Hugging Face `transformers`
## Data
The data this project will be trained on come from Twitter and Yelp. With access to the Twitter API through a developer account, one can create a large corpus from tweets. Yelp has very relevant data for this task available at https://www.yelp.com/dataset. I will train / fine-tune each model twice, once for Twitter and once for Yelp, on a training set generated by scikit-learn.
Labeled data for testing are available at https://europe.naverlabs.com/Research/Natural-Language-Processing/Aspect-Based-Sentiment-Analysis-Dataset/ . These data are very straightforward to use, as they have annotations of topics and the associated sentiment scores for each sentence.
|
marefa-nlp/marefa-ner | marefa-nlp | 2021-12-04T05:21:57Z | 2,850 | 23 | transformers | ["transformers", "pytorch", "xlm-roberta", "token-classification", "ar", "dataset:Marefa-NER", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
language: ar
datasets:
- Marefa-NER
widget:
- text: "في استاد القاهرة، بدأ حفل افتتاح بطولة كأس الأمم الأفريقية بحضور رئيس الجمهورية و رئيس الاتحاد الدولي لكرة القدم"
---
# Tebyan تبيـان
## Marefa Arabic Named Entity Recognition Model
## نموذج المعرفة لتصنيف أجزاء النص
<p align="center">
<img src="https://huggingface.co/marefa-nlp/marefa-ner/resolve/main/assets/marefa-tebyan-banner.png" alt="Marefa Arabic NER Model" width="600"/>
</p>
---------
**Version**: 1.3
**Last Update:** 3-12-2021
## Model description
**Marefa-NER** is a large Arabic Named Entity Recognition (NER) model built on a completely new dataset; it is designed to extract up to 9 different types of entities:
```
Person, Location, Organization, Nationality, Job, Product, Event, Time, Art-Work
```
نموذج المعرفة لتصنيف أجزاء النص. نموذج جديد كليا من حيث البيانات المستخدمة في تدريب النموذج.
كذلك يستهدف النموذج تصنيف حتى 9 أنواع مختلفة من أجزاء النص
```
شخص - مكان - منظمة - جنسية - وظيفة - منتج - حدث - توقيت - عمل إبداعي
```
## How to use كيف تستخدم النموذج
*You can test the model quickly by checking this [Colab notebook](https://colab.research.google.com/drive/1OGp9Wgm-oBM5BBhTLx6Qow4dNRSJZ-F5?usp=sharing)*
----
Install the following Python packages
`$ pip3 install transformers==4.8.0 nltk==3.5 protobuf==3.15.3 torch==1.9.0 `
> If you are using `Google Colab`, please restart your runtime after installing the packages.
-----------
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch
import numpy as np
import nltk
nltk.download('punkt')
from nltk.tokenize import word_tokenize
custom_labels = ["O", "B-job", "I-job", "B-nationality", "B-person", "I-person", "B-location","B-time", "I-time", "B-event", "I-event", "B-organization", "I-organization", "I-location", "I-nationality", "B-product", "I-product", "B-artwork", "I-artwork"]
def _extract_ner(text: str, model: AutoModelForTokenClassification,
tokenizer: AutoTokenizer, start_token: str="▁"):
tokenized_sentence = tokenizer([text], padding=True, truncation=True, return_tensors="pt")
tokenized_sentences = tokenized_sentence['input_ids'].numpy()
with torch.no_grad():
output = model(**tokenized_sentence)
last_hidden_states = output[0].numpy()
label_indices = np.argmax(last_hidden_states[0], axis=1)
tokens = tokenizer.convert_ids_to_tokens(tokenized_sentences[0])
special_tags = set(tokenizer.special_tokens_map.values())
grouped_tokens = []
for token, label_idx in zip(tokens, label_indices):
if token not in special_tags:
if not token.startswith(start_token) and len(token.replace(start_token,"").strip()) > 0:
grouped_tokens[-1]["token"] += token
else:
grouped_tokens.append({"token": token, "label": custom_labels[label_idx]})
# extract entities
ents = []
prev_label = "O"
for token in grouped_tokens:
label = token["label"].replace("I-","").replace("B-","")
if token["label"] != "O":
if label != prev_label:
ents.append({"token": [token["token"]], "label": label})
else:
ents[-1]["token"].append(token["token"])
prev_label = label
# group tokens
ents = [{"token": "".join(rec["token"]).replace(start_token," ").strip(), "label": rec["label"]} for rec in ents ]
return ents
model_cp = "marefa-nlp/marefa-ner"
tokenizer = AutoTokenizer.from_pretrained(model_cp)
model = AutoModelForTokenClassification.from_pretrained(model_cp, num_labels=len(custom_labels))
samples = [
"تلقى تعليمه في الكتاب ثم انضم الى الأزهر عام 1873م. تعلم على يد السيد جمال الدين الأفغاني والشيخ محمد عبده",
"بعد عودته إلى القاهرة، التحق نجيب الريحاني فرقة جورج أبيض، الذي كان قد ضمَّ - قُبيل ذلك - فرقته إلى فرقة سلامة حجازي . و منها ذاع صيته",
"في استاد القاهرة، قام حفل افتتاح بطولة كأس الأمم الأفريقية بحضور رئيس الجمهورية و رئيس الاتحاد الدولي لكرة القدم",
"من فضلك أرسل هذا البريد الى صديقي جلال الدين في تمام الساعة الخامسة صباحا في يوم الثلاثاء القادم",
"امبارح اتفرجت على مباراة مانشستر يونايتد مع ريال مدريد في غياب الدون كرستيانو رونالدو",
"لا تنسى تصحيني الساعة سبعة, و ضيف في الجدول اني احضر مباراة نادي النصر غدا",
]
# [optional]
samples = [ " ".join(word_tokenize(sample.strip())) for sample in samples if sample.strip() != "" ]
for sample in samples:
ents = _extract_ner(text=sample, model=model, tokenizer=tokenizer, start_token="▁")
print(sample)
for ent in ents:
print("\t",ent["token"],"==>",ent["label"])
print("========\n")
```
Output
```
تلقى تعليمه في الكتاب ثم انضم الى الأزهر عام 1873م . تعلم على يد السيد جمال الدين الأفغاني والشيخ محمد عبده
الأزهر ==> organization
عام 1873م ==> time
السيد جمال الدين الأفغاني ==> person
محمد عبده ==> person
========
بعد عودته إلى القاهرة، التحق نجيب الريحاني فرقة جورج أبيض، الذي كان قد ضمَّ - قُبيل ذلك - فرقته إلى فرقة سلامة حجازي . و منها ذاع صيته
القاهرة، ==> location
نجيب الريحاني ==> person
فرقة جورج أبيض، ==> organization
فرقة سلامة حجازي ==> organization
========
في استاد القاهرة، قام حفل افتتاح بطولة كأس الأمم الأفريقية بحضور رئيس الجمهورية و رئيس الاتحاد الدولي لكرة القدم
استاد القاهرة، ==> location
بطولة كأس الأمم الأفريقية ==> event
رئيس الجمهورية ==> job
رئيس ==> job
الاتحاد الدولي لكرة القدم ==> organization
========
من فضلك أرسل هذا البريد الى صديقي جلال الدين في تمام الساعة الخامسة صباحا في يوم الثلاثاء القادم
جلال الدين ==> person
الساعة الخامسة صباحا ==> time
يوم الثلاثاء القادم ==> time
========
امبارح اتفرجت على مباراة مانشستر يونايتد مع ريال مدريد في غياب الدون كرستيانو رونالدو
مانشستر يونايتد ==> organization
ريال مدريد ==> organization
كرستيانو رونالدو ==> person
========
لا تنسى تصحيني الساعة سبعة , و ضيف في الجدول اني احضر مباراة نادي النصر غدا
الساعة سبعة ==> time
نادي النصر ==> organization
غدا ==> time
========
```
## Fine-Tuning
Check this [notebook](https://colab.research.google.com/drive/1WUYrnmDFFEItqGMvbyjqZEJJqwU7xQR-?usp=sharing) to fine-tune the NER model
## Evaluation
We tested the model against a test set of 1,959 sentences. The results are shown in the following table
| type | f1-score | precision | recall | support |
|:-------------|-----------:|------------:|---------:|----------:|
| person | 0.93298 | 0.931479 | 0.934487 | 4335 |
| location | 0.891537 | 0.896926 | 0.886212 | 4939 |
| time | 0.873003 | 0.876087 | 0.869941 | 1853 |
| nationality | 0.871246 | 0.843153 | 0.901277 | 2350 |
| job | 0.837656 | 0.79912 | 0.880097 | 2477 |
| organization | 0.781317 | 0.773328 | 0.789474 | 2299 |
| event | 0.686695 | 0.733945 | 0.645161 | 744 |
| artwork | 0.653552 | 0.678005 | 0.630802 | 474 |
| product | 0.625483 | 0.553531 | 0.718935 | 338 |
| **weighted avg** | 0.859008 | 0.852365 | 0.86703 | 19809 |
| **micro avg** | 0.858771 | 0.850669 | 0.86703 | 19809 |
| **macro avg** | 0.79483 | 0.787286 | 0.806265 | 19809 |
## Acknowledgment شكر و تقدير
قام بإعداد البيانات التي تم تدريب النموذج عليها, مجموعة من المتطوعين الذين قضوا ساعات يقومون بتنقيح البيانات و مراجعتها
- على سيد عبد الحفيظ - إشراف
- نرمين محمد عطيه
- صلاح خيرالله
- احمد علي عبدربه
- عمر بن عبد العزيز سليمان
- محمد ابراهيم الجمال
- عبدالرحمن سلامه خلف
- إبراهيم كمال محمد سليمان
- حسن مصطفى حسن
- أحمد فتحي سيد
- عثمان مندو
- عارف الشريف
- أميرة محمد محمود
- حسن سعيد حسن
- عبد العزيز علي البغدادي
- واثق عبدالملك الشويطر
- عمرو رمضان عقل الحفناوي
- حسام الدين أحمد على
- أسامه أحمد محمد محمد
- حاتم محمد المفتي
- عبد الله دردير
- أدهم البغدادي
- أحمد صبري
- عبدالوهاب محمد محمد
- أحمد محمد عوض
|
marciovbarbosa/t5-small-finetuned-de-to-en-swd | marciovbarbosa | 2021-12-04T05:05:34Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:wmt16", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-de-to-en-swd
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: de-en
metrics:
- name: Bleu
type: bleu
value: 9.2293
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-to-en-swd
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9422
- Bleu: 9.2293
- Gen Len: 17.3454
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 272 | 2.1658 | 3.8987 | 17.6419 |
| 2.6679 | 2.0 | 544 | 2.0659 | 6.4465 | 17.4758 |
| 2.6679 | 3.0 | 816 | 2.0210 | 7.3632 | 17.3708 |
| 2.4322 | 4.0 | 1088 | 1.9929 | 8.1559 | 17.3721 |
| 2.4322 | 5.0 | 1360 | 1.9744 | 8.6269 | 17.3518 |
| 2.3315 | 6.0 | 1632 | 1.9607 | 8.9017 | 17.3741 |
| 2.3315 | 7.0 | 1904 | 1.9515 | 9.1157 | 17.3484 |
| 2.2955 | 8.0 | 2176 | 1.9471 | 9.1308 | 17.3488 |
| 2.2955 | 9.0 | 2448 | 1.9432 | 9.2239 | 17.3414 |
| 2.2676 | 10.0 | 2720 | 1.9422 | 9.2293 | 17.3454 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
marciovbarbosa/t5-small-finetuned-de-to-en-fp16 | marciovbarbosa | 2021-12-04T04:27:50Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:wmt16", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-de-to-en-fp16
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: de-en
metrics:
- name: Bleu
type: bleu
value: 9.2226
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-to-en-fp16
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9416
- Bleu: 9.2226
- Gen Len: 17.3311
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 272 | 2.1671 | 3.8489 | 17.6382 |
| 2.6715 | 2.0 | 544 | 2.0660 | 6.4354 | 17.4905 |
| 2.6715 | 3.0 | 816 | 2.0206 | 7.4092 | 17.3708 |
| 2.4325 | 4.0 | 1088 | 1.9926 | 8.1453 | 17.3685 |
| 2.4325 | 5.0 | 1360 | 1.9739 | 8.6739 | 17.3521 |
| 2.3312 | 6.0 | 1632 | 1.9602 | 8.8808 | 17.3681 |
| 2.3312 | 7.0 | 1904 | 1.9509 | 9.1173 | 17.3491 |
| 2.2946 | 8.0 | 2176 | 1.9465 | 9.1504 | 17.3414 |
| 2.2946 | 9.0 | 2448 | 1.9426 | 9.2372 | 17.3398 |
| 2.2665 | 10.0 | 2720 | 1.9416 | 9.2226 | 17.3311 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
templates/text-classification | templates | 2021-12-04T03:29:21Z | 0 | 2 | generic | ["generic", "text-classification", "region:us"] | text-classification | 2022-03-02T23:29:05Z |
---
tags:
- text-classification
library_name: generic
---
# Text Classification repository template
This is a template repository for Text Classification that supports generic inference with the Hugging Face Hub generic Inference API. There are two required steps:
1. Specify the requirements by defining a `requirements.txt` file.
2. Implement the `pipeline.py` `__init__` and `__call__` methods. These methods are called by the Inference API. The `__init__` method should load the model and preload all the elements needed for inference (model, processors, tokenizers, etc.). This is only called once. The `__call__` method performs the actual inference. Make sure to follow the same input/output specifications defined in the template for the pipeline to work.
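For concreteness, a minimal sketch of what `pipeline.py` could look like when the underlying model is a `transformers` text classifier; the class name, method signatures, and return format shown here are assumptions and must match whatever the template repository itself defines.
```python
# pipeline.py: minimal sketch, assuming a transformers-based classifier.
# Verify the expected class name and I/O schema against the template files.
from typing import Any, Dict, List

from transformers import pipeline


class PreTrainedPipeline:
    def __init__(self, path: str = ""):
        # Called once: load the model and tokenizer from the repository path.
        self.pipe = pipeline("text-classification", model=path, tokenizer=path)

    def __call__(self, inputs: str) -> List[Dict[str, Any]]:
        # Called per request: run inference and return label/score pairs.
        return self.pipe(inputs)
```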
## How to start
First create a repo in https://hf.co/new.
Then clone this template and push it to your repo.
```
git clone https://huggingface.co/templates/text-classification
cd text-classification
git remote set-url origin https://huggingface.co/$YOUR_USER/$YOUR_REPO_NAME
git push --force
```
|
marciovbarbosa/t5-small-finetuned-de-to-en-lr1e-4 | marciovbarbosa | 2021-12-04T02:55:33Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:wmt16", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-de-to-en-lr1e-4
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: de-en
metrics:
- name: Bleu
type: bleu
value: 11.427
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-to-en-lr1e-4
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8228
- Bleu: 11.427
- Gen Len: 17.2674
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 272 | 1.9605 | 9.0786 | 17.3148 |
| 2.3992 | 2.0 | 544 | 1.8884 | 10.1443 | 17.3301 |
| 2.3992 | 3.0 | 816 | 1.8647 | 10.4816 | 17.3258 |
| 2.0832 | 4.0 | 1088 | 1.8473 | 10.7396 | 17.3231 |
| 2.0832 | 5.0 | 1360 | 1.8343 | 11.0937 | 17.2621 |
| 1.9193 | 6.0 | 1632 | 1.8282 | 11.1303 | 17.3098 |
| 1.9193 | 7.0 | 1904 | 1.8234 | 11.2971 | 17.2991 |
| 1.8351 | 8.0 | 2176 | 1.8241 | 11.3433 | 17.2621 |
| 1.8351 | 9.0 | 2448 | 1.8224 | 11.394 | 17.2691 |
| 1.7747 | 10.0 | 2720 | 1.8228 | 11.427 | 17.2674 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
marciovbarbosa/t5-small-finetuned-de-to-en | marciovbarbosa | 2021-12-04T00:56:09Z | 6 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:wmt16", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-de-to-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: de-en
metrics:
- name: Bleu
type: bleu
value: 9.2166
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-to-en
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9417
- Bleu: 9.2166
- Gen Len: 17.3404
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 272 | 2.1660 | 3.8515 | 17.6289 |
| 2.6678 | 2.0 | 544 | 2.0656 | 6.4422 | 17.4842 |
| 2.6678 | 3.0 | 816 | 2.0203 | 7.4348 | 17.3741 |
| 2.4316 | 4.0 | 1088 | 1.9926 | 8.0914 | 17.3658 |
| 2.4316 | 5.0 | 1360 | 1.9739 | 8.6535 | 17.3461 |
| 2.3307 | 6.0 | 1632 | 1.9603 | 8.8757 | 17.3768 |
| 2.3307 | 7.0 | 1904 | 1.9509 | 9.0744 | 17.3511 |
| 2.2945 | 8.0 | 2176 | 1.9466 | 9.1111 | 17.3418 |
| 2.2945 | 9.0 | 2448 | 1.9427 | 9.1969 | 17.3351 |
| 2.2666 | 10.0 | 2720 | 1.9417 | 9.2166 | 17.3404 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
tyoyo/t5-base-TEDxJP-1body-3context | tyoyo | 2021-12-03T21:07:34Z | 11 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:te_dx_jp", "license:cc-by-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z |
---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
datasets:
- te_dx_jp
model-index:
- name: t5-base-TEDxJP-1body-3context
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-TEDxJP-1body-3context
This model is a fine-tuned version of [sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese) on the te_dx_jp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4926
- Wer: 0.1968
- Mer: 0.1894
- Wil: 0.2793
- Wip: 0.7207
- Hits: 55899
- Substitutions: 6836
- Deletions: 3636
- Insertions: 2590
- Cer: 0.1733
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Mer | Wil | Wip | Hits | Substitutions | Deletions | Insertions | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:------:|:-----:|:-------------:|:---------:|:----------:|:------:|
| 0.7082 | 1.0 | 746 | 0.5637 | 0.2626 | 0.2430 | 0.3355 | 0.6645 | 54301 | 7195 | 4875 | 5358 | 0.2552 |
| 0.6213 | 2.0 | 1492 | 0.5150 | 0.2068 | 0.1994 | 0.2899 | 0.7101 | 55107 | 6861 | 4403 | 2462 | 0.1866 |
| 0.5331 | 3.0 | 2238 | 0.4945 | 0.2038 | 0.1958 | 0.2858 | 0.7142 | 55551 | 6845 | 3975 | 2705 | 0.1816 |
| 0.5185 | 4.0 | 2984 | 0.4880 | 0.2003 | 0.1929 | 0.2831 | 0.7169 | 55639 | 6860 | 3872 | 2563 | 0.1779 |
| 0.4963 | 5.0 | 3730 | 0.4858 | 0.1988 | 0.1912 | 0.2810 | 0.7190 | 55837 | 6838 | 3696 | 2662 | 0.1772 |
| 0.4625 | 6.0 | 4476 | 0.4885 | 0.1964 | 0.1894 | 0.2799 | 0.7201 | 55785 | 6875 | 3711 | 2448 | 0.1720 |
| 0.4416 | 7.0 | 5222 | 0.4898 | 0.1962 | 0.1890 | 0.2788 | 0.7212 | 55870 | 6819 | 3682 | 2522 | 0.1726 |
| 0.4287 | 8.0 | 5968 | 0.4894 | 0.1968 | 0.1894 | 0.2790 | 0.7210 | 55889 | 6804 | 3678 | 2580 | 0.1743 |
| 0.4457 | 9.0 | 6714 | 0.4909 | 0.1964 | 0.1891 | 0.2792 | 0.7208 | 55919 | 6858 | 3594 | 2586 | 0.1739 |
| 0.4068 | 10.0 | 7460 | 0.4926 | 0.1968 | 0.1894 | 0.2793 | 0.7207 | 55899 | 6836 | 3636 | 2590 | 0.1733 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
emrecan/bert-base-multilingual-cased-allnli_tr | emrecan | 2021-12-03T20:46:47Z | 18 | 2 | transformers | ["transformers", "pytorch", "bert", "text-classification", "zero-shot-classification", "nli", "tr", "dataset:nli_tr", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | zero-shot-classification | 2022-03-02T23:29:05Z |
---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: mit
datasets:
- nli_tr
metrics:
- accuracy
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased_allnli_tr
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6144
- Accuracy: 0.7662
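A short, hedged usage sketch based on the widget examples above; it assumes the checkpoint is served through the standard `transformers` zero-shot-classification pipeline.
```python
# Usage sketch; the text and candidate labels are taken from the widget examples above.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="emrecan/bert-base-multilingual-cased-allnli_tr",
)
print(classifier("Dolar yükselmeye devam ediyor.",
                 candidate_labels=["ekonomi", "siyaset", "spor"]))
```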
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8623 | 0.03 | 1000 | 0.9076 | 0.5917 |
| 0.7528 | 0.07 | 2000 | 0.8587 | 0.6119 |
| 0.7074 | 0.1 | 3000 | 0.7867 | 0.6647 |
| 0.6949 | 0.14 | 4000 | 0.7474 | 0.6772 |
| 0.6681 | 0.17 | 5000 | 0.7661 | 0.6814 |
| 0.6597 | 0.2 | 6000 | 0.7264 | 0.6943 |
| 0.6495 | 0.24 | 7000 | 0.7841 | 0.6781 |
| 0.6323 | 0.27 | 8000 | 0.7256 | 0.6952 |
| 0.6308 | 0.31 | 9000 | 0.7319 | 0.6958 |
| 0.6254 | 0.34 | 10000 | 0.7054 | 0.7004 |
| 0.6233 | 0.37 | 11000 | 0.7069 | 0.7085 |
| 0.6165 | 0.41 | 12000 | 0.6880 | 0.7181 |
| 0.6033 | 0.44 | 13000 | 0.6844 | 0.7197 |
| 0.6014 | 0.48 | 14000 | 0.6753 | 0.7129 |
| 0.5947 | 0.51 | 15000 | 0.7000 | 0.7039 |
| 0.5965 | 0.54 | 16000 | 0.6708 | 0.7263 |
| 0.5979 | 0.58 | 17000 | 0.6562 | 0.7285 |
| 0.5787 | 0.61 | 18000 | 0.6554 | 0.7297 |
| 0.58 | 0.65 | 19000 | 0.6544 | 0.7315 |
| 0.574 | 0.68 | 20000 | 0.6549 | 0.7339 |
| 0.5751 | 0.71 | 21000 | 0.6545 | 0.7289 |
| 0.5659 | 0.75 | 22000 | 0.6467 | 0.7371 |
| 0.5732 | 0.78 | 23000 | 0.6448 | 0.7362 |
| 0.5637 | 0.82 | 24000 | 0.6520 | 0.7355 |
| 0.5648 | 0.85 | 25000 | 0.6412 | 0.7345 |
| 0.5622 | 0.88 | 26000 | 0.6350 | 0.7358 |
| 0.5579 | 0.92 | 27000 | 0.6347 | 0.7393 |
| 0.5518 | 0.95 | 28000 | 0.6417 | 0.7392 |
| 0.5547 | 0.99 | 29000 | 0.6321 | 0.7437 |
| 0.524 | 1.02 | 30000 | 0.6430 | 0.7412 |
| 0.4982 | 1.05 | 31000 | 0.6253 | 0.7458 |
| 0.5002 | 1.09 | 32000 | 0.6316 | 0.7418 |
| 0.4993 | 1.12 | 33000 | 0.6197 | 0.7487 |
| 0.4963 | 1.15 | 34000 | 0.6307 | 0.7462 |
| 0.504 | 1.19 | 35000 | 0.6272 | 0.7480 |
| 0.4922 | 1.22 | 36000 | 0.6410 | 0.7433 |
| 0.5016 | 1.26 | 37000 | 0.6295 | 0.7461 |
| 0.4957 | 1.29 | 38000 | 0.6183 | 0.7506 |
| 0.4883 | 1.32 | 39000 | 0.6261 | 0.7502 |
| 0.4985 | 1.36 | 40000 | 0.6315 | 0.7496 |
| 0.4885 | 1.39 | 41000 | 0.6189 | 0.7529 |
| 0.4909 | 1.43 | 42000 | 0.6189 | 0.7473 |
| 0.4894 | 1.46 | 43000 | 0.6314 | 0.7433 |
| 0.4912 | 1.49 | 44000 | 0.6184 | 0.7446 |
| 0.4851 | 1.53 | 45000 | 0.6258 | 0.7461 |
| 0.4879 | 1.56 | 46000 | 0.6286 | 0.7480 |
| 0.4907 | 1.6 | 47000 | 0.6196 | 0.7512 |
| 0.4884 | 1.63 | 48000 | 0.6157 | 0.7526 |
| 0.4755 | 1.66 | 49000 | 0.6056 | 0.7591 |
| 0.4811 | 1.7 | 50000 | 0.5977 | 0.7582 |
| 0.4787 | 1.73 | 51000 | 0.5915 | 0.7621 |
| 0.4779 | 1.77 | 52000 | 0.6014 | 0.7583 |
| 0.4767 | 1.8 | 53000 | 0.6041 | 0.7623 |
| 0.4737 | 1.83 | 54000 | 0.6093 | 0.7563 |
| 0.4836 | 1.87 | 55000 | 0.6001 | 0.7568 |
| 0.4765 | 1.9 | 56000 | 0.6109 | 0.7601 |
| 0.4776 | 1.94 | 57000 | 0.6046 | 0.7599 |
| 0.4769 | 1.97 | 58000 | 0.5970 | 0.7568 |
| 0.4654 | 2.0 | 59000 | 0.6147 | 0.7614 |
| 0.4144 | 2.04 | 60000 | 0.6439 | 0.7566 |
| 0.4101 | 2.07 | 61000 | 0.6373 | 0.7527 |
| 0.4192 | 2.11 | 62000 | 0.6136 | 0.7575 |
| 0.4128 | 2.14 | 63000 | 0.6283 | 0.7560 |
| 0.4204 | 2.17 | 64000 | 0.6187 | 0.7625 |
| 0.4114 | 2.21 | 65000 | 0.6127 | 0.7621 |
| 0.4097 | 2.24 | 66000 | 0.6188 | 0.7626 |
| 0.4129 | 2.28 | 67000 | 0.6156 | 0.7639 |
| 0.4085 | 2.31 | 68000 | 0.6232 | 0.7616 |
| 0.4074 | 2.34 | 69000 | 0.6240 | 0.7605 |
| 0.409 | 2.38 | 70000 | 0.6153 | 0.7591 |
| 0.4046 | 2.41 | 71000 | 0.6375 | 0.7587 |
| 0.4117 | 2.45 | 72000 | 0.6145 | 0.7629 |
| 0.4002 | 2.48 | 73000 | 0.6279 | 0.7610 |
| 0.4042 | 2.51 | 74000 | 0.6176 | 0.7646 |
| 0.4055 | 2.55 | 75000 | 0.6277 | 0.7643 |
| 0.4021 | 2.58 | 76000 | 0.6196 | 0.7642 |
| 0.4081 | 2.62 | 77000 | 0.6127 | 0.7659 |
| 0.408 | 2.65 | 78000 | 0.6237 | 0.7638 |
| 0.3997 | 2.68 | 79000 | 0.6190 | 0.7636 |
| 0.4093 | 2.72 | 80000 | 0.6152 | 0.7648 |
| 0.4095 | 2.75 | 81000 | 0.6155 | 0.7627 |
| 0.4088 | 2.79 | 82000 | 0.6130 | 0.7641 |
| 0.4063 | 2.82 | 83000 | 0.6072 | 0.7646 |
| 0.3978 | 2.85 | 84000 | 0.6128 | 0.7662 |
| 0.4034 | 2.89 | 85000 | 0.6157 | 0.7627 |
| 0.4044 | 2.92 | 86000 | 0.6127 | 0.7661 |
| 0.403 | 2.96 | 87000 | 0.6126 | 0.7664 |
| 0.4033 | 2.99 | 88000 | 0.6144 | 0.7662 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
chrommium/sbert_large-finetuned-sent_in_news_sents | chrommium | 2021-12-03T16:18:40Z | 6 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: sbert_large-finetuned-sent_in_news_sents
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sbert_large-finetuned-sent_in_news_sents
This model is a fine-tuned version of [sberbank-ai/sbert_large_nlu_ru](https://huggingface.co/sberbank-ai/sbert_large_nlu_ru) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7056
- Accuracy: 0.7301
- F1: 0.5210
## Model examples
The model responds to the placeholder label X in a news text. For example:
- For 'Газпром отозвал лицензию у X, сообщает Финам' the model will return the negative label -3
- For 'X отозвал лицензию у Сбербанка, сообщает Финам' the model will return the neutral label 0
- For 'Газпром отозвал лицензию у Сбербанка, сообщает X' the model will return the neutral label 0
- For 'X демонстрирует высокую прибыль, сообщает Финам' the model will return the positive label 1
## Simple example of News preprocessing for Russian before BERT
```python
from natasha import (
Segmenter,
MorphVocab,
NewsEmbedding,
NewsMorphTagger,
NewsSyntaxParser,
NewsNERTagger,
PER,
NamesExtractor,
Doc
)
segmenter = Segmenter()
emb = NewsEmbedding()
morph_tagger = NewsMorphTagger(emb)
syntax_parser = NewsSyntaxParser(emb)
morph_vocab = MorphVocab()
### ----------------------------- key sentences block -----------------------------
def find_synax_tokens_with_order(doc, start, tokens, text_arr, full_str):
''' Finds all syntax tokens that correspond to the given set of plain tokens (found
for a particular NER span by the other functions).
Returns the found syntax tokens as a dictionary (the key is a token identifier made up
of the sentence number and the token number within the sentence).
Starts searching from the given position in the list of syntax tokens and additionally
returns the stop position from which the search for the next NER span should continue.
'''
found = []
in_str = False
str_candidate = ''
str_counter = 0
if len(text_arr) == 0:
return [], start
for i in range(start, len(doc.syntax.tokens)):
t = doc.syntax.tokens[i]
if in_str:
str_counter += 1
if str_counter < len(text_arr) and t.text == text_arr[str_counter]:
str_candidate += t.text
found.append(t)
if str_candidate == full_str:
return found, i+1
else:
in_str = False
str_candidate = ''
str_counter = 0
found = []
if t.text == text_arr[0]:
found.append(t)
str_candidate = t.text
if str_candidate == full_str:
return found, i+1
in_str = True
return [], len(doc.syntax.tokens)
def find_tokens_in_diap_with_order(doc, start_token, diap):
''' Finds all plain tokens (without syntax information) that fall into the given range.
These ranges come from the NER markup.
Returns the found tokens both as an array of tokens and as an array of strings.
Starts searching from the given position in the string and additionally returns the stop position.
'''
found_tokens = []
found_text = []
full_str = ''
next_i = 0
for i in range(start_token, len(doc.tokens)):
t = doc.tokens[i]
if t.start > diap[-1]:
next_i = i
break
if t.start in diap:
found_tokens.append(t)
found_text.append(t.text)
full_str += t.text
return found_tokens, found_text, full_str, next_i
def add_found_arr_to_dict(found, dict_dest):
for synt in found:
dict_dest.update({synt.id: synt})
return dict_dest
def make_all_syntax_dict(doc):
all_syntax = {}
for synt in doc.syntax.tokens:
all_syntax.update({synt.id: synt})
return all_syntax
def is_consiquent(id_1, id_2):
''' Checks whether two tokens follow each other without a gap, based on their keys. '''
id_1_list = id_1.split('_')
id_2_list = id_2.split('_')
if id_1_list[0] != id_2_list[0]:
return False
return int(id_1_list[1]) + 1 == int(id_2_list[1])
def replace_found_to(found, x_str):
''' Replaces a sequence of NER tokens with a placeholder. '''
prev_id = '0_0'
for synt in found:
if is_consiquent(prev_id, synt.id):
synt.text = ''
else:
synt.text = x_str
prev_id = synt.id
def analyze_doc(text):
''' Runs Natasha to analyze the document. '''
doc = Doc(text)
doc.segment(segmenter)
doc.tag_morph(morph_tagger)
doc.parse_syntax(syntax_parser)
ner_tagger = NewsNERTagger(emb)
doc.tag_ner(ner_tagger)
return doc
def find_non_sym_syntax_short(entity_name, doc, add_X=False, x_str='X'):
''' Looks for the given entity in the text among all NER spans (possibly in a different grammatical form).
entity_name - the entity we are looking for;
doc - the document preprocessed with Natasha;
add_X - whether to replace the entity with a placeholder;
x_str - the replacement text.
Returns:
all_found_syntax - a dictionary of all matching tokens forming the target entities, in which
the NER spans have been replaced with the placeholder if requested;
all_syntax - a dictionary of all tokens.
'''
all_found_syntax = {}
current_synt_number = 0
current_tok_number = 0
# iterate over all found NER spans
for span in doc.spans:
span.normalize(morph_vocab)
if span.type != 'ORG':
continue
diap = range(span.start, span.stop)
# build a dictionary of all syntax elements (key: id made of the sentence number and the in-sentence number)
all_syntax = make_all_syntax_dict(doc)
# find all plain tokens inside the NER span
found_tokens, found_text, full_str, current_tok_number = find_tokens_in_diap_with_order(doc, current_tok_number,
diap)
# from the found plain tokens, find all syntax tokens inside this NER span
found, current_synt_number = find_synax_tokens_with_order(doc, current_synt_number, found_tokens, found_text,
full_str)
# if the NER text matches the given entity, perform the replacement
if entity_name.find(span.normal) >= 0 or span.normal.find(entity_name) >= 0:
if add_X:
replace_found_to(found, x_str)
all_found_syntax = add_found_arr_to_dict(found, all_found_syntax)
return all_found_syntax, all_syntax
def key_sentences(all_found_syntax):
''' Finds the numbers of the sentences that contain the target NER. '''
key_sent_numb = {}
for synt in all_found_syntax.keys():
key_sent_numb.update({synt.split('_')[0]: 1})
return key_sent_numb
def openinig_punct(x):
opennings = ['«', '(']
return x in opennings
def key_sentences_str(entitiy_name, doc, add_X=False, x_str='X', return_all=True):
''' Builds the final text containing only the sentences that mention the key entity;
if requested, that entity is replaced with a placeholder.
'''
all_found_syntax, all_syntax = find_non_sym_syntax_short(entitiy_name, doc, add_X, x_str)
key_sent_numb = key_sentences(all_found_syntax)
str_ret = ''
for s in all_syntax.keys():
if (s.split('_')[0] in key_sent_numb.keys()) or (return_all):
to_add = all_syntax[s]
if s in all_found_syntax.keys():
to_add = all_found_syntax[s]
else:
if to_add.rel == 'punct' and not openinig_punct(to_add.text):
str_ret = str_ret.rstrip()
str_ret += to_add.text
if (not openinig_punct(to_add.text)) and (to_add.text != ''):
str_ret += ' '
return str_ret
### ----------------------------- key entities block -----------------------------
def find_synt(doc, synt_id):
for synt in doc.syntax.tokens:
if synt.id == synt_id:
return synt
return None
def is_subj(doc, synt, recursion_list=[]):
''' Reports whether a word is the subject or part of a compound subject. '''
if synt.rel == 'nsubj':
return True
if synt.rel == 'appos':
found_head = find_synt(doc, synt.head_id)
if found_head.id in recursion_list:
return False
return is_subj(doc, found_head, recursion_list + [synt.id])
return False
def find_subjects_in_syntax(doc):
''' Returns a dictionary that records, for each NER span, whether it is
the subject of its sentence.
Keys are the NER start positions; values mark whether the span was a subject (or appos).
'''
found_subjects = {}
current_synt_number = 0
current_tok_number = 0
for span in doc.spans:
span.normalize(morph_vocab)
if span.type != 'ORG':
continue
found_subjects.update({span.start: 0})
diap = range(span.start, span.stop)
found_tokens, found_text, full_str, current_tok_number = find_tokens_in_diap_with_order(doc,
current_tok_number,
diap)
found, current_synt_number = find_synax_tokens_with_order(doc, current_synt_number, found_tokens,
found_text, full_str)
found_subjects.update({span.start: 0})
for synt in found:
if is_subj(doc, synt):
found_subjects.update({span.start: 1})
return found_subjects
def entity_weight(lst, c=1):
return c*lst[0]+lst[1]
def determine_subject(found_subjects, doc, new_agency_list, return_best=True, threshold=0.75):
    ''' Determines the key NER and the list of the most important NERs, based on how many times
    each of them occurs in the text overall and how many times it occurs as the subject. '''
objects_arr = []
objects_arr_ners = []
should_continue = False
for span in doc.spans:
should_continue = False
span.normalize(morph_vocab)
if span.type != 'ORG':
continue
if span.normal in new_agency_list:
continue
for i in range(len(objects_arr)):
t, lst = objects_arr[i]
if t.find(span.normal) >= 0:
lst[0] += 1
lst[1] += found_subjects[span.start]
should_continue = True
break
if span.normal.find(t) >= 0:
objects_arr[i] = (span.normal, [lst[0]+1, lst[1]+found_subjects[span.start]])
should_continue = True
break
if should_continue:
continue
objects_arr.append((span.normal, [1, found_subjects[span.start]]))
objects_arr_ners.append(span.normal)
max_weight = 0
opt_ent = 0
for obj in objects_arr:
t, lst = obj
w = entity_weight(lst)
if max_weight < w:
max_weight = w
opt_ent = t
if not return_best:
return opt_ent, objects_arr_ners
bests = []
for obj in objects_arr:
t, lst = obj
w = entity_weight(lst)
if max_weight*threshold < w:
bests.append(t)
return opt_ent, bests
text = '''В офисах Сбера начали тестировать технологию помощи посетителям в экстренных ситуациях. «Зеленая кнопка» будет
в зонах круглосуточного обслуживания офисов банка в Воронеже, Санкт-Петербурге, Подольске, Пскове, Орле и Ярославле.
В них находятся стенды с сенсорными кнопками, обеспечивающие связь с операторами центра мониторинга службы безопасности
банка. Получив сигнал о помощи, оператор центра может подключиться к объекту по голосовой связи. С помощью камер
видеонаблюдения он оценит обстановку и при необходимости вызовет полицию или скорую помощь. «Зеленой кнопкой» можно
воспользоваться в нерабочее для отделения время, если возникла угроза жизни или здоровью. В остальных случаях помочь
клиентам готовы сотрудники отделения банка. «Одно из направлений нашей работы в области ESG и устойчивого развития
— это забота об обществе. И здоровье людей как высшая ценность является его основой. Поэтому задача банка в области
безопасности гораздо масштабнее, чем обеспечение только финансовой безопасности клиентов. Этот пилотный проект
приурочен к 180-летию Сбербанка: мы хотим, чтобы, приходя в банк, клиент чувствовал, что его жизнь и безопасность
— наша ценность», — отметил заместитель председателя правления Сбербанка Станислав Кузнецов.'''
doc = analyze_doc(text)
key_entity = determine_subject(find_subjects_in_syntax(doc), doc, [])[0]
text_for_model = key_sentences_str(key_entity, doc, add_X=True, x_str='X', return_all=False)
```
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
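A minimal sketch of roughly equivalent `TrainingArguments`; the output directory and any option not listed above are assumptions, and the Adam betas/epsilon above are the library defaults:
```python
from transformers import TrainingArguments

# Hedged sketch: maps the hyperparameter list above onto TrainingArguments.
# output_dir is an assumption, not taken from this card.
training_args = TrainingArguments(
    output_dir="./results",            # assumed
    learning_rate=2e-05,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=6,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the defaults used by the Trainer optimizer.
)
```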
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 176 | 0.9504 | 0.6903 | 0.2215 |
| No log | 2.0 | 352 | 0.9065 | 0.7159 | 0.4760 |
| 0.8448 | 3.0 | 528 | 0.9687 | 0.7045 | 0.4774 |
| 0.8448 | 4.0 | 704 | 1.2436 | 0.7045 | 0.4686 |
| 0.8448 | 5.0 | 880 | 1.4809 | 0.7273 | 0.4630 |
| 0.2074 | 6.0 | 1056 | 1.5866 | 0.7330 | 0.5185 |
| 0.2074 | 7.0 | 1232 | 1.7056 | 0.7301 | 0.5210 |
| 0.2074 | 8.0 | 1408 | 1.6982 | 0.7415 | 0.5056 |
| 0.0514 | 9.0 | 1584 | 1.8088 | 0.7273 | 0.5203 |
| 0.0514 | 10.0 | 1760 | 1.9250 | 0.7102 | 0.4879 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
minemile/distilbert-base-uncased-finetuned-imdb
|
minemile
| 2021-12-03T15:15:45Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4718
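Since this is a masked-language model, a quick sanity check is the `fill-mask` pipeline; a minimal sketch (the example sentence is an illustrative assumption, not taken from the card):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a fill-mask pipeline.
mask_filler = pipeline("fill-mask", model="minemile/distilbert-base-uncased-finetuned-imdb")

# [MASK] is DistilBERT's mask token; the sentence below is only an example.
for pred in mask_filler("This movie was an absolute [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```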
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.707 | 1.0 | 157 | 2.4883 |
| 2.572 | 2.0 | 314 | 2.4240 |
| 2.5377 | 3.0 | 471 | 2.4355 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
staceythompson/autonlp-new-text-classification-38319698
|
staceythompson
| 2021-12-03T14:06:55Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autonlp",
"unk",
"dataset:staceythompson/autonlp-data-new-text-classification",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- staceythompson/autonlp-data-new-text-classification
co2_eq_emissions: 2.0318857468309206
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 38319698
- CO2 Emissions (in grams): 2.0318857468309206
## Validation Metrics
- Loss: 0.04461582377552986
- Accuracy: 0.9909255898366606
- Macro F1: 0.9951842095089771
- Micro F1: 0.9909255898366606
- Weighted F1: 0.9909493945587176
- Macro Precision: 0.9942196531791907
- Micro Precision: 0.9909255898366606
- Weighted Precision: 0.9911878560263526
- Macro Recall: 0.9962686567164181
- Micro Recall: 0.9909255898366606
- Weighted Recall: 0.9909255898366606
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/staceythompson/autonlp-new-text-classification-38319698
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("staceythompson/autonlp-new-text-classification-38319698", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("staceythompson/autonlp-new-text-classification-38319698", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
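Continuing the snippet above, the raw logits can be mapped back to a label name via `model.config.id2label`; a short sketch (the actual label names depend on the AutoNLP training data and are not listed in this card):
```python
import torch

# Convert logits to probabilities and look up the best label in the model config.
probs = torch.softmax(outputs.logits, dim=-1)
pred_id = int(probs.argmax(dim=-1))
print(model.config.id2label[pred_id], float(probs[0, pred_id]))
```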
|
admin-63/eToro
|
admin-63
| 2021-12-03T13:23:09Z | 0 | 1 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
♕〖𖡦الس௸اهر𖡦〗♕
|
rtoguchi/t5-small-finetuned-en-to-ro-fp16_off
|
rtoguchi
| 2021-12-03T13:18:24Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-en-to-ro-fp16_off
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.3056
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-fp16_off
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4078
- Bleu: 7.3056
- Gen Len: 18.2556
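For inference, a translation pipeline can be used; a hedged sketch, assuming the checkpoint keeps `t5-small`'s `task_specific_params` (which supply the "translate English to Romanian:" prefix) and using an example sentence of our own:
```python
from transformers import pipeline

translator = pipeline(
    "translation_en_to_ro",
    model="rtoguchi/t5-small-finetuned-en-to-ro-fp16_off",
)
# The pipeline prepends the T5 task prefix defined in the model config.
print(translator("The house is wonderful.", max_length=64)[0]["translation_text"])
```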
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.6037 | 1.0 | 7629 | 1.4078 | 7.3056 | 18.2556 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
marcosscarpim/t5-small-finetuned-en-to-ro
|
marcosscarpim
| 2021-12-03T11:44:04Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-en-to-ro
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.3228
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4088
- Bleu: 7.3228
- Gen Len: 18.2581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 0.5959 | 0.4 | 30516 | 1.4088 | 7.3228 | 18.2581 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Gastron/lp-initial-aed-short
|
Gastron
| 2021-12-03T10:00:50Z | 4 | 0 |
speechbrain
|
[
"speechbrain",
"automatic-speech-recognition",
"Attention",
"pytorch",
"fi",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language: "fi"
thumbnail:
tags:
- automatic-speech-recognition
- Attention
- pytorch
- speechbrain
metrics:
- wer
- cer
---
# CRDNN with Attention trained on LP
This is an initial model with a partly wrong configuration, provided just to show an initial example.
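With that caveat in mind, loading a SpeechBrain attention-based ASR model usually looks like the sketch below; this assumes the repository follows the standard `EncoderDecoderASR` layout, and the audio path is an illustrative placeholder:
```python
from speechbrain.pretrained import EncoderDecoderASR

# Assumes the repo ships the hyperparams/checkpoint files expected by EncoderDecoderASR.
asr_model = EncoderDecoderASR.from_hparams(
    source="Gastron/lp-initial-aed-short",
    savedir="pretrained_models/lp-initial-aed-short",
)
print(asr_model.transcribe_file("example_finnish.wav"))  # placeholder audio file
```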
|
aretw0/t5-small-finetuned-en-to-ro-dataset_20-input_64
|
aretw0
| 2021-12-03T00:53:06Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-en-to-ro-dataset_20-input_64
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 8.6652
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-dataset_20-input_64
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4335
- Bleu: 8.6652
- Gen Len: 18.2596
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.6351 | 1.0 | 7629 | 1.4335 | 8.6652 | 18.2596 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aretw0/t5-small-finetuned-en-to-ro-dataset_20
|
aretw0
| 2021-12-03T00:48:42Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-en-to-ro-dataset_20
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.3293
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-dataset_20
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4052
- Bleu: 7.3293
- Gen Len: 18.2556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.6029 | 1.0 | 7629 | 1.4052 | 7.3293 | 18.2556 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
alexrfelicio/t5-small-finetuned8-en-to-de
|
alexrfelicio
| 2021-12-03T00:13:25Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
model-index:
- name: t5-small-finetuned8-en-to-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned8-en-to-de
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 136 | 3.6717 | 3.9127 | 4.0207 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
alexrfelicio/t5-small-finetuned32-en-to-de
|
alexrfelicio
| 2021-12-02T22:39:31Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
model-index:
- name: t5-small-finetuned32-en-to-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned32-en-to-de
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 136 | 1.4226 | 21.9554 | 17.8089 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
alexrfelicio/t5-small-finetuned128-en-to-de
|
alexrfelicio
| 2021-12-02T21:27:03Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
model-index:
- name: t5-small-finetuned128-en-to-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned128-en-to-de
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
fse/paragram-300-ws353
|
fse
| 2021-12-02T21:08:07Z | 0 | 0 | null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- glove
- gensim
- fse
---
# Paragram Embeddings
300-dimensional Paragram embeddings tuned on the WordSim353 dataset
Read more:
* https://www.cs.cmu.edu/~jwieting/
|
fse/paragram-300-sl999
|
fse
| 2021-12-02T21:03:05Z | 0 | 0 | null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- glove
- gensim
- fse
---
# Paragram Embeddings
300-dimensional Paragram embeddings tuned on the SimLex999 dataset
Read more:
* https://www.cs.cmu.edu/~jwieting/
|
fse/paragram-25
|
fse
| 2021-12-02T21:00:31Z | 0 | 0 | null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- glove
- gensim
- fse
---
# Paragram Embeddings
Towards Universal Paraphrastic Sentence Embeddings (25 dimensions)
Read more:
* https://www.cs.cmu.edu/~jwieting/
* https://www.cs.cmu.edu/~jwieting/wieting2016ICLR.pdf
|
gayanin/bart-mlm-pubmed-medterm
|
gayanin
| 2021-12-02T20:51:43Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-mlm-pubmed-medterm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-mlm-pubmed-medterm
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Rouge2 Precision: 0.985
- Rouge2 Recall: 0.7208
- Rouge2 Fmeasure: 0.8088
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:------:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.0018 | 1.0 | 13833 | 0.0003 | 0.985 | 0.7208 | 0.8088 |
| 0.0014 | 2.0 | 27666 | 0.0006 | 0.9848 | 0.7207 | 0.8086 |
| 0.0009 | 3.0 | 41499 | 0.0002 | 0.9848 | 0.7207 | 0.8086 |
| 0.0007 | 4.0 | 55332 | 0.0002 | 0.985 | 0.7208 | 0.8088 |
| 0.0006 | 5.0 | 69165 | 0.0001 | 0.9848 | 0.7207 | 0.8087 |
| 0.0001 | 6.0 | 82998 | 0.0002 | 0.9846 | 0.7206 | 0.8086 |
| 0.0009 | 7.0 | 96831 | 0.0001 | 0.9848 | 0.7208 | 0.8087 |
| 0.0 | 8.0 | 110664 | 0.0000 | 0.9848 | 0.7207 | 0.8087 |
| 0.0001 | 9.0 | 124497 | 0.0000 | 0.985 | 0.7208 | 0.8088 |
| 0.0 | 10.0 | 138330 | 0.0000 | 0.985 | 0.7208 | 0.8088 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
huggingtweets/angiejolielive
|
huggingtweets
| 2021-12-02T20:17:53Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/angiejolielive/1638476268574/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/817164380081180673/TJnt3Lxe_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Angelina Jolie</div>
<div style="text-align: center; font-size: 14px;">@angiejolielive</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Angelina Jolie.
| Data | Angelina Jolie |
| --- | --- |
| Tweets downloaded | 1118 |
| Retweets | 71 |
| Short tweets | 45 |
| Tweets kept | 1002 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3fb12gam/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @angiejolielive's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2g9ynpkt) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2g9ynpkt/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/angiejolielive')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
fse/fasttext-crawl-subwords-300
|
fse
| 2021-12-02T20:06:16Z | 0 | 0 | null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- glove
- gensim
- fse
---
# Fasttext
2 million word vectors trained with subword information on Common Crawl (600B tokens).
Read more:
* https://fasttext.cc/docs/en/english-vectors.html
|
kuppuluri/telugu_bertu_ner
|
kuppuluri
| 2021-12-02T18:15:04Z | 26 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
# Named Entity Recognition Model for Telugu
#### How to use
Use the script below from your Python terminal, as the web interface for inference has a few encoding issues with Telugu.
PS: If you find my model useful, I would appreciate a note from you as it would encourage me to continue improving it and also add new models.
```python
from simpletransformers.ner import NERModel
model = NERModel('bert',
'kuppuluri/telugu_bertu_ner',
labels=[
'B-PERSON', 'I-ORG', 'B-ORG', 'I-LOC', 'B-MISC',
'I-MISC', 'I-PERSON', 'B-LOC', 'O'
],
use_cuda=False,
args={"use_multiprocessing": False})
text = "విరాట్ కోహ్లీ కూడా అదే నిర్లక్ష్యాన్ని ప్రదర్శించి కేవలం ఒక పరుగుకే రనౌటై పెవిలియన్ చేరాడు ."
results = model.predict([text])
```
## Training data
Training data is from https://github.com/anikethjr/NER_Telugu
## Eval results
On the test set my results were
eval_loss = 0.0004407190410447974
f1_score = 0.999519076627124
precision = 0.9994389677005691
recall = 0.9995991983967936
|
rtoguchi/t5-small-finetuned-en-to-ro-weight_decay_0.001
|
rtoguchi
| 2021-12-02T17:46:55Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-en-to-ro-weight_decay_0.001
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.3524
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-weight_decay_0.001
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4509
- Bleu: 7.3524
- Gen Len: 18.2581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.6488 | 1.0 | 7629 | 1.4509 | 7.3524 | 18.2581 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
tyoyo/t5-base-TEDxJP-11body-0context
|
tyoyo
| 2021-12-02T17:37:36Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:te_dx_jp",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
datasets:
- te_dx_jp
model-index:
- name: t5-base-TEDxJP-11body-0context
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-TEDxJP-11body-0context
This model is a fine-tuned version of [sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese) on the te_dx_jp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8068
- Wer: 0.1976
- Mer: 0.1904
- Wil: 0.2816
- Wip: 0.7184
- Hits: 602335
- Substitutions: 75050
- Deletions: 39435
- Insertions: 27185
- Cer: 0.1625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Mer | Wil | Wip | Hits | Substitutions | Deletions | Insertions | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:------:|:------:|:-------------:|:---------:|:----------:|:------:|
| 0.8909 | 1.0 | 746 | 0.7722 | 0.3120 | 0.2861 | 0.3989 | 0.6011 | 558138 | 99887 | 58795 | 64983 | 0.2652 |
| 0.6786 | 2.0 | 1492 | 0.7021 | 0.2226 | 0.2122 | 0.3069 | 0.6931 | 592242 | 78773 | 45805 | 34978 | 0.1862 |
| 0.5627 | 3.0 | 2238 | 0.6996 | 0.2104 | 0.2016 | 0.2942 | 0.7058 | 597381 | 76593 | 42846 | 31392 | 0.1752 |
| 0.489 | 4.0 | 2984 | 0.7161 | 0.2030 | 0.1952 | 0.2865 | 0.7135 | 599808 | 75155 | 41857 | 28506 | 0.1684 |
| 0.4355 | 5.0 | 3730 | 0.7389 | 0.2000 | 0.1924 | 0.2837 | 0.7163 | 601815 | 75247 | 39758 | 28335 | 0.1651 |
| 0.3836 | 6.0 | 4476 | 0.7537 | 0.1992 | 0.1918 | 0.2829 | 0.7171 | 601846 | 75046 | 39928 | 27815 | 0.1640 |
| 0.3617 | 7.0 | 5222 | 0.7743 | 0.1995 | 0.1918 | 0.2832 | 0.7168 | 602287 | 75268 | 39265 | 28445 | 0.1642 |
| 0.3258 | 8.0 | 5968 | 0.7907 | 0.1971 | 0.1899 | 0.2809 | 0.7191 | 602800 | 74887 | 39133 | 27258 | 0.1620 |
| 0.3225 | 9.0 | 6714 | 0.8035 | 0.1981 | 0.1908 | 0.2823 | 0.7177 | 602418 | 75372 | 39030 | 27625 | 0.1630 |
| 0.3162 | 10.0 | 7460 | 0.8068 | 0.1976 | 0.1904 | 0.2816 | 0.7184 | 602335 | 75050 | 39435 | 27185 | 0.1625 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
fse/word2vec-google-news-300
|
fse
| 2021-12-02T16:46:03Z | 0 | 38 | null |
[
"glove",
"gensim",
"fse",
"arxiv:1301.3781",
"arxiv:1310.4546",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- glove
- gensim
- fse
---
# Word2Vec
Pre-trained vectors trained on a part of the Google News dataset (about 100 billion words). The model contains 300-dimensional vectors for 3 million words and phrases. The phrases were obtained using a simple data-driven approach described in 'Distributed Representations of Words and Phrases and their Compositionality'
Read more:
* https://code.google.com/archive/p/word2vec/
* https://arxiv.org/abs/1301.3781
* https://arxiv.org/abs/1310.4546
* https://www.microsoft.com/en-us/research/publication/linguistic-regularities-in-continuous-space-word-representations/?from=http%3A%2F%2Fresearch.microsoft.com%2Fpubs%2F189726%2Frvecs.pdf
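The same vectors are also distributed through `gensim-data`; a minimal loading sketch (assuming the gensim-data mirror of these vectors rather than the files in this repository):
```python
import gensim.downloader as api

# Downloads the vectors on first use (~1.6 GB) and caches them locally.
wv = api.load("word2vec-google-news-300")
print(wv.most_similar("king", topn=3))
print(wv.similarity("coffee", "tea"))
```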
|
fse/glove-wiki-gigaword-50
|
fse
| 2021-12-02T16:45:04Z | 0 | 1 | null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- glove
- gensim
- fse
---
# GloVe Wikipedia + Gigaword
Pre-trained GloVe vectors (50 dimensions) based on Wikipedia 2014 + Gigaword 5: 6B tokens, 400K vocab, uncased.
Read more:
* https://nlp.stanford.edu/projects/glove/
* https://nlp.stanford.edu/pubs/glove.pdf
|
fse/glove-wiki-gigaword-100
|
fse
| 2021-12-02T16:42:45Z | 0 | 1 | null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- glove
- gensim
- fse
---
# GloVe Wikipedia + Gigaword
Pre-trained GloVe vectors (100 dimensions) based on Wikipedia 2014 + Gigaword 5: 6B tokens, 400K vocab, uncased.
Read more:
* https://nlp.stanford.edu/projects/glove/
* https://nlp.stanford.edu/pubs/glove.pdf
|
fse/glove-twitter-100
|
fse
| 2021-12-02T16:39:20Z | 0 | 0 | null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- glove
- gensim
- fse
---
# Glove Twitter
Pre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.
Read more:
* https://nlp.stanford.edu/projects/glove/
* https://nlp.stanford.edu/pubs/glove.pdf
|
huggingtweets/derspiegel
|
huggingtweets
| 2021-12-02T16:13:08Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/derspiegel/1638461583796/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1214723509521387520/7UENeEVp_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">DER SPIEGEL</div>
<div style="text-align: center; font-size: 14px;">@derspiegel</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from DER SPIEGEL.
| Data | DER SPIEGEL |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 478 |
| Short tweets | 6 |
| Tweets kept | 2766 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2uv8zr0k/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @derspiegel's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/i3q4xu9o) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/i3q4xu9o/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/derspiegel')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
chandank/bart-base-finetuned-kaggglenews-batch8-epochs3
|
chandank
| 2021-12-02T15:10:13Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-base-finetuned-kaggglenews-batch8-epochs3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-kaggglenews-batch8-epochs3
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5635
- Rouge1: 28.2335
- Rouge2: 16.0201
- Rougel: 24.0315
- Rougelsum: 25.647
- Gen Len: 20.0
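Given the ROUGE metrics above, the model is presumably a news summarizer; a hedged usage sketch (the input text is a placeholder, not taken from the card):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="chandank/bart-base-finetuned-kaggglenews-batch8-epochs3",
)
article = "Replace this with the full text of a news article."  # placeholder input
print(summarizer(article, max_length=60)[0]["summary_text"])
```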
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 495 | 1.5635 | 28.2335 | 16.0201 | 24.0315 | 25.647 | 20.0 |
| 1.5345 | 2.0 | 990 | 1.5635 | 28.2335 | 16.0201 | 24.0315 | 25.647 | 20.0 |
| 1.531 | 3.0 | 1485 | 1.5635 | 28.2335 | 16.0201 | 24.0315 | 25.647 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
emrecan/convbert-base-turkish-mc4-cased-allnli_tr
|
emrecan
| 2021-12-02T14:57:01Z | 97 | 2 |
transformers
|
[
"transformers",
"pytorch",
"convbert",
"text-classification",
"zero-shot-classification",
"nli",
"tr",
"dataset:nli_tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2022-03-02T23:29:05Z |
---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
metrics:
- accuracy
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convbert-base-turkish-mc4-cased_allnli_tr
This model is a fine-tuned version of [dbmdz/convbert-base-turkish-mc4-cased](https://huggingface.co/dbmdz/convbert-base-turkish-mc4-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5541
- Accuracy: 0.8111
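Because the card sets `pipeline_tag: zero-shot-classification`, the model can be used directly with the zero-shot pipeline; a minimal sketch reusing the widget example from the card metadata:
```python
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="emrecan/convbert-base-turkish-mc4-cased-allnli_tr",
)
# Example text and candidate labels come from the widget section of this card.
print(classifier("Dolar yükselmeye devam ediyor.",
                 candidate_labels=["ekonomi", "siyaset", "spor"]))
```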
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7338 | 0.03 | 1000 | 0.6722 | 0.7236 |
| 0.603 | 0.07 | 2000 | 0.6465 | 0.7399 |
| 0.5605 | 0.1 | 3000 | 0.5801 | 0.7728 |
| 0.55 | 0.14 | 4000 | 0.5994 | 0.7626 |
| 0.529 | 0.17 | 5000 | 0.5720 | 0.7697 |
| 0.5196 | 0.2 | 6000 | 0.5692 | 0.7769 |
| 0.5117 | 0.24 | 7000 | 0.5725 | 0.7785 |
| 0.5044 | 0.27 | 8000 | 0.5532 | 0.7787 |
| 0.5016 | 0.31 | 9000 | 0.5546 | 0.7812 |
| 0.5031 | 0.34 | 10000 | 0.5461 | 0.7870 |
| 0.4949 | 0.37 | 11000 | 0.5725 | 0.7826 |
| 0.4894 | 0.41 | 12000 | 0.5419 | 0.7933 |
| 0.4796 | 0.44 | 13000 | 0.5278 | 0.7914 |
| 0.4795 | 0.48 | 14000 | 0.5193 | 0.7953 |
| 0.4713 | 0.51 | 15000 | 0.5534 | 0.7771 |
| 0.4738 | 0.54 | 16000 | 0.5098 | 0.8039 |
| 0.481 | 0.58 | 17000 | 0.5244 | 0.7958 |
| 0.4634 | 0.61 | 18000 | 0.5215 | 0.7972 |
| 0.465 | 0.65 | 19000 | 0.5129 | 0.7985 |
| 0.4624 | 0.68 | 20000 | 0.5062 | 0.8047 |
| 0.4597 | 0.71 | 21000 | 0.5114 | 0.8029 |
| 0.4571 | 0.75 | 22000 | 0.5070 | 0.8073 |
| 0.4602 | 0.78 | 23000 | 0.5115 | 0.7993 |
| 0.4552 | 0.82 | 24000 | 0.5085 | 0.8052 |
| 0.4538 | 0.85 | 25000 | 0.5118 | 0.7974 |
| 0.4517 | 0.88 | 26000 | 0.5036 | 0.8044 |
| 0.4517 | 0.92 | 27000 | 0.4930 | 0.8062 |
| 0.4413 | 0.95 | 28000 | 0.5307 | 0.7964 |
| 0.4483 | 0.99 | 29000 | 0.5195 | 0.7938 |
| 0.4036 | 1.02 | 30000 | 0.5238 | 0.8029 |
| 0.3724 | 1.05 | 31000 | 0.5125 | 0.8082 |
| 0.3777 | 1.09 | 32000 | 0.5099 | 0.8075 |
| 0.3753 | 1.12 | 33000 | 0.5172 | 0.8053 |
| 0.367 | 1.15 | 34000 | 0.5188 | 0.8053 |
| 0.3819 | 1.19 | 35000 | 0.5218 | 0.8046 |
| 0.363 | 1.22 | 36000 | 0.5202 | 0.7993 |
| 0.3794 | 1.26 | 37000 | 0.5240 | 0.8048 |
| 0.3749 | 1.29 | 38000 | 0.5026 | 0.8054 |
| 0.367 | 1.32 | 39000 | 0.5198 | 0.8075 |
| 0.3759 | 1.36 | 40000 | 0.5298 | 0.7993 |
| 0.3701 | 1.39 | 41000 | 0.5072 | 0.8091 |
| 0.3742 | 1.43 | 42000 | 0.5071 | 0.8098 |
| 0.3706 | 1.46 | 43000 | 0.5317 | 0.8037 |
| 0.3716 | 1.49 | 44000 | 0.5034 | 0.8052 |
| 0.3717 | 1.53 | 45000 | 0.5258 | 0.8012 |
| 0.3714 | 1.56 | 46000 | 0.5195 | 0.8050 |
| 0.3781 | 1.6 | 47000 | 0.5004 | 0.8104 |
| 0.3725 | 1.63 | 48000 | 0.5124 | 0.8113 |
| 0.3624 | 1.66 | 49000 | 0.5040 | 0.8094 |
| 0.3657 | 1.7 | 50000 | 0.4979 | 0.8111 |
| 0.3669 | 1.73 | 51000 | 0.4968 | 0.8100 |
| 0.3636 | 1.77 | 52000 | 0.5075 | 0.8079 |
| 0.36 | 1.8 | 53000 | 0.4985 | 0.8110 |
| 0.3624 | 1.83 | 54000 | 0.5125 | 0.8070 |
| 0.366 | 1.87 | 55000 | 0.4918 | 0.8117 |
| 0.3655 | 1.9 | 56000 | 0.5051 | 0.8109 |
| 0.3609 | 1.94 | 57000 | 0.5083 | 0.8105 |
| 0.3672 | 1.97 | 58000 | 0.5129 | 0.8085 |
| 0.3545 | 2.0 | 59000 | 0.5467 | 0.8109 |
| 0.2938 | 2.04 | 60000 | 0.5635 | 0.8049 |
| 0.29 | 2.07 | 61000 | 0.5781 | 0.8041 |
| 0.2992 | 2.11 | 62000 | 0.5470 | 0.8077 |
| 0.2957 | 2.14 | 63000 | 0.5765 | 0.8073 |
| 0.292 | 2.17 | 64000 | 0.5472 | 0.8106 |
| 0.2893 | 2.21 | 65000 | 0.5590 | 0.8085 |
| 0.2883 | 2.24 | 66000 | 0.5535 | 0.8064 |
| 0.2923 | 2.28 | 67000 | 0.5508 | 0.8095 |
| 0.2868 | 2.31 | 68000 | 0.5679 | 0.8098 |
| 0.2892 | 2.34 | 69000 | 0.5660 | 0.8057 |
| 0.292 | 2.38 | 70000 | 0.5494 | 0.8088 |
| 0.286 | 2.41 | 71000 | 0.5653 | 0.8085 |
| 0.2939 | 2.45 | 72000 | 0.5673 | 0.8070 |
| 0.286 | 2.48 | 73000 | 0.5600 | 0.8092 |
| 0.2844 | 2.51 | 74000 | 0.5508 | 0.8095 |
| 0.2913 | 2.55 | 75000 | 0.5645 | 0.8088 |
| 0.2859 | 2.58 | 76000 | 0.5677 | 0.8095 |
| 0.2892 | 2.62 | 77000 | 0.5598 | 0.8113 |
| 0.2898 | 2.65 | 78000 | 0.5618 | 0.8096 |
| 0.2814 | 2.68 | 79000 | 0.5664 | 0.8103 |
| 0.2917 | 2.72 | 80000 | 0.5484 | 0.8122 |
| 0.2907 | 2.75 | 81000 | 0.5522 | 0.8116 |
| 0.2896 | 2.79 | 82000 | 0.5540 | 0.8093 |
| 0.2907 | 2.82 | 83000 | 0.5469 | 0.8104 |
| 0.2882 | 2.85 | 84000 | 0.5471 | 0.8122 |
| 0.2878 | 2.89 | 85000 | 0.5532 | 0.8108 |
| 0.2858 | 2.92 | 86000 | 0.5511 | 0.8115 |
| 0.288 | 2.96 | 87000 | 0.5491 | 0.8111 |
| 0.2834 | 2.99 | 88000 | 0.5541 | 0.8111 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
project2you/wav2vec2-large-xlsr-53-demo-colab
|
project2you
| 2021-12-02T11:58:26Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-53-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6901
- Wer: 1.6299
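A minimal inference sketch with the ASR pipeline; the audio file is a placeholder, and like other XLSR models this one is assumed to expect 16 kHz input:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="project2you/wav2vec2-large-xlsr-53-demo-colab",
)
# The pipeline decodes and resamples the audio file to the model's expected sampling rate.
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```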
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 8.5034 | 3.42 | 400 | 3.5852 | 1.0 |
| 1.7853 | 6.83 | 800 | 0.7430 | 1.6774 |
| 0.5675 | 10.26 | 1200 | 0.6513 | 1.6330 |
| 0.3761 | 13.67 | 1600 | 0.6208 | 1.6081 |
| 0.2776 | 17.09 | 2000 | 0.6401 | 1.6081 |
| 0.2266 | 20.51 | 2400 | 0.6410 | 1.6295 |
| 0.1949 | 23.93 | 2800 | 0.6910 | 1.6287 |
| 0.1672 | 27.35 | 3200 | 0.6901 | 1.6299 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
tosin/dialogpt_sv
|
tosin
| 2021-12-02T10:04:54Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"en",
"dataset:GDC",
"arxiv:2110.06273",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
language:
- en
tags:
- conversational
- transformers
datasets:
- GDC
metrics:
- perplexity
license: cc-by-4.0
widget:
- text: "Jag ska fika."
---
## DialoGPT_SV
This is a fine-tuned version of DialoGPT (medium) on the Swedish Gothenburg Dialogue Corpus (GDC). It is intended to be used as a Swedish conversational system. The GDC dataset it's trained on is limited in scope, as it comes from transcriptions of dialogues covering about 25 different social activities, including debates, academic seminars and other situations. The dataset has about 108K turns.
The perplexities achieved on the test and validation sets (from an 80:10:10 split) are 27.81 and 27.81, respectively. More information about the original pre-trained model can be found [here](https://huggingface.co/microsoft/DialoGPT-medium)
* Generation example from an interactive environment:
|Role | Response |
|---------|------------|
|User | Jag ska fika |
|Bot | ja det är ju så att jag kanske inte kan få den att göra det |
|User | tack så mycket |
|Bot | tack tack ses vi om sex vecko igen |
Please find the information about preprocessing, training and full details of DialoGPT in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT)
The paper for this work was accepted at the Northern Lights Deep Learning (NLDL) conference 2022. Arxiv paper: [https://arxiv.org/pdf/2110.06273.pdf](https://arxiv.org/pdf/2110.06273.pdf)
### How to use
Now we are ready to try out how the model works as a chatting partner!
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("tosin/dialogpt_sv")
model = AutoModelForCausalLM.from_pretrained("tosin/dialogpt_sv")
# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    # pretty-print the last output tokens from the bot
    print("Swedish_GDC_Bot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
tosin/dialogpt_mwoz
|
tosin
| 2021-12-02T09:32:03Z | 52 | 6 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"en",
"dataset:multi_woz_v22",
"arxiv:2110.06273",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
language:
- en
license: cc-by-4.0
tags:
- conversational
- transformers
datasets:
- multi_woz_v22
metrics:
- perplexity
widget:
- text: "I would like to have breakfast."
---
## DialoGPT_MWOZ
This is a fine-tuned model of DialoGPT (medium) on the MultiWOZ v2.2 dataset. It is intended to be used as a conversational system.
The dataset it's trained on is limited in scope, as it covers only certain domains such as restaurants, hotels, taxis, trains, hospitals and the police.
The perplexity achieved on the test and validation sets are 10.25 and 10.71, respectively. More information about the original pre-trained model can be found [here](https://huggingface.co/microsoft/DialoGPT-medium)
* Generation example from an interactive environment:
|Role | Response |
|---------|------------|
|User | I would like to have breakfast. |
|Bot | There are several options for Chinese in the city centre. Are you looking for a particular price range? |
|User | I'm looking for free food |
|Bot | There are 4 Chinese restaurants that are in the cheap price range. 3 in centre and one in north. |
|User | Not good enough |
|Bot | How about Charlie Chan? It is a Chinese restaurant in the centre of town. |
Please find the information about preprocessing, training and full details of DialoGPT in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT)
The paper for this work was accepted at the Northern Lights Deep Learning (NLDL) conference 2022. Arxiv paper: [https://arxiv.org/pdf/2110.06273.pdf](https://arxiv.org/pdf/2110.06273.pdf)
### How to use
Now we are ready to try out how the model works as a chatting partner!
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("tosin/dialogpt_mwoz")
model = AutoModelForCausalLM.from_pretrained("tosin/dialogpt_mwoz")
# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    # pretty-print the last output tokens from the bot
    print("DialoGPT_MWOZ_Bot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
chandank/bart-base-finetuned-kaggglenews-batch8
|
chandank
| 2021-12-02T09:16:30Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-base-finetuned-kaggglenews-batch8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-kaggglenews-batch8
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|:---------:|:-------:|
| No log | 1.0 | 495 | 1.6409 | 27.9647 | 15.4352 | 23.611 | 25.107 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
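The card does not include a usage example; below is a minimal inference sketch (not part of the original card), assuming the checkpoint loads under the repo id above. The length limits are illustrative, not taken from the training setup.
```python
from transformers import pipeline

# load the fine-tuned BART checkpoint as a summarization pipeline
summarizer = pipeline("summarization", model="chandank/bart-base-finetuned-kaggglenews-batch8")

article = (
    "The central bank raised interest rates by 50 basis points on Tuesday, "
    "citing persistent inflation and a tight labour market."
)
# max_length / min_length are illustrative values
print(summarizer(article, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```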
|
Jeska/VaccinChatSentenceClassifierDutch_fromBERTjeDIAL
|
Jeska
| 2021-12-02T08:29:44Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: VaccinChatSentenceClassifierDutch_fromBERTjeDIAL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# VaccinChatSentenceClassifierDutch_fromBERTjeDIAL
This model is a fine-tuned version of [Jeska/BertjeWDialDataQA20k](https://huggingface.co/Jeska/BertjeWDialDataQA20k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8355
- Accuracy: 0.6322
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.4418 | 1.0 | 1457 | 2.3866 | 0.5406 |
| 1.7742 | 2.0 | 2914 | 1.9365 | 0.6069 |
| 1.1313 | 3.0 | 4371 | 1.8355 | 0.6322 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
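A minimal usage sketch (not part of the original card): the label set produced by the classifier is not documented here, so the predicted labels are whatever the fine-tuned head defines.
```python
from transformers import pipeline

# load the fine-tuned Dutch sentence classifier
classifier = pipeline("text-classification", model="Jeska/VaccinChatSentenceClassifierDutch_fromBERTjeDIAL")

# example Dutch question about vaccination; the label depends on the (undocumented) training labels
print(classifier("Is het vaccin veilig voor zwangere vrouwen?"))
```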
|
LzLzLz/Bert
|
LzLzLz
| 2021-12-02T06:50:05Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:04Z |
It's a sentiment inference model base on bert.
|
Akari/albert-base-v2-finetuned-squad
|
Akari
| 2021-12-02T05:36:13Z | 51 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"albert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: albert-base-v2-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-squad
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9492
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.8695 | 1.0 | 8248 | 0.8813 |
| 0.6333 | 2.0 | 16496 | 0.8042 |
| 0.4372 | 3.0 | 24744 | 0.9492 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.7.1
- Datasets 1.15.1
- Tokenizers 0.10.3
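A minimal question-answering sketch (not part of the original card); note that handling of unanswerable questions, a SQuAD v2 feature, depends on the pipeline's settings.
```python
from transformers import pipeline

# extractive QA with the fine-tuned ALBERT checkpoint
qa = pipeline("question-answering", model="Akari/albert-base-v2-finetuned-squad")

result = qa(
    question="Which dataset was used for fine-tuning?",
    context="albert-base-v2 was fine-tuned on the SQuAD v2 dataset for extractive question answering.",
)
print(result["answer"], round(result["score"], 3))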
|
eliotm/t5-small-finetuned-en-to-ro-lr_2e-6
|
eliotm
| 2021-12-02T03:07:16Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-en-to-ro-lr_2e-6
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.2935
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-lr_2e-6
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4232
- Bleu: 7.2935
- Gen Len: 18.2521
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.04375
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.6703 | 0.04 | 2671 | 1.4232 | 7.2935 | 18.2521 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
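A minimal translation sketch (not part of the original card); the `translation_en_to_ro` pipeline applies the standard T5 task prefix before generation.
```python
from transformers import pipeline

# T5 checkpoints use a task prefix; the pipeline adds "translate English to Romanian: " automatically
translator = pipeline("translation_en_to_ro", model="eliotm/t5-small-finetuned-en-to-ro-lr_2e-6")

print(translator("The weather is nice today.", max_length=40)[0]["translation_text"])
```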
|
huggingtweets/afm_marketing
|
huggingtweets
| 2021-12-02T01:51:26Z | 19 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1216156392/afm-marketing_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">AFM Marketing</div>
<div style="text-align: center; font-size: 14px;">@afm_marketing</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from AFM Marketing.
| Data | AFM Marketing |
| --- | --- |
| Tweets downloaded | 3238 |
| Retweets | 1051 |
| Short tweets | 64 |
| Tweets kept | 2123 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/6tgdc3wa/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @afm_marketing's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/36mudapr) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/36mudapr/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/afm_marketing')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
aretw0/t5-small-finetuned-en-to-ro-epoch.04375
|
aretw0
| 2021-12-01T21:21:30Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-en-to-ro-epoch.04375
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.3292
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-epoch.04375
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4137
- Bleu: 7.3292
- Gen Len: 18.2541
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.04375
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.6211 | 0.04 | 1669 | 1.4137 | 7.3292 | 18.2541 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
BigSalmon/MrLincoln11
|
BigSalmon
| 2021-12-01T20:17:55Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/MrLincoln11")
```
```
How To Make Prompt:
Original: freedom of the press is a check against political corruption.
Edited: fundamental to the spirit of democracy, freedom of the press is a check against political corruption.
Edited 2: ever at odds with tyranny, freedom of the press is a check against political corruption.
Edited 3: never to be neglected, freedom of the press is a check against political corruption.
Original: solar is a beacon of achievement.
Edited: central to decoupling from the perils of unsustainable energy, solar is a beacon of achievement.
Edited 2: key to a future beyond fossil fuels, solar is a beacon of achievement.
Original: milan is nevertheless ambivalent towards his costly terms.
Edited: keen on contracting him, milan is nevertheless ambivalent towards his costly terms.
Edited 2: intent on securing his services, milan is nevertheless ambivalent towards his costly terms.
Original:
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: meteors are much harder to see, because they are only there for a fraction of a second.
Translated into the Style of Abraham Lincoln: meteors are not ( easily / readily ) detectable, lasting for mere fractions of a second.
informal english:
```
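A minimal generation sketch (not part of the original card), following the prompt formats above; the sampling settings are illustrative only.
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
import torch

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/MrLincoln11")

# build a few-shot prompt in the informal -> Lincoln-style format shown above
prompt = (
    "informal english: i am very ready to do that just that.\n"
    "Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.\n"
    "informal english: space is huge and needs to be explored.\n"
    "Translated into the Style of Abraham Lincoln:"
)

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=40,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```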
|
emrecan/convbert-base-turkish-mc4-cased-multinli_tr
|
emrecan
| 2021-12-01T19:44:01Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"convbert",
"text-classification",
"zero-shot-classification",
"nli",
"tr",
"dataset:nli_tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2022-03-02T23:29:05Z |
---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
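A minimal zero-shot classification sketch (not part of the original card), mirroring the widget examples above:
```python
from transformers import pipeline

# zero-shot classification with the Turkish NLI checkpoint
classifier = pipeline("zero-shot-classification", model="emrecan/convbert-base-turkish-mc4-cased-multinli_tr")

print(classifier("Dolar yükselmeye devam ediyor.", candidate_labels=["ekonomi", "siyaset", "spor"]))
```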
|
emrecan/convbert-base-turkish-mc4-cased-snli_tr
|
emrecan
| 2021-12-01T19:43:30Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"convbert",
"text-classification",
"zero-shot-classification",
"nli",
"tr",
"dataset:nli_tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2022-03-02T23:29:05Z |
---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
|
emrecan/bert-base-multilingual-cased-snli_tr
|
emrecan
| 2021-12-01T19:43:01Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"zero-shot-classification",
"nli",
"tr",
"dataset:nli_tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2022-03-02T23:29:05Z |
---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
|
emrecan/distilbert-base-turkish-cased-snli_tr
|
emrecan
| 2021-12-01T19:42:34Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"zero-shot-classification",
"nli",
"tr",
"dataset:nli_tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2022-03-02T23:29:05Z |
---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
|
rossanez/t5-small-finetuned-de-en-256
|
rossanez
| 2021-12-01T11:08:44Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt14",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt14
model-index:
- name: t5-small-finetuned-de-en-256
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-en-256
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 188 | 2.2663 | 4.5343 | 17.698 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Emmanuel/bert-finetuned-ner
|
Emmanuel
| 2021-12-01T11:05:45Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9317394888705688
- name: Recall
type: recall
value: 0.9510265903736116
- name: F1
type: f1
value: 0.9412842508536686
- name: Accuracy
type: accuracy
value: 0.9865779713898863
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0603
- Precision: 0.9317
- Recall: 0.9510
- F1: 0.9413
- Accuracy: 0.9866
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0872 | 1.0 | 1756 | 0.0660 | 0.9152 | 0.9350 | 0.9250 | 0.9827 |
| 0.0386 | 2.0 | 3512 | 0.0579 | 0.9374 | 0.9498 | 0.9436 | 0.9864 |
| 0.0225 | 3.0 | 5268 | 0.0603 | 0.9317 | 0.9510 | 0.9413 | 0.9866 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
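A minimal NER sketch (not part of the original card); `aggregation_strategy="simple"` merges word-piece tokens into whole entity spans.
```python
from transformers import pipeline

# token classification with the fine-tuned CoNLL-2003 checkpoint
ner = pipeline("token-classification", model="Emmanuel/bert-finetuned-ner", aggregation_strategy="simple")

print(ner("Angela Merkel visited the Siemens headquarters in Munich."))
```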
|
ying-tina/wav2vec2-base-timit-demo-colab-32
|
ying-tina
| 2021-12-01T10:54:26Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab-32
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab-32
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4488
- Wer: 0.3149
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6155 | 4.0 | 500 | 2.2647 | 0.9992 |
| 0.9037 | 8.0 | 1000 | 0.4701 | 0.4336 |
| 0.3159 | 12.0 | 1500 | 0.4247 | 0.3575 |
| 0.1877 | 16.0 | 2000 | 0.4477 | 0.3442 |
| 0.1368 | 20.0 | 2500 | 0.4932 | 0.3384 |
| 0.1062 | 24.0 | 3000 | 0.4758 | 0.3202 |
| 0.0928 | 28.0 | 3500 | 0.4488 | 0.3149 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Tokenizers 0.10.3
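A minimal transcription sketch (not part of the original card), assuming the repository ships a matching processor/tokenizer and that `sample.wav` is a placeholder for a 16 kHz mono English recording:
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("ying-tina/wav2vec2-base-timit-demo-colab-32")
model = Wav2Vec2ForCTC.from_pretrained("ying-tina/wav2vec2-base-timit-demo-colab-32")

# load and resample the audio to the 16 kHz rate wav2vec2-base expects
waveform, sample_rate = torchaudio.load("sample.wav")  # placeholder path
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000).squeeze(0)

inputs = processor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```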
|
emrecan/distilbert-base-turkish-cased-multinli_tr
|
emrecan
| 2021-12-01T10:50:34Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"zero-shot-classification",
"nli",
"tr",
"dataset:nli_tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2022-03-02T23:29:05Z |
---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
|
emrecan/bert-base-turkish-cased-multinli_tr
|
emrecan
| 2021-12-01T10:45:51Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"zero-shot-classification",
"nli",
"tr",
"dataset:nli_tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2022-03-02T23:29:05Z |
---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
|
glasses/vit_base_patch16_224
|
glasses
| 2021-12-01T08:23:58Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:2010.11929",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# vit_base_patch16_224
Implementation of Vision Transformer (ViT) proposed in [An Image Is
Worth 16x16 Words: Transformers For Image Recognition At
Scale](https://arxiv.org/pdf/2010.11929.pdf)
The following image from the authors shows the architecture.

``` python
ViT.vit_small_patch16_224()
ViT.vit_base_patch16_224()
ViT.vit_base_patch16_384()
ViT.vit_base_patch32_384()
ViT.vit_huge_patch16_224()
ViT.vit_huge_patch32_384()
ViT.vit_large_patch16_224()
ViT.vit_large_patch16_384()
ViT.vit_large_patch32_384()
```
Examples:
``` python
# change activation
ViT.vit_base_patch16_224(activation = nn.SELU)
# change number of classes (default is 1000 )
ViT.vit_base_patch16_224(n_classes=100)
# pass a different block, default is TransformerEncoderBlock
ViT.vit_base_patch16_224(block=MyCoolTransformerBlock)
# get features
model = ViT.vit_base_patch16_224()
# first call .features; this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 197, 768]), torch.Size([1, 197, 768]), ...]
# change the tokens, you have to subclass ViTTokens
class MyTokens(ViTTokens):
def __init__(self, emb_size: int):
super().__init__(emb_size)
self.my_new_token = nn.Parameter(torch.randn(1, 1, emb_size))
ViT(tokens=MyTokens)
```
|
glasses/efficientnet_b3
|
glasses
| 2021-12-01T08:08:37Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:1905.11946",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# efficientnet_b3
Implementation of EfficientNet proposed in [EfficientNet: Rethinking
Model Scaling for Convolutional Neural
Networks](https://arxiv.org/abs/1905.11946)

The basic architecture is similar to MobileNetV2, as it was found using [Progressive Neural Architecture
Search](https://arxiv.org/abs/1905.11946).
The following table shows the basic architecture
(EfficientNet-efficientnet\_b0):

Then, the architecture is scaled up from `efficientnet_b0` to `efficientnet_b7` using compound scaling.

``` python
EfficientNet.efficientnet_b0()
EfficientNet.efficientnet_b1()
EfficientNet.efficientnet_b2()
EfficientNet.efficientnet_b3()
EfficientNet.efficientnet_b4()
EfficientNet.efficientnet_b5()
EfficientNet.efficientnet_b6()
EfficientNet.efficientnet_b7()
EfficientNet.efficientnet_b8()
EfficientNet.efficientnet_l2()
```
Examples:
``` python
EfficientNet.efficientnet_b0(activation = nn.SELU)
# change number of classes (default is 1000 )
EfficientNet.efficientnet_b0(n_classes=100)
# pass a different block
EfficientNet.efficientnet_b0(block=...)
# store each feature
x = torch.rand((1, 3, 224, 224))
model = EfficientNet.efficientnet_b0()
# first call .features; this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 32, 112, 112]), torch.Size([1, 24, 56, 56]), torch.Size([1, 40, 28, 28]), torch.Size([1, 80, 14, 14])]
```
|
glasses/efficientnet_b2
|
glasses
| 2021-12-01T08:08:06Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:1905.11946",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# efficientnet_b2
Implementation of EfficientNet proposed in [EfficientNet: Rethinking
Model Scaling for Convolutional Neural
Networks](https://arxiv.org/abs/1905.11946)

The basic architecture is similar to MobileNetV2, as it was found using [Progressive Neural Architecture
Search](https://arxiv.org/abs/1905.11946).
The following table shows the basic architecture
(EfficientNet-efficientnet\_b0):

Then, the architecture is scaled up from `efficientnet_b0` to `efficientnet_b7` using compound scaling.

``` python
EfficientNet.efficientnet_b0()
EfficientNet.efficientnet_b1()
EfficientNet.efficientnet_b2()
EfficientNet.efficientnet_b3()
EfficientNet.efficientnet_b4()
EfficientNet.efficientnet_b5()
EfficientNet.efficientnet_b6()
EfficientNet.efficientnet_b7()
EfficientNet.efficientnet_b8()
EfficientNet.efficientnet_l2()
```
Examples:
``` python
EfficientNet.efficientnet_b0(activation = nn.SELU)
# change number of classes (default is 1000 )
EfficientNet.efficientnet_b0(n_classes=100)
# pass a different block
EfficientNet.efficientnet_b0(block=...)
# store each feature
x = torch.rand((1, 3, 224, 224))
model = EfficientNet.efficientnet_b0()
# first call .features; this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 32, 112, 112]), torch.Size([1, 24, 56, 56]), torch.Size([1, 40, 28, 28]), torch.Size([1, 80, 14, 14])]
```
|
glasses/vgg13_bn
|
glasses
| 2021-12-01T08:02:05Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:1409.1556",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# vgg13_bn
Implementation of VGG proposed in [Very Deep Convolutional Networks For
Large-Scale Image Recognition](https://arxiv.org/pdf/1409.1556.pdf)
``` python
VGG.vgg11()
VGG.vgg13()
VGG.vgg16()
VGG.vgg19()
VGG.vgg11_bn()
VGG.vgg13_bn()
VGG.vgg16_bn()
VGG.vgg19_bn()
```
Please be aware that the `_bn` models use BatchNorm, but they are quite old and, at the time, it was not widely known that the bias is superfluous in a conv followed by a batch norm.
Examples:
``` python
# change activation
VGG.vgg11(activation = nn.SELU)
# change number of classes (default is 1000 )
VGG.vgg11(n_classes=100)
# pass a different block
from nn.models.classification.senet import SENetBasicBlock
VGG.vgg11(block=SENetBasicBlock)
# store the features tensor after every block
```
|
glasses/vgg11_bn
|
glasses
| 2021-12-01T07:58:18Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:1409.1556",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# vgg11_bn
Implementation of VGG proposed in [Very Deep Convolutional Networks For
Large-Scale Image Recognition](https://arxiv.org/pdf/1409.1556.pdf)
``` python
VGG.vgg11()
VGG.vgg13()
VGG.vgg16()
VGG.vgg19()
VGG.vgg11_bn()
VGG.vgg13_bn()
VGG.vgg16_bn()
VGG.vgg19_bn()
```
Please be aware that the `_bn` models use BatchNorm, but they are quite old and, at the time, it was not widely known that the bias is superfluous in a conv followed by a batch norm.
Examples:
``` python
# change activation
VGG.vgg11(activation = nn.SELU)
# change number of classes (default is 1000 )
VGG.vgg11(n_classes=100)
# pass a different block
from nn.models.classification.senet import SENetBasicBlock
VGG.vgg11(block=SENetBasicBlock)
# store the features tensor after every block
```
|
glasses/densenet161
|
glasses
| 2021-12-01T07:50:20Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:1608.06993",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# densenet161
Implementation of DenseNet proposed in [Densely Connected Convolutional
Networks](https://arxiv.org/abs/1608.06993)
Create a default model
``` python
DenseNet.densenet121()
DenseNet.densenet161()
DenseNet.densenet169()
DenseNet.densenet201()
```
Examples:
``` python
# change activation
DenseNet.densenet121(activation = nn.SELU)
# change number of classes (default is 1000 )
DenseNet.densenet121(n_classes=100)
# pass a different block
DenseNet.densenet121(block=...)
# change the initial convolution
model = DenseNet.densenet121()
model.encoder.gate.conv1 = nn.Conv2d(3, 64, kernel_size=3)
# store each feature
x = torch.rand((1, 3, 224, 224))
model = DenseNet.densenet121()
# first call .features; this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14]), torch.Size([1, 512, 7, 7]), torch.Size([1, 1024, 7, 7])]
```
|
glasses/densenet169
|
glasses
| 2021-12-01T07:48:55Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:1608.06993",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# densenet169
Implementation of DenseNet proposed in [Densely Connected Convolutional
Networks](https://arxiv.org/abs/1608.06993)
Create a default model
``` python
DenseNet.densenet121()
DenseNet.densenet161()
DenseNet.densenet169()
DenseNet.densenet201()
```
Examples:
``` python
# change activation
DenseNet.densenet121(activation = nn.SELU)
# change number of classes (default is 1000 )
DenseNet.densenet121(n_classes=100)
# pass a different block
DenseNet.densenet121(block=...)
# change the initial convolution
model = DenseNet.densenet121()
model.encoder.gate.conv1 = nn.Conv2d(3, 64, kernel_size=3)
# store each feature
x = torch.rand((1, 3, 224, 224))
model = DenseNet.densenet121()
# first call .features; this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14]), torch.Size([1, 512, 7, 7]), torch.Size([1, 1024, 7, 7])]
```
|
glasses/regnety_008
|
glasses
| 2021-12-01T07:46:29Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:2003.13678",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# regnety_008
Implementation of RegNet proposed in [Designing Network Design
Spaces](https://arxiv.org/abs/2003.13678)
The main idea is to start with a high-dimensional search space and iteratively reduce it by empirically applying constraints based on the best-performing models sampled from the current search space.
The resulting models are light, accurate, and faster than EfficientNets (up to 5x faster!).
For example, to go from $AnyNet_A$ to $AnyNet_B$ they fixed the bottleneck ratio $b_i$ for all stages $i$. The following table shows all the restrictions applied from one search space to the next one.

The paper is really well written and very interesting; I highly recommend reading it.
``` python
ResNet.regnetx_002()
ResNet.regnetx_004()
ResNet.regnetx_006()
ResNet.regnetx_008()
ResNet.regnetx_016()
ResNet.regnetx_040()
ResNet.regnetx_064()
ResNet.regnetx_080()
ResNet.regnetx_120()
ResNet.regnetx_160()
ResNet.regnetx_320()
# Y variants (with SE)
ResNet.regnety_002()
# ...
ResNet.regnetx_320()
You can easily customize your model
```
Examples:
``` python
# change activation
RegNet.regnetx_004(activation = nn.SELU)
# change number of classes (default is 1000 )
RegNet.regnetx_004(n_classes=100)
# pass a different block
RegNet.regnetx_004(block=RegNetYBotteneckBlock)
# change the stem
model = RegNet.regnetx_004(stem=ResNetStemC)
# change the shortcut
model = RegNet.regnetx_004(block=partial(RegNetYBotteneckBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = RegNet.regnetx_004()
# first call .features; this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 32, 112, 112]), torch.Size([1, 32, 56, 56]), torch.Size([1, 64, 28, 28]), torch.Size([1, 160, 14, 14])]
```
|
glasses/regnety_004
|
glasses
| 2021-12-01T07:45:42Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:2003.13678",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# regnety_004
Implementation of RegNet proposed in [Designing Network Design
Spaces](https://arxiv.org/abs/2003.13678)
The main idea is to start with a high-dimensional search space and iteratively reduce it by empirically applying constraints based on the best-performing models sampled from the current search space.
The resulting models are light, accurate, and faster than EfficientNets (up to 5x faster!).
For example, to go from $AnyNet_A$ to $AnyNet_B$ they fixed the bottleneck ratio $b_i$ for all stages $i$. The following table shows all the restrictions applied from one search space to the next one.

The paper is really well written and very interesting; I highly recommend reading it.
``` python
ResNet.regnetx_002()
ResNet.regnetx_004()
ResNet.regnetx_006()
ResNet.regnetx_008()
ResNet.regnetx_016()
ResNet.regnetx_040()
ResNet.regnetx_064()
ResNet.regnetx_080()
ResNet.regnetx_120()
ResNet.regnetx_160()
ResNet.regnetx_320()
# Y variants (with SE)
ResNet.regnety_002()
# ...
ResNet.regnetx_320()
You can easily customize your model
```
Examples:
``` python
# change activation
RegNet.regnetx_004(activation = nn.SELU)
# change number of classes (default is 1000 )
RegNet.regnetx_004(n_classes=100)
# pass a different block
RegNet.regnetx_004(block=RegNetYBotteneckBlock)
# change the stem
model = RegNet.regnetx_004(stem=ResNetStemC)
# change the shortcut
model = RegNet.regnetx_004(block=partial(RegNetYBotteneckBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = RegNet.regnetx_004()
# first call .features; this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 32, 112, 112]), torch.Size([1, 32, 56, 56]), torch.Size([1, 64, 28, 28]), torch.Size([1, 160, 14, 14])]
```
|
glasses/regnety_002
|
glasses
| 2021-12-01T07:45:22Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:2003.13678",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# regnety_002
Implementation of RegNet proposed in [Designing Network Design
Spaces](https://arxiv.org/abs/2003.13678)
The main idea is to start with a high-dimensional search space and iteratively reduce it by empirically applying constraints based on the best-performing models sampled from the current search space.
The resulting models are light, accurate, and faster than EfficientNets (up to 5x faster!).
For example, to go from $AnyNet_A$ to $AnyNet_B$ they fixed the bottleneck ratio $b_i$ for all stages $i$. The following table shows all the restrictions applied from one search space to the next one.

The paper is really well written and very interesting; I highly recommend reading it.
``` python
ResNet.regnetx_002()
ResNet.regnetx_004()
ResNet.regnetx_006()
ResNet.regnetx_008()
ResNet.regnetx_016()
ResNet.regnetx_040()
ResNet.regnetx_064()
ResNet.regnetx_080()
ResNet.regnetx_120()
ResNet.regnetx_160()
ResNet.regnetx_320()
# Y variants (with SE)
ResNet.regnety_002()
# ...
ResNet.regnetx_320()
You can easily customize your model
```
Examples:
``` python
# change activation
RegNet.regnetx_004(activation = nn.SELU)
# change number of classes (default is 1000 )
RegNet.regnetx_004(n_classes=100)
# pass a different block
RegNet.regnetx_004(block=RegNetYBotteneckBlock)
# change the stem
model = RegNet.regnetx_004(stem=ResNetStemC)
# change the shortcut
model = RegNet.regnetx_004(block=partial(RegNetYBotteneckBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = RegNet.regnetx_004()
# first call .features; this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 32, 112, 112]), torch.Size([1, 32, 56, 56]), torch.Size([1, 64, 28, 28]), torch.Size([1, 160, 14, 14])]
```
|
ykliu1892/translation-en-pt-t5-finetuned-Duolingo
|
ykliu1892
| 2021-12-01T04:58:54Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: translation-en-pt-t5-finetuned-Duolingo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# translation-en-pt-t5-finetuned-Duolingo
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7362
- Bleu: 39.4725
- Gen Len: 9.002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.5429 | 0.24 | 9000 | 0.7461 | 39.4744 | 9.0 |
| 0.5302 | 0.48 | 18000 | 0.7431 | 39.7559 | 8.97 |
| 0.5309 | 0.72 | 27000 | 0.7388 | 39.6751 | 8.998 |
| 0.5336 | 0.96 | 36000 | 0.7362 | 39.4725 | 9.002 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rossanez/t5-small-finetuned-de-en-256-wd-01
|
rossanez
| 2021-12-01T00:48:47Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt14",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt14
model-index:
- name: t5-small-finetuned-de-en-256-wd-01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-en-256-wd-01
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 188 | 2.1202 | 7.5964 | 17.3996 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rossanez/t5-small-finetuned-de-en-256-lr2e-4
|
rossanez
| 2021-12-01T00:40:20Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt14",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt14
model-index:
- name: t5-small-finetuned-de-en-256-lr2e-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-en-256-lr2e-4
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 188 | 2.1169 | 7.6948 | 17.4103 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
alexrfelicio/t5-small-finetuned-en-to-de
|
alexrfelicio
| 2021-11-30T23:07:35Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
model-index:
- name: t5-small-finetuned-en-to-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-de
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 136 | 1.7446 | 9.0564 | 17.8356 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
kaporter/bert-base-uncased-finetuned-squad
|
kaporter
| 2021-11-30T22:42:17Z | 267 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model_index:
- name: bert-base-uncased-finetuned-squad
results:
- task:
name: Question Answering
type: question-answering
dataset:
name: squad
type: squad
args: plain_text
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-squad
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0725
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0749 | 1.0 | 5533 | 1.0167 |
| 0.7851 | 2.0 | 11066 | 1.0299 |
| 0.6067 | 3.0 | 16599 | 1.0725 |
### Framework versions
- Transformers 4.8.1
- Pytorch 1.8.1
- Datasets 1.16.1
- Tokenizers 0.10.1
|
mmcquade11-test/reuters-summarization
|
mmcquade11-test
| 2021-11-30T21:43:51Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autonlp",
"en",
"dataset:mmcquade11/autonlp-data-reuters-summarization",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- mmcquade11/autonlp-data-reuters-summarization
co2_eq_emissions: 286.4350821612984
---
This is an AutoNLP model I trained on the Reuters dataset.
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 34018133
- CO2 Emissions (in grams): 286.4350821612984
## Validation Metrics
- Loss: 1.1805976629257202
- Rouge1: 55.4013
- Rouge2: 30.8004
- RougeL: 52.57
- RougeLsum: 52.6103
- Gen Len: 15.3458
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/mmcquade11/autonlp-reuters-summarization-34018133
```
|
nouamanetazi/cover-letter-t5-base
|
nouamanetazi
| 2021-11-30T21:14:47Z | 7 | 4 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"t5-base",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: en
license: apache-2.0
tags:
- generated_from_trainer
- t5-base
model-index:
- name: cover-letter-t5-base
results: []
widget:
- text: "coverletter name: Nouamane Tazi job: Machine Learning Engineer at HuggingFace background: Master's student in AI at the University of Paris Saclay experiences: I participated in the Digital Tech Year program, developing three minimal valuable products for three companies in a 7-week constraint. I also spent 1 year as a machine learning engineer for Flashbrand where I mainly worked on their chatbot . And I recently completed the HuggingFace course, where I built an amazing huggingface space. I am a strong team player."
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cover-letter-t5-base
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on cover letter samples scraped from Indeed and JobHero.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
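A minimal generation sketch (not part of the original card), using the prompt format from the widget above; the name, job, and background fields below are made up for illustration.
```python
from transformers import pipeline

# text2text generation with the fine-tuned T5 cover-letter model
generator = pipeline("text2text-generation", model="nouamanetazi/cover-letter-t5-base")

prompt = (
    "coverletter name: Jane Doe job: Data Scientist at ExampleCorp "
    "background: Master's student in Statistics experiences: built churn-prediction models "
    "and internal dashboards used by three product teams."
)
print(generator(prompt, max_length=256)[0]["generated_text"])
```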
|
glasses/regnetx_016
|
glasses
| 2021-11-30T20:26:57Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:2003.13678",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# regnetx_016
Implementation of RegNet proposed in [Designing Network Design
Spaces](https://arxiv.org/abs/2003.13678)
The main idea is to start with a high-dimensional search space and iteratively reduce it by empirically applying constraints based on the best-performing models sampled from the current search space.
The resulting models are light, accurate, and faster than EfficientNets (up to 5x faster!).
For example, to go from $AnyNet_A$ to $AnyNet_B$ they fixed the bottleneck ratio $b_i$ for all stages $i$. The following table shows all the restrictions applied from one search space to the next one.

The paper is really well written and very interesting; I highly recommend reading it.
``` python
ResNet.regnetx_002()
ResNet.regnetx_004()
ResNet.regnetx_006()
ResNet.regnetx_008()
ResNet.regnetx_016()
ResNet.regnetx_040()
ResNet.regnetx_064()
ResNet.regnetx_080()
ResNet.regnetx_120()
ResNet.regnetx_160()
ResNet.regnetx_320()
# Y variants (with SE)
ResNet.regnety_002()
# ...
ResNet.regnetx_320()
You can easily customize your model
```
Examples:
``` python
# change activation
RegNet.regnetx_004(activation = nn.SELU)
# change number of classes (default is 1000 )
RegNet.regnetx_004(n_classes=100)
# pass a different block
RegNet.regnetx_004(block=RegNetYBotteneckBlock)
# change the stem
model = RegNet.regnetx_004(stem=ResNetStemC)
# change the shortcut
model = RegNet.regnetx_004(block=partial(RegNetYBotteneckBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = RegNet.regnetx_004()
# first call .features; this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 32, 112, 112]), torch.Size([1, 32, 56, 56]), torch.Size([1, 64, 28, 28]), torch.Size([1, 160, 14, 14])]
```
|
glasses/regnetx_002
|
glasses
| 2021-11-30T20:25:54Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:2003.13678",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# regnetx_002
Implementation of RegNet proposed in [Designing Network Design
Spaces](https://arxiv.org/abs/2003.13678)
The main idea is to start with a high-dimensional search space and iteratively reduce it by empirically applying constraints based on the best-performing models sampled from the current search space.
The resulting models are light, accurate, and faster than EfficientNets (up to 5x faster!).
For example, to go from $AnyNet_A$ to $AnyNet_B$ they fixed the bottleneck ratio $b_i$ for all stages $i$. The following table shows all the restrictions applied from one search space to the next one.

The paper is really well written and very interesting; I highly recommend reading it.
``` python
ResNet.regnetx_002()
ResNet.regnetx_004()
ResNet.regnetx_006()
ResNet.regnetx_008()
ResNet.regnetx_016()
ResNet.regnetx_040()
ResNet.regnetx_064()
ResNet.regnetx_080()
ResNet.regnetx_120()
ResNet.regnetx_160()
ResNet.regnetx_320()
# Y variants (with SE)
ResNet.regnety_002()
# ...
ResNet.regnetx_320()
You can easily customize your model
```
Examples:
``` python
# change activation
RegNet.regnetx_004(activation = nn.SELU)
# change number of classes (default is 1000 )
RegNet.regnetx_004(n_classes=100)
# pass a different block
RegNet.regnetx_004(block=RegNetYBotteneckBlock)
# change the stem
model = RegNet.regnetx_004(stem=ResNetStemC)
# change the shortcut
model = RegNet.regnetx_004(block=partial(RegNetYBotteneckBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = RegNet.regnetx_004()
# first call .features; this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 32, 112, 112]), torch.Size([1, 32, 56, 56]), torch.Size([1, 64, 28, 28]), torch.Size([1, 160, 14, 14])]
```
|
glasses/eca_resnet26t
|
glasses
| 2021-11-30T20:21:22Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"image-classification",
"dataset:imagenet",
"arxiv:1512.03385",
"arxiv:1812.01187",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet
---
# eca_resnet26t
Implementation of ResNet proposed in [Deep Residual Learning for Image
Recognition](https://arxiv.org/abs/1512.03385)
``` python
ResNet.resnet18()
ResNet.resnet26()
ResNet.resnet34()
ResNet.resnet50()
ResNet.resnet101()
ResNet.resnet152()
ResNet.resnet200()
# Variants (d) proposed in [Bag of Tricks for Image Classification with Convolutional Neural Networks](https://arxiv.org/pdf/1812.01187.pdf)
ResNet.resnet26d()
ResNet.resnet34d()
ResNet.resnet50d()
# You can construct your own one by changing `stem` and `block`
resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD))
```
Examples:
``` python
# change activation
ResNet.resnet18(activation = nn.SELU)
# change number of classes (default is 1000 )
ResNet.resnet18(n_classes=100)
# pass a different block
ResNet.resnet18(block=SENetBasicBlock)
# change the stem
model = ResNet.resnet18(stem=ResNetStemC)
# change shortcut
model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = ResNet.resnet18()
# first call .features; this activates the forward hooks and tells the model you'd like to collect the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
|
glasses/wide_resnet101_2
|
glasses
| 2021-11-30T20:20:06Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:1605.07146",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# wide_resnet101_2
Implementation of Wide ResNet proposed in [\"Wide Residual
Networks\"](https://arxiv.org/pdf/1605.07146.pdf)
Create a default model
``` python
WideResNet.wide_resnet50_2()
WideResNet.wide_resnet101_2()
# create a wide_resnet18_4
WideResNet.resnet18(block=WideResNetBottleNeckBlock, width_factor=4)
```
Examples:
``` python
# change activation
WideResNet.wide_resnet50_2(activation = nn.SELU)
# change number of classes (default is 1000 )
WideResNet.wide_resnet50_2(n_classes=100)
# pass a different block
WideResNet.wide_resnet50_2(block=SENetBasicBlock)
# change the initial convolution
model = WideResNet.wide_resnet50_2()
model.encoder.gate.conv1 = nn.Conv2d(3, 64, kernel_size=3)
# store each feature
x = torch.rand((1, 3, 224, 224))
model = WideResNet.wide_resnet50_2()
features = []
x = model.encoder.gate(x)
for block in model.encoder.layers:
x = block(x)
features.append(x)
print([x.shape for x in features])
# [torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14]), torch.Size([1, 512, 7, 7])]
```
|
glasses/resnext101_32x8d
|
glasses
| 2021-11-30T20:15:04Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:1611.05431",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# resnext101_32x8d
Implementation of ResNetXt proposed in [\"Aggregated Residual
Transformation for Deep Neural
Networks\"](https://arxiv.org/pdf/1611.05431.pdf)
Create a default model
``` python
ResNetXt.resnext50_32x4d()
ResNetXt.resnext101_32x8d()
# create a resnetxt18_32x4d
ResNetXt.resnet18(block=ResNetXtBottleNeckBlock, groups=32, base_width=4)
```
Examples:
``` python
# change activation
ResNetXt.resnext50_32x4d(activation = nn.SELU)
# change number of classes (default is 1000 )
ResNetXt.resnext50_32x4d(n_classes=100)
# pass a different block
ResNetXt.resnext50_32x4d(block=SENetBasicBlock)
# change the initial convolution
model = ResNetXt.resnext50_32x4d()
model.encoder.gate.conv1 = nn.Conv2d(3, 64, kernel_size=3)
# store each feature
x = torch.rand((1, 3, 224, 224))
model = ResNetXt.resnext50_32x4d()
# first call .features; this activates the forward hooks and tells the model you'd like to collect the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
|
glasses/resnext50_32x4d
|
glasses
| 2021-11-30T20:13:20Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:1611.05431",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# resnext50_32x4d
Implementation of ResNetXt proposed in [\"Aggregated Residual
Transformation for Deep Neural
Networks\"](https://arxiv.org/pdf/1611.05431.pdf)
Create a default model
``` python
ResNetXt.resnext50_32x4d()
ResNetXt.resnext101_32x8d()
# create a resnetxt18_32x4d
ResNetXt.resnet18(block=ResNetXtBottleNeckBlock, groups=32, base_width=4)
```
Examples:
``` python
# change activation
ResNetXt.resnext50_32x4d(activation = nn.SELU)
# change number of classes (default is 1000 )
ResNetXt.resnext50_32x4d(n_classes=100)
# pass a different block
ResNetXt.resnext50_32x4d(block=SENetBasicBlock)
# change the initial convolution
model = ResNetXt.resnext50_32x4d()
model.encoder.gate.conv1 = nn.Conv2d(3, 64, kernel_size=3)
# store each feature
x = torch.rand((1, 3, 224, 224))
model = ResNetXt.resnext50_32x4d()
# first call .features; this activates the forward hooks and tells the model you'd like to collect the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
|
glasses/resnet152
|
glasses
| 2021-11-30T20:12:19Z | 30 | 0 |
transformers
|
[
"transformers",
"pytorch",
"image-classification",
"dataset:imagenet",
"arxiv:1512.03385",
"arxiv:1812.01187",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet
---
# resnet152
Implementation of ResNet proposed in [Deep Residual Learning for Image
Recognition](https://arxiv.org/abs/1512.03385)
``` python
ResNet.resnet18()
ResNet.resnet26()
ResNet.resnet34()
ResNet.resnet50()
ResNet.resnet101()
ResNet.resnet152()
ResNet.resnet200()
# Variants (d) proposed in "Bag of Tricks for Image Classification with Convolutional Neural Networks" (https://arxiv.org/pdf/1812.01187.pdf)
ResNet.resnet26d()
ResNet.resnet34d()
ResNet.resnet50d()
# You can construct your own one by changing `stem` and `block`
resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD))
```
Examples:
``` python
# change activation
ResNet.resnet18(activation = nn.SELU)
# change number of classes (default is 1000 )
ResNet.resnet18(n_classes=100)
# pass a different block
ResNet.resnet18(block=SENetBasicBlock)
# change the stem
model = ResNet.resnet18(stem=ResNetStemC)
# change shortcut
model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = ResNet.resnet18()
# first call .features; this activates the forward hooks and tells the model you'd like to collect the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
|
glasses/resnet50d
|
glasses
| 2021-11-30T20:10:20Z | 29 | 0 |
transformers
|
[
"transformers",
"pytorch",
"image-classification",
"dataset:imagenet",
"arxiv:1512.03385",
"arxiv:1812.01187",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet
---
# resnet50d
Implementation of ResNet proposed in [Deep Residual Learning for Image
Recognition](https://arxiv.org/abs/1512.03385)
``` python
ResNet.resnet18()
ResNet.resnet26()
ResNet.resnet34()
ResNet.resnet50()
ResNet.resnet101()
ResNet.resnet152()
ResNet.resnet200()
# Variants (d) proposed in "Bag of Tricks for Image Classification with Convolutional Neural Networks" (https://arxiv.org/pdf/1812.01187.pdf)
ResNet.resnet26d()
ResNet.resnet34d()
ResNet.resnet50d()
# You can construct your own one by changing `stem` and `block`
resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD))
```
Examples:
``` python
# change activation
ResNet.resnet18(activation = nn.SELU)
# change number of classes (default is 1000 )
ResNet.resnet18(n_classes=100)
# pass a different block
ResNet.resnet18(block=SENetBasicBlock)
# change the stem
model = ResNet.resnet18(stem=ResNetStemC)
# change shortcut
model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = ResNet.resnet18()
# first call .features; this activates the forward hooks and tells the model you'd like to collect the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
|
glasses/resnet50
|
glasses
| 2021-11-30T20:09:35Z | 29 | 0 |
transformers
|
[
"transformers",
"pytorch",
"image-classification",
"dataset:imagenet",
"arxiv:1512.03385",
"arxiv:1812.01187",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet
---
# resnet50
Implementation of ResNet proposed in [Deep Residual Learning for Image
Recognition](https://arxiv.org/abs/1512.03385)
``` python
ResNet.resnet18()
ResNet.resnet26()
ResNet.resnet34()
ResNet.resnet50()
ResNet.resnet101()
ResNet.resnet152()
ResNet.resnet200()
# Variants (d) proposed in "Bag of Tricks for Image Classification with Convolutional Neural Networks" (https://arxiv.org/pdf/1812.01187.pdf)
ResNet.resnet26d()
ResNet.resnet34d()
ResNet.resnet50d()
# You can construct your own one by changing `stem` and `block`
resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD))
```
Examples:
``` python
# change activation
ResNet.resnet18(activation = nn.SELU)
# change number of classes (default is 1000 )
ResNet.resnet18(n_classes=100)
# pass a different block
ResNet.resnet18(block=SENetBasicBlock)
# change the stem
model = ResNet.resnet18(stem=ResNetStemC)
# change shortcut
model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = ResNet.resnet18()
# first call .features; this activates the forward hooks and tells the model you'd like to collect the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
|
glasses/resnet26d
|
glasses
| 2021-11-30T20:07:33Z | 30 | 0 |
transformers
|
[
"transformers",
"pytorch",
"image-classification",
"dataset:imagenet",
"arxiv:1512.03385",
"arxiv:1812.01187",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet
---
# resnet26d
Implementation of ResNet proposed in [Deep Residual Learning for Image
Recognition](https://arxiv.org/abs/1512.03385)
``` python
ResNet.resnet18()
ResNet.resnet26()
ResNet.resnet34()
ResNet.resnet50()
ResNet.resnet101()
ResNet.resnet152()
ResNet.resnet200()
# Variants (d) proposed in "Bag of Tricks for Image Classification with Convolutional Neural Networks" (https://arxiv.org/pdf/1812.01187.pdf)
ResNet.resnet26d()
ResNet.resnet34d()
ResNet.resnet50d()
# You can construct your own one by changing `stem` and `block`
resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD))
```
Examples:
``` python
# change activation
ResNet.resnet18(activation = nn.SELU)
# change number of classes (default is 1000 )
ResNet.resnet18(n_classes=100)
# pass a different block
ResNet.resnet18(block=SENetBasicBlock)
# change the stem
model = ResNet.resnet18(stem=ResNetStemC)
# change shortcut
model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = ResNet.resnet18()
# first call .features; this activates the forward hooks and tells the model you'd like to collect the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
|
glasses/resnet26
|
glasses
| 2021-11-30T20:06:59Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"image-classification",
"dataset:imagenet",
"arxiv:1512.03385",
"arxiv:1812.01187",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet
---
# resnet26
Implementation of ResNet proposed in [Deep Residual Learning for Image
Recognition](https://arxiv.org/abs/1512.03385)
``` python
ResNet.resnet18()
ResNet.resnet26()
ResNet.resnet34()
ResNet.resnet50()
ResNet.resnet101()
ResNet.resnet152()
ResNet.resnet200()
# Variants (d) proposed in "Bag of Tricks for Image Classification with Convolutional Neural Networks" (https://arxiv.org/pdf/1812.01187.pdf)
ResNet.resnet26d()
ResNet.resnet34d()
ResNet.resnet50d()
# You can construct your own one by changing `stem` and `block`
resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD))
```
Examples:
``` python
# change activation
ResNet.resnet18(activation = nn.SELU)
# change number of classes (default is 1000 )
ResNet.resnet18(n_classes=100)
# pass a different block
ResNet.resnet18(block=SENetBasicBlock)
# change the stem
model = ResNet.resnet18(stem=ResNetStemC)
# change shortcut
model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = ResNet.resnet18()
# first call .features; this activates the forward hooks and tells the model you'd like to collect the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
|
glasses/resnet18
|
glasses
| 2021-11-30T20:06:28Z | 37 | 0 |
transformers
|
[
"transformers",
"pytorch",
"image-classification",
"dataset:imagenet",
"arxiv:1512.03385",
"arxiv:1812.01187",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet
---
# resnet18
Implementation of ResNet proposed in [Deep Residual Learning for Image
Recognition](https://arxiv.org/abs/1512.03385)
``` python
ResNet.resnet18()
ResNet.resnet26()
ResNet.resnet34()
ResNet.resnet50()
ResNet.resnet101()
ResNet.resnet152()
ResNet.resnet200()
# Variants (d) proposed in "Bag of Tricks for Image Classification with Convolutional Neural Networks" (https://arxiv.org/pdf/1812.01187.pdf)
ResNet.resnet26d()
ResNet.resnet34d()
ResNet.resnet50d()
# You can construct your own one by changing `stem` and `block`
resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD))
```
Examples:
``` python
# change activation
ResNet.resnet18(activation = nn.SELU)
# change number of classes (default is 1000 )
ResNet.resnet18(n_classes=100)
# pass a different block
ResNet.resnet18(block=SENetBasicBlock)
# change the stem
model = ResNet.resnet18(stem=ResNetStemC)
# change shortcut
model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = ResNet.resnet18()
# first call .features; this activates the forward hooks and tells the model you'd like to collect the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
|
kenlevine/distilbert-base-uncased-finetuned-squad
|
kenlevine
| 2021-11-30T18:04:35Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
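No evaluation numbers are reported above; as a quick smoke test, the checkpoint can be loaded with the standard `transformers` question-answering pipeline (a minimal usage sketch, not part of the original training script):
```python
from transformers import pipeline

# Minimal sketch: extractive QA with the fine-tuned checkpoint.
qa = pipeline("question-answering", model="kenlevine/distilbert-base-uncased-finetuned-squad")

result = qa(
    question="Which dataset was used for fine-tuning?",
    context="The model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```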
|
jimregan/wav2vec2-large-xls-r-300m-irish-colab
|
jimregan
| 2021-11-30T17:53:09Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-irish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-irish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4286
- Wer: 0.5097
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 210
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 4.3406 | 24.97 | 400 | 1.1677 | 0.7270 |
| 0.2527 | 49.97 | 800 | 1.2686 | 0.5927 |
| 0.0797 | 74.97 | 1200 | 1.3970 | 0.5769 |
| 0.0424 | 99.97 | 1600 | 1.4093 | 0.5600 |
| 0.0286 | 124.97 | 2000 | 1.3684 | 0.5407 |
| 0.0174 | 149.97 | 2400 | 1.4571 | 0.5205 |
| 0.0109 | 174.97 | 2800 | 1.4327 | 0.5178 |
| 0.0072 | 199.97 | 3200 | 1.4286 | 0.5097 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.13.3
- Tokenizers 0.10.3
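For inference, the checkpoint can be loaded with the `transformers` automatic-speech-recognition pipeline (a minimal usage sketch; the audio path is a placeholder and should point to a 16 kHz recording):
```python
from transformers import pipeline

# Minimal sketch: transcribe an Irish-language recording with the fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition", model="jimregan/wav2vec2-large-xls-r-300m-irish-colab")

# "sample_ga.wav" is a placeholder path to a 16 kHz mono audio file.
print(asr("sample_ga.wav")["text"])
```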
|
ffsouza/tiny-mbart-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro
|
ffsouza
| 2021-11-30T17:39:53Z | 25 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16_en_ro_pre_processed",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- wmt16_en_ro_pre_processed
metrics:
- bleu
model-index:
- name: tiny-mbart-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16_en_ro_pre_processed
type: wmt16_en_ro_pre_processed
args: enro
metrics:
- name: Bleu
type: bleu
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mbart-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro
This model is a fine-tuned version of [sshleifer/tiny-mbart](https://huggingface.co/sshleifer/tiny-mbart) on the wmt16_en_ro_pre_processed dataset.
It achieves the following results on the evaluation set:
- Loss: 8.5137
- Bleu: 0.0
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:----:|:-------:|
| 8.2817 | 1.0 | 76290 | 8.5137 | 0.0 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
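For reference, the hyperparameters listed above map roughly onto the following `Seq2SeqTrainingArguments`; this is a reconstruction sketch rather than the exact script that produced the model, and the output directory name is a placeholder:
```python
from transformers import Seq2SeqTrainingArguments

# Reconstruction sketch of the hyperparameters listed above (output_dir is a placeholder).
training_args = Seq2SeqTrainingArguments(
    output_dir="tiny-mbart-length-96-finetuned-en-to-ro",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,                   # mixed_precision_training: Native AMP
    predict_with_generate=True,  # assumption: needed to compute BLEU/Gen Len during evaluation
)
```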
|
NDugar/3epoch-3large
|
NDugar
| 2021-11-30T17:34:56Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"deberta-v3",
"deberta-v2`",
"deberta-mnli",
"zero-shot-classification",
"en",
"arxiv:2006.03654",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2022-03-02T23:29:04Z |
---
language: en
tags:
- deberta-v3
- deberta-v2`
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
pipeline_tag: zero-shot-classification
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
This is the DeBERTa V2 XXLarge model with 48 layers and a hidden size of 1536. It has 1.5B parameters in total and was trained on 160GB of raw data.
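Since this checkpoint is published with a zero-shot-classification pipeline tag, it can be loaded with the standard `transformers` pipeline; a minimal usage sketch (the input text and candidate labels are only illustrative):
```python
from transformers import pipeline

# Minimal sketch: zero-shot classification with this MNLI-style DeBERTa checkpoint.
classifier = pipeline("zero-shot-classification", model="NDugar/3epoch-3large")

print(classifier(
    "The model reaches state-of-the-art accuracy on several GLUE tasks.",
    candidate_labels=["machine learning", "cooking", "sports"],
))
```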
### Fine-tuning on NLU tasks
We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
| Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B |
|---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------|
| | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S |
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- |
| [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 |
| [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7|
| [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9|
|**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** |
--------
#### Notes.
- <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results on SST-2/QQP/QNLI/SQuADv2 also improve slightly when starting from MNLI fine-tuned models; however, we only report the numbers fine-tuned from the pretrained base models for those 4 tasks.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, we recommend using **deepspeed**, as it is faster and saves memory.
Run with `Deepspeed`,
```bash
pip install datasets
pip install deepspeed
# Download the deepspeed config file
wget https://huggingface.co/microsoft/deberta-v2-xxlarge/resolve/main/ds_config.json -O ds_config.json
export TASK_NAME=mnli
output_dir="ds_results"
num_gpus=8
batch_size=8
python -m torch.distributed.launch --nproc_per_node=${num_gpus} \
run_glue.py \
--model_name_or_path microsoft/deberta-v2-xxlarge \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 256 \
--per_device_train_batch_size ${batch_size} \
--learning_rate 3e-6 \
--num_train_epochs 3 \
--output_dir $output_dir \
--overwrite_output_dir \
--logging_steps 10 \
--logging_dir $output_dir \
--deepspeed ds_config.json
```
You can also run with `--sharded_ddp`
```bash
cd transformers/examples/text-classification/
export TASK_NAME=mnli
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \
--task_name $TASK_NAME --do_train --do_eval --max_seq_length 256 --per_device_train_batch_size 8 \
--learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
```
### Citation
If you find DeBERTa useful for your work, please cite the following paper:
``` latex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
```
|