| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-02 00:39:05) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 532 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-02 00:38:59) | card (string, 11–1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
Chikashi/t5-small-finetuned-cnndm3-wikihow3
|
Chikashi
| 2022-04-16T01:42:47Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wikihow",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-15T23:11:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikihow
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnndm3-wikihow3
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wikihow
type: wikihow
args: all
metrics:
- name: Rouge1
type: rouge
value: 27.2654
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnndm3-wikihow3
This model is a fine-tuned version of [Chikashi/t5-small-finetuned-cnndm3-wikihow2](https://huggingface.co/Chikashi/t5-small-finetuned-cnndm3-wikihow2) on the wikihow dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3138
- Rouge1: 27.2654
- Rouge2: 10.5461
- Rougel: 23.2451
- Rougelsum: 26.6151
- Gen Len: 18.5263
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
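The list above is the standard Trainer hyperparameter dump. As a hedged illustration only (the original training script is not part of this card), the values correspond roughly to the following `Seq2SeqTrainingArguments`; the `output_dir` name is assumed:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the hyperparameters listed above; Adam betas/epsilon match the Trainer defaults.
training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-finetuned-cnndm3-wikihow3",  # assumed name
    learning_rate=3e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,  # mixed_precision_training: Native AMP
)
```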
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.5019 | 1.0 | 39313 | 2.3138 | 27.2654 | 10.5461 | 23.2451 | 26.6151 | 18.5263 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AdwayK/hugging_face_biobert_MLMA
|
AdwayK
| 2022-04-16T00:19:03Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-04-14T22:28:53Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: AdwayK/hugging_face_biobert_MLMA
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# AdwayK/hugging_face_biobert_MLMA
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0
- Validation Loss: 0.0814
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3390, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float16
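The serialized optimizer dictionary above is what Keras reports. As a hedged sketch (not the author's notebook), an equivalent optimizer/schedule pair can be built with `transformers.create_optimizer`:

```python
from transformers import create_optimizer

# AdamWeightDecay with a linear PolynomialDecay from 2e-05 to 0.0 over 3390 steps,
# weight decay 0.01, matching the serialized config above.
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=3390,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```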
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0 | 0.0579 | 0 |
| 0.0 | 0.0509 | 1 |
| 0.0 | 0.0544 | 2 |
| 0.0 | 0.0621 | 3 |
| 0.0 | 0.0671 | 4 |
| 0.0 | 0.0811 | 5 |
| 0.0 | 0.0798 | 6 |
| 0.0 | 0.0774 | 7 |
| 0.0 | 0.0811 | 8 |
| 0.0 | 0.0814 | 9 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
pdroberts/xlm-roberta-base-finetuned-panx-de
|
pdroberts
| 2022-04-15T23:05:00Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-04-15T22:55:21Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8632527372262775
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set (see the inference sketch after this list):
- Loss: 0.1367
- F1: 0.8633
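A minimal inference sketch, assuming the checkpoint loads as a standard token-classification model; the example sentence is illustrative:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="pdroberts/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("Angela Merkel besuchte gestern Berlin."))
```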
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2582 | 1.0 | 525 | 0.1653 | 0.8238 |
| 0.1301 | 2.0 | 1050 | 0.1417 | 0.8439 |
| 0.0841 | 3.0 | 1575 | 0.1367 | 0.8633 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ssavla2/bert-finetuned-ner
|
ssavla2
| 2022-04-15T23:02:56Z | 6 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-04-15T18:52:18Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ssavla2/bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ssavla2/bert-finetuned-ner
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0243
- Validation Loss: 0.0603
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1017, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1199 | 0.0570 | 0 |
| 0.0399 | 0.0586 | 1 |
| 0.0243 | 0.0603 | 2 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
nila-yuki/final_lab
|
nila-yuki
| 2022-04-15T22:02:04Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-04-15T18:47:57Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: nila-yuki/final_lab
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nila-yuki/final_lab
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0240
- Validation Loss: 0.0593
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1017, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1059 | 0.0572 | 0 |
| 0.0391 | 0.0542 | 1 |
| 0.0240 | 0.0593 | 2 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Chikashi/t5-small-finetuned-cnndm3-wikihow2
|
Chikashi
| 2022-04-15T21:49:42Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-15T16:30:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnndm3-wikihow2
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 24.6704
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnndm3-wikihow2
This model is a fine-tuned version of [Chikashi/t5-small-finetuned-cnndm2-wikihow2](https://huggingface.co/Chikashi/t5-small-finetuned-cnndm2-wikihow2) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6265
- Rouge1: 24.6704
- Rouge2: 11.9038
- Rougel: 20.3622
- Rougelsum: 23.2612
- Gen Len: 18.9997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.8071 | 1.0 | 71779 | 1.6265 | 24.6704 | 11.9038 | 20.3622 | 23.2612 | 18.9997 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
gary109/wav2vec2-base-MIR_ST500_ASR_109
|
gary109
| 2022-04-15T21:15:56Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"/workspace/datasets/datasets/MIR_ST500/MIR_ST500.py",
"generated_from_trainer",
"dataset:mir_st500",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-04-15T14:52:50Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- /workspace/datasets/datasets/MIR_ST500/MIR_ST500.py
- generated_from_trainer
datasets:
- mir_st500
model-index:
- name: wav2vec2-base-MIR_ST500_ASR_109
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-MIR_ST500_ASR_109
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the MIR_ST500 ASR dataset (loaded via the local script `/workspace/datasets/datasets/MIR_ST500/MIR_ST500.py`).
It achieves the following results on the evaluation set (see the WER illustration after this list):
- Loss: 0.6452
- Wer: 0.3732
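The WER above is computed by the Trainer on the held-out set. As a hedged, generic illustration of the metric itself (not this card's evaluation script), word error rate can be computed with `jiwer` over reference/hypothesis pairs:

```python
import jiwer

references = ["turn the music down", "open the window"]
hypotheses = ["turn the music town", "open the window"]
# fraction of word substitutions, insertions and deletions needed to match the references
print(jiwer.wer(references, hypotheses))
```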
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 12.5751 | 0.27 | 100 | 6.0291 | 1.0 |
| 4.343 | 0.53 | 200 | 2.8709 | 1.0 |
| 4.1911 | 0.8 | 300 | 2.5472 | 1.0 |
| 2.4535 | 1.06 | 400 | 2.4323 | 1.0 |
| 2.6157 | 1.33 | 500 | 2.2799 | 1.0 |
| 2.4839 | 1.6 | 600 | 2.2722 | 1.0 |
| 2.2787 | 1.86 | 700 | 2.2269 | 1.0 |
| 2.1981 | 2.13 | 800 | 2.2221 | 1.0 |
| 2.159 | 2.39 | 900 | 2.1657 | 1.0 |
| 2.1421 | 2.66 | 1000 | 2.1769 | 1.0 |
| 2.0841 | 2.93 | 1100 | 2.1688 | 1.0 |
| 2.0599 | 3.19 | 1200 | 2.1141 | 1.0 |
| 2.0257 | 3.46 | 1300 | 2.0445 | 1.0 |
| 1.979 | 3.72 | 1400 | 2.0180 | 1.0 |
| 1.9366 | 3.99 | 1500 | 1.9419 | 1.0 |
| 1.8547 | 4.26 | 1600 | 1.8765 | 1.0 |
| 1.3988 | 4.52 | 1700 | 1.4151 | 0.7999 |
| 1.1881 | 4.79 | 1800 | 1.1158 | 0.7347 |
| 0.9557 | 5.05 | 1900 | 1.0095 | 0.6485 |
| 0.9087 | 5.32 | 2000 | 0.9644 | 0.6848 |
| 0.8086 | 5.59 | 2100 | 0.8960 | 0.6119 |
| 0.9106 | 5.85 | 2200 | 0.8892 | 0.5941 |
| 0.8252 | 6.12 | 2300 | 0.8333 | 0.5756 |
| 0.8299 | 6.38 | 2400 | 0.8559 | 0.5838 |
| 0.8021 | 6.65 | 2500 | 0.8201 | 0.5883 |
| 0.7979 | 6.91 | 2600 | 0.8349 | 0.575 |
| 0.7223 | 7.18 | 2700 | 0.7883 | 0.5563 |
| 0.6754 | 7.45 | 2800 | 0.7590 | 0.5393 |
| 0.6454 | 7.71 | 2900 | 0.7411 | 0.5291 |
| 0.6228 | 7.98 | 3000 | 0.7464 | 0.5300 |
| 0.6475 | 8.24 | 3100 | 0.7478 | 0.5295 |
| 0.6452 | 8.51 | 3200 | 0.7555 | 0.5360 |
| 0.5636 | 8.78 | 3300 | 0.7369 | 0.5232 |
| 0.564 | 9.04 | 3400 | 0.7331 | 0.5076 |
| 0.6173 | 9.31 | 3500 | 0.7199 | 0.5034 |
| 0.625 | 9.57 | 3600 | 0.7243 | 0.5193 |
| 0.8122 | 9.84 | 3700 | 0.7436 | 0.5242 |
| 0.5455 | 10.11 | 3800 | 0.7111 | 0.4920 |
| 0.7928 | 10.37 | 3900 | 0.7137 | 0.4858 |
| 0.5446 | 10.64 | 4000 | 0.6874 | 0.4828 |
| 0.4772 | 10.9 | 4100 | 0.6760 | 0.4801 |
| 0.6447 | 11.17 | 4200 | 0.6893 | 0.4886 |
| 0.5818 | 11.44 | 4300 | 0.6789 | 0.4740 |
| 0.4952 | 11.7 | 4400 | 0.7043 | 0.4811 |
| 0.5722 | 11.97 | 4500 | 0.6794 | 0.4766 |
| 0.58 | 12.23 | 4600 | 0.6629 | 0.4580 |
| 0.5432 | 12.5 | 4700 | 0.6907 | 0.4906 |
| 0.4786 | 12.77 | 4800 | 0.6925 | 0.4854 |
| 0.5177 | 13.03 | 4900 | 0.6666 | 0.4532 |
| 0.5448 | 13.3 | 5000 | 0.6744 | 0.4542 |
| 0.5732 | 13.56 | 5100 | 0.6930 | 0.4986 |
| 0.5065 | 13.83 | 5200 | 0.6647 | 0.4351 |
| 0.4005 | 14.1 | 5300 | 0.6659 | 0.4508 |
| 0.4256 | 14.36 | 5400 | 0.6682 | 0.4533 |
| 0.4459 | 14.63 | 5500 | 0.6594 | 0.4326 |
| 0.4645 | 14.89 | 5600 | 0.6615 | 0.4287 |
| 0.4275 | 15.16 | 5700 | 0.6423 | 0.4299 |
| 0.4026 | 15.43 | 5800 | 0.6539 | 0.4217 |
| 0.3507 | 15.69 | 5900 | 0.6555 | 0.4299 |
| 0.3998 | 15.96 | 6000 | 0.6526 | 0.4213 |
| 0.4462 | 16.22 | 6100 | 0.6469 | 0.4230 |
| 0.4095 | 16.49 | 6200 | 0.6516 | 0.4210 |
| 0.4452 | 16.76 | 6300 | 0.6373 | 0.4133 |
| 0.3997 | 17.02 | 6400 | 0.6456 | 0.4211 |
| 0.3826 | 17.29 | 6500 | 0.6278 | 0.4042 |
| 0.3867 | 17.55 | 6600 | 0.6459 | 0.4112 |
| 0.4367 | 17.82 | 6700 | 0.6464 | 0.4131 |
| 0.3887 | 18.09 | 6800 | 0.6567 | 0.4150 |
| 0.3481 | 18.35 | 6900 | 0.6548 | 0.4145 |
| 0.4241 | 18.62 | 7000 | 0.6490 | 0.4123 |
| 0.3742 | 18.88 | 7100 | 0.6561 | 0.4135 |
| 0.423 | 19.15 | 7200 | 0.6498 | 0.4051 |
| 0.3803 | 19.41 | 7300 | 0.6475 | 0.3903 |
| 0.3084 | 19.68 | 7400 | 0.6403 | 0.4042 |
| 0.3012 | 19.95 | 7500 | 0.6460 | 0.4004 |
| 0.3306 | 20.21 | 7600 | 0.6491 | 0.3837 |
| 0.3612 | 20.48 | 7700 | 0.6752 | 0.3884 |
| 0.3572 | 20.74 | 7800 | 0.6383 | 0.3793 |
| 0.3638 | 21.01 | 7900 | 0.6349 | 0.3838 |
| 0.3658 | 21.28 | 8000 | 0.6544 | 0.3793 |
| 0.3726 | 21.54 | 8100 | 0.6567 | 0.3756 |
| 0.3618 | 21.81 | 8200 | 0.6390 | 0.3795 |
| 0.3212 | 22.07 | 8300 | 0.6359 | 0.3768 |
| 0.3561 | 22.34 | 8400 | 0.6452 | 0.3732 |
| 0.3231 | 22.61 | 8500 | 0.6416 | 0.3731 |
| 0.3764 | 22.87 | 8600 | 0.6428 | 0.3697 |
| 0.4142 | 23.14 | 8700 | 0.6415 | 0.3665 |
| 0.2713 | 23.4 | 8800 | 0.6541 | 0.3676 |
| 0.2277 | 23.67 | 8900 | 0.6492 | 0.3684 |
| 0.3849 | 23.94 | 9000 | 0.6448 | 0.3651 |
| 0.266 | 24.2 | 9100 | 0.6602 | 0.3643 |
| 0.3464 | 24.47 | 9200 | 0.6673 | 0.3607 |
| 0.2919 | 24.73 | 9300 | 0.6557 | 0.3677 |
| 0.2878 | 25.0 | 9400 | 0.6377 | 0.3653 |
| 0.1603 | 25.27 | 9500 | 0.6598 | 0.3700 |
| 0.2055 | 25.53 | 9600 | 0.6558 | 0.3614 |
| 0.1508 | 25.8 | 9700 | 0.6543 | 0.3605 |
| 0.3162 | 26.06 | 9800 | 0.6570 | 0.3576 |
| 0.2613 | 26.33 | 9900 | 0.6604 | 0.3584 |
| 0.2244 | 26.6 | 10000 | 0.6618 | 0.3634 |
| 0.1585 | 26.86 | 10100 | 0.6698 | 0.3634 |
| 0.2959 | 27.13 | 10200 | 0.6709 | 0.3593 |
| 0.2778 | 27.39 | 10300 | 0.6638 | 0.3537 |
| 0.2354 | 27.66 | 10400 | 0.6770 | 0.3585 |
| 0.2992 | 27.93 | 10500 | 0.6698 | 0.3506 |
| 0.2664 | 28.19 | 10600 | 0.6725 | 0.3533 |
| 0.2582 | 28.46 | 10700 | 0.6689 | 0.3542 |
| 0.2096 | 28.72 | 10800 | 0.6731 | 0.3527 |
| 0.4169 | 28.99 | 10900 | 0.6691 | 0.3521 |
| 0.2716 | 29.26 | 11000 | 0.6712 | 0.3517 |
| 0.2944 | 29.52 | 11100 | 0.6708 | 0.3509 |
| 0.2737 | 29.79 | 11200 | 0.6699 | 0.3491 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.9.1+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
public-data/Hopenet
|
public-data
| 2022-04-15T20:12:15Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-04-15T20:03:56Z |
# Hopenet
- https://github.com/natanielruiz/deep-head-pose
- https://drive.google.com/file/d/1EJPu2sOAwrfuamTitTkw2xJ2ipmMsmD3/view
- https://drive.google.com/file/d/16OZdRULgUpceMKZV6U9PNFiigfjezsCY/view
- https://drive.google.com/file/d/1m25PrSE7g9D2q2XJVMR6IA7RaCvWSzCR/view
## Note
```python
import pathlib
import torch
# Re-save the downloaded checkpoints with map_location='cpu' so they load without a GPU.
paths = sorted(pathlib.Path('orig').glob('*'))
out_dir = pathlib.Path('models')
out_dir.mkdir(exist_ok=True)
for path in paths:
    ckpt = torch.load(path, map_location='cpu')
    torch.save(ckpt, out_dir / path.name)
```
|
profoz/distilbert-toxic-clf
|
profoz
| 2022-04-15T17:31:47Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-15T17:13:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-toxic-clf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-toxic-clf
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.10.3
|
dpazmino/finetuning-sentiment-model_duke_final_two
|
dpazmino
| 2022-04-15T17:30:54Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-14T23:30:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: finetuning-sentiment-model_duke_final_two
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model_duke_final_two
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3381
- F1: 0.8801
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
inovex/multi2convai-corona-fr-bert
|
inovex
| 2022-04-15T17:09:57Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"fr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags:
- text-classification
widget:
- text: "Dois-je porter un masque?"
license: mit
language: fr
---
# Multi2ConvAI-Corona: finetuned Bert for French
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Corona (more details about our use cases: [en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases))
- language: French (fr)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-corona-fr-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-corona-fr-bert")
```
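Continuing the snippet above, a hedged example of scoring the widget query; the intent label names come from the model's `config.id2label` and are not listed in this card:

```python
import torch

inputs = tokenizer("Dois-je porter un masque?", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```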
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected]
|
Zhaoheng/svoice_wsj0_2mix
|
Zhaoheng
| 2022-04-15T16:58:15Z | 3 | 4 |
espnet
|
[
"espnet",
"audio",
"audio-to-audio",
"en",
"dataset:wsj0_2mix",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
audio-to-audio
| 2022-04-14T12:16:35Z |
---
tags:
- espnet
- audio
- audio-to-audio
language: en
datasets:
- wsj0_2mix
license: cc-by-4.0
---
## ESPnet2 ENH model
### `Zhaoheng/svoice_wsj0_2mix`
This model was trained by Zhaoheng Ni using wsj0_2mix recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 5ae7c9580f85dae5bc81cb1e845366c251d871ac
pip install -e .
cd egs2/wsj0_2mix/enh1
./run.sh --skip_data_prep false --skip_train true --download_model Zhaoheng/svoice_wsj0_2mix
```
<!-- Generated by ./scripts/utils/show_enh_score.sh -->
# RESULTS
## Environments
- date: `Thu Apr 14 09:47:05 UTC 2022`
- python version: `3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.10.1+cu111`
- Git hash: `9dbe4179b866b994f6914ef52ea7483696d22760`
- Commit date: `Wed Mar 16 13:25:26 2022 +0000`
## Evaluation scores
config: conf/tuning/train_enh_svoice.yaml
|dataset|STOI|SAR|SDR|SIR|SI_SNR|
|---|---|---|---|---|---|
|enhanced_cv_min_8k|0.97|21.44|20.98|32.21|20.67|
|enhanced_tt_min_8k|0.98|21.41|20.96|32.27|20.66|
## ENH config
<details><summary>expand</summary>
```
config: conf/tuning/train_enh_svoice.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: chunk
output_dir: exp/enh_train_enh_svoice_raw
ngpu: 4
seed: 0
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 150
patience: 20
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
  - si_snr
  - max
- - valid
  - loss
  - min
keep_nbest_models: 1
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 8
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/enh_stats_8k/train/speech_mix_shape
- exp/enh_stats_8k/train/speech_ref1_shape
- exp/enh_stats_8k/train/speech_ref2_shape
valid_shape_file:
- exp/enh_stats_8k/valid/speech_mix_shape
- exp/enh_stats_8k/valid/speech_ref1_shape
- exp/enh_stats_8k/valid/speech_ref2_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 80000
- 80000
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 16000
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr_min_8k/wav.scp
  - speech_mix
  - sound
- - dump/raw/tr_min_8k/spk1.scp
  - speech_ref1
  - sound
- - dump/raw/tr_min_8k/spk2.scp
  - speech_ref2
  - sound
valid_data_path_and_name_and_type:
- - dump/raw/cv_min_8k/wav.scp
  - speech_mix
  - sound
- - dump/raw/cv_min_8k/spk1.scp
  - speech_ref1
  - sound
- - dump/raw/cv_min_8k/spk2.scp
  - speech_ref2
  - sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
  lr: 0.001
  eps: 1.0e-08
  weight_decay: 0
scheduler: reducelronplateau
scheduler_conf:
  mode: min
  factor: 0.7
  patience: 1
init: xavier_uniform
model_conf:
  stft_consistency: false
  loss_type: mask_mse
  mask_type: null
criterions:
- name: si_snr
  conf:
    eps: 1.0e-07
  wrapper: multilayer_pit
  wrapper_conf:
    weight: 1.0
    independent_perm: true
use_preprocessor: false
encoder: same
encoder_conf: {}
separator: svoice
separator_conf:
  enc_dim: 128
  kernel_size: 8
  hidden_size: 128
  num_spk: 2
  num_layers: 6
  segment_size: 128
  input_normalize: false
decoder: same
decoder_conf: {}
required:
- output_dir
version: 0.10.7a1
distributed: false
```
</details>
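The `si_snr` criterion selected in the config above is scale-invariant signal-to-noise ratio. A hedged numpy sketch of the metric (ESPnet's own implementation differs in details such as batching and permutation handling):

```python
import numpy as np

def si_snr(estimate: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    estimate = estimate - estimate.mean()
    target = target - target.mean()
    # project the estimate onto the target to split it into a signal part and a noise part
    s_target = np.dot(estimate, target) * target / (np.dot(target, target) + eps)
    e_noise = estimate - s_target
    return 10 * np.log10((np.dot(s_target, s_target) + eps) / (np.dot(e_noise, e_noise) + eps))
```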
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{ESPnet-SE,
  author={Chenda Li and Jing Shi and Wangyou Zhang and Aswin Shanmugam Subramanian and Xuankai Chang and Naoyuki Kamo and Moto Hira and Tomoki Hayashi and Christoph B{\"o}ddeker and Zhuo Chen and Shinji Watanabe},
  title={{ESPnet-SE}: End-to-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration},
  booktitle={Proceedings of IEEE Spoken Language Technology Workshop (SLT)},
  year={2021},
}
@inproceedings{nachmani2020voice,
title={Voice separation with an unknown number of multiple speakers},
author={Nachmani, Eliya and Adi, Yossi and Wolf, Lior},
booktitle={International Conference on Machine Learning},
pages={7164--7175},
year={2020},
organization={PMLR}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
public-data/Anime2Sketch
|
public-data
| 2022-04-15T16:17:03Z | 0 | 2 | null |
[
"region:us"
] | null | 2022-04-15T16:12:54Z |
# Anime2Sketch
- https://github.com/Mukosame/Anime2Sketch
- https://drive.google.com/drive/folders/1Srf-WYUixK0wiUddc9y3pNKHHno5PN6R
|
Chikashi/t5-small-finetuned-cnndm2-wikihow2
|
Chikashi
| 2022-04-15T15:13:22Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wikihow",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-15T12:41:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikihow
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnndm2-wikihow2
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wikihow
type: wikihow
args: all
metrics:
- name: Rouge1
type: rouge
value: 27.0962
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnndm2-wikihow2
This model is a fine-tuned version of [Chikashi/t5-small-finetuned-cnndm2-wikihow1](https://huggingface.co/Chikashi/t5-small-finetuned-cnndm2-wikihow1) on the wikihow dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3311
- Rouge1: 27.0962
- Rouge2: 10.3575
- Rougel: 23.1099
- Rougelsum: 26.4664
- Gen Len: 18.5197
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.517 | 1.0 | 39313 | 2.3311 | 27.0962 | 10.3575 | 23.1099 | 26.4664 | 18.5197 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
birgermoell/psst-fairseq-larger-rir
|
birgermoell
| 2022-04-15T13:59:09Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-04-15T12:44:14Z |
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
---
This model is trained on the PSST Challenge data, together with a subset of TIMIT that was augmented using Room Impulse Responses (RIR). A file containing the list of TIMIT IDs is in the repository (`timit-ids.txt`).
The model was finetuned from [Wav2vec 2.0 Large, no finetuning](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec), and the results on the validation set were **PER:** 21.0%, **FER:** 9.2%.
|
huggan/pix2pix-uavid-15
|
huggan
| 2022-04-15T13:45:13Z | 0 | 0 | null |
[
"pytorch",
"huggan",
"gan",
"dataset:arakesh/uavid-15-hq-mixedres",
"arxiv:1611.07004",
"license:mit",
"region:us"
] | null | 2022-04-12T18:53:30Z |
---
tags:
- huggan
- gan
datasets:
- arakesh/uavid-15-hq-mixedres
# See a list of available tags here:
# https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts#L12
# task: unconditional-image-generation or conditional-image-generation or image-to-image
license: mit
---
# pix2pix-uavid-15
## Model description
[Pix2pix](https://arxiv.org/abs/1611.07004) is a conditional adversarial network, a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. The authors demonstrate that the approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
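As a hedged sketch of the objective described above (not code from this repository): the generator is trained with an adversarial term plus an L1 reconstruction term. The weight `lambda_l1=100` is the paper's default and is assumed here, not a value reported in this card.

```python
import torch
import torch.nn.functional as F

def generator_loss(disc_fake_logits, fake_img, target_img, lambda_l1=100.0):
    # fool the discriminator on (input, generated) pairs ...
    adversarial = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    # ... while keeping the generated image close to the ground-truth translation
    reconstruction = F.l1_loss(fake_img, target_img)
    return adversarial + lambda_l1 * reconstruction
```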
## Intended uses & limitations:
Used for image-to-image translation on UAVid aerial imagery
#### How to use
```python
from torchvision.transforms import Compose, Resize, ToTensor, Normalize
from PIL import Image
from torchvision.utils import save_image
import cv2
from huggan.pytorch.pix2pix.modeling_pix2pix import GeneratorUNet
transform = Compose(
    [
        Resize((256, 256), Image.BICUBIC),
        ToTensor(),
        Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
    ]
)

model = GeneratorUNet.from_pretrained('huggan/pix2pix-uavid-15')

def predict_fn(img):
    # img: a PIL.Image provided by the caller, e.g. Image.open(...)
    inp = transform(img).unsqueeze(0)
    out = model(inp)
    save_image(out, 'out.png', normalize=True)
    return 'out.png'

predict_fn(img)
```
#### Limitations and bias
* Gives unrealistic colors in the image
## Training data
* [arakesh/uavid-15-hq-mixedres](https://huggingface.co/datasets/arakesh/uavid-15-hq-mixedres)
## Training procedure
```
# clone the repository and install it
git clone https://github.com/huggingface/community-events.git
cd community-events
pip install .
# change directory
cd huggan/pytorch/pix2pix/
# define config
accelerate config
# launch training with required parameters
accelerate launch train.py --checkpoint_interval 1 --dataset arakesh/uavid-15-hq-mixedres --push_to_hub --model_name pix2pix-uavid-15 --batch_size 2 --n_epochs 50 --image_size 1024 --sample_interval 500
```
## Generated Images
Here,
* First Image Row: Input Image
* Second Image Row: Generated Image
* Third Image Row: Target Image


### BibTeX entry and citation info
```bibtex
@article{pix2pix2017,
title={Image-to-Image Translation with Conditional Adversarial Networks},
author={Isola, Phillip and Zhu, Jun-Yan and Zhou, Tinghui and Efros, Alexei A},
journal={CVPR},
year={2017}
}
```
|
annaeze/lab9_1
|
annaeze
| 2022-04-15T12:44:42Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-04-14T13:43:01Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: annaeze/lab9_1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# annaeze/lab9_1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0230
- Validation Loss: 0.0572
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1017, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1174 | 0.0596 | 0 |
| 0.0391 | 0.0529 | 1 |
| 0.0230 | 0.0572 | 2 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Chikashi/t5-small-finetuned-cnndm2-wikihow1
|
Chikashi
| 2022-04-15T11:30:20Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-15T06:14:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnndm2-wikihow1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 24.6317
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnndm2-wikihow1
This model is a fine-tuned version of [Chikashi/t5-small-finetuned-cnndm1-wikihow1](https://huggingface.co/Chikashi/t5-small-finetuned-cnndm1-wikihow1) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6305
- Rouge1: 24.6317
- Rouge2: 11.8655
- Rougel: 20.3598
- Rougelsum: 23.2467
- Gen Len: 18.9996
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.8062 | 1.0 | 71779 | 1.6305 | 24.6317 | 11.8655 | 20.3598 | 23.2467 | 18.9996 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ketan-rmcf/hinglish-finetuned
|
ketan-rmcf
| 2022-04-15T10:03:30Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-04-14T21:05:58Z |
---
tags:
- generated_from_trainer
model-index:
- name: hinglish-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hinglish-finetuned
This model is a fine-tuned version of [verloop/Hinglish-Bert](https://huggingface.co/verloop/Hinglish-Bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0786
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.3784 | 1.0 | 80 | 3.0527 |
| 3.0398 | 2.0 | 160 | 2.8067 |
| 2.9133 | 3.0 | 240 | 2.7252 |
| 2.7872 | 4.0 | 320 | 2.5783 |
| 2.6205 | 5.0 | 400 | 2.5050 |
| 2.5979 | 6.0 | 480 | 2.4654 |
| 2.5655 | 7.0 | 560 | 2.4091 |
| 2.5412 | 8.0 | 640 | 2.3630 |
| 2.4479 | 9.0 | 720 | 2.3754 |
| 2.3724 | 10.0 | 800 | 2.2860 |
| 2.3842 | 11.0 | 880 | 2.2812 |
| 2.3411 | 12.0 | 960 | 2.2038 |
| 2.2617 | 13.0 | 1040 | 2.1887 |
| 2.3141 | 14.0 | 1120 | 2.1966 |
| 2.2115 | 15.0 | 1200 | 2.1248 |
| 2.2363 | 16.0 | 1280 | 2.1006 |
| 2.2191 | 17.0 | 1360 | 2.1248 |
| 2.1856 | 18.0 | 1440 | 2.0872 |
| 2.2009 | 19.0 | 1520 | 2.0299 |
| 2.2364 | 20.0 | 1600 | 2.0193 |
| 2.1785 | 21.0 | 1680 | 2.0227 |
| 2.1934 | 22.0 | 1760 | 2.0540 |
| 2.1479 | 23.0 | 1840 | 2.0381 |
| 2.0973 | 24.0 | 1920 | 1.9885 |
| 2.1376 | 25.0 | 2000 | 2.0142 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
malcolm/REA_GenderIdentification_v1
|
malcolm
| 2022-04-15T08:38:29Z | 5 | 6 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-15T08:23:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: REA_GenderIdentification_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# REA_GenderIdentification_v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3366
- Accuracy: 0.8798
- F1: 0.8522
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
agdsga/chinese-bert-wwm-finetuned-product-1
|
agdsga
| 2022-04-15T06:06:27Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-04-15T02:08:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: chinese-bert-wwm-finetuned-product-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chinese-bert-wwm-finetuned-product-1
This model is a fine-tuned version of [hfl/chinese-bert-wwm](https://huggingface.co/hfl/chinese-bert-wwm) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0000
- eval_runtime: 10.6737
- eval_samples_per_second: 362.572
- eval_steps_per_second: 5.715
- epoch: 11.61
- step: 18797
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Framework versions
- Transformers 4.17.0
- Pytorch 1.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
aaya/distilbert-base-uncased-finetuned-ner
|
aaya
| 2022-04-15T05:46:55Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-04-14T11:55:47Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
zhuzhusleepearly/bert-finetuned
|
zhuzhusleepearly
| 2022-04-15T05:13:59Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-04-14T23:16:28Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: zhuzhusleepearly/bert-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# zhuzhusleepearly/bert-finetuned
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0248
- Validation Loss: 0.0614
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1017, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1264 | 0.0606 | 0 |
| 0.0422 | 0.0551 | 1 |
| 0.0248 | 0.0614 | 2 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
qp321/distilbert-base-uncased-finetuned-cola
|
qp321
| 2022-04-15T05:11:06Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-15T04:22:54Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: qp321/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# qp321/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1122
- Validation Loss: 0.6352
- Train Matthews Correlation: 0.5295
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2670, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.3241 | 0.4856 | 0.5251 | 0 |
| 0.1893 | 0.5330 | 0.5158 | 1 |
| 0.1122 | 0.6352 | 0.5295 | 2 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Manishkalra/finetuning-sentiment-model-4000-samples
|
Manishkalra
| 2022-04-15T05:05:50Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-15T04:38:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-4000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9
- name: F1
type: f1
value: 0.9038461538461539
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-4000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2706
- Accuracy: 0.9
- F1: 0.9038
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
huggan/pix2pix-night2day
|
huggan
| 2022-04-15T04:27:40Z | 0 | 2 | null |
[
"pytorch",
"huggan",
"gan",
"dataset:huggan/night2day",
"arxiv:1611.07004",
"license:mit",
"region:us"
] | null | 2022-04-14T15:42:14Z |
---
tags:
- huggan
- gan
datasets:
- huggan/night2day
# See a list of available tags here:
# https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts#L12
# task: unconditional-image-generation or conditional-image-generation or image-to-image
license: mit
---
# pix2pix-night2day
## Model description
[Pix2pix](https://arxiv.org/abs/1611.07004) is a conditional adversarial network, a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. The authors demonstrate that the approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
## Intended uses & limitations:
Used for image-to-image translation between night-time and daytime photos
#### How to use
```python
from torchvision.transforms import Compose, Resize, ToTensor, Normalize
from PIL import Image
from torchvision.utils import save_image
import cv2
from huggan.pytorch.pix2pix.modeling_pix2pix import GeneratorUNet
transform = Compose(
    [
        Resize((256, 256), Image.BICUBIC),
        ToTensor(),
        Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
    ]
)

model = GeneratorUNet.from_pretrained('huggan/pix2pix-night2day')

def predict_fn(img):
    # img: a PIL.Image provided by the caller, e.g. Image.open(...)
    inp = transform(img).unsqueeze(0)
    out = model(inp)
    save_image(out, 'out.png', normalize=True)
    return 'out.png'

predict_fn(img)
```
#### Limitations and bias
* Gives unrealistic colors in the image
* Gives Blurry image sometimes
## Training data
* [night2day](https://huggingface.co/datasets/huggan/night2day)
## Training procedure
```
# clone the repository and install it
git clone https://github.com/huggingface/community-events.git
cd community-events
pip install .
# change directory
cd huggan/pytorch/pix2pix/
# define config
accelerate config
# launch training with required parameters
accelerate launch train.py --checkpoint_interval 5 --dataset huggan/night2day --push_to_hub --model_name pix2pix-night2day --batch_size 128 --n_epochs 50
```
## Generated Images
Here,
* First Image Row: Input Image
* Second Image Row: Generated Image
* Third Image Row: Target Image


### BibTeX entry and citation info
```bibtex
@article{pix2pix2017,
title={Image-to-Image Translation with Conditional Adversarial Networks},
author={Isola, Phillip and Zhu, Jun-Yan and Zhou, Tinghui and Efros, Alexei A},
journal={CVPR},
year={2017}
}
```
|
zhuzhusleepearly/bert-task5finetuned
|
zhuzhusleepearly
| 2022-04-15T04:23:54Z | 2 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-04-15T04:17:15Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: zhuzhusleepearly/bert-task5finetuned
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# zhuzhusleepearly/bert-task5finetuned
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0350
- Validation Loss: 0.0775
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 669, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1257 | 0.0908 | 0 |
| 0.0567 | 0.0718 | 1 |
| 0.0350 | 0.0775 | 2 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
junnyu/roformer_chinese_sim_char_ft_base
|
junnyu
| 2022-04-15T03:52:49Z | 9 | 7 |
transformers
|
[
"transformers",
"pytorch",
"roformer",
"text-generation",
"tf2.0",
"zh",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: zh
tags:
- roformer
- pytorch
- tf2.0
inference: False
---
# Installation
- pip install roformer==0.4.3
# Usage
```python
import torch
import numpy as np
from roformer import RoFormerForCausalLM, RoFormerConfig
from transformers import BertTokenizer
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
pretrained_model = "junnyu/roformer_chinese_sim_char_base"
tokenizer = BertTokenizer.from_pretrained(pretrained_model)
config = RoFormerConfig.from_pretrained(pretrained_model)
config.is_decoder = True
config.eos_token_id = tokenizer.sep_token_id
config.pooler_activation = "linear"
model = RoFormerForCausalLM.from_pretrained(pretrained_model, config=config)
model.to(device)
model.eval()
def gen_synonyms(text, n=100, k=20):
    '''Generate n sentences similar to the input text, then return the k most similar ones.
    Method: generate candidates with the seq2seq model, then score and rank them with the encoder.
    '''
    # generate candidate similar sentences
    r = []
    inputs1 = tokenizer(text, return_tensors="pt")
    for _ in range(n):
        inputs1.to(device)
        output = tokenizer.batch_decode(model.generate(**inputs1, top_p=0.95, do_sample=True, max_length=128), skip_special_tokens=True)[0].replace(" ","").replace(text, "")  # strip spaces and drop the original input text
        r.append(output)
    # rank the candidate sentences by encoder similarity
    r = [i for i in set(r) if i != text and len(i) > 0]
    r = [text] + r
    inputs2 = tokenizer(r, padding=True, return_tensors="pt")
    with torch.no_grad():
        inputs2.to(device)
        outputs = model(**inputs2)
        Z = outputs.pooler_output.cpu().numpy()
    Z /= (Z**2).sum(axis=1, keepdims=True)**0.5
    argsort = np.dot(Z[1:], -Z[0]).argsort()
    return [r[i + 1] for i in argsort[:k]]
out = gen_synonyms("广州和深圳哪个好?")
print(out)
# ['深圳和广州哪个好?',
# '广州和深圳哪个好',
# '深圳和广州哪个好',
# '深圳和广州哪个比较好。',
# '深圳和广州哪个最好?',
# '深圳和广州哪个比较好',
# '广州和深圳那个比较好',
# '深圳和广州哪个更好?',
# '深圳与广州哪个好',
# '深圳和广州,哪个比较好',
# '广州与深圳比较哪个好',
# '深圳和广州哪里比较好',
# '深圳还是广州比较好?',
# '广州和深圳哪个地方好一些?',
# '广州好还是深圳好?',
# '广州好还是深圳好呢?',
# '广州与深圳哪个地方好点?',
# '深圳好还是广州好',
# '广州好还是深圳好',
# '广州和深圳哪个城市好?']
```
|
junnyu/roformer_chinese_sim_char_ft_small
|
junnyu
| 2022-04-15T03:51:50Z | 6 | 3 |
transformers
|
[
"transformers",
"pytorch",
"roformer",
"text-generation",
"tf2.0",
"zh",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: zh
tags:
- roformer
- pytorch
- tf2.0
inference: False
---
# Installation
- pip install roformer==0.4.3
# Usage
```python
import torch
import numpy as np
from roformer import RoFormerForCausalLM, RoFormerConfig
from transformers import BertTokenizer
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
pretrained_model = "junnyu/roformer_chinese_sim_char_base"
tokenizer = BertTokenizer.from_pretrained(pretrained_model)
config = RoFormerConfig.from_pretrained(pretrained_model)
config.is_decoder = True
config.eos_token_id = tokenizer.sep_token_id
config.pooler_activation = "linear"
model = RoFormerForCausalLM.from_pretrained(pretrained_model, config=config)
model.to(device)
model.eval()
def gen_synonyms(text, n=100, k=20):
    '''Generate n sentences similar to the input text, then return the k most similar ones.
    Method: generate candidates with the seq2seq model, then score and rank them with the encoder.
    '''
    # generate candidate similar sentences
    r = []
    inputs1 = tokenizer(text, return_tensors="pt")
    for _ in range(n):
        inputs1.to(device)
        output = tokenizer.batch_decode(model.generate(**inputs1, top_p=0.95, do_sample=True, max_length=128), skip_special_tokens=True)[0].replace(" ","").replace(text, "")  # strip spaces and drop the original input text
        r.append(output)
    # rank the candidate sentences by encoder similarity
    r = [i for i in set(r) if i != text and len(i) > 0]
    r = [text] + r
    inputs2 = tokenizer(r, padding=True, return_tensors="pt")
    with torch.no_grad():
        inputs2.to(device)
        outputs = model(**inputs2)
        Z = outputs.pooler_output.cpu().numpy()
    Z /= (Z**2).sum(axis=1, keepdims=True)**0.5
    argsort = np.dot(Z[1:], -Z[0]).argsort()
    return [r[i + 1] for i in argsort[:k]]
out = gen_synonyms("广州和深圳哪个好?")
print(out)
# ['深圳和广州哪个好?',
# '广州和深圳哪个好',
# '深圳和广州哪个好',
# '深圳和广州哪个比较好。',
# '深圳和广州哪个最好?',
# '深圳和广州哪个比较好',
# '广州和深圳那个比较好',
# '深圳和广州哪个更好?',
# '深圳与广州哪个好',
# '深圳和广州,哪个比较好',
# '广州与深圳比较哪个好',
# '深圳和广州哪里比较好',
# '深圳还是广州比较好?',
# '广州和深圳哪个地方好一些?',
# '广州好还是深圳好?',
# '广州好还是深圳好呢?',
# '广州与深圳哪个地方好点?',
# '深圳好还是广州好',
# '广州好还是深圳好',
# '广州和深圳哪个城市好?']
```
|
Chikashi/t5-small-finetuned-cnndm1-wikihow1
|
Chikashi
| 2022-04-15T03:46:59Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wikihow",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-15T01:03:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikihow
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnndm1-wikihow1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wikihow
type: wikihow
args: all
metrics:
- name: Rouge1
type: rouge
value: 26.6881
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnndm1-wikihow1
This model is a fine-tuned version of [Chikashi/t5-small-finetuned-cnndm1-wikihow0](https://huggingface.co/Chikashi/t5-small-finetuned-cnndm1-wikihow0) on the wikihow dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3727
- Rouge1: 26.6881
- Rouge2: 9.9589
- Rougel: 22.6828
- Rougelsum: 26.0203
- Gen Len: 18.4813
## Model description
More information needed
## Intended uses & limitations
More information needed
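As a quick sanity check, the checkpoint can be used with the summarization pipeline; the article text below is only a placeholder:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Chikashi/t5-small-finetuned-cnndm1-wikihow1")

article = (
    "Cut the peppers into thin strips and heat a tablespoon of oil in a large pan. "
    "Fry the strips over medium heat until they soften, then season with salt and serve warm."
)
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```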
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.56 | 1.0 | 39313 | 2.3727 | 26.6881 | 9.9589 | 22.6828 | 26.0203 | 18.4813 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
nicholasdino/bert-finetuned-ner
|
nicholasdino
| 2022-04-15T02:58:55Z | 2 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-04-15T01:28:07Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: nicholasdino/bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nicholasdino/bert-finetuned-ner
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0241
- Validation Loss: 0.0588
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1017, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1261 | 0.0587 | 0 |
| 0.0397 | 0.0540 | 1 |
| 0.0241 | 0.0588 | 2 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Raychanan/bert-base-chinese-first512
|
Raychanan
| 2022-04-15T02:50:33Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-15T02:10:28Z |
The first 512 tokens of each document are used as input. Training arguments:
training_args = TrainingArguments(
output_dir="./results",
learning_rate=5e-5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
num_train_epochs=5,
weight_decay=0.01,
evaluation_strategy="epoch",
push_to_hub=True
)
|
vabadeh213/autotrain-iris-744122711
|
vabadeh213
| 2022-04-15T02:09:16Z | 2 | 0 |
transformers
|
[
"transformers",
"joblib",
"decision_tree",
"autotrain",
"tabular",
"classification",
"structured-data-classification",
"dataset:vabadeh213/autotrain-data-iris",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | null | 2022-04-15T02:08:51Z |
---
tags:
- autotrain
- tabular
- classification
- structured-data-classification
datasets:
- vabadeh213/autotrain-data-iris
co2_eq_emissions: 0.0006493037575021453
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 744122711
- CO2 Emissions (in grams): 0.0006493037575021453
## Validation Metrics
- Loss: 0.09241962407466127
- Accuracy: 0.9666666666666667
- Macro F1: 0.9665831244778613
- Micro F1: 0.9666666666666667
- Weighted F1: 0.9665831244778613
- Macro Precision: 0.9696969696969697
- Micro Precision: 0.9666666666666667
- Weighted Precision: 0.9696969696969696
- Macro Recall: 0.9666666666666667
- Micro Recall: 0.9666666666666667
- Weighted Recall: 0.9666666666666667
## Usage
```python
import json
import joblib
import pandas as pd

model = joblib.load('model.joblib')
config = json.load(open('config.json'))
features = config['features']

data = pd.read_csv("data.csv")  # path to the CSV file with the input features
data = data[features]
predictions = model.predict(data)  # or model.predict_proba(data)
```
|
Raychanan/COVID_RandomOver
|
Raychanan
| 2022-04-15T01:24:46Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-15T00:42:32Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [hfl/chinese-bert-wwm-ext](https://huggingface.co/hfl/chinese-bert-wwm-ext) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4235
- F1: 0.9546
## Model description
More information needed
## Intended uses & limitations
More information needed
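A minimal sketch for scoring a single Chinese sentence with this checkpoint, assuming the repository id matches this model card (the example sentence is a placeholder and the label mapping is not documented here):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "Raychanan/COVID_RandomOver"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("今天去医院做了核酸检测。", return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class index
```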
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.1307 | 1.0 | 3268 | 0.9040 | 0.0 |
| 0.8795 | 2.0 | 6536 | 0.5532 | 0.9546 |
| 0.8183 | 3.0 | 9804 | 0.3641 | 0.9546 |
| 1.0074 | 4.0 | 13072 | 0.3998 | 0.9546 |
| 0.7947 | 5.0 | 16340 | 0.4235 | 0.9546 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Raychanan/COVID
|
Raychanan
| 2022-04-14T23:55:50Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-14T23:32:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [hfl/chinese-bert-wwm-ext](https://huggingface.co/hfl/chinese-bert-wwm-ext) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5193
- F1: 0.9546
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3803 | 1.0 | 1792 | 0.5110 | 0.9546 |
| 0.4129 | 2.0 | 3584 | 0.5256 | 0.9546 |
| 0.4804 | 3.0 | 5376 | 0.5305 | 0.9546 |
| 0.6571 | 4.0 | 7168 | 0.5583 | 0.9546 |
| 0.6605 | 5.0 | 8960 | 0.5193 | 0.9546 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Chikashi/t5-small-finetuned-cnndm1-wikihow0
|
Chikashi
| 2022-04-14T23:28:23Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-14T17:20:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnndm1-wikihow0
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 24.6116
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnndm1-wikihow0
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6436
- Rouge1: 24.6116
- Rouge2: 11.8788
- Rougel: 20.3665
- Rougelsum: 23.2474
- Gen Len: 18.9998
## Model description
More information needed
## Intended uses & limitations
More information needed
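Since this is a T5 summarization checkpoint, it can also be driven directly through the tokenizer and `generate`; the `summarize:` prefix follows the usual T5 convention and the input text is a placeholder:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "Chikashi/t5-small-finetuned-cnndm1-wikihow0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "summarize: " + (
    "The city council met on Tuesday to discuss the new transit plan. "
    "Members voted to extend the bus line and to add two new stations by next year."
)
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```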
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.8208 | 1.0 | 71779 | 1.6436 | 24.6116 | 11.8788 | 20.3665 | 23.2474 | 18.9998 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Adrian/distilbert-base-uncased-finetuned-emotion
|
Adrian
| 2022-04-14T22:11:34Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-14T21:58:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9275
- name: F1
type: f1
value: 0.927345202022014
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2071
- Accuracy: 0.9275
- F1: 0.9273
## Model description
More information needed
## Intended uses & limitations
More information needed
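A minimal sketch with the text-classification pipeline (the example sentence is a placeholder; label names depend on the model config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Adrian/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't believe we finally won the championship!"))
# returns a list like [{'label': ..., 'score': ...}] with labels from the model's id2label mapping
```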
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8153 | 1.0 | 250 | 0.2942 | 0.9125 | 0.9102 |
| 0.2406 | 2.0 | 500 | 0.2071 | 0.9275 | 0.9273 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
brad1141/oldData_BERT
|
brad1141
| 2022-04-14T21:27:01Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-14T20:35:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: oldData_BERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# oldData_BERT
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0616
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2348 | 1.0 | 1125 | 1.0185 |
| 1.0082 | 2.0 | 2250 | 0.7174 |
| 0.699 | 3.0 | 3375 | 0.3657 |
| 0.45 | 4.0 | 4500 | 0.1880 |
| 0.2915 | 5.0 | 5625 | 0.1140 |
| 0.2056 | 6.0 | 6750 | 0.0708 |
| 0.1312 | 7.0 | 7875 | 0.0616 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AhmedSayeem/VIT_Basic
|
AhmedSayeem
| 2022-04-14T19:01:22Z | 165 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-04-14T19:01:13Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: VIT_Basic
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9107142686843872
---
# VIT_Basic
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
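As a rough usage sketch, the classifier can be called through the image-classification pipeline; `example.jpg` is a placeholder path to a local image:
```python
from PIL import Image
from transformers import pipeline

classifier = pipeline("image-classification", model="AhmedSayeem/VIT_Basic")
image = Image.open("example.jpg")  # placeholder path
print(classifier(image))  # list of {'label', 'score'} dicts for the trained classes
```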
## Example Images
#### chairs

#### hot dog

#### ice cream

#### ladders

#### tables

|
Tianle/distilbert-base-uncased-finetuned-squad
|
Tianle
| 2022-04-14T18:59:38Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-04-13T17:56:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2169
## Model description
More information needed
## Intended uses & limitations
More information needed
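A minimal sketch with the question-answering pipeline; the question and context below are placeholders:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Tianle/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result["answer"], result["score"])
```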
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2631 | 1.0 | 5533 | 1.2169 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
luquesky/distilbert-base-uncased-finetuned-emotion
|
luquesky
| 2022-04-14T17:48:19Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-13T11:25:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.934
- name: F1
type: f1
value: 0.9337817808480242
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2155
- Accuracy: 0.934
- F1: 0.9338
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1768 | 1.0 | 250 | 0.1867 | 0.924 | 0.9235 |
| 0.1227 | 2.0 | 500 | 0.1588 | 0.934 | 0.9346 |
| 0.1031 | 3.0 | 750 | 0.1656 | 0.931 | 0.9306 |
| 0.0843 | 4.0 | 1000 | 0.1662 | 0.9395 | 0.9392 |
| 0.0662 | 5.0 | 1250 | 0.1714 | 0.9325 | 0.9326 |
| 0.0504 | 6.0 | 1500 | 0.1821 | 0.934 | 0.9338 |
| 0.0429 | 7.0 | 1750 | 0.2038 | 0.933 | 0.9324 |
| 0.0342 | 8.0 | 2000 | 0.2054 | 0.938 | 0.9379 |
| 0.0296 | 9.0 | 2250 | 0.2128 | 0.9345 | 0.9345 |
| 0.0211 | 10.0 | 2500 | 0.2155 | 0.934 | 0.9338 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
florentiino/DialoGPT-small-rick
|
florentiino
| 2022-04-14T15:24:54Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-04-14T13:56:29Z |
---
tags:
- conversational
---
# My Awesome Model that talks like Rick but thinks that your name is Morty
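A rough chat sketch using the conversational pipeline (the opening line is a placeholder):
```python
from transformers import Conversation, pipeline

chatbot = pipeline("conversational", model="florentiino/DialoGPT-small-rick")
conversation = Conversation("Hi Rick, what are we building today?")
print(chatbot(conversation))  # the Conversation object now contains the generated reply
```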
|
Ning-fish/xlm-roberta-base-finetuned-panx-de
|
Ning-fish
| 2022-04-14T15:17:38Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-04-14T13:02:31Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8591260810195721
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1352
- F1: 0.8591
## Model description
More information needed
## Intended uses & limitations
More information needed
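A minimal sketch with the token-classification pipeline; the German sentence is a placeholder and the entity labels come from the model's id2label mapping:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Ning-fish/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel besuchte das Büro von Siemens in München."))
```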
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.257 | 1.0 | 525 | 0.1512 | 0.8302 |
| 0.1305 | 2.0 | 1050 | 0.1401 | 0.8447 |
| 0.0817 | 3.0 | 1575 | 0.1352 | 0.8591 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
anton-l/xtreme_s_xlsr_300m_fleurs_asr
|
anton-l
| 2022-04-14T14:49:44Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-04-10T17:26:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: xtreme_s_xlsr_300m_fleurs_asr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtreme_s_xlsr_300m_fleurs_asr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Cer: 0.3330
- Loss: 1.2864
- Wer: 0.8344
## Model description
More information needed
## Intended uses & limitations
More information needed
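A minimal sketch with the automatic-speech-recognition pipeline; `sample.wav` is a placeholder path to a 16 kHz mono recording:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="anton-l/xtreme_s_xlsr_300m_fleurs_asr")
print(asr("sample.wav")["text"])  # transcription of the placeholder audio file
```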
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Cer | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:------:|:---------------:|:------:|
| 4.677 | 0.13 | 1000 | 1.0 | 3.2323 | 1.0 |
| 4.1512 | 0.26 | 2000 | 0.5098 | 1.7858 | 0.9869 |
| 1.119 | 0.39 | 3000 | 0.4412 | 1.6628 | 0.9063 |
| 0.8573 | 0.52 | 4000 | 0.3588 | 1.3440 | 0.9016 |
| 1.0232 | 0.65 | 5000 | 0.3690 | 1.3004 | 0.8775 |
| 0.6328 | 0.78 | 6000 | 0.3354 | 1.2219 | 0.8331 |
| 0.6636 | 0.91 | 7000 | 0.3604 | 1.2839 | 0.8637 |
| 0.6536 | 1.04 | 8000 | 0.3420 | 1.2481 | 0.8504 |
| 0.5002 | 1.17 | 9000 | 0.3518 | 1.2514 | 0.8403 |
| 0.4785 | 1.3 | 10000 | 0.3399 | 1.2409 | 0.8570 |
| 0.517 | 1.43 | 11000 | 0.3599 | 1.3058 | 0.8654 |
| 0.506 | 1.56 | 12000 | 0.3484 | 1.2350 | 0.8441 |
| 0.4013 | 1.69 | 13000 | 0.3327 | 1.1982 | 0.8246 |
| 0.3521 | 1.82 | 14000 | 0.3270 | 1.1653 | 0.8265 |
| 0.4265 | 1.95 | 15000 | 0.3562 | 1.2647 | 0.8564 |
| 0.3949 | 2.08 | 16000 | 0.3490 | 1.2988 | 0.8480 |
| 0.3059 | 2.21 | 17000 | 0.3327 | 1.2332 | 0.8323 |
| 0.3618 | 2.34 | 18000 | 0.3480 | 1.2394 | 0.8517 |
| 0.2567 | 2.47 | 19000 | 0.3365 | 1.2294 | 0.8394 |
| 0.3501 | 2.6 | 20000 | 0.3271 | 1.1853 | 0.8250 |
| 0.2766 | 2.73 | 21000 | 0.3425 | 1.2339 | 0.8443 |
| 0.3396 | 2.86 | 22000 | 0.3501 | 1.2768 | 0.8669 |
| 0.3566 | 2.99 | 23000 | 0.3477 | 1.2648 | 0.8710 |
| 0.3166 | 3.12 | 24000 | 0.3550 | 1.3773 | 0.8641 |
| 0.2388 | 3.25 | 25000 | 0.3301 | 1.2374 | 0.8316 |
| 0.2057 | 3.38 | 26000 | 0.3429 | 1.2846 | 0.8560 |
| 0.2264 | 3.51 | 27000 | 0.3469 | 1.2676 | 0.8542 |
| 0.1998 | 3.64 | 28000 | 0.3531 | 1.3365 | 0.8655 |
| 0.2701 | 3.77 | 29000 | 0.3518 | 1.3124 | 0.8711 |
| 0.18 | 3.9 | 30000 | 0.3498 | 1.3095 | 0.8648 |
| 0.1337 | 4.03 | 31000 | 0.3397 | 1.2941 | 0.8452 |
| 0.162 | 4.16 | 32000 | 0.3320 | 1.2942 | 0.8295 |
| 0.2776 | 4.29 | 33000 | 0.3275 | 1.2690 | 0.8276 |
| 0.1634 | 4.42 | 34000 | 0.3307 | 1.3145 | 0.8331 |
| 0.2172 | 4.54 | 35000 | 0.3334 | 1.3031 | 0.8435 |
| 0.1305 | 4.67 | 36000 | 0.3303 | 1.2768 | 0.8321 |
| 0.1436 | 4.8 | 37000 | 0.3353 | 1.2968 | 0.8416 |
| 0.134 | 4.93 | 38000 | 0.3330 | 1.2864 | 0.8344 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.1+cu111
- Datasets 1.18.4.dev0
- Tokenizers 0.11.6
|
jogonba2/barthez-deft-linguistique
|
jogonba2
| 2022-04-14T14:04:46Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: barthez-deft-linguistique
results:
- task:
name: Summarization
type: summarization
metrics:
- name: Rouge1
type: rouge
value: 41.989
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# barthez-deft-linguistique
This model is a fine-tuned version of [moussaKam/barthez](https://huggingface.co/moussaKam/barthez) on an unknown dataset.
**Note**: this model is one of the preliminary experiments and it underperforms the models published in the paper (using [MBartHez](https://huggingface.co/moussaKam/mbarthez) and HAL/Wiki pre-training + copy mechanisms)
It achieves the following results on the evaluation set:
- Loss: 1.7596
- Rouge1: 41.989
- Rouge2: 22.4524
- Rougel: 32.7966
- Rougelsum: 32.7953
- Gen Len: 22.1549
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 3.0569 | 1.0 | 108 | 2.0282 | 31.6993 | 14.9483 | 25.5565 | 25.4379 | 18.3803 |
| 2.2892 | 2.0 | 216 | 1.8553 | 35.2563 | 18.019 | 28.3135 | 28.2927 | 18.507 |
| 1.9062 | 3.0 | 324 | 1.7696 | 37.4613 | 18.1488 | 28.9959 | 29.0134 | 19.5352 |
| 1.716 | 4.0 | 432 | 1.7641 | 37.6903 | 18.7496 | 30.1097 | 30.1027 | 18.9577 |
| 1.5722 | 5.0 | 540 | 1.7781 | 38.1013 | 19.8291 | 29.8142 | 29.802 | 19.169 |
| 1.4655 | 6.0 | 648 | 1.7661 | 38.3557 | 20.3309 | 30.5068 | 30.4728 | 19.3662 |
| 1.3507 | 7.0 | 756 | 1.7596 | 39.7409 | 20.2998 | 31.0849 | 31.1152 | 19.3944 |
| 1.2874 | 8.0 | 864 | 1.7706 | 37.7846 | 20.3457 | 30.6826 | 30.6321 | 19.4789 |
| 1.2641 | 9.0 | 972 | 1.7848 | 38.7421 | 19.5701 | 30.5798 | 30.6305 | 19.3944 |
| 1.1192 | 10.0 | 1080 | 1.8008 | 40.3313 | 20.3378 | 31.8325 | 31.8648 | 19.5493 |
| 1.0724 | 11.0 | 1188 | 1.8450 | 38.9612 | 20.5719 | 31.4496 | 31.3144 | 19.8592 |
| 1.0077 | 12.0 | 1296 | 1.8364 | 36.5997 | 18.46 | 29.1808 | 29.1705 | 19.7324 |
| 0.9362 | 13.0 | 1404 | 1.8677 | 38.0371 | 19.2321 | 30.3893 | 30.3926 | 19.6338 |
| 0.8868 | 14.0 | 1512 | 1.9154 | 36.4737 | 18.5314 | 29.325 | 29.3634 | 19.6479 |
| 0.8335 | 15.0 | 1620 | 1.9344 | 35.7583 | 18.0687 | 27.9666 | 27.8675 | 19.8028 |
| 0.8305 | 16.0 | 1728 | 1.9556 | 37.2137 | 18.2199 | 29.5959 | 29.5799 | 19.9577 |
| 0.8057 | 17.0 | 1836 | 1.9793 | 36.6834 | 17.8505 | 28.6701 | 28.7145 | 19.7324 |
| 0.7869 | 18.0 | 1944 | 1.9994 | 37.5918 | 19.1984 | 28.8569 | 28.8278 | 19.7606 |
| 0.7549 | 19.0 | 2052 | 2.0117 | 37.3278 | 18.5169 | 28.778 | 28.7737 | 19.8028 |
| 0.7497 | 20.0 | 2160 | 2.0189 | 37.7513 | 19.1813 | 29.3675 | 29.402 | 19.6901 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1+cu110
- Datasets 1.11.0
- Tokenizers 0.10.3
|
jogonba2/barthez-deft-archeologie
|
jogonba2
| 2022-04-14T14:04:35Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: barthez-deft-archeologie
results:
- task:
name: Summarization
type: summarization
metrics:
- name: Rouge1
type: rouge
value: 37.1845
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# barthez-deft-archeologie
This model is a fine-tuned version of [moussaKam/barthez](https://huggingface.co/moussaKam/barthez) on an unknown dataset.
**Note**: this model is one of the preliminary experiments and it underperforms the models published in the paper (using [MBartHez](https://huggingface.co/moussaKam/mbarthez) and HAL/Wiki pre-training + copy mechanisms)
It achieves the following results on the evaluation set:
- Loss: 2.0733
- Rouge1: 37.1845
- Rouge2: 16.9534
- Rougel: 28.8416
- Rougelsum: 29.077
- Gen Len: 34.4028
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 3.4832 | 1.0 | 108 | 2.4237 | 22.6662 | 10.009 | 19.8729 | 19.8814 | 15.8333 |
| 2.557 | 2.0 | 216 | 2.2328 | 24.8102 | 11.9911 | 20.4773 | 20.696 | 19.0139 |
| 2.2702 | 3.0 | 324 | 2.2002 | 25.6482 | 11.6191 | 21.8383 | 21.9341 | 18.1944 |
| 2.1119 | 4.0 | 432 | 2.1266 | 25.5806 | 11.9765 | 21.3973 | 21.3503 | 19.4306 |
| 1.9582 | 5.0 | 540 | 2.1072 | 25.6578 | 12.2709 | 22.182 | 22.0548 | 19.1528 |
| 1.8137 | 6.0 | 648 | 2.1008 | 26.5272 | 11.4033 | 22.359 | 22.3259 | 19.4722 |
| 1.7725 | 7.0 | 756 | 2.1074 | 25.0405 | 11.1773 | 21.1369 | 21.1847 | 19.1806 |
| 1.6772 | 8.0 | 864 | 2.0959 | 26.5237 | 11.6028 | 22.5018 | 22.3931 | 19.3333 |
| 1.5798 | 9.0 | 972 | 2.0976 | 27.7443 | 11.9898 | 22.4052 | 22.2954 | 19.7222 |
| 1.4753 | 10.0 | 1080 | 2.0733 | 28.3502 | 12.9162 | 22.6352 | 22.6015 | 19.8194 |
| 1.4646 | 11.0 | 1188 | 2.1091 | 27.9198 | 12.8591 | 23.0718 | 23.0779 | 19.6111 |
| 1.4082 | 12.0 | 1296 | 2.1036 | 28.8509 | 13.0987 | 23.4189 | 23.5044 | 19.4861 |
| 1.2862 | 13.0 | 1404 | 2.1222 | 28.6641 | 12.8157 | 22.6799 | 22.7051 | 19.8611 |
| 1.2612 | 14.0 | 1512 | 2.1487 | 26.9709 | 11.6084 | 22.0312 | 22.0543 | 19.875 |
| 1.2327 | 15.0 | 1620 | 2.1808 | 28.218 | 12.6239 | 22.7372 | 22.7881 | 19.7361 |
| 1.2264 | 16.0 | 1728 | 2.1778 | 26.7393 | 11.4474 | 21.6057 | 21.555 | 19.7639 |
| 1.1848 | 17.0 | 1836 | 2.1995 | 27.6902 | 12.1082 | 22.0406 | 22.0101 | 19.6806 |
| 1.133 | 18.0 | 1944 | 2.2038 | 27.0402 | 12.1846 | 21.7793 | 21.7513 | 19.8056 |
| 1.168 | 19.0 | 2052 | 2.2116 | 27.5149 | 11.9876 | 22.1113 | 22.1527 | 19.7222 |
| 1.1206 | 20.0 | 2160 | 2.2133 | 28.2321 | 12.677 | 22.749 | 22.8485 | 19.5972 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1+cu110
- Datasets 1.11.0
- Tokenizers 0.10.3
|
obokkkk/bert-base-multilingual-cased-finetuned-klue
|
obokkkk
| 2022-04-14T12:57:25Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-04-14T03:17:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-multilingual-cased-finetuned-klue
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-klue
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4197
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 36
- total_train_batch_size: 288
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6323 | 5.0 | 500 | 1.6799 |
| 1.3765 | 10.0 | 1000 | 1.3027 |
| 0.8433 | 15.0 | 1500 | 1.2946 |
| 0.5224 | 20.0 | 2000 | 1.4197 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.12.1
|
ClaireV/MLMA_Lab8
|
ClaireV
| 2022-04-14T12:46:24Z | 2 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-04-13T20:33:42Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ClaireV/MLMA_Lab8
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ClaireV/MLMA_Lab8
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0232
- Validation Loss: 0.0598
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1017, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1262 | 0.0666 | 0 |
| 0.0380 | 0.0571 | 1 |
| 0.0232 | 0.0598 | 2 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
huggingtweets/elonmusk-joebiden
|
huggingtweets
| 2022-04-14T12:38:39Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-04-14T12:38:32Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1503591435324563456/foUrqiEw_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1308769664240160770/AfgzWVE7_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Joe Biden</div>
<div style="text-align: center; font-size: 14px;">@elonmusk-joebiden</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & Joe Biden.
| Data | Elon Musk | Joe Biden |
| --- | --- | --- |
| Tweets downloaded | 200 | 3249 |
| Retweets | 15 | 571 |
| Short tweets | 60 | 34 |
| Tweets kept | 125 | 2644 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ne2s3c4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-joebiden's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ka86kb6l) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ka86kb6l/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/elonmusk-joebiden')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Wanjiru/bert-base-multilingual_en_ner_
|
Wanjiru
| 2022-04-14T12:33:55Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-04-12T16:05:06Z |
| Label ID | Label Name |
|:--------:|:----------:|
| 0 | O |
| 1 | B-PER |
| 2 | I-PER |
| 3 | B-ORG |
| 4 | I-ORG |
| 5 | B-LOC |
| 6 | I-LOC |
|
zzzzzzttt/swin-tiny-patch4-window7-224-finetuned-eurosat
|
zzzzzzttt
| 2022-04-14T12:20:10Z | 81 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:image_folder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-04-14T09:04:32Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9762962962962963
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0654
- Accuracy: 0.9763
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2431 | 1.0 | 190 | 0.1119 | 0.9607 |
| 0.1682 | 2.0 | 380 | 0.0921 | 0.9693 |
| 0.1644 | 3.0 | 570 | 0.0654 | 0.9763 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Felix92/doctr-dummy-tf-linknet-resnet34
|
Felix92
| 2022-04-14T12:18:22Z | 2 | 1 |
transformers
|
[
"transformers",
"en",
"endpoints_compatible",
"region:us"
] | null | 2022-04-14T12:18:14Z |
---
language: en
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: detection
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
Felix92/doctr-dummy-tf-linknet-resnet50
|
Felix92
| 2022-04-14T11:37:56Z | 1 | 0 |
transformers
|
[
"transformers",
"en",
"endpoints_compatible",
"region:us"
] | null | 2022-04-14T11:37:48Z |
---
language: en
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: detection
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
Felix92/doctr-dummy-tf-linknet-resnet18
|
Felix92
| 2022-04-14T11:29:46Z | 3 | 0 |
transformers
|
[
"transformers",
"en",
"endpoints_compatible",
"region:us"
] | null | 2022-04-14T11:29:39Z |
---
language: en
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: detection
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
Felix92/doctr-dummy-tf-db-mobilenet-v3-large
|
Felix92
| 2022-04-14T11:28:25Z | 1 | 0 |
transformers
|
[
"transformers",
"en",
"endpoints_compatible",
"region:us"
] | null | 2022-04-14T11:28:18Z |
---
language: en
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: detection
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
Felix92/doctr-dummy-tf-mobilenet-v3-large
|
Felix92
| 2022-04-14T11:23:03Z | 1 | 0 |
transformers
|
[
"transformers",
"en",
"endpoints_compatible",
"region:us"
] | null | 2022-04-14T11:22:55Z |
---
language: en
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: classification
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
Felix92/doctr-dummy-tf-magc-resnet31
|
Felix92
| 2022-04-14T11:13:32Z | 3 | 0 |
transformers
|
[
"transformers",
"en",
"endpoints_compatible",
"region:us"
] | null | 2022-04-14T11:13:24Z |
---
language: en
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: classification
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
Felix92/doctr-dummy-tf-resnet50
|
Felix92
| 2022-04-14T11:09:30Z | 1 | 0 |
transformers
|
[
"transformers",
"en",
"endpoints_compatible",
"region:us"
] | null | 2022-04-14T11:09:22Z |
---
language: en
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: classification
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
gary109/wav2vec2-large-xlsr-53-MIR_ST500_ASR
|
gary109
| 2022-04-14T11:05:02Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"/workspace/datasets/datasets/MIR_ST500/MIR_ST500.py",
"generated_from_trainer",
"dataset:mir_st500",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-04-14T03:20:19Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- /workspace/datasets/datasets/MIR_ST500/MIR_ST500.py
- generated_from_trainer
datasets:
- mir_st500
model-index:
- name: wav2vec2-large-xlsr-53-MIR_ST500_ASR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-MIR_ST500_ASR
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MIR_ST500 ASR dataset (loaded from /workspace/datasets/datasets/MIR_ST500/MIR_ST500.py).
It achieves the following results on the evaluation set:
- Loss: 0.5180
- Wer: 0.5824
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 8
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 56.764 | 0.13 | 100 | 24.4254 | 0.9990 |
| 7.5081 | 0.27 | 200 | 2.9111 | 1.0 |
| 3.4899 | 0.4 | 300 | 2.1361 | 1.0 |
| 2.4094 | 0.53 | 400 | 1.9088 | 1.0 |
| 2.6764 | 0.67 | 500 | 1.8543 | 1.0 |
| 3.3107 | 0.8 | 600 | 1.7979 | 1.0 |
| 2.2856 | 0.93 | 700 | 1.7571 | 1.0 |
| 1.856 | 1.07 | 800 | 1.7351 | 0.9648 |
| 1.8882 | 1.2 | 900 | 1.7181 | 0.9654 |
| 2.1731 | 1.33 | 1000 | 1.6736 | 0.9637 |
| 1.8252 | 1.46 | 1100 | 1.3468 | 0.9647 |
| 1.9092 | 1.6 | 1200 | 1.3302 | 0.9627 |
| 1.9435 | 1.73 | 1300 | 1.2428 | 0.9634 |
| 1.3027 | 1.86 | 1400 | 1.2133 | 0.9644 |
| 1.3438 | 2.0 | 1500 | 1.2002 | 0.9635 |
| 1.2161 | 2.13 | 1600 | 1.1901 | 0.9636 |
| 1.203 | 2.26 | 1700 | 1.1620 | 0.9616 |
| 1.1159 | 2.4 | 1800 | 1.1660 | 0.9598 |
| 1.1466 | 2.53 | 1900 | 1.2089 | 0.9605 |
| 1.0563 | 2.66 | 2000 | 1.1732 | 0.9603 |
| 1.1019 | 2.8 | 2100 | 1.1468 | 0.9612 |
| 1.029 | 2.93 | 2200 | 1.1188 | 0.9622 |
| 1.0079 | 3.06 | 2300 | 1.0604 | 0.9617 |
| 1.0483 | 3.2 | 2400 | 1.0499 | 0.9612 |
| 0.9892 | 3.33 | 2500 | 1.0292 | 0.9606 |
| 0.9556 | 3.46 | 2600 | 1.0228 | 0.9604 |
| 0.9626 | 3.6 | 2700 | 1.0028 | 0.9617 |
| 1.0537 | 3.73 | 2800 | 1.0051 | 0.9608 |
| 1.0648 | 3.86 | 2900 | 0.9723 | 0.9604 |
| 0.8657 | 3.99 | 3000 | 0.9620 | 0.9605 |
| 0.8964 | 4.13 | 3100 | 1.0432 | 0.9612 |
| 0.9639 | 4.26 | 3200 | 0.9322 | 0.9589 |
| 0.8965 | 4.39 | 3300 | 0.9091 | 0.9559 |
| 0.8257 | 4.53 | 3400 | 0.8999 | 0.9499 |
| 0.8002 | 4.66 | 3500 | 0.8754 | 0.9554 |
| 0.7335 | 4.79 | 3600 | 0.8608 | 0.9572 |
| 0.936 | 4.93 | 3700 | 0.8564 | 0.9510 |
| 0.8185 | 5.06 | 3800 | 0.8890 | 0.9517 |
| 0.7422 | 5.19 | 3900 | 0.8262 | 0.9392 |
| 0.7784 | 5.33 | 4000 | 0.8292 | 0.9259 |
| 0.8123 | 5.46 | 4100 | 0.8180 | 0.9374 |
| 0.7256 | 5.59 | 4200 | 0.8158 | 0.9077 |
| 0.7638 | 5.73 | 4300 | 0.8423 | 0.9170 |
| 0.6737 | 5.86 | 4400 | 0.7818 | 0.9182 |
| 0.7644 | 5.99 | 4500 | 0.7692 | 0.8934 |
| 0.7911 | 6.13 | 4600 | 0.7627 | 0.8978 |
| 0.6922 | 6.26 | 4700 | 0.7627 | 0.8906 |
| 0.7369 | 6.39 | 4800 | 0.7570 | 0.8838 |
| 0.6642 | 6.52 | 4900 | 0.9476 | 0.8953 |
| 0.7502 | 6.66 | 5000 | 0.7336 | 0.8955 |
| 0.6243 | 6.79 | 5100 | 0.7583 | 0.8896 |
| 0.6912 | 6.92 | 5200 | 0.7764 | 0.8761 |
| 0.7744 | 7.06 | 5300 | 0.7615 | 0.8790 |
| 0.6195 | 7.19 | 5400 | 0.7114 | 0.8712 |
| 0.7418 | 7.32 | 5500 | 0.8314 | 0.8864 |
| 0.7658 | 7.46 | 5600 | 0.8531 | 0.8718 |
| 0.6821 | 7.59 | 5700 | 0.9068 | 0.8786 |
| 0.6931 | 7.72 | 5800 | 0.7549 | 0.8645 |
| 0.6771 | 7.86 | 5900 | 0.7138 | 0.8442 |
| 0.6735 | 7.99 | 6000 | 0.6947 | 0.8493 |
| 0.6427 | 8.12 | 6100 | 0.6997 | 0.8475 |
| 0.6988 | 8.26 | 6200 | 0.6814 | 0.8098 |
| 0.6726 | 8.39 | 6300 | 0.6656 | 0.8259 |
| 0.6247 | 8.52 | 6400 | 0.6438 | 0.8314 |
| 0.5101 | 8.66 | 6500 | 0.6323 | 0.8446 |
| 0.5325 | 8.79 | 6600 | 0.6305 | 0.8413 |
| 0.5918 | 8.92 | 6700 | 0.6353 | 0.8076 |
| 0.617 | 9.05 | 6800 | 0.6544 | 0.8118 |
| 0.4861 | 9.19 | 6900 | 0.6174 | 0.8429 |
| 0.6396 | 9.32 | 7000 | 0.6140 | 0.8117 |
| 0.436 | 9.45 | 7100 | 0.6148 | 0.7887 |
| 0.6141 | 9.59 | 7200 | 0.6133 | 0.8007 |
| 0.5781 | 9.72 | 7300 | 0.6135 | 0.8211 |
| 0.52 | 9.85 | 7400 | 0.6155 | 0.8042 |
| 0.6681 | 9.99 | 7500 | 0.6074 | 0.7843 |
| 0.5004 | 10.12 | 7600 | 0.5950 | 0.8035 |
| 0.4993 | 10.25 | 7700 | 0.5888 | 0.7710 |
| 0.4768 | 10.39 | 7800 | 0.5922 | 0.7633 |
| 0.4535 | 10.52 | 7900 | 0.5906 | 0.8030 |
| 0.517 | 10.65 | 8000 | 0.5875 | 0.7823 |
| 0.5894 | 10.79 | 8100 | 0.5882 | 0.7932 |
| 0.6005 | 10.92 | 8200 | 0.5798 | 0.7922 |
| 0.4284 | 11.05 | 8300 | 0.5775 | 0.7701 |
| 0.5163 | 11.19 | 8400 | 0.5715 | 0.7592 |
| 0.4701 | 11.32 | 8500 | 0.5955 | 0.7485 |
| 0.5152 | 11.45 | 8600 | 0.6041 | 0.6914 |
| 0.4442 | 11.58 | 8700 | 0.5614 | 0.7439 |
| 0.4451 | 11.72 | 8800 | 0.5619 | 0.7033 |
| 0.4433 | 11.85 | 8900 | 0.5562 | 0.7246 |
| 0.4799 | 11.98 | 9000 | 0.5834 | 0.7040 |
| 0.4832 | 12.12 | 9100 | 0.5902 | 0.7349 |
| 0.523 | 12.25 | 9200 | 0.5562 | 0.7326 |
| 0.4419 | 12.38 | 9300 | 0.5472 | 0.7326 |
| 0.437 | 12.52 | 9400 | 0.5466 | 0.7100 |
| 0.4797 | 12.65 | 9500 | 0.5470 | 0.6698 |
| 0.3971 | 12.78 | 9600 | 0.5437 | 0.6835 |
| 0.5254 | 12.92 | 9700 | 0.5385 | 0.6747 |
| 0.5046 | 13.05 | 9800 | 0.5330 | 0.6554 |
| 0.4692 | 13.18 | 9900 | 0.5305 | 0.6527 |
| 0.4305 | 13.32 | 10000 | 0.5292 | 0.6314 |
| 0.6132 | 13.45 | 10100 | 0.5405 | 0.6028 |
| 0.4741 | 13.58 | 10200 | 0.5311 | 0.6207 |
| 0.398 | 13.72 | 10300 | 0.5320 | 0.6261 |
| 0.458 | 13.85 | 10400 | 0.5240 | 0.6242 |
| 0.4154 | 13.98 | 10500 | 0.5262 | 0.6215 |
| 0.3702 | 14.11 | 10600 | 0.5206 | 0.6136 |
| 0.427 | 14.25 | 10700 | 0.5231 | 0.6289 |
| 0.4307 | 14.38 | 10800 | 0.5210 | 0.5908 |
| 0.4738 | 14.51 | 10900 | 0.5211 | 0.5826 |
| 0.5522 | 14.65 | 11000 | 0.5193 | 0.5886 |
| 0.4717 | 14.78 | 11100 | 0.5194 | 0.5907 |
| 0.4819 | 14.91 | 11200 | 0.5178 | 0.5870 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.9.1+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
javilonso/Mex_Rbta_Opinion_Polarity
|
javilonso
| 2022-04-14T09:44:12Z | 7 | 1 |
transformers
|
[
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-14T09:04:20Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: javilonso/Mex_Rbta_Opinion_Polarity
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# javilonso/Mex_Rbta_Opinion_Polarity
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4033
- Validation Loss: 0.5572
- Epoch: 1
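A minimal inference sketch (not part of the original card): it assumes the checkpoint loads with the standard `transformers` TensorFlow classes, the Spanish review below is purely illustrative, and the polarity label mapping is not documented here.
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "javilonso/Mex_Rbta_Opinion_Polarity"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

# Illustrative Spanish review; the label meanings are an assumption of this sketch
inputs = tokenizer("El hotel estaba limpio y el personal fue muy amable.", return_tensors="tf")
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])
print(pred, model.config.id2label.get(pred, str(pred)))
```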
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5986, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.5989 | 0.5516 | 0 |
| 0.4033 | 0.5572 | 1 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Matthijs/snacks-classifier
|
Matthijs
| 2022-04-14T09:39:49Z | 90 | 0 |
transformers
|
[
"transformers",
"pytorch",
"swin",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-04-14T09:19:01Z |
`microsoft/swin-tiny-patch4-window7-224` fine-tuned on the `Matthijs/snacks` dataset.
Test set accuracy after 50 epochs: 0.9286.
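A minimal usage sketch (not part of the original card; the image path is a placeholder), assuming the checkpoint works with the standard `transformers` image-classification pipeline:
```python
from transformers import pipeline

# Swin-Tiny fine-tuned on Matthijs/snacks; accepts a local path, URL or PIL image
classifier = pipeline("image-classification", model="Matthijs/snacks-classifier")
print(classifier("path/to/snack_photo.jpg", top_k=3))
```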
|
ndavid/autotrain-trec-fine-bert-739422530
|
ndavid
| 2022-04-14T09:39:42Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"en",
"dataset:ndavid/autotrain-data-trec-fine-bert",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-14T09:37:03Z |
---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ndavid/autotrain-data-trec-fine-bert
co2_eq_emissions: 0.02238820299105448
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 739422530
- CO2 Emissions (in grams): 0.02238820299105448
## Validation Metrics
- Loss: 0.36623290181159973
- Accuracy: 0.9321753515301903
- Macro F1: 0.9066706944656866
- Micro F1: 0.9321753515301903
- Weighted F1: 0.9314858667247282
- Macro Precision: 0.9489233194839841
- Micro Precision: 0.9321753515301903
- Weighted Precision: 0.9347346558570125
- Macro Recall: 0.8842587178845419
- Micro Recall: 0.9321753515301903
- Weighted Recall: 0.9321753515301903
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ndavid/autotrain-trec-fine-bert-739422530
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ndavid/autotrain-trec-fine-bert-739422530", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ndavid/autotrain-trec-fine-bert-739422530", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
Felix92/doctr-dummy-torch-fasterrcnn-mobilenet-v3-large-fpn
|
Felix92
| 2022-04-14T09:28:24Z | 3 | 1 |
transformers
|
[
"transformers",
"pytorch",
"en",
"endpoints_compatible",
"region:us"
] | null | 2022-04-14T09:28:16Z |
---
language: en
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: obj_detection
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
htufgg/roberta-finetuned-CPV_Spanish
|
htufgg
| 2022-04-14T09:01:23Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-13T17:43:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: roberta-finetuned-CPV_Spanish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-CPV_Spanish
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0422
- F1: 0.7739
- Roc Auc: 0.8704
- Accuracy: 0.7201
- Coverage Error: 11.5798
- Label Ranking Average Precision Score: 0.7742
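The coverage-error and label-ranking metrics above suggest a multi-label head over CPV codes, so a hedged inference sketch would apply a sigmoid and a threshold. The 0.5 cut-off and the example tender text are assumptions, not part of the original card:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "htufgg/roberta-finetuned-CPV_Spanish"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Illustrative Spanish tender description
text = "Servicios de mantenimiento y reparación de carreteras"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```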
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | Coverage Error | Label Ranking Average Precision Score |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:--------:|:--------------:|:-------------------------------------:|
| 0.0579 | 1.0 | 2039 | 0.0548 | 0.6327 | 0.7485 | 0.5274 | 21.7879 | 0.5591 |
| 0.0411 | 2.0 | 4078 | 0.0441 | 0.7108 | 0.8027 | 0.6386 | 16.8647 | 0.6732 |
| 0.0294 | 3.0 | 6117 | 0.0398 | 0.7437 | 0.8295 | 0.6857 | 14.6700 | 0.7249 |
| 0.0223 | 4.0 | 8156 | 0.0389 | 0.7568 | 0.8453 | 0.7056 | 13.3552 | 0.7494 |
| 0.0163 | 5.0 | 10195 | 0.0397 | 0.7626 | 0.8569 | 0.7097 | 12.5895 | 0.7620 |
| 0.0132 | 6.0 | 12234 | 0.0395 | 0.7686 | 0.8620 | 0.7126 | 12.1926 | 0.7656 |
| 0.0095 | 7.0 | 14273 | 0.0409 | 0.7669 | 0.8694 | 0.7109 | 11.5978 | 0.7700 |
| 0.0066 | 8.0 | 16312 | 0.0415 | 0.7705 | 0.8726 | 0.7107 | 11.4252 | 0.7714 |
| 0.0055 | 9.0 | 18351 | 0.0417 | 0.7720 | 0.8689 | 0.7163 | 11.6987 | 0.7716 |
| 0.0045 | 10.0 | 20390 | 0.0422 | 0.7739 | 0.8704 | 0.7201 | 11.5798 | 0.7742 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.12.1
|
Toshifumi/bert-base-multilingual-cased-finetuned-emotion
|
Toshifumi
| 2022-04-14T08:27:21Z | 24 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-13T13:33:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: bert-base-multilingual-cased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9195
- name: F1
type: f1
value: 0.9204823251325381
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-emotion
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2369
- Accuracy: 0.9195
- F1: 0.9205
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9212 | 1.0 | 250 | 0.3466 | 0.8965 | 0.8966 |
| 0.2893 | 2.0 | 500 | 0.2369 | 0.9195 | 0.9205 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Felix92/doctr-dummy-torch-mobilenet-v3-small
|
Felix92
| 2022-04-14T08:25:21Z | 146 | 0 |
transformers
|
[
"transformers",
"pytorch",
"en",
"endpoints_compatible",
"region:us"
] | null | 2022-04-14T08:25:15Z |
---
language: en
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: classification
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
Felix92/doctr-dummy-torch-magc-resnet31
|
Felix92
| 2022-04-14T08:18:52Z | 145 | 0 |
transformers
|
[
"transformers",
"pytorch",
"en",
"endpoints_compatible",
"region:us"
] | null | 2022-04-14T08:18:44Z |
---
language: en
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: classification
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
Felix92/doctr-dummy-torch-resnet50
|
Felix92
| 2022-04-14T08:06:25Z | 146 | 0 |
transformers
|
[
"transformers",
"pytorch",
"en",
"endpoints_compatible",
"region:us"
] | null | 2022-04-14T08:06:18Z |
---
language: en
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: classification
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
ASCCCCCCCC/PENGMENGJIE-finetuned-sms
|
ASCCCCCCCC
| 2022-04-14T07:57:02Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-14T06:37:58Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PENGMENGJIE-finetuned-sms
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PENGMENGJIE-finetuned-sms
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0116 | 1.0 | 1250 | 0.0060 | 0.999 | 0.9990 |
| 0.003 | 2.0 | 2500 | 0.0000 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Felix92/doctr-dummy-torch-resnet34
|
Felix92
| 2022-04-14T07:48:34Z | 145 | 0 |
transformers
|
[
"transformers",
"pytorch",
"en",
"endpoints_compatible",
"region:us"
] | null | 2022-04-14T07:48:27Z |
---
language: en
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: classification
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
cj-mills/distilbert-base-uncased-finetuned-clinc
|
cj-mills
| 2022-04-14T07:21:55Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-13T21:50:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9161290322580645
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7796
- Accuracy: 0.9161
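A minimal usage sketch (not in the original card; the utterance is illustrative). The model predicts one of the CLINC intent classes (the `plus` config also includes an out-of-scope class):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="cj-mills/distilbert-base-uncased-finetuned-clinc")
# Illustrative banking-style utterance
print(classifier("Please transfer 100 dollars from my checking account to savings"))
```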
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2938 | 1.0 | 318 | 3.2905 | 0.7410 |
| 2.6346 | 2.0 | 636 | 1.8833 | 0.8326 |
| 1.5554 | 3.0 | 954 | 1.1650 | 0.8926 |
| 1.0189 | 4.0 | 1272 | 0.8636 | 0.9110 |
| 0.8028 | 5.0 | 1590 | 0.7796 | 0.9161 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.12.1
|
csikasote/wav2vec2-large-xlsr-bemba
|
csikasote
| 2022-04-14T07:20:37Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"bem",
"dataset:BembaSpeech",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: bem
datasets:
- BembaSpeech
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Bemba by Claytone Sikasote
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: BembaSpeech bem
type: bembaspeech
args: bem
metrics:
- name: Test WER
type: wer
value: 42.17
---
# Wav2Vec2-Large-XLSR-53-Bemba
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the Bemba language of Zambia using the [BembaSpeech](https://csikasote.github.io/BembaSpeech) dataset. When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("csv", data_files={"test": "/content/test.csv"}, delimiter="\t")["test"] # Adapt the path to test.csv
processor = Wav2Vec2Processor.from_pretrained("csikasote/wav2vec2-large-xlsr-bemba")
model = Wav2Vec2ForCTC.from_pretrained("csikasote/wav2vec2-large-xlsr-bemba")
# BembaSpeech is sampled at 16kHz, so you do not need to resample
#resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = speech_array.squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Bemba test data of BembaSpeech.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("csv", data_files={"test": "/content/test.csv"}, delimiter="\t")["test"]
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("csikasote/wav2vec2-large-xlsr-bemba")
model = Wav2Vec2ForCTC.from_pretrained("csikasote/wav2vec2-large-xlsr-bemba")
model.to("cuda")
chars_to_ignore_regex = '[\,\_\?\.\!\;\:\"\“]'
#resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = speech_array.squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 42.17 %
## Training
The BembaSpeech `train`, `dev` and `test` datasets were used for training, development and evaluation respectively. The script used for evaluating the model on the test dataset can be found [here](https://colab.research.google.com/drive/1aplFHfaXE68HGDwBYV2KqUWPasrk7bXv?usp=sharing).
|
wasifa/fake_news_classifier
|
wasifa
| 2022-04-14T07:01:12Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-04-08T02:38:25Z |
# Fake News Classification
# Dependencies
The project requires Python 3.6 and the latest version of PyTorch.
The models were trained on Kaggle kernels with a GPU.
# Data
The dataset consists of fake and true articles.
# Code
All the solution and notebook files (with cell outputs) are provided.
# Run
Use the following command to open the notebook and train the model:
```
jupyter notebook fake_news_classifier.ipynb
```
|
huggingtweets/credenzaclear2-dril-nia_mp4
|
huggingtweets
| 2022-04-14T04:40:26Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-04-14T04:39:38Z |
---
language: en
thumbnail: http://www.huggingtweets.com/credenzaclear2-dril-nia_mp4/1649911222622/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1510917391533830145/XW-zSFDJ_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1487740104340918272/7c9spp2E_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1511875789213638656/WdSSvAhj_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">wint & Nia & Audrey Horne</div>
<div style="text-align: center; font-size: 14px;">@credenzaclear2-dril-nia_mp4</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from wint & Nia & Audrey Horne.
| Data | wint | Nia | Audrey Horne |
| --- | --- | --- | --- |
| Tweets downloaded | 3229 | 1552 | 626 |
| Retweets | 477 | 28 | 74 |
| Short tweets | 303 | 133 | 124 |
| Tweets kept | 2449 | 1391 | 428 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3rarj99g/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @credenzaclear2-dril-nia_mp4's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/20c2vigo) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/20c2vigo/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/credenzaclear2-dril-nia_mp4')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
eagles/focus_sum
|
eagles
| 2022-04-14T04:26:44Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-13T10:33:00Z |
---
tags:
- generated_from_trainer
model-index:
- name: focus_sum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# focus_sum
This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0575
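A minimal generation sketch (not part of the original card): since the base checkpoint is a multilingual XLSum summarizer, the fine-tuned model is assumed to remain a seq2seq summarizer; the input text and generation settings below are illustrative.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "eagles/focus_sum"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "..."  # text to summarize (placeholder)
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, num_beams=4, max_length=84)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```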
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9644 | 3.75 | 500 | 0.6880 |
| 0.4682 | 7.52 | 1000 | 0.4350 |
| 0.4672 | 11.28 | 1500 | 0.2599 |
| 0.3439 | 15.04 | 2000 | 0.1568 |
| 0.2753 | 18.79 | 2500 | 0.1064 |
| 0.1885 | 22.55 | 3000 | 0.0737 |
| 0.2185 | 26.31 | 3500 | 0.0575 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.12.1
|
jekdoieao/wav2vec2-large-xls-r-300m-turkish-colab
|
jekdoieao
| 2022-04-14T02:33:42Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-04-13T22:21:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3731
- Wer: 0.3635
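A minimal transcription sketch (not part of the original card; the file name is a placeholder, and decoding non-WAV containers requires `ffmpeg`):
```python
from transformers import pipeline

# CTC greedy decoding; the pipeline resamples the audio to 16 kHz
asr = pipeline("automatic-speech-recognition", model="jekdoieao/wav2vec2-large-xls-r-300m-turkish-colab")
print(asr("turkish_sample.wav")["text"])
```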
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.967 | 3.67 | 400 | 0.6661 | 0.6756 |
| 0.3882 | 7.34 | 800 | 0.4310 | 0.4755 |
| 0.1828 | 11.01 | 1200 | 0.4146 | 0.4485 |
| 0.126 | 14.68 | 1600 | 0.4014 | 0.4254 |
| 0.0955 | 18.35 | 2000 | 0.4125 | 0.4040 |
| 0.0749 | 22.02 | 2400 | 0.3912 | 0.3960 |
| 0.0606 | 25.69 | 2800 | 0.3707 | 0.3771 |
| 0.0477 | 29.36 | 3200 | 0.3731 | 0.3635 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
vumichien/tiny-albert
|
vumichien
| 2022-04-14T00:16:10Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"albert",
"token-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-04-13T23:31:03Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: tiny-albert
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tiny-albert
This model is a fine-tuned version of [hf-internal-testing/tiny-albert](https://huggingface.co/hf-internal-testing/tiny-albert) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Tokenizers 0.12.1
|
ales/wav2vec2-cv-be
|
ales
| 2022-04-13T21:33:15Z | 165 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"be",
"dataset:mozilla-foundation/common_voice_8_0",
"license:gpl-3.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-04-13T11:42:20Z |
---
license: gpl-3.0
language:
- be
tags:
- audio
- speech
- automatic-speech-recognition
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
model-index:
- name: wav2vec2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: be
metrics:
- name: Dev WER
type: wer
value: 17.61
- name: Test WER
type: wer
value: 18.7
- name: Dev WER (with LM)
type: wer
value: 11.5
- name: Test WER (with LM)
type: wer
value: 12.4
---
# Automatic Speech Recognition for Belarusian language
Fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the `mozilla-foundation/common_voice_8_0 be` dataset.
The `Train`, `Dev` and `Test` splits were used exactly as they appear in the dataset. No additional data was taken from the `Validated` split,
and only one voicing of each sentence was used - the way the data was split by the [CommonVoice CorporaCreator](https://github.com/common-voice/CorporaCreator).
To build a better model, **one can add further voicings from the `Validated` split** for sentences already present in the `Train`, `Dev` and `Test` splits,
i.e. enlarge those splits.
Language model was built using [KenLM](https://kheafield.com/code/kenlm/estimation/).
5-gram Language model was built on sentences from `Train + (Other - Dev - Test)` splits of `mozilla-foundation/common_voice_8_0 be` dataset.
Source code is available [here](https://github.com/yks72p/stt_be).
## Run model in a browser
This page contains an interactive demo widget that lets you test this model right in a browser.
However, the widget uses the acoustic model only, **without** the language model that significantly improves overall performance.
You can try the **full pipeline of acoustic model + language model** on the following [Spaces page](https://huggingface.co/spaces/ales/wav2vec2-cv-be-lm)
(it also works from a browser).
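For local inference, a minimal sketch using the acoustic model only (no KenLM fusion, so expect the higher no-LM WER; the file name is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="ales/wav2vec2-cv-be")
print(asr("belarusian_sample.wav")["text"])
```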
|
huggingtweets/kc_lyricbot
|
huggingtweets
| 2022-04-13T21:14:37Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-04-13T21:12:47Z |
---
language: en
thumbnail: http://www.huggingtweets.com/kc_lyricbot/1649884470723/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1448393533921112064/q3fCXTyu_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">King Crimson Lyric Bot</div>
<div style="text-align: center; font-size: 14px;">@kc_lyricbot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from King Crimson Lyric Bot.
| Data | King Crimson Lyric Bot |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 0 |
| Short tweets | 231 |
| Tweets kept | 3019 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1yn81k4o/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @kc_lyricbot's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/15ndpk6d) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/15ndpk6d/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/kc_lyricbot')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
javilonso/Mex_Rbta_Opinion_Augmented_Polarity
|
javilonso
| 2022-04-13T20:38:36Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-13T20:16:54Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: javilonso/Mex_Rbta_Opinion_Augmented_Polarity
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# javilonso/Mex_Rbta_Opinion_Augmented_Polarity
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6885
- Validation Loss: 0.6118
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7710, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.6885 | 0.6118 | 0 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
javilonso/Mex_Rbta_TitleWithOpinion_Attraction
|
javilonso
| 2022-04-13T18:44:33Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-13T17:46:47Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: javilonso/Mex_Rbta_TitleWithOpinion_Attraction
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# javilonso/Mex_Rbta_TitleWithOpinion_Attraction
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0064
- Validation Loss: 0.0515
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 8979, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0780 | 0.0650 | 0 |
| 0.0204 | 0.0464 | 1 |
| 0.0064 | 0.0515 | 2 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
javilonso/Mex_Rbta_TitleWithOpinion_Polarity
|
javilonso
| 2022-04-13T17:35:16Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-13T16:55:39Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: javilonso/Mex_Rbta_TitleWithOpinion_Polarity
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# javilonso/Mex_Rbta_TitleWithOpinion_Polarity
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3691
- Validation Loss: 0.5035
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5986, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.5710 | 0.5017 | 0 |
| 0.3691 | 0.5035 | 1 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ParulChaudhari/distilbert-base-uncased-finetuned-squad
|
ParulChaudhari
| 2022-04-13T17:06:14Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-04-11T17:01:40Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ParulChaudhari/distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ParulChaudhari/distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the SQuAD dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.3927
- Validation Loss: 1.1305
- Epoch: 0
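A minimal extractive-QA sketch (not part of the original card): `framework="tf"` is passed because the checkpoint was saved with TensorFlow, and the question/context pair is illustrative.
```python
from transformers import pipeline

qa = pipeline("question-answering",
              model="ParulChaudhari/distilbert-base-uncased-finetuned-squad",
              framework="tf")
result = qa(question="Where do penguins live?",
            context="Penguins are flightless birds that live almost exclusively in the Southern Hemisphere.")
print(result["answer"], result["score"])
```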
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 177048, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.3927 | 1.1305 | 0 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.5.0
- Datasets 2.0.0
- Tokenizers 0.12.1
|
veddm/all-distilroberta-v1-finetuned-DIT-10_epochs
|
veddm
| 2022-04-13T16:31:00Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-04-13T11:19:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: all-distilroberta-v1-finetuned-DIT-10_epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-distilroberta-v1-finetuned-DIT-10_epochs
This model is a fine-tuned version of [sentence-transformers/all-distilroberta-v1](https://huggingface.co/sentence-transformers/all-distilroberta-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0044
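A minimal masked-language-modelling sketch (not part of the original card): the fine-tuning corpus ("DIT") is not described, so the example sentence is purely illustrative; RoBERTa-style checkpoints use `<mask>` as the mask token.
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="veddm/all-distilroberta-v1-finetuned-DIT-10_epochs")
for candidate in fill("The report was submitted to the <mask> for review."):
    print(candidate["token_str"], round(candidate["score"], 3))
```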
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 358 | 0.0196 |
| 0.3013 | 2.0 | 716 | 0.0092 |
| 0.0073 | 3.0 | 1074 | 0.0065 |
| 0.0073 | 4.0 | 1432 | 0.0054 |
| 0.0021 | 5.0 | 1790 | 0.0051 |
| 0.0007 | 6.0 | 2148 | 0.0047 |
| 0.0004 | 7.0 | 2506 | 0.0047 |
| 0.0004 | 8.0 | 2864 | 0.0046 |
| 0.0004 | 9.0 | 3222 | 0.0044 |
| 0.0003 | 10.0 | 3580 | 0.0044 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
|
huggan/pix2pix-maps
|
huggan
| 2022-04-13T16:25:52Z | 0 | 2 | null |
[
"pytorch",
"huggan",
"gan",
"dataset:huggan/maps",
"arxiv:1611.07004",
"license:mit",
"region:us"
] | null | 2022-04-13T08:11:16Z |
---
tags:
- huggan
- gan
datasets:
- huggan/maps
# See a list of available tags here:
# https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts#L12
# task: unconditional-image-generation or conditional-image-generation or image-to-image
license: mit
---
# Pix2Pix trained on the maps dataset
## Model description
This model is a [Pix2Pix](https://arxiv.org/abs/1611.07004) model trained on the [huggan/maps](https://huggingface.co/datasets/huggan/maps) dataset. The goal for the model is to turn a satellite map into a geographic map à la Google Maps, and the other way around.
The model was trained using the [example script](https://github.com/huggingface/community-events/tree/main/huggan/pytorch/pix2pix) provided by HuggingFace as part of the [HugGAN sprint](https://github.com/huggingface/community-events/tree/main/huggan).
## Intended uses & limitations
#### How to use
```python
from huggan.pytorch.pix2pix.modeling_pix2pix import GeneratorUNet
from PIL import Image
from torchvision import transforms
from torchvision.utils import save_image

image = Image.open("...").convert("RGB")
# Preprocessing assumed to mirror the training script: 256x256 input, [-1, 1] normalization
transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
generator = GeneratorUNet.from_pretrained("huggan/pix2pix-maps")
pixel_values = transform(image).unsqueeze(0)
output = generator(pixel_values)
save_image(output, 'output.png', normalize=True)
```
#### Limitations and bias
Provide examples of latent issues and potential remediations.
## Training data
The data used was huggan/maps.
## Training procedure
The following command was used:
```bash
accelerate launch train.py --dataset huggan/maps --push_to_hub --model_name pix2pix-maps --checkpoint_interval 1
```
## Eval results
## Generated Images
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/IsolaZZE16,
author = {Phillip Isola and
Jun{-}Yan Zhu and
Tinghui Zhou and
Alexei A. Efros},
title = {Image-to-Image Translation with Conditional Adversarial Networks},
journal = {CoRR},
volume = {abs/1611.07004},
year = {2016},
url = {http://arxiv.org/abs/1611.07004},
eprinttype = {arXiv},
eprint = {1611.07004},
timestamp = {Mon, 13 Aug 2018 16:49:05 +0200},
biburl = {https://dblp.org/rec/journals/corr/IsolaZZE16.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
potatobunny/results-yelp
|
potatobunny
| 2022-04-13T15:36:11Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-13T15:20:19Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: results-yelp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results-yelp
This model is a fine-tuned version of [textattack/bert-base-uncased-yelp-polarity](https://huggingface.co/textattack/bert-base-uncased-yelp-polarity) on a filtered and manually reviewed Yelp dataset containing restaurant reviews only.
It achieves the following results on the evaluation set:
- Loss: 0.3563
- Accuracy: 0.9302
- Precision: 0.9461
- Recall: 0.9608
- F1: 0.9534
Note: to use this tokenizer, please use the following code to load all the required files:
`tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", config=AutoConfig.from_pretrained("potatobunny/results-yelp"))`
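Putting that note together with the model, a minimal end-to-end sketch (the review text is illustrative; per the card, label 1 is positive and 0 is negative):
```python
from transformers import AutoConfig, AutoTokenizer, AutoModelForSequenceClassification, pipeline

config = AutoConfig.from_pretrained("potatobunny/results-yelp")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", config=config)
model = AutoModelForSequenceClassification.from_pretrained("potatobunny/results-yelp")

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("The ramen was fantastic and the staff were friendly."))
```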
## Model description
This model is fine-tuned on a Yelp dataset with labelled data containing a restaurant review (text) and whether it has a positive (1) or negative (0) sentiment.
## Intended uses & limitations
This is intended to perform text classification, specifically sentiment analysis, on text data obtained from restaurant reviews to determine if the particular review is positive or negative.
## Training and evaluation data
The training and evaluation data were both obtained from the same Yelp dataset. The data was split into 70% training and 30% validation.
<!-- ## Training procedure -->
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
The training loss obtained was 0.265741667.
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.12.1
|
Toshifumi/distilbert-base-multilingual-cased-finetuned-emotion
|
Toshifumi
| 2022-04-13T12:30:50Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-13T12:15:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-multilingual-cased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8885
- name: F1
type: f1
value: 0.8888307905223247
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3702
- Accuracy: 0.8885
- F1: 0.8888
## Model description
More information needed
## Intended uses & limitations
More information needed
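That said, a minimal inference sketch is shown below (the example sentence is illustrative; the returned label names depend on the id2label mapping stored in this checkpoint's config):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Toshifumi/distilbert-base-multilingual-cased-finetuned-emotion",
)
# Returns the predicted emotion label and its score for the input sentence.
print(classifier("I can't wait to see you again!"))
```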
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.1646 | 1.0 | 250 | 0.6190 | 0.8085 | 0.7992 |
| 0.4536 | 2.0 | 500 | 0.3702 | 0.8885 | 0.8888 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
philschmid/MiniLMv2-L12-H384-distilled-finetuned-clinc
|
philschmid
| 2022-04-13T12:07:00Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-13T11:56:01Z |
---
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: MiniLMv2-L12-H384-distilled-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9529032258064516
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLMv2-L12-H384-distilled-finetuned-clinc
This model is a fine-tuned version of [nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3058
- Accuracy: 0.9529
## Model description
More information needed
## Intended uses & limitations
More information needed
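That said, here is a minimal sketch of running intent classification with this checkpoint (the utterance below is illustrative; label names follow the clinc_oos `plus` configuration recorded in the model config):
```python
from transformers import pipeline

intent_classifier = pipeline(
    "text-classification",
    model="philschmid/MiniLMv2-L12-H384-distilled-finetuned-clinc",
)
# Predicts one of the CLINC intents (150 in-scope intents plus out-of-scope).
print(intent_classifier("How do I transfer money to my savings account?"))
```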
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9908 | 1.0 | 239 | 1.6816 | 0.3910 |
| 1.5212 | 2.0 | 478 | 1.2365 | 0.7697 |
| 1.129 | 3.0 | 717 | 0.9209 | 0.8706 |
| 0.8462 | 4.0 | 956 | 0.6978 | 0.9152 |
| 0.6497 | 5.0 | 1195 | 0.5499 | 0.9342 |
| 0.5124 | 6.0 | 1434 | 0.4447 | 0.9445 |
| 0.4196 | 7.0 | 1673 | 0.3797 | 0.9455 |
| 0.3587 | 8.0 | 1912 | 0.3358 | 0.95 |
| 0.3228 | 9.0 | 2151 | 0.3133 | 0.9513 |
| 0.3052 | 10.0 | 2390 | 0.3058 | 0.9529 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
javilonso/classificationEsp1_Augmented_Attraction
|
javilonso
| 2022-04-13T11:39:19Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-13T10:32:42Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: javilonso/classificationEsp1_Augmented_Attraction
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# javilonso/classificationEsp1_Augmented_Attraction
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0078
- Validation Loss: 0.0581
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11565, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1187 | 0.0748 | 0 |
| 0.0323 | 0.0606 | 1 |
| 0.0078 | 0.0581 | 2 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
creat89/NER_FEDA_Cs
|
creat89
| 2022-04-13T09:38:35Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"labse",
"ner",
"multilingual",
"cs",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
license: mit
language:
- multilingual
- cs
tags:
- labse
- ner
---
This is a multilingual NER system trained using a Frustratingly Easy Domain Adaptation architecture. It is based on LaBSE and supports several tagsets, all using the IOBES format:
1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
3. CNEC (LOC, ORG, MEDIA, ART, PER, TIME)
4. Turku (DATE, EVT, LOC, ORG, PER, PRO, TIME)
PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date
You can select the tagset to use in the output by configuring the model. This model handles uppercase words differently.
More information about the model can be found in the paper (https://aclanthology.org/2021.bsnlp-1.12.pdf) and GitHub repository (https://github.com/EMBEDDIA/NER_FEDA).
|
creat89/NER_FEDA_Ru
|
creat89
| 2022-04-13T09:32:54Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"rubert",
"ner",
"ru",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
license: mit
language:
- ru
tags:
- rubert
- ner
---
This is a Russian NER system trained using a Frustratingly Easy Domain Adaptation architecture. It is based on RuBERT and supports several tagsets, all using the IOBES format:
1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
4. CNE5 (GEOPOLIT, LOC, MEDIA, PER, ORG)
5. FactRuEval (LOC, ORG, PER)
PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date, GEOPOLIT: Geopolitical
You can select the tagset to use in the output by configuring the model. This model handles uppercase words differently.
More information about the model can be found in the paper (https://aclanthology.org/2021.bsnlp-1.12.pdf) and GitHub repository (https://github.com/EMBEDDIA/NER_FEDA).
|
creat89/NER_FEDA_Uk
|
creat89
| 2022-04-13T09:29:36Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"labse",
"ner",
"multilingual",
"uk",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
license: mit
language:
- multilingual
- uk
tags:
- labse
- ner
---
This is a multilingual NER system trained using a Frustratingly Easy Domain Adaptation architecture. It is based on LaBSE and supports several tagsets, all using the IOBES format:
1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
3. NER-UK (LOC, MISC, ORG, PER)
4. Turku (DATE, EVT, LOC, ORG, PER, PRO, TIME)
PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date, GEOPOLIT: Geopolitical
You can select the tagset to use in the output by configuring the model. This model handles uppercase words differently.
More information about the model can be found in the paper (https://aclanthology.org/2021.bsnlp-1.12.pdf) and GitHub repository (https://github.com/EMBEDDIA/NER_FEDA).
|
creat89/NER_FEDA_Latin2
|
creat89
| 2022-04-13T09:03:00Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"labse",
"ner",
"multilingual",
"cs",
"pl",
"sl",
"fi",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
license: mit
language:
- multilingual
- cs
- pl
- sl
- fi
tags:
- labse
- ner
---
This is a multilingual NER system trained using a Frustratingly Easy Domain Adaptation architecture. It is based on LaBSE and supports several tagsets, all using the IOBES format:
1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
3. SlavNER 17 (LOC, MISC, ORG, PER)
4. SSJ500k (LOC, MISC, ORG, PER)
5. KPWr (EVT, LOC, ORG, PER, PRO)
6. CNEC (LOC, ORG, MEDIA, ART, PER, TIME)
7. Turku (DATE, EVT, LOC, ORG, PER, PRO, TIME)
PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date
You can select the tagset to use in the output by configuring the model. This model handles uppercase words differently.
More information about the model can be found in the paper (https://aclanthology.org/2021.bsnlp-1.12.pdf) and GitHub repository (https://github.com/EMBEDDIA/NER_FEDA).
|
creat89/NER_FEDA_Latin1
|
creat89
| 2022-04-13T09:02:03Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"labse",
"ner",
"multilingual",
"cs",
"pl",
"sl",
"fi",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
license: mit
language:
- multilingual
- cs
- pl
- sl
- fi
tags:
- labse
- ner
---
This is a multilingual NER system trained using a Frustratingly Easy Domain Adaptation architecture. It is based on LaBSE and supports several tagsets, all using the IOBES format:
1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
3. SlavNER 17 (LOC, MISC, ORG, PER)
4. SSJ500k (LOC, MISC, ORG, PER)
5. KPWr (EVT, LOC, ORG, PER, PRO)
6. CNEC (LOC, ORG, MEDIA, ART, PER, TIME)
7. Turku (DATE, EVT, LOC, ORG, PER, PRO, TIME)
PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date
You can select the tagset to use in the output by configuring the model.
More information about the model can be found in the paper (https://aclanthology.org/2021.bsnlp-1.12.pdf) and GitHub repository (https://github.com/EMBEDDIA/NER_FEDA).
|
patrickvonplaten/bart-large-fp32
|
patrickvonplaten
| 2022-04-13T09:00:04Z | 54 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bart",
"feature-extraction",
"en",
"arxiv:1910.13461",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-04-13T07:53:21Z |
---
license: apache-2.0
language: en
---
**NOTE: This is the FP32 version of [Facebook's official bart-large](https://huggingface.co/facebook/bart-large).**
# BART (large-sized model)
BART model pre-trained on English language. It was introduced in the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Lewis et al. and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/bart).
Disclaimer: The team releasing BART did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.
BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering).
## Intended uses & limitations
You can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=bart) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import BartTokenizer, BartModel
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
model = BartModel.from_pretrained('facebook/bart-large')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
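Since the card also mentions text infilling, here is a rough mask-filling sketch (shown with the original `facebook/bart-large` checkpoint, as in the example above; this FP32 repository is assumed to load the same way):
```python
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
model = BartForConditionalGeneration.from_pretrained('facebook/bart-large')

# BART was pre-trained to reconstruct corrupted text, so it can fill in <mask> spans.
input_ids = tokenizer("UN Chief Says There Is No <mask> in Syria", return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=20)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```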
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1910-13461,
author = {Mike Lewis and
Yinhan Liu and
Naman Goyal and
Marjan Ghazvininejad and
Abdelrahman Mohamed and
Omer Levy and
Veselin Stoyanov and
Luke Zettlemoyer},
title = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language
Generation, Translation, and Comprehension},
journal = {CoRR},
volume = {abs/1910.13461},
year = {2019},
url = {http://arxiv.org/abs/1910.13461},
eprinttype = {arXiv},
eprint = {1910.13461},
timestamp = {Thu, 31 Oct 2019 14:02:26 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
studio-ousia/luke-base
|
studio-ousia
| 2022-04-13T08:59:59Z | 3,493 | 21 |
transformers
|
[
"transformers",
"pytorch",
"luke",
"fill-mask",
"named entity recognition",
"entity typing",
"relation classification",
"question answering",
"en",
"arxiv:1906.08237",
"arxiv:1903.07785",
"arxiv:2002.01808",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://github.com/studio-ousia/luke/raw/master/resources/luke_logo.png
tags:
- luke
- named entity recognition
- entity typing
- relation classification
- question answering
license: apache-2.0
---
## LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention
**LUKE** (**L**anguage **U**nderstanding with **K**nowledge-based
**E**mbeddings) is a new pre-trained contextualized representation of words and
entities based on the transformer architecture. LUKE treats words and entities in a given text as
independent tokens, and outputs contextualized representations of them. LUKE
adopts an entity-aware self-attention mechanism that is an extension of the
self-attention mechanism of the transformer, and considers the types of tokens
(words or entities) when computing attention scores.
LUKE achieves state-of-the-art results on five popular NLP benchmarks including
**[SQuAD v1.1](https://rajpurkar.github.io/SQuAD-explorer/)** (extractive
question answering),
**[CoNLL-2003](https://www.clips.uantwerpen.be/conll2003/ner/)** (named entity
recognition), **[ReCoRD](https://sheng-z.github.io/ReCoRD-explorer/)**
(cloze-style question answering),
**[TACRED](https://nlp.stanford.edu/projects/tacred/)** (relation
classification), and
**[Open Entity](https://www.cs.utexas.edu/~eunsol/html_pages/open_entity.html)**
(entity typing).
Please check the [official repository](https://github.com/studio-ousia/luke) for
more details and updates.
This is the LUKE base model with 12 hidden layers and a hidden size of 768. The total
number of parameters in this model is 253M. It was trained using the December 2018 version of
Wikipedia.
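As an illustrative sketch of the word/entity interface (the entity spans are character offsets into the input text; `LukeTokenizer` and `LukeModel` are assumed to be available in the installed transformers version):
```python
from transformers import LukeTokenizer, LukeModel

tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-base")
model = LukeModel.from_pretrained("studio-ousia/luke-base")

text = "Beyoncé lives in Los Angeles."
entity_spans = [(0, 7), (17, 28)]  # character spans of "Beyoncé" and "Los Angeles"

inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
outputs = model(**inputs)
word_representations = outputs.last_hidden_state           # contextualized word tokens
entity_representations = outputs.entity_last_hidden_state  # contextualized entity tokens
```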
### Experimental results
The experimental results are provided as follows:
| Task | Dataset | Metric | LUKE-large | luke-base | Previous SOTA |
| ------------------------------ | ---------------------------------------------------------------------------- | ------ | ----------------- | --------- | ------------------------------------------------------------------------- |
| Extractive Question Answering | [SQuAD v1.1](https://rajpurkar.github.io/SQuAD-explorer/) | EM/F1 | **90.2**/**95.4** | 86.1/92.3 | 89.9/95.1 ([Yang et al., 2019](https://arxiv.org/abs/1906.08237)) |
| Named Entity Recognition | [CoNLL-2003](https://www.clips.uantwerpen.be/conll2003/ner/) | F1 | **94.3** | 93.3 | 93.5 ([Baevski et al., 2019](https://arxiv.org/abs/1903.07785)) |
| Cloze-style Question Answering | [ReCoRD](https://sheng-z.github.io/ReCoRD-explorer/) | EM/F1 | **90.6**/**91.2** | - | 83.1/83.7 ([Li et al., 2019](https://www.aclweb.org/anthology/D19-6011/)) |
| Relation Classification | [TACRED](https://nlp.stanford.edu/projects/tacred/) | F1 | **72.7** | - | 72.0 ([Wang et al. , 2020](https://arxiv.org/abs/2002.01808)) |
| Fine-grained Entity Typing | [Open Entity](https://www.cs.utexas.edu/~eunsol/html_pages/open_entity.html) | F1 | **78.2** | - | 77.6 ([Wang et al. , 2020](https://arxiv.org/abs/2002.01808)) |
### Citation
If you find LUKE useful for your work, please cite the following paper:
```latex
@inproceedings{yamada2020luke,
title={LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention},
author={Ikuya Yamada and Akari Asai and Hiroyuki Shindo and Hideaki Takeda and Yuji Matsumoto},
booktitle={EMNLP},
year={2020}
}
```
|
lewtun/roberta-large-finetuned-clinc
|
lewtun
| 2022-04-13T08:48:32Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-13T08:40:22Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: roberta-large-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9767741935483871
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-finetuned-clinc
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1545
- Accuracy: 0.9768
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: sagemaker_data_parallel
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 5.0548 | 1.0 | 120 | 5.0359 | 0.0071 |
| 4.4725 | 2.0 | 240 | 2.9385 | 0.7558 |
| 1.8924 | 3.0 | 360 | 0.6456 | 0.9374 |
| 0.4552 | 4.0 | 480 | 0.2297 | 0.9626 |
| 0.1589 | 5.0 | 600 | 0.1545 | 0.9768 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Danni/distilbert-base-uncased-finetuned-cola
|
Danni
| 2022-04-13T07:28:04Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-06T15:04:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.44113488112476795
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4994
- Matthews Correlation: 0.4411
## Model description
More information needed
## Intended uses & limitations
More information needed
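That said, a minimal inference sketch follows; the model was fine-tuned on CoLA (linguistic acceptability), so index 1 of the output logits is assumed to correspond to the "acceptable" class under the standard GLUE label order:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Danni/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("The book was written by the author.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)
print(probs)  # probs[0, 1] ~ probability that the sentence is linguistically acceptable
```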
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5282 | 1.0 | 535 | 0.4994 | 0.4411 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
mimicheng/codeparrot-ds-sample-1ep-12apr
|
mimicheng
| 2022-04-13T07:16:11Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-04-13T03:45:04Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds-sample-1ep-12apr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds-sample-1ep-12apr
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9947
## Model description
More information needed
## Intended uses & limitations
More information needed
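That said, the checkpoint is a GPT-2-style causal language model (the repository name suggests a CodeParrot-style Python corpus), so the standard text-generation pipeline applies; the prompt below is only illustrative:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="mimicheng/codeparrot-ds-sample-1ep-12apr")
completion = generator("def mean(numbers):", max_length=64, num_return_sequences=1)
print(completion[0]["generated_text"])
```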
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- distributed_type: tpu
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8723 | 0.37 | 1000 | 2.5340 |
| 2.1776 | 0.74 | 2000 | 1.9947 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
huggingtweets/radfemman
|
huggingtweets
| 2022-04-13T06:22:23Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-04-13T05:44:47Z |
---
language: en
thumbnail: http://www.huggingtweets.com/radfemman/1649830938917/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1428572680882688005/rqGxWIRJ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Radfem Ally 🇺🇸</div>
<div style="text-align: center; font-size: 14px;">@radfemman</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Radfem Ally 🇺🇸.
| Data | Radfem Ally 🇺🇸 |
| --- | --- |
| Tweets downloaded | 227 |
| Retweets | 33 |
| Short tweets | 14 |
| Tweets kept | 180 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/29ku9tl5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @radfemman's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/33qza7xp) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/33qza7xp/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/radfemman')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|