modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
Xiaoman/NER-CoNLL2003-V4
|
Xiaoman
| 2022-05-14T19:37:35Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-05-14T18:52:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: NER-CoNLL2003-V4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NER-CoNLL2003-V4
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2095
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.961395091713594e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 27
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 14 | 0.3630 |
| No log | 2.0 | 28 | 0.2711 |
| No log | 3.0 | 42 | 0.2407 |
| No log | 4.0 | 56 | 0.2057 |
| No log | 5.0 | 70 | 0.2095 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
anas-awadalla/splinter-large-few-shot-k-16-finetuned-squad-seed-2
|
anas-awadalla
| 2022-05-14T19:36:09Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"splinter",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-05-14T19:27:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-few-shot-k-16-finetuned-squad-seed-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# splinter-large-few-shot-k-16-finetuned-squad-seed-2
This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
claytonsamples/xlm-roberta-base-finetuned-panx-de
|
claytonsamples
| 2022-05-14T19:19:42Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-05-14T18:40:01Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8620945214069894
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1372
- F1: 0.8621
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2575 | 1.0 | 525 | 0.1621 | 0.8292 |
| 0.1287 | 2.0 | 1050 | 0.1378 | 0.8526 |
| 0.0831 | 3.0 | 1575 | 0.1372 | 0.8621 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
syp1229/bert-base-finetuned-koidiom
|
syp1229
| 2022-05-14T16:44:17Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-05-14T16:42:21Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: syp1229/bert-base-finetuned-koidiom
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# syp1229/bert-base-finetuned-koidiom
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.1288
- Validation Loss: 1.8307
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.1288 | 1.8307 | 0 |
### Framework versions
- Transformers 4.19.1
- TensorFlow 2.8.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
akreal/mbart-large-50-finetuned-slurp
|
akreal
| 2022-05-14T16:36:01Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"mbart-50",
"en",
"dataset:SLURP",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-14T15:56:23Z |
---
language:
- en
tags:
- mbart-50
license: apache-2.0
datasets:
- SLURP
metrics:
- accuracy
- slu-f1
---
This model is the `mbart-large-50-many-to-many-mmt` model fine-tuned on the text part of the [SLURP](https://github.com/pswietojanski/slurp) spoken language understanding dataset.
The test-set scores are 85.68% intent accuracy and 79.00% SLU-F1.
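A minimal usage sketch (not part of the original card) with the `transformers` text2text-generation pipeline; the exact format of the generated semantic annotation depends on how the SLURP targets were serialised during fine-tuning, so treat the example as illustrative:
```python
from transformers import pipeline

# Load the fine-tuned mBART-50 model for SLURP-style spoken language understanding.
nlu = pipeline("text2text-generation", model="akreal/mbart-large-50-finetuned-slurp")

# The model maps an utterance transcript to a semantic annotation (intent and slots).
print(nlu("wake me up at seven in the morning")[0]["generated_text"])
```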
|
syp1229/koelectra-base-v3-generator-finetuned-koidiom
|
syp1229
| 2022-05-14T16:14:31Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"electra",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-05-14T16:10:36Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: syp1229/koelectra-base-v3-generator-finetuned-koidiom
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# syp1229/koelectra-base-v3-generator-finetuned-koidiom
This model is a fine-tuned version of [monologg/koelectra-base-v3-generator](https://huggingface.co/monologg/koelectra-base-v3-generator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.4310
- Validation Loss: 2.0533
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.4310 | 2.0533 | 0 |
### Framework versions
- Transformers 4.19.1
- TensorFlow 2.8.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
DBusAI/RPPO-CarRacing-v0-v1
|
DBusAI
| 2022-05-14T16:03:06Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"CarRacing-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-14T16:01:07Z |
---
library_name: stable-baselines3
tags:
- CarRacing-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: RPPO
results:
- metrics:
- type: mean_reward
value: 614.78 +/- 160.84
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CarRacing-v0
type: CarRacing-v0
---
# **RPPO** Agent playing **CarRacing-v0**
This is a trained model of a **RPPO** agent playing **CarRacing-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
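Until the author adds their code, here is a minimal loading sketch, assuming the checkpoint is a `RecurrentPPO` policy saved with `sb3-contrib`; the filename `RPPO-CarRacing-v0.zip` is a guess and may differ from the file actually stored in this repo:
```python
import gym
from huggingface_sb3 import load_from_hub
from sb3_contrib import RecurrentPPO

# Download the checkpoint from the Hub (the filename is an assumption).
checkpoint = load_from_hub(repo_id="DBusAI/RPPO-CarRacing-v0-v1", filename="RPPO-CarRacing-v0.zip")
model = RecurrentPPO.load(checkpoint)

env = gym.make("CarRacing-v0")
obs = env.reset()
# Recurrent policies carry an LSTM state between steps.
lstm_states, done = None, False
while not done:
    action, lstm_states = model.predict(obs, state=lstm_states, deterministic=True)
    obs, reward, done, info = env.step(action)
env.close()
```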
|
DBusAI/RPPO-CarRacing-v0
|
DBusAI
| 2022-05-14T16:00:15Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"CarRacing-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-13T22:52:43Z |
---
library_name: stable-baselines3
tags:
- CarRacing-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: RPPO
results:
- metrics:
- type: mean_reward
value: 614.78 +/- 160.84
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CarRacing-v0
type: CarRacing-v0
---
# **RPPO** Agent playing **CarRacing-v0**
This is a trained model of a **RPPO** agent playing **CarRacing-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
nadirbekovnadir/LunarLander-281_23
|
nadirbekovnadir
| 2022-05-14T15:38:42Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-14T15:38:03Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 278.11 +/- 23.37
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
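Until the author adds their code, a minimal evaluation sketch with Stable-Baselines3; the checkpoint filename is an assumption and should be adjusted to the file actually stored in this repo:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the trained agent (the filename is an assumption).
checkpoint = load_from_hub(repo_id="nadirbekovnadir/LunarLander-281_23", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the mean reward over a few episodes.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```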
|
nadirbekovnadir/LunarLander-283_19
|
nadirbekovnadir
| 2022-05-14T13:25:49Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-14T13:25:08Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 283.38 +/- 17.68
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
nadirbekovnadir/LunarLander-276_21
|
nadirbekovnadir
| 2022-05-14T11:41:56Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-14T11:41:16Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 278.41 +/- 17.89
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
nadirbekovnadir/LunarLander-278_18_2
|
nadirbekovnadir
| 2022-05-14T11:39:44Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-14T11:39:04Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 274.15 +/- 17.03
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
buehlpa/bert-finetuned-ner
|
buehlpa
| 2022-05-14T11:06:59Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-05-14T10:38:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9308580858085809
- name: Recall
type: recall
value: 0.9493436553349041
- name: F1
type: f1
value: 0.9400099983336112
- name: Accuracy
type: accuracy
value: 0.9862541943839407
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0607
- Precision: 0.9309
- Recall: 0.9493
- F1: 0.9400
- Accuracy: 0.9863
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0855 | 1.0 | 1756 | 0.0632 | 0.9191 | 0.9386 | 0.9287 | 0.9832 |
| 0.0414 | 2.0 | 3512 | 0.0572 | 0.9264 | 0.9475 | 0.9368 | 0.9855 |
| 0.0198 | 3.0 | 5268 | 0.0607 | 0.9309 | 0.9493 | 0.9400 | 0.9863 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
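A minimal usage sketch (added for illustration; assumes the standard `transformers` token-classification pipeline works for this checkpoint, and the example sentence is arbitrary):
```python
from transformers import pipeline

# Token-classification pipeline with entity grouping for the fine-tuned NER model.
ner = pipeline(
    "token-classification",
    model="buehlpa/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```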
|
danieleV9H/hubert-base-timit-demo-google-colab-ft30ep_v5
|
danieleV9H
| 2022-05-14T10:32:52Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-12T20:23:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: hubert-base-timit-demo-google-colab-ft30ep_v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-base-timit-demo-google-colab-ft30ep_v5
This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the timit-asr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4763
- Wer: 0.3322
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.9596 | 0.87 | 500 | 3.1237 | 1.0 |
| 2.5388 | 1.73 | 1000 | 1.1689 | 0.9184 |
| 1.0448 | 2.6 | 1500 | 0.6106 | 0.5878 |
| 0.6793 | 3.46 | 2000 | 0.4912 | 0.5200 |
| 0.5234 | 4.33 | 2500 | 0.4529 | 0.4798 |
| 0.4368 | 5.19 | 3000 | 0.4239 | 0.4543 |
| 0.3839 | 6.06 | 3500 | 0.4326 | 0.4339 |
| 0.3315 | 6.92 | 4000 | 0.4265 | 0.4173 |
| 0.2878 | 7.79 | 4500 | 0.4304 | 0.4068 |
| 0.25 | 8.65 | 5000 | 0.4130 | 0.3940 |
| 0.242 | 9.52 | 5500 | 0.4310 | 0.3938 |
| 0.2182 | 10.38 | 6000 | 0.4204 | 0.3843 |
| 0.2063 | 11.25 | 6500 | 0.4449 | 0.3816 |
| 0.2099 | 12.11 | 7000 | 0.4016 | 0.3681 |
| 0.1795 | 12.98 | 7500 | 0.4027 | 0.3647 |
| 0.1604 | 13.84 | 8000 | 0.4294 | 0.3664 |
| 0.1683 | 14.71 | 8500 | 0.4412 | 0.3661 |
| 0.1452 | 15.57 | 9000 | 0.4484 | 0.3588 |
| 0.1491 | 16.44 | 9500 | 0.4508 | 0.3515 |
| 0.1388 | 17.3 | 10000 | 0.4240 | 0.3518 |
| 0.1399 | 18.17 | 10500 | 0.4605 | 0.3513 |
| 0.1265 | 19.03 | 11000 | 0.4412 | 0.3485 |
| 0.1137 | 19.9 | 11500 | 0.4520 | 0.3467 |
| 0.106 | 20.76 | 12000 | 0.4873 | 0.3426 |
| 0.1243 | 21.63 | 12500 | 0.4456 | 0.3396 |
| 0.1055 | 22.49 | 13000 | 0.4819 | 0.3406 |
| 0.1124 | 23.36 | 13500 | 0.4613 | 0.3391 |
| 0.1064 | 24.22 | 14000 | 0.4842 | 0.3430 |
| 0.0875 | 25.09 | 14500 | 0.4661 | 0.3348 |
| 0.086 | 25.95 | 15000 | 0.4724 | 0.3371 |
| 0.0842 | 26.82 | 15500 | 0.4982 | 0.3381 |
| 0.0834 | 27.68 | 16000 | 0.4856 | 0.3337 |
| 0.0918 | 28.55 | 16500 | 0.4783 | 0.3344 |
| 0.0773 | 29.41 | 17000 | 0.4763 | 0.3322 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
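A minimal usage sketch (added for illustration; `sample.wav` is a placeholder path, and the model expects 16 kHz mono audio like the TIMIT recordings it was fine-tuned on):
```python
from transformers import pipeline

# Automatic speech recognition with the fine-tuned HuBERT model.
asr = pipeline(
    "automatic-speech-recognition",
    model="danieleV9H/hubert-base-timit-demo-google-colab-ft30ep_v5",
)

# "sample.wav" is a placeholder for a 16 kHz mono recording.
print(asr("sample.wav")["text"])
```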
|
fgaim/tielectra-small-sentiment
|
fgaim
| 2022-05-14T06:49:29Z | 15 | 1 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"text-classification",
"ti",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language: ti
widget:
- text: "ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር"
metrics:
- f1
- precision
- recall
- accuracy
model-index:
- name: tielectra-small-sentiment
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: F1
type: f1
value: 0.8228962818003914
- name: Precision
type: precision
value: 0.8055555555555556
- name: Recall
type: recall
value: 0.841
- name: Accuracy
type: accuracy
value: 0.819
---
# Sentiment Analysis for Tigrinya with TiELECTRA small
This model is a fine-tuned version of [TiELECTRA small](https://huggingface.co/fgaim/tielectra-small) on a YouTube comments Sentiment Analysis dataset for Tigrinya (Tela et al. 2020).
## Basic usage
```python
from transformers import pipeline
ti_sent = pipeline("sentiment-analysis", model="fgaim/tielectra-small-sentiment")
ti_sent("ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር")
```
## Training
### Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Results
The model achieves the following results on the evaluation set:
- F1: 0.8229
- Precision: 0.8056
- Recall: 0.841
- Accuracy: 0.819
- Loss: 0.4299
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu111
- Datasets 1.10.2
- Tokenizers 0.10.1
## Citation
If you use this model in your product or research, please cite as follows:
```
@article{Fitsum2021TiPLMs,
author={Fitsum Gaim and Wonsuk Yang and Jong C. Park},
title={Monolingual Pre-trained Language Models for Tigrinya},
year=2021,
publisher= {WiNLP 2021/EMNLP 2021}
}
```
## References
```
Tela, A., Woubie, A. and Hautamäki, V. 2020.
Transferring Monolingual Model to Low-Resource Language: The Case of Tigrinya.
ArXiv, abs/2006.07698.
```
|
NeonPigeon/TEST2ppo-LunarLander-v2
|
NeonPigeon
| 2022-05-14T06:48:08Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-14T05:31:06Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 289.62 +/- 18.60
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
fgaim/tiroberta-sentiment
|
fgaim
| 2022-05-14T06:47:23Z | 4 | 2 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"ti",
"dataset:TLMD",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language: ti
widget:
- text: "ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር"
datasets:
- TLMD
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: tiroberta-sentiment
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.828
- name: F1
type: f1
value: 0.8476527900797165
- name: Precision
type: precision
value: 0.760731319554849
- name: Recall
type: recall
value: 0.957
---
# Sentiment Analysis for Tigrinya with TiRoBERTa
This model is a fine-tuned version of [TiRoBERTa](https://huggingface.co/fgaim/roberta-base-tigrinya) on a YouTube comments Sentiment Analysis dataset for Tigrinya (Tela et al. 2020).
## Basic usage
```python
from transformers import pipeline
ti_sent = pipeline("sentiment-analysis", model="fgaim/tiroberta-sentiment")
ti_sent("ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር")
```
## Training
### Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Results
It achieves the following results on the evaluation set:
- F1: 0.8477
- Precision: 0.7607
- Recall: 0.957
- Accuracy: 0.828
- Loss: 0.6796
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu111
- Datasets 1.10.2
- Tokenizers 0.10.1
## Citation
If you use this model in your product or research, please cite as follows:
```
@article{Fitsum2021TiPLMs,
author={Fitsum Gaim and Wonsuk Yang and Jong C. Park},
title={Monolingual Pre-trained Language Models for Tigrinya},
year=2021,
publisher={WiNLP 2021/EMNLP 2021}
}
```
## References
```
Tela, A., Woubie, A. and Hautamäki, V. 2020.
Transferring Monolingual Model to Low-Resource Language: The Case of Tigrinya.
ArXiv, abs/2006.07698.
```
|
fgaim/tielectra-geezswitch
|
fgaim
| 2022-05-14T06:20:23Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"text-classification",
"geezlab",
"ti",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-30T22:42:10Z |
---
language: ti
widget:
- text: "ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር"
- text: "ወአመ ሳብዕት ዕለት ቦዘወፅአ እምውስተ ሕዝብ ከመ ያስተጋብእ ወኢረከበ።"
- text: "እሊ እግል ኖሱ አሳስ ተጠውር ወዐቦት ክምሰልቱ ሸክ ኢወትውዴ።"
- text: "ኣኩኽር ፡ ልሽክክ ናው ጀረቢነዅስክ ክሙኑኽር ክራውል ሕበርሲድኖ ገረሰነኵ።"
- text: "ነገ ለግማሽ ፍፃሜ ያለፉትን አሳውቀንና አስመርጠናችሁ እንሸልማለን።"
tags:
- geezlab
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: geezswitch-tielectra
results: []
license: cc-by-4.0
---
# TiELECTRA-GeezSwitch
This model is a fine-tuned version of [fgaim/tielectra-small](https://huggingface.co/fgaim/tielectra-small) on the [GeezSwitch](https://github.com/fgaim/geezswitch-data) dataset.
It achieves the following results on the test set:
- F1: 0.9844
- Recall: 0.9844
- Precision: 0.9845
- Accuracy: 0.9844
- Loss: 0.2190
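## Basic usage
A minimal sketch (not from the original card), assuming the standard `transformers` text-classification pipeline applies to this checkpoint:
```python
from transformers import pipeline

# Language identification among the related East African languages covered by GeezSwitch.
geez_lid = pipeline("text-classification", model="fgaim/tielectra-geezswitch")
print(geez_lid("ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር"))
```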
## Training
### Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- seed: 42
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
### Citation
If you use this model or the GeezSwitch dataset in your research, please cite as follows:
```bibtex
@inproceedings{fgaim2022geezswitch,
title={GeezSwitch: Language Identification in Typologically Related Low-resourced East African Languages},
author={Fitsum Gaim and Wonsuk Yang and Jong C. Park},
booktitle={Proceedings of the 13th Language Resources and Evaluation Conference},
year={2022}
}
```
|
omar47/wav2vec2-large-xls-r-300m-urdu-v2
|
omar47
| 2022-05-14T04:53:01Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-07T14:37:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-300m-urdu-CV_8_0-and-PRUS_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-urdu-CV_8_0-and-PRUS_v2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3541
- Wer: 0.6532
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 14.8521 | 0.52 | 32 | 20.0617 | 1.0 |
| 9.2152 | 1.05 | 64 | 7.8943 | 1.0 |
| 4.8598 | 1.57 | 96 | 5.1558 | 1.0 |
| 3.866 | 2.1 | 128 | 3.9680 | 1.0 |
| 3.3517 | 2.62 | 160 | 3.4201 | 1.0 |
| 3.2029 | 3.15 | 192 | 3.2355 | 1.0 |
| 3.1509 | 3.67 | 224 | 3.2337 | 1.0 |
| 3.1399 | 4.2 | 256 | 3.1627 | 1.0 |
| 3.0848 | 4.72 | 288 | 3.0550 | 1.0 |
| 2.9806 | 5.25 | 320 | 2.8343 | 0.9996 |
| 2.3814 | 5.77 | 352 | 2.0685 | 0.9523 |
| 1.2936 | 6.3 | 384 | 1.5907 | 0.8657 |
| 0.8656 | 6.82 | 416 | 1.3810 | 0.8235 |
| 0.7014 | 7.34 | 448 | 1.3838 | 0.7920 |
| 0.6015 | 7.87 | 480 | 1.3479 | 0.8046 |
| 0.5341 | 8.39 | 512 | 1.2613 | 0.7757 |
| 0.5031 | 8.92 | 544 | 1.2818 | 0.7890 |
| 0.4349 | 9.44 | 576 | 1.3171 | 0.7739 |
| 0.4198 | 9.97 | 608 | 1.2420 | 0.7750 |
| 0.3593 | 10.49 | 640 | 1.2991 | 0.7587 |
| 0.3252 | 11.02 | 672 | 1.2653 | 0.7228 |
| 0.2715 | 11.54 | 704 | 1.2488 | 0.7350 |
| 0.2733 | 12.07 | 736 | 1.2639 | 0.7110 |
| 0.2338 | 12.59 | 768 | 1.3733 | 0.7454 |
| 0.2403 | 13.11 | 800 | 1.3908 | 0.7228 |
| 0.2106 | 13.64 | 832 | 1.3384 | 0.7224 |
| 0.2041 | 14.16 | 864 | 1.3770 | 0.7050 |
| 0.1814 | 14.69 | 896 | 1.3526 | 0.6932 |
| 0.1742 | 15.21 | 928 | 1.3486 | 0.6895 |
| 0.1658 | 15.74 | 960 | 1.3210 | 0.6936 |
| 0.1455 | 16.26 | 992 | 1.3292 | 0.6858 |
| 0.1399 | 16.79 | 1024 | 1.3521 | 0.6828 |
| 0.1325 | 17.31 | 1056 | 1.3339 | 0.6876 |
| 0.1256 | 17.84 | 1088 | 1.3389 | 0.6836 |
| 0.1219 | 18.36 | 1120 | 1.3496 | 0.6769 |
| 0.1212 | 18.89 | 1152 | 1.3277 | 0.6776 |
| 0.1097 | 19.41 | 1184 | 1.3594 | 0.6762 |
| 0.1129 | 19.93 | 1216 | 1.3448 | 0.6688 |
| 0.1036 | 20.46 | 1248 | 1.3295 | 0.6710 |
| 0.1035 | 20.98 | 1280 | 1.3243 | 0.6577 |
| 0.094 | 21.51 | 1312 | 1.3832 | 0.6591 |
| 0.0912 | 22.03 | 1344 | 1.3857 | 0.6584 |
| 0.0815 | 22.56 | 1376 | 1.3739 | 0.6547 |
| 0.0864 | 23.08 | 1408 | 1.3649 | 0.6554 |
| 0.0772 | 23.61 | 1440 | 1.3791 | 0.6458 |
| 0.0894 | 24.13 | 1472 | 1.3630 | 0.6488 |
| 0.0776 | 24.66 | 1504 | 1.3541 | 0.6532 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
anwesham/imdb-sentiment-baseline-distilbert
|
anwesham
| 2022-05-14T03:58:39Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"unk",
"dataset:anwesham/autotrain-data-imdb-sentiment-analysis",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-14T03:06:07Z |
---
language: unk
datasets:
- anwesham/autotrain-data-imdb-sentiment-analysis
---
## Description
- Problem type: Binary Classification
## Validation Metrics
- Loss: 0.17481304705142975
- Accuracy: 0.936
- Precision: 0.9526578073089701
- Recall: 0.9176
- AUC: 0.9841454399999999
- F1: 0.93480032599837
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/anwesham/autotrain-imdb-sentiment-analysis-864927555
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("anwesham/autotrain-imdb-sentiment-analysis-864927555", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("anwesham/autotrain-imdb-sentiment-analysis-864927555", use_auth_token=True)
inputs = tokenizer("I love to eat good food and watch Moana.", return_tensors="pt")
outputs = model(**inputs)
```
|
anwesham/autotrain-imdb-sentiment-analysis-864927559
|
anwesham
| 2022-05-14T03:56:56Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"unk",
"dataset:anwesham/autotrain-data-imdb-sentiment-analysis",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-14T03:06:26Z |
---
language: unk
datasets:
- anwesham/autotrain-data-imdb-sentiment-analysis
co2_eq_emissions: 0.2033402242358345
---
- Problem type: Binary Classification
- Model ID: 864927559
- CO2 Emissions (in grams): 0.2033402242358345
## Validation Metrics
- Loss: 0.18383920192718506
- Accuracy: 0.9318
- Precision: 0.9560625264047318
- Recall: 0.9052
- AUC: 0.98281574
- F1: 0.9299363057324841
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/anwesham/autotrain-imdb-sentiment-analysis-864927559
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("anwesham/autotrain-imdb-sentiment-analysis-864927559", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("anwesham/autotrain-imdb-sentiment-analysis-864927559", use_auth_token=True)
inputs = tokenizer("I love to eat food", return_tensors="pt")
outputs = model(**inputs)
```
|
ruselkomp/deepavlov-framebank-10size
|
ruselkomp
| 2022-05-14T03:48:21Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-05-13T22:08:47Z |
---
tags:
- generated_from_trainer
model-index:
- name: deepavlov-test-bert-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deepavlov-test-bert-2
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1607
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0314 | 1.0 | 4523 | 1.0242 |
| 0.739 | 2.0 | 9046 | 1.0326 |
| 0.5207 | 3.0 | 13569 | 1.1607 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2.dev0
- Tokenizers 0.12.1
|
gregtozzi/ppo-LunarLander-v2-4
|
gregtozzi
| 2022-05-14T02:51:27Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-14T02:51:00Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 295.25 +/- 17.66
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
gregtozzi/ppo-LunarLander-v2-3
|
gregtozzi
| 2022-05-14T02:15:41Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-14T02:15:16Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 292.99 +/- 18.45
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
gregtozzi/ppo-LunarLander-v2-2
|
gregtozzi
| 2022-05-14T02:10:40Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-14T02:10:13Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 288.74 +/- 16.79
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
describeai/gemini
|
describeai
| 2022-05-14T00:46:52Z | 765 | 41 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"Explain code",
"Code Summarization",
"Summarization",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- Explain code
- Code Summarization
- Summarization
license: mit
---
# Gemini
For an in-depth understanding of our model and methods, please see our blog post [here](https://www.describe-ai.com/gemini)
## Model description
Gemini is a transformer based on Google's T5 model. The model is pre-trained on approximately 800k code/description pairs and then fine-tuned on 10k higher-level explanations that were synthetically generated. Gemini is capable of summarizing/explaining short to medium code snippets in:
- Python
- Javascript (mostly vanilla JS, however, it can handle frameworks like React as well)
- Java
- Ruby
- Go
In each case, it outputs a description in English.
## Intended uses
Gemini, without any additional fine-tuning, is capable of explaining code in a sentence or two and typically performs best in Python and Javascript. We recommend using Gemini for simple code explanation, documentation, or producing more synthetic data to improve its explanations.
### How to use
You can use this model directly with a pipeline for Text2Text generation, as shown below:
```python
from transformers import pipeline, set_seed
summarizer = pipeline('text2text-generation', model='describeai/gemini')
code = "print('hello world!')"
response = summarizer(code, max_length=100, num_beams=3)
print("Summarized code: " + response[0]['generated_text'])
```
Which should yield something along the lines of:
```
Summarized code: The following code is greeting the world.
```
### Model sizes
- Gemini (this repo): 770 Million Parameters
- Gemini-Small: 220 Million Parameters
### Limitations
Gemini may produce overly simplistic descriptions that don't encompass the entire code snippet. We suspect that with more training data this could be mitigated, producing better results.
### About Us
At Describe.ai, we are focused on building artificial intelligence systems that can understand language as well as humans do. While this is a long path, we plan to contribute our findings and our API to the open-source community.
|
itsroadtrip/test-pull-requests
|
itsroadtrip
| 2022-05-13T23:50:46Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-05-13T23:50:13Z |
---
license: mit
---
[click me](https://www.youtube.com/watch?v=dQw4w9WgXcQ)
|
bstad/ppo-LunarLander-v2-n_envs-32
|
bstad
| 2022-05-13T22:37:30Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-13T22:36:52Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 149.07 +/- 88.31
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
vukpetar/ppo-CarRacing-v0-v1
|
vukpetar
| 2022-05-13T22:06:01Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"CarRacing-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-13T22:03:40Z |
---
library_name: stable-baselines3
tags:
- CarRacing-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 407.75 +/- 151.62
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CarRacing-v0
type: CarRacing-v0
---
# **PPO** Agent playing **CarRacing-v0**
This is a trained model of a **PPO** agent playing **CarRacing-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
subhasisj/en-finetuned-squad-qa-minilmv2-32
|
subhasisj
| 2022-05-13T21:50:53Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-05-13T19:47:17Z |
---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: en-finetuned-squad-qa-minilmv2-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# en-finetuned-squad-qa-minilmv2-32
This model is a fine-tuned version of [subhasisj/en-TAPT-MLM-MiniLM](https://huggingface.co/subhasisj/en-TAPT-MLM-MiniLM) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1955
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 350 | 2.1514 |
| 2.9587 | 2.0 | 700 | 1.4819 |
| 1.3873 | 3.0 | 1050 | 1.2724 |
| 1.3873 | 4.0 | 1400 | 1.2039 |
| 1.0438 | 5.0 | 1750 | 1.1955 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
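A minimal usage sketch (added for illustration; assumes the standard `transformers` question-answering pipeline, with an arbitrary question/context pair):
```python
from transformers import pipeline

# Extractive question answering with the fine-tuned MiniLM model.
qa = pipeline("question-answering", model="subhasisj/en-finetuned-squad-qa-minilmv2-32")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The model was fine-tuned on the SQuAD dataset for extractive question answering.",
)
print(result["answer"], result["score"])
```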
|
nepp1d0/TAPE-finetuned-viralProteins
|
nepp1d0
| 2022-05-13T21:27:09Z | 4 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-13T19:33:59Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: TAPE-finetuned-viralProteins
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TAPE-finetuned-viralProteins
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9033
- Accuracy: 0.87
- F1: 0.8555
- Precision: 0.8475
- Recall: 0.87
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.8845 | 1.0 | 5000 | 0.8302 | 0.85 | 0.8060 | 0.7779 | 0.85 |
| 0.8189 | 2.0 | 10000 | 0.6062 | 0.86 | 0.8255 | 0.8115 | 0.86 |
| 0.806 | 3.0 | 15000 | 0.8546 | 0.85 | 0.8095 | 0.7840 | 0.85 |
| 0.6971 | 4.0 | 20000 | 0.7660 | 0.86 | 0.8228 | 0.8027 | 0.86 |
| 0.6269 | 5.0 | 25000 | 0.7787 | 0.85 | 0.8343 | 0.8226 | 0.85 |
| 0.5771 | 6.0 | 30000 | 0.7965 | 0.855 | 0.8402 | 0.8290 | 0.855 |
| 0.5433 | 7.0 | 35000 | 0.7864 | 0.875 | 0.8573 | 0.8473 | 0.875 |
| 0.5183 | 8.0 | 40000 | 0.8292 | 0.87 | 0.8521 | 0.8425 | 0.87 |
| 0.4396 | 9.0 | 45000 | 0.8838 | 0.875 | 0.8566 | 0.8483 | 0.875 |
| 0.4019 | 10.0 | 50000 | 0.9033 | 0.87 | 0.8555 | 0.8475 | 0.87 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
Kommunarus/ppo_rl-LunarLander-v2
|
Kommunarus
| 2022-05-13T21:25:43Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-13T21:23:58Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 289.97 +/- 7.68
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv-earlystopping
|
theojolliffe
| 2022-05-13T21:16:27Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-11T21:46:01Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-cnn-pubmed-arxiv-pubmed-arxiv-earlystopping
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-pubmed-arxiv-pubmed-arxiv-earlystopping
This model is a fine-tuned version of [theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv](https://huggingface.co/theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8793
- Rouge1: 56.2055
- Rouge2: 41.9231
- Rougel: 45.0616
- Rougelsum: 54.6643
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 0.31 | 125 | 1.2057 | 50.9339 | 30.6777 | 32.6396 | 47.9592 | 141.3519 |
| No log | 0.63 | 250 | 1.0933 | 52.0728 | 31.2361 | 32.8214 | 48.9776 | 141.9815 |
| No log | 0.94 | 375 | 0.9685 | 51.6847 | 32.1578 | 34.1933 | 48.8808 | 141.5556 |
| 1.1594 | 1.26 | 500 | 0.9725 | 50.5131 | 30.6043 | 32.1861 | 47.4346 | 142.0 |
| 1.1594 | 1.57 | 625 | 0.9342 | 52.228 | 32.2073 | 33.797 | 49.2395 | 142.0 |
| 1.1594 | 1.88 | 750 | 0.8715 | 52.2 | 33.6602 | 36.1303 | 49.7138 | 141.6481 |
| 1.1594 | 2.2 | 875 | 0.8334 | 53.116 | 33.9871 | 35.9641 | 50.7658 | 141.8889 |
| 0.6845 | 2.51 | 1000 | 0.8241 | 52.2612 | 32.8025 | 35.27 | 49.5694 | 142.0 |
| 0.6845 | 2.83 | 1125 | 0.7986 | 54.1803 | 35.0019 | 37.4582 | 51.4577 | 142.0 |
| 0.6845 | 3.14 | 1250 | 0.8532 | 52.1328 | 32.6086 | 34.7455 | 49.6219 | 141.7037 |
| 0.6845 | 3.45 | 1375 | 0.8319 | 51.9614 | 32.8544 | 35.3269 | 49.3279 | 141.7593 |
| 0.4488 | 3.77 | 1500 | 0.8033 | 53.1404 | 34.6086 | 37.5482 | 50.7414 | 142.0 |
| 0.4488 | 4.08 | 1625 | 0.8322 | 53.1736 | 34.8662 | 37.7514 | 51.0601 | 142.0 |
| 0.4488 | 4.4 | 1750 | 0.7985 | 51.8251 | 32.9457 | 36.4164 | 49.55 | 142.0 |
| 0.4488 | 4.71 | 1875 | 0.8049 | 54.3423 | 36.6293 | 39.1316 | 52.2706 | 141.8148 |
| 0.3017 | 5.03 | 2000 | 0.8148 | 53.0698 | 35.2569 | 38.406 | 50.9346 | 141.7778 |
| 0.3017 | 5.34 | 2125 | 0.8153 | 53.4479 | 35.1525 | 37.8071 | 51.3731 | 141.0741 |
| 0.3017 | 5.65 | 2250 | 0.8009 | 52.5517 | 34.8287 | 37.999 | 50.2889 | 141.6111 |
| 0.3017 | 5.97 | 2375 | 0.7509 | 54.2725 | 37.4164 | 40.516 | 52.1379 | 142.0 |
| 0.2052 | 6.28 | 2500 | 0.8019 | 54.622 | 36.4776 | 39.9306 | 52.5069 | 142.0 |
| 0.2052 | 6.6 | 2625 | 0.8176 | 55.4796 | 38.4502 | 41.5523 | 53.5211 | 142.0 |
| 0.2052 | 6.91 | 2750 | 0.7956 | 55.4906 | 37.9064 | 40.845 | 53.107 | 141.9815 |
| 0.2052 | 7.22 | 2875 | 0.7966 | 54.5177 | 37.3399 | 40.7678 | 52.4241 | 142.0 |
| 0.1465 | 7.54 | 3000 | 0.8311 | 54.3473 | 37.0659 | 40.2507 | 52.372 | 142.0 |
| 0.1465 | 7.85 | 3125 | 0.8227 | 53.9245 | 36.4695 | 39.1205 | 51.9416 | 141.8889 |
| 0.1465 | 8.17 | 3250 | 0.7947 | 54.766 | 38.4275 | 41.2293 | 52.9075 | 142.0 |
| 0.1465 | 8.48 | 3375 | 0.7954 | 54.5305 | 37.6934 | 40.6804 | 52.5884 | 141.9444 |
| 0.115 | 8.79 | 3500 | 0.8433 | 54.7962 | 37.9373 | 41.3906 | 52.3778 | 142.0 |
| 0.115 | 9.11 | 3625 | 0.8416 | 56.59 | 41.2271 | 44.4207 | 54.7199 | 142.0 |
| 0.115 | 9.42 | 3750 | 0.8164 | 55.1903 | 39.0588 | 41.4908 | 53.4897 | 142.0 |
| 0.115 | 9.74 | 3875 | 0.8363 | 55.2894 | 39.3598 | 42.1138 | 53.831 | 141.8889 |
| 0.0912 | 10.05 | 4000 | 0.8850 | 55.7705 | 40.4924 | 43.1048 | 54.254 | 142.0 |
| 0.0912 | 10.36 | 4125 | 0.8268 | 56.1664 | 40.641 | 42.798 | 54.0001 | 141.9259 |
| 0.0912 | 10.68 | 4250 | 0.8564 | 55.4701 | 39.4949 | 42.2559 | 53.4486 | 141.8889 |
| 0.0912 | 10.99 | 4375 | 0.8557 | 56.0849 | 41.2861 | 45.8277 | 54.5999 | 141.6667 |
| 0.0707 | 11.31 | 4500 | 0.8432 | 54.9496 | 39.3006 | 42.0025 | 53.3854 | 142.0 |
| 0.0707 | 11.62 | 4625 | 0.8377 | 54.2438 | 37.6959 | 40.4637 | 52.3088 | 142.0 |
| 0.0707 | 11.93 | 4750 | 0.8794 | 55.9488 | 40.5401 | 43.7347 | 54.1282 | 142.0 |
| 0.0707 | 12.25 | 4875 | 0.8563 | 57.8762 | 43.366 | 46.6757 | 56.6985 | 142.0 |
| 0.0604 | 12.56 | 5000 | 0.8835 | 54.8926 | 39.3755 | 42.384 | 53.2687 | 141.6481 |
| 0.0604 | 12.88 | 5125 | 0.8570 | 55.6656 | 39.849 | 42.1455 | 54.352 | 142.0 |
| 0.0604 | 13.19 | 5250 | 0.8539 | 57.1549 | 41.901 | 45.153 | 55.213 | 142.0 |
| 0.0604 | 13.51 | 5375 | 0.8847 | 56.3279 | 40.9269 | 43.416 | 54.7242 | 142.0 |
| 0.051 | 13.82 | 5500 | 0.8795 | 56.8982 | 42.3333 | 45.2669 | 55.1034 | 142.0 |
| 0.051 | 14.13 | 5625 | 0.8751 | 55.3173 | 40.2853 | 43.2479 | 53.7236 | 142.0 |
| 0.051 | 14.45 | 5750 | 0.8799 | 56.1678 | 41.0862 | 43.8581 | 54.6316 | 142.0 |
| 0.051 | 14.76 | 5875 | 0.8678 | 57.3539 | 43.0473 | 44.8511 | 55.6474 | 142.0 |
| 0.0467 | 15.08 | 6000 | 0.8945 | 56.1939 | 41.985 | 45.0266 | 54.8139 | 142.0 |
| 0.0467 | 15.39 | 6125 | 0.9245 | 56.2071 | 41.5265 | 44.3228 | 54.5042 | 141.4074 |
| 0.0467 | 15.7 | 6250 | 0.8793 | 56.2055 | 41.9231 | 45.0616 | 54.6643 | 142.0 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
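A minimal usage sketch (added for illustration; assumes the standard `transformers` summarization pipeline, with `max_length=142` chosen only to mirror the generation length reported above):
```python
from transformers import pipeline

# Abstractive summarization with the fine-tuned BART model.
summarizer = pipeline(
    "summarization",
    model="theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv-earlystopping",
)

text = "..."  # placeholder: a long report or article to summarize
print(summarizer(text, max_length=142, min_length=30, do_sample=False)[0]["summary_text"])
```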
|
anas-awadalla/roberta-large-initialization-seed-4
|
anas-awadalla
| 2022-05-13T21:07:51Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-05-13T19:00:31Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-initialization-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-initialization-seed-4
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 24
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
nikiandr/DQN-LunarLanderv2-5e5t
|
nikiandr
| 2022-05-13T19:36:03Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-13T19:35:20Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: -86.43 +/- 37.10
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **DQN** Agent playing **LunarLander-v2**
This is a trained model of a **DQN** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
subhasisj/en-TAPT-MLM-MiniLM
|
subhasisj
| 2022-05-13T19:35:12Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-05-13T18:46:52Z |
---
tags:
- generated_from_trainer
model-index:
- name: en-TAPT-MLM-MiniLM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# en-TAPT-MLM-MiniLM
This model is a fine-tuned version of [subhasisj/MiniLMv2-qa-encoder](https://huggingface.co/subhasisj/MiniLMv2-qa-encoder) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
nikiandr/PPO-LunarLanderv2-5e5t
|
nikiandr
| 2022-05-13T19:00:53Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-13T19:00:10Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 190.98 +/- 42.35
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
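A minimal rollout sketch; the checkpoint filename is an assumption and may differ from the file actually stored in this repository:
```python
import gym
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is assumed, not confirmed by this card
checkpoint = load_from_hub(repo_id="nikiandr/PPO-LunarLanderv2-5e5t", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Play one episode with the trained policy
env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(f"episode reward: {total_reward:.2f}")
```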
|
thanathorn/mt5-cpe-kmutt-thai-sentence-sum
|
thanathorn
| 2022-05-13T18:20:03Z | 20,007 | 8 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"mT5",
"th",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-04-27T09:12:47Z |
---
tags:
- summarization
- mT5
language:
- th
widget:
- text: "simplify: ถ้าพูดถึงขนมหวานในตำนานที่ชื่นใจที่สุดแล้วละก็ต้องไม่พ้น น้ำแข็งใส แน่เพราะว่าเป็นอะไรที่ชื่นใจสุด"
---
# mt5-cpe-kmutt-thai-sentence-sum
This repository contains the fine-tuned mT5-base model for Thai sentence summarization. The architecture is based on the mT5 model, fine-tuned on Thai text-summarization pairs. This project is a senior project of a Computer Engineering student at King Mongkut’s University of Technology Thonburi.
## Usage with SimpleTransformers (tested on version 0.63.4)
```python
from simpletransformers.t5 import T5Model, T5Args
from torch import cuda
model = T5Model("t5", "thanathorn/mt5-cpe-kmutt-thai-sentence-sum", use_cuda=cuda.is_available())
sentence = "simplify: ถ้าพูดถึงขนมหวานในตำนานที่ชื่นใจที่สุดแล้วละก็ต้องไม่พ้น น้ำแข็งใส แน่เพราะว่าเป็นอะไรที่ชื่นใจสุด"
prediction = model.predict([sentence])
print(prediction[0])
```
(See the example on <a href="https://colab.research.google.com/drive/1XiNkZLgy1USwHYFVf_nEzOSWbHGSnYdg?usp=sharing">Google Colab</a>)
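### Usage with Transformers (untested sketch)
The checkpoint should also load through the plain `transformers` text2text API since it is an mT5 model; the snippet below is a sketch under that assumption:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("thanathorn/mt5-cpe-kmutt-thai-sentence-sum")
model = AutoModelForSeq2SeqLM.from_pretrained("thanathorn/mt5-cpe-kmutt-thai-sentence-sum")

# Same "simplify:" prefix as in the SimpleTransformers example above
text = "simplify: ถ้าพูดถึงขนมหวานในตำนานที่ชื่นใจที่สุดแล้วละก็ต้องไม่พ้น น้ำแข็งใส แน่เพราะว่าเป็นอะไรที่ชื่นใจสุด"
inputs = tokenizer(text, return_tensors="pt")
summary_ids = model.generate(**inputs, max_length=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```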
### Score
<ul>
<li>ROUGE-1: 61.7805</li>
<li>ROUGE-2: 45.9689</li>
<li>ROUGE-L: 59.3542</li>
</ul>
### Intended uses & limitations
<ul>
<li>You can use this model for Thai sentence text summarization.</li>
<li>Not intended to use with paragraph text.</li>
</ul>
|
subhasisj/vi-finetuned-squad-qa-minilmv2-8
|
subhasisj
| 2022-05-13T17:04:48Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-05-13T11:30:59Z |
---
tags:
- generated_from_trainer
model-index:
- name: vi-finetuned-squad-qa-minilmv2-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vi-finetuned-squad-qa-minilmv2-8
This model is a fine-tuned version of [subhasisj/vi-TAPT-MLM-MiniLM](https://huggingface.co/subhasisj/vi-TAPT-MLM-MiniLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3335
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1669 | 1.0 | 1424 | 1.4979 |
| 1.2377 | 2.0 | 2848 | 1.3259 |
| 1.0536 | 3.0 | 4272 | 1.3133 |
| 0.9568 | 4.0 | 5696 | 1.3103 |
| 0.8859 | 5.0 | 7120 | 1.3335 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2
- Datasets 2.0.0
- Tokenizers 0.11.0
|
ogpat23/Jules-Chatbot
|
ogpat23
| 2022-05-13T16:43:30Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
tags:
- conversational
---
# Chatbot based on the Pulp Fiction character Jules
# Model trained with the PyTorch framework using the Pulp Fiction dialogue script dataset from Kaggle
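A minimal generation sketch, assuming the model follows the usual DialoGPT-style single-turn chat format (the exact formatting used during training is not documented in this card):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ogpat23/Jules-Chatbot")
model = AutoModelForCausalLM.from_pretrained("ogpat23/Jules-Chatbot")

# Encode a user message followed by the end-of-sequence token, then sample a reply
input_ids = tokenizer.encode("Say 'what' again!" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id,
                           do_sample=True, top_p=0.9)
print(tokenizer.decode(reply_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```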
|
DBusAI/PPO-BipedalWalker-v3
|
DBusAI
| 2022-05-13T16:39:16Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"BipedalWalker-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-13T13:36:41Z |
---
library_name: stable-baselines3
tags:
- BipedalWalker-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 303.05 +/- 1.79
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalker-v3
type: BipedalWalker-v3
---
# **PPO** Agent playing **BipedalWalker-v3**
This is a trained model of a **PPO** agent playing **BipedalWalker-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
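A minimal loading and evaluation sketch; the checkpoint filename is an assumption (check the repository files for the exact archive name):
```python
import gym
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is assumed, not confirmed by this card
checkpoint = load_from_hub(repo_id="DBusAI/PPO-BipedalWalker-v3", filename="ppo-BipedalWalker-v3.zip")
model = PPO.load(checkpoint)

env = gym.make("BipedalWalker-v3")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=5)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```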
|
karthiksv/vit-base-patch16-224-in21k-finetuned-cifar10
|
karthiksv
| 2022-05-13T16:25:11Z | 55 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:cifar10",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-05-13T16:21:13Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- cifar10
model-index:
- name: vit-base-patch16-224-in21k-finetuned-cifar10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-cifar10
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cifar10 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.10.1
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Rietta/CycleGAN_WoW
|
Rietta
| 2022-05-13T15:57:41Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2022-05-13T15:57:23Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training Metrics
Model history needed
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
ntcuong777/electra-iu-answer-retrieval
|
ntcuong777
| 2022-05-13T15:31:50Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"endpoints_compatible",
"region:us"
] | null | 2022-05-09T06:40:16Z |
This is a model for International University VNU-HCMC use cases only.
|
tobyych/ppo-LunarLander-v2
|
tobyych
| 2022-05-13T15:12:21Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-13T13:35:32Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 254.64 +/- 22.65
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
Davincilee/door_inner_with_SA-bert-base-uncased
|
Davincilee
| 2022-05-13T14:56:11Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-05-03T06:38:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: door_inner_with_SA-bert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# door_inner_with_SA-bert-base-uncased
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1513
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5492 | 1.0 | 96 | 2.3831 |
| 2.4031 | 2.0 | 192 | 2.2963 |
| 2.3391 | 3.0 | 288 | 2.2000 |
| 2.2951 | 4.0 | 384 | 2.2505 |
| 2.2151 | 5.0 | 480 | 2.1691 |
| 2.2237 | 6.0 | 576 | 2.1855 |
| 2.1984 | 7.0 | 672 | 2.2558 |
| 2.1749 | 8.0 | 768 | 2.2019 |
| 2.1475 | 9.0 | 864 | 2.1310 |
| 2.1446 | 10.0 | 960 | 2.1334 |
| 2.1374 | 11.0 | 1056 | 2.1909 |
| 2.1117 | 12.0 | 1152 | 2.2028 |
### Framework versions
- Transformers 4.19.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
DBusAI/PPO-BipedalWalker-v3-v1
|
DBusAI
| 2022-05-13T14:32:50Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"BipedalWalker-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-13T14:32:01Z |
---
library_name: stable-baselines3
tags:
- BipedalWalker-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 226.04 +/- 113.91
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalker-v3
type: BipedalWalker-v3
---
# **PPO** Agent playing **BipedalWalker-v3**
This is a trained model of a **PPO** agent playing **BipedalWalker-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
Davincilee/closure_system_door_inne-roberta-base
|
Davincilee
| 2022-05-13T14:24:57Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-05-13T13:57:50Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: closure_system_door_inne-roberta-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# closure_system_door_inne-roberta-base
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6038
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3302 | 1.0 | 3 | 1.6837 |
### Framework versions
- Transformers 4.19.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
Narsil/nolicense
|
Narsil
| 2022-05-13T14:23:29Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-05-13T14:20:50Z |
---
license: mit
commercial: false
---
|
DBusAI/PPO-CarRacing-v0
|
DBusAI
| 2022-05-13T12:55:40Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"CarRacing-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-13T12:53:48Z |
---
library_name: stable-baselines3
tags:
- CarRacing-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 81.28 +/- 82.32
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CarRacing-v0
type: CarRacing-v0
---
# **PPO** Agent playing **CarRacing-v0**
This is a trained model of a **PPO** agent playing **CarRacing-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
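A minimal loading sketch; the checkpoint filename is an assumption, and any observation wrappers (frame stacking, grayscale, resizing) used during training are not documented here, so a rollout may need the same preprocessing:
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is assumed, not confirmed by this card
checkpoint = load_from_hub(repo_id="DBusAI/PPO-CarRacing-v0", filename="ppo-CarRacing-v0.zip")
model = PPO.load(checkpoint)
# model.predict(observation) can then be used inside a CarRacing-v0 rollout loop,
# provided the observation is preprocessed the same way as during training
```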
|
yogeshchandrasekharuni/bart-paraphrase-finetuned-xsum
|
yogeshchandrasekharuni
| 2022-05-13T11:12:28Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-13T06:12:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-paraphrase-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-paraphrase-finetuned-xsum
This model is a fine-tuned version of [eugenesiow/bart-paraphrase](https://huggingface.co/eugenesiow/bart-paraphrase) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 61 | 1.1215 | 70.9729 | 60.41 | 70.2648 | 70.2724 | 12.2295 |
### Framework versions
- Transformers 4.19.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
alk/t5-small-finetuned-cnn_dailymail-en-es
|
alk
| 2022-05-13T11:11:01Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-12T20:51:21Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: alk/t5-small-finetuned-cnn_dailymail-en-es
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# alk/t5-small-finetuned-cnn_dailymail-en-es
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.9163
- Validation Loss: 1.7610
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 71776, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.9945 | 1.7837 | 0 |
| 1.9478 | 1.7694 | 1 |
| 1.9278 | 1.7646 | 2 |
| 1.9163 | 1.7610 | 3 |
### Framework versions
- Transformers 4.19.0
- TensorFlow 2.8.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
chanifrusydi/bert-finetuned-squad
|
chanifrusydi
| 2022-05-13T10:45:36Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-05-13T08:05:44Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: chanifrusydi/bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# chanifrusydi/bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.4528
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.0002, 'decay_steps': 11091, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 5.4528 | 0 |
### Framework versions
- Transformers 4.19.0
- TensorFlow 2.8.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
jkhan447/language-detection-Bert-base-uncased
|
jkhan447
| 2022-05-13T10:07:04Z | 30 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-13T04:02:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: language-detection-Bert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# language-detection-Bert-base-uncased
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2231
- Accuracy: 0.9512
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.19.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
shenyi/bert-base-cased-wikitext2
|
shenyi
| 2022-05-13T07:53:04Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-05-13T07:22:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-wikitext2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 7.0721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 391 | 7.2240 |
| 7.6715 | 2.0 | 782 | 7.0516 |
| 7.0737 | 3.0 | 1173 | 7.0823 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.7.1+cu110
- Datasets 2.2.1
- Tokenizers 0.12.1
|
shenyi/gpt2-wikitext2
|
shenyi
| 2022-05-13T07:21:52Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-13T07:00:51Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.7.1+cu110
- Datasets 2.2.1
- Tokenizers 0.12.1
|
anas-awadalla/roberta-large-data-seed-4
|
anas-awadalla
| 2022-05-13T06:24:05Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-05-13T04:13:10Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-data-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-data-seed-4
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 24
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Khalsuu/filipino-wav2vec2-l-xls-r-300m-official
|
Khalsuu
| 2022-05-13T05:58:50Z | 14,622 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:filipino_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-13T03:24:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- filipino_voice
model-index:
- name: filipino-wav2vec2-l-xls-r-300m-official
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# filipino-wav2vec2-l-xls-r-300m-official
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the filipino_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4672
- Wer: 0.2922
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3671 | 2.09 | 400 | 0.5584 | 0.5987 |
| 0.48 | 4.19 | 800 | 0.4244 | 0.4195 |
| 0.2796 | 6.28 | 1200 | 0.3742 | 0.3765 |
| 0.1916 | 8.38 | 1600 | 0.4291 | 0.3667 |
| 0.1463 | 10.47 | 2000 | 0.3745 | 0.3415 |
| 0.1165 | 12.57 | 2400 | 0.4472 | 0.3407 |
| 0.0955 | 14.66 | 2800 | 0.4269 | 0.3290 |
| 0.0823 | 16.75 | 3200 | 0.4608 | 0.3475 |
| 0.0709 | 18.85 | 3600 | 0.4706 | 0.3281 |
| 0.0603 | 20.94 | 4000 | 0.4380 | 0.3183 |
| 0.0527 | 23.04 | 4400 | 0.4473 | 0.3067 |
| 0.0449 | 25.13 | 4800 | 0.4550 | 0.3029 |
| 0.041 | 27.23 | 5200 | 0.4671 | 0.3020 |
| 0.0358 | 29.32 | 5600 | 0.4672 | 0.2922 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
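## Usage (sketch)
A quick inference sketch, not part of the auto-generated card; the audio path is a placeholder and the input is expected to be 16 kHz mono speech:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint through the speech-recognition pipeline
asr = pipeline("automatic-speech-recognition", model="Khalsuu/filipino-wav2vec2-l-xls-r-300m-official")

# "sample_filipino_clip.wav" is a placeholder filename
print(asr("sample_filipino_clip.wav")["text"])
```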
|
whimsical/ppo-LunarLander-v2
|
whimsical
| 2022-05-13T05:00:19Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-13T04:59:39Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 144.17 +/- 32.67
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
Ambiwlans/Default_ppo-LunarLander-v2
|
Ambiwlans
| 2022-05-13T02:11:41Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-13T02:09:15Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 272.96 +/- 13.01
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
Trained with the default settings, but for 1,511,424 timesteps.
|
cj-mills/ppo-LunarLander-v2
|
cj-mills
| 2022-05-13T02:10:27Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-05T01:07:01Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- metrics:
- type: mean_reward
value: 268.12 +/- 21.13
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
huxxx657/distilbert-base-uncased-finetuned-jumbling-squad-15
|
huxxx657
| 2022-05-13T01:01:59Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-05-13T00:19:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-jumbling-squad-15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-jumbling-squad-15
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3345
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3629 | 1.0 | 5532 | 1.3345 |
### Framework versions
- Transformers 4.19.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
kathywu/DialoGPT-medium-kathy
|
kathywu
| 2022-05-13T00:41:24Z | 5 | 4 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-13T00:12:36Z |
---
tags:
- conversational
---
|
subhasisj/es-finetuned-squad-qa-minilmv2-16
|
subhasisj
| 2022-05-12T22:52:07Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-05-12T20:30:11Z |
---
tags:
- generated_from_trainer
model-index:
- name: es-finetuned-squad-qa-minilmv2-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# es-finetuned-squad-qa-minilmv2-16
This model is a fine-tuned version of [subhasisj/es-TAPT-MLM-MiniLM](https://huggingface.co/subhasisj/es-TAPT-MLM-MiniLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2304
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.485 | 1.0 | 711 | 1.7377 |
| 1.6984 | 2.0 | 1422 | 1.3005 |
| 1.0772 | 3.0 | 2133 | 1.2348 |
| 0.9997 | 4.0 | 2844 | 1.2231 |
| 0.8976 | 5.0 | 3555 | 1.2304 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
strangetcy/PPO-LunarLander-v2_experiments
|
strangetcy
| 2022-05-12T22:15:50Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-12T12:50:38Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 288.23 +/- 18.78
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
ruselkomp/sber-full-framebank
|
ruselkomp
| 2022-05-12T21:32:41Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-05-10T19:34:58Z |
---
tags:
- generated_from_trainer
model-index:
- name: tests-finetuned-squad-full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tests-finetuned-squad-full
This model is a fine-tuned version of [sberbank-ai/sbert_large_nlu_ru](https://huggingface.co/sberbank-ai/sbert_large_nlu_ru) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5672
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0601 | 1.0 | 11307 | 1.0849 |
| 0.6918 | 2.0 | 22614 | 1.1588 |
| 0.4071 | 3.0 | 33921 | 1.5672 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2.dev0
- Tokenizers 0.12.1
|
LazaroAGM/Complicaciones_Diabetes
|
LazaroAGM
| 2022-05-12T19:15:45Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-05-12T18:32:36Z |
## Identification of retinopathies
The purpose of this work is to identify, from medical notes, the patients who have diabetic complications such as neuropathy, nephropathy, and retinopathy. It is the final project of the Clinical Natural Language Processing course offered on Coursera. The medical notes used to train the model are available at the following link:
https://raw.githubusercontent.com/hhsieh2416/Identify_Diabetic_Complications/main/data/diabetes_notes.csv
And the data for validation are available at the following link:
https://raw.githubusercontent.com/hhsieh2416/Identify_Diabetic_Complications/main/data/glodstandrad.csv
First, the following code is created to ignore the pattern warnings, import the required packages, and take an initial look at the data:
```python
import warnings
warnings.filterwarnings("ignore", 'This pattern has match groups')

# Import the required packages
import pandas as pd
import matplotlib.pyplot as plt
import re
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

# Read the data
datos = "https://raw.githubusercontent.com/hhsieh2416/Identify_Diabetic_Complications/main/data/diabetes_notes.csv"
df = pd.read_csv(datos)

# Visual overview of the data: number of words per note, one bar per patient id
fig, ax = plt.subplots()
ax.bar(df['NOTE_ID'], df['TEXT'].str.split().apply(len))

conteo = df['TEXT'].str.split().apply(len).tolist()
print('Mean word count: ' + str(np.mean(conteo)))
print('Median word count: ' + str(np.median(conteo)))
print('Minimum word count: ' + str(np.min(conteo)))
print('Maximum word count: ' + str(np.max(conteo)))

def reporte_paciente(id):
    # Return the tokens of the note belonging to the given patient id
    resumen = re.findall(r"\w+", str(df[df.NOTE_ID == id]['TEXT'].tolist()))
    return resumen

# print(reporte_paciente(1))
```
Now, a function is created that receives our DataFrame of medical notes, the word to search for, and the window size.
## Function without regular expressions
```python
def extract_text_window(df, word, window_size, column_name="TEXT"):
    # Constants
    user_input = f'({word})'
    regex = re.compile(user_input)
    negative = f'(no history of {word}|No history of {word}|any comorbid complications|family history|father also has {word}|denies {word}|Negative for {word})'
    regex_negative = re.compile(negative)
    half_window_size = window_size
    final_df = pd.DataFrame([])
    column_position = df.columns.get_loc(column_name) + 1  # add 1 because position 0 is the index

    # Loop over each row of the column
    for row in df.itertuples():
        # Loop over multiple matches within the same row
        for match in regex.finditer(row[column_position]):
            window_start = int([match.start() - half_window_size if match.start() >= half_window_size else 0][0])
            window_end = int([match.end() + half_window_size if match.end() + half_window_size <= len(row[column_position]) else len(row[column_position])][0])
            final_df = final_df.append({
                "WORD": match.group(),
                "START_INDEX": match.start(),
                "WINDOW_START": window_start,
                "WINDOW_END": window_end,
                "CONTEXT": row[column_position][window_start:window_end],
                "FULL_TEXT": row[column_position],
                "NOTE_ID": row[1]},
                ignore_index=True)

    if len(final_df) == 0:
        return "No matches for the pattern"
    # Remove the negative mentions (contexts that match any of the negative expressions)
    final_df2 = final_df[final_df["CONTEXT"].str.contains(pat=regex_negative, regex=True) == False]
    return final_df2


# Search for "diabet" in the medical notes
df = pd.read_csv("https://raw.githubusercontent.com/hhsieh2416/Identify_Diabetic_Complications/main/data/diabetes_notes.csv")
word = "diabet"
window_size = 50  # window size around the match
diabetes_notes_window = extract_text_window(df, word, window_size)
diabetes_notes_window
```
A second function is created that receives our DataFrame of medical notes, a regular expression for the word to search for, a regular expression for expressions such as "family history", "has no history of diabetes", "diabetes has not been identified", among others, and the size of the window around the search word.
## Function with regular expressions
```python
def extract_text_window_pro(df, pattern, negatives, window_size, column_name="TEXT"):
    # Constants
    half_window_size = window_size
    final_df = pd.DataFrame([])
    column_position = df.columns.get_loc(column_name) + 1  # add 1 because position 0 is the index

    # Loop over each row of the column
    for row in df.itertuples():
        # Loop over multiple matches within the same row
        for match in re.finditer(pattern, row[column_position]):
            window_start = int([match.start() - half_window_size if match.start() >= half_window_size else 0][0])
            window_end = int([match.end() + half_window_size if match.end() + half_window_size <= len(row[column_position]) else len(row[column_position])][0])
            final_df = final_df.append({
                "WORD": match.group(),
                "START_INDEX": match.start(),
                "WINDOW_START": window_start,
                "WINDOW_END": window_end,
                "CONTEXT": row[column_position][window_start:window_end],
                "FULL_TEXT": row[column_position],
                "NOTE_ID": row[1]},
                ignore_index=True)

    if len(final_df) == 0:
        return "No matches for the pattern"
    # Remove the negative mentions (contexts that match any of the negative expressions)
    final_df2 = final_df[final_df["CONTEXT"].str.contains(pat=negatives, regex=True) == False]
    return final_df2


# Search for "diabet" in the medical notes
df = pd.read_csv("https://raw.githubusercontent.com/hhsieh2416/Identify_Diabetic_Complications/main/data/diabetes_notes.csv")
pattern = "diabetes|diabetic"  # "(?<![a-zA-Z])diabet(es|ic)?(?![a-zA-Z])"
window_size = 50
negatives = r"no history of (?<![a-zA-Z])diabet(es|ic)?(?![a-zA-z])|No history of (?<![a-zA-Z])diabet(es|ic)?(?![a-zA-z])|den(ies|y)? any comorbid complications|family history|negative for (?<![a-zA-Z])diabet(es|ic)?(?![a-zA-z])|(father|mother) (also)? (?<![a-zA-Z])diabet(es|ic)?(?![a-zA-z])|Negative for (?<![a-zA-Z])diabet(es|ic)?(?![a-zA-z]) |no weakness, numbness or tingling|patient's mother and father|father also has diabetes"
diabetes_notes_window = extract_text_window_pro(df, pattern, negatives, window_size)
diabetes_notes_window
```
Next, it is time to use the regular-expression version of the function to obtain the DataFrames for neuropathy, nephropathy, and retinopathy.
```python
diabetes_notes_window.drop_duplicates(subset=["NOTE_ID"])
neuropathy = diabetes_notes_window[diabetes_notes_window['CONTEXT'].str.contains(pat=r"(?<![a-zA-Z])neuropath(y|ic)?(?![a-zA-z])|diabetic nerve pain|tingling",regex=True)]
neuropathy['COMPLICATIONS'] = "neuropathy"
diabetes_notes_neuropathy = neuropathy[['NOTE_ID','CONTEXT','COMPLICATIONS']].drop_duplicates(subset=['NOTE_ID'])
print(diabetes_notes_neuropathy)
print(diabetes_notes_neuropathy.count())
nephropathy = diabetes_notes_window[diabetes_notes_window['CONTEXT'].str.contains(pat=r"(?<![a-zA-Z])nephropathy(?![a-zA-z])|renal (insufficiency|disease)",regex=True)]
nephropathy['COMPLICATIONS'] = "nephropathy"
diabetes_notes_nephropathy = nephropathy[['NOTE_ID','CONTEXT','COMPLICATIONS']].drop_duplicates(subset=['NOTE_ID'])
print(diabetes_notes_nephropathy)
print(diabetes_notes_nephropathy.count())
retinopathy = diabetes_notes_window[diabetes_notes_window['CONTEXT'].str.contains(pat=r"(?<![a-zA-Z])retinopath(y|ic)?(?![a-zA-z])",regex=True)]
retinopathy['COMPLICATIONS'] = "retinopathy"
diabetes_notes_retinopathy = retinopathy[['NOTE_ID','CONTEXT','COMPLICATIONS']].drop_duplicates(subset=['NOTE_ID'])
print(diabetes_notes_retinopathy)
print(diabetes_notes_retinopathy.count())
```
To verify that our functions are extracting the information correctly, we use the second link, which was provided for validating these medical notes.
```python
# Using the validation link mentioned above, build a DataFrame for each condition
datos_verificacion = pd.read_csv("https://raw.githubusercontent.com/hhsieh2416/Identify_Diabetic_Complications/main/data/glodstandrad.csv")
datos_verificacion_neuropathy = datos_verificacion[datos_verificacion['DIABETIC_NEUROPATHY']==1][['NOTE_ID','DIABETIC_NEUROPATHY']]
print(datos_verificacion_neuropathy)
print(datos_verificacion_neuropathy.count())
datos_verificacion_nephropathy = datos_verificacion[datos_verificacion['DIABETIC_NEPHROPATHY']==1][['NOTE_ID','DIABETIC_NEPHROPATHY']]
print(datos_verificacion_nephropathy)
print(datos_verificacion_nephropathy.count())
datos_verificacion_retinopathy = datos_verificacion[datos_verificacion['DIABETIC_RETINOPATHY']==1][['NOTE_ID','DIABETIC_RETINOPATHY']]
print(datos_verificacion_retinopathy)
print(datos_verificacion_retinopathy.count())
```
The data obtained by our model must be combined with the validation data; this is done with a join, using each patient's identifier NOTE_ID as the key.
```python
# Join our DataFrames with the validation tables
ver_neuro = pd.merge(datos_verificacion_neuropathy, diabetes_notes_neuropathy, how = 'outer', on = 'NOTE_ID', indicator=True)
print(ver_neuro)
ver_nephro = pd.merge(datos_verificacion_nephropathy, diabetes_notes_nephropathy, how = 'outer', on = 'NOTE_ID', indicator=True)
print(ver_nephro)
ver_retino = pd.merge(datos_verificacion_retinopathy, diabetes_notes_retinopathy, how = 'outer', on = 'NOTE_ID', indicator=True)
print(ver_retino)
```
The first analysis is to compute counts for each complication in order to know how many false positives and false negatives there are; with these values the confusion matrix is built.
```python
# Compute the counts
conteo_na_neuro_falso_positivo = ver_neuro['DIABETIC_NEUROPATHY'].isna().sum()
conteo_na_nephro_falso_positivo = ver_nephro['DIABETIC_NEPHROPATHY'].isna().sum()
conteo_na_retino_falso_positivo = ver_retino['DIABETIC_RETINOPATHY'].isna().sum()
print('Patients without complications who were nevertheless flagged: ', conteo_na_neuro_falso_positivo+conteo_na_nephro_falso_positivo+conteo_na_retino_falso_positivo)
```
Patients without complications who were nevertheless flagged: 5
```python
conteo_na_neuro_falso_negativo = ver_neuro['COMPLICATIONS'].isna().sum()
conteo_na_nephro_falso_negativo = ver_nephro['COMPLICATIONS'].isna().sum()
conteo_na_retino_falso_negativo = ver_retino['COMPLICATIONS'].isna().sum()
print('Patients with complications who were not detected: ', conteo_na_neuro_falso_negativo + conteo_na_nephro_falso_negativo + conteo_na_retino_falso_negativo)
```
Patients with complications who were not detected: 13
```python
conteo_correcto_neuro = len(ver_neuro[ver_neuro['_merge'] == 'both'])
conteo_correcto_nephro = len(ver_nephro[ver_nephro['_merge'] == 'both'])
conteo_correcto_retino = len(ver_retino[ver_retino['_merge'] == 'both'])
print('Patients with diabetic complications who were correctly found: ', conteo_correcto_nephro+conteo_correcto_neuro+conteo_correcto_retino)
```
Patients with diabetic complications who were correctly found: 15
```python
conteo_complicacion_neuro = len( ver_neuro[ver_neuro['DIABETIC_NEUROPATHY'] == 1] )
conteo_complicacion_nephro = len( ver_nephro[ver_nephro['DIABETIC_NEPHROPATHY'] == 1] )
conteo_complicacion_retino = len( ver_retino[ver_retino['DIABETIC_RETINOPATHY'] == 1] )
print('Patients with diabetic complications: ', conteo_complicacion_neuro + conteo_complicacion_nephro + conteo_complicacion_retino)
```
Patients with diabetic complications: 28
Confusion matrix:
| Prediction \ Truth | Complications | No complications |
|--------------------|----------------|-------------------|
| Complications      | 15             | 5                 |
| No complications   | 13             | 108               |
We proceed with the evaluation using the *classification_report* function from the *sklearn* package. We start with neuropathy: first we fill all the NA values (produced by the join) with zero; once that is done, we compare the two columns.
```python
# d_neuro is not defined in the original write-up; the inferred mapping from the text label
# to the gold-standard encoding is used here
d_neuro = {"neuropathy": 1}
cor_neuro = datos_verificacion[['NOTE_ID', 'DIABETIC_NEUROPATHY']].merge(diabetes_notes_neuropathy[['NOTE_ID','COMPLICATIONS']], how='outer', on='NOTE_ID', indicator=True )
cor_neuro['COMPLICATIONS'] = cor_neuro['COMPLICATIONS'].map(d_neuro).fillna(0)
print('---NEUROPATHY---')
print(cor_neuro)
print(classification_report(cor_neuro['DIABETIC_NEUROPATHY'].tolist(), cor_neuro['COMPLICATIONS'].tolist()))
```
This gives the following evaluation:
| | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| 0 | 0.94 | 0.98 | 0.95 | 126 |
| 1 | 0.78 | 0.47 | 0.58 | 15 |
| accuracy | | | 0.93 | 141 |
| macroavg | 0.86 | 0.73 | 0.77 | 141 |
| weighted avg | 0.92 | 0.93 | 0.92 | 141 |
The report shows the main precision metrics, computed from the true and false positives together with the true and false negatives. *Recall* is the classifier's ability to find the positive examples, with a macro-average value of 0.73 here. *F1-score* summarizes how many correct positive predictions we have; its macro average is 0.77. The support is 15 positive examples and 126 negative ones, for a total of 141.
Second, we evaluate nephropathy.
```python
# d_nephro is not defined in the original write-up; the inferred mapping is used here
d_nephro = {"nephropathy": 1}
cor_nephro = datos_verificacion[['NOTE_ID', 'DIABETIC_NEPHROPATHY']].merge(diabetes_notes_nephropathy[['NOTE_ID','COMPLICATIONS']], how='outer', on='NOTE_ID', indicator=True )
cor_nephro['COMPLICATIONS'] = cor_nephro['COMPLICATIONS'].map(d_nephro).fillna(0)
print('---NEPHROPATHY---')
print(cor_nephro)
print(classification_report(cor_nephro['DIABETIC_NEPHROPATHY'].tolist(), cor_nephro['COMPLICATIONS'].tolist()))
```
| | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| 0 | 0.98 | 0.99 | 0.98 | 131 |
| 1 | 0.88 | 0.70 | 0.78 | 10 |
| accuracy | | | 0.97 | 141 |
| macroavg | 0.93 | 0.85 | 0.88 | 141 |
| weighted avg | 0.97 | 0.97 | 0.97 | 141 |
In this case the macro-average *F1-score* rises to 0.88, and the macro-average recall rises to 0.85, as the table above shows. We still have the 141 examples.
Finally, we evaluate retinopathy.
```python
# d_retino is not defined in the original write-up; the inferred mapping is used here
d_retino = {"retinopathy": 1}
cor_retino = datos_verificacion[['NOTE_ID', 'DIABETIC_RETINOPATHY']].merge(diabetes_notes_retinopathy[['NOTE_ID','COMPLICATIONS']], how='outer', on='NOTE_ID', indicator=True )
cor_retino['COMPLICATIONS'] = cor_retino['COMPLICATIONS'].map(d_retino).fillna(0)
print('---RETINOPATHY---')
print(cor_retino)
print(classification_report(cor_retino['DIABETIC_RETINOPATHY'].tolist(), cor_retino['COMPLICATIONS'].tolist()))
```
| | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| 0 | 0.99 | 0.99 | 0.98 | 138 |
| 1 | 0.33 | 0.33 | 0.33 | 3 |
| accuracy | | | 0.97 | 141 |
| macroavg | 0.66 | 0.66 | 0.66 | 141 |
| weighted avg | 0.97 | 0.97 | 0.97 | 141 |
This last evaluation returns the lowest macro-average *f1-score* of the three, 0.66. Note that this is the complication with the fewest positive cases of the three studied, only three, of which just one was correctly found, which lowers the macro average considerably.
|
RaphaelReinauer/LunarLander-v6
|
RaphaelReinauer
| 2022-05-12T19:04:57Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-11T22:44:59Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 298.88 +/- 14.17
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
vukpetar/ppo-BipedalWalker-v3
|
vukpetar
| 2022-05-12T17:48:05Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"BipedalWalker-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-12T15:43:44Z |
---
library_name: stable-baselines3
tags:
- BipedalWalker-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 302.55 +/- 0.48
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalker-v3
type: BipedalWalker-v3
---
# **PPO** Agent playing **BipedalWalker-v3**
This is a trained model of a **PPO** agent playing **BipedalWalker-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
deepparag/gpt-j-6B-longer-generation
|
deepparag
| 2022-05-12T17:33:59Z | 0 | 1 | null |
[
"pytorch",
"causal-lm",
"en",
"arxiv:2104.09864",
"arxiv:2101.00027",
"license:apache-2.0",
"region:us"
] | null | 2022-05-12T17:32:17Z |
---
language:
- en
tags:
- pytorch
- causal-lm
license: apache-2.0
datasets:
- The Pile
---
# This model is a clone of https://huggingface.co/EleutherAI/gpt-j-6B in which I have simply increased the max response size.
# GPT-J 6B
## Model Description
GPT-J 6B is a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.
<figure>
| Hyperparameter | Value |
|----------------------|------------|
| \\(n_{parameters}\\) | 6053381344 |
| \\(n_{layers}\\) | 28* |
| \\(d_{model}\\) | 4096 |
| \\(d_{ff}\\) | 16384 |
| \\(n_{heads}\\) | 16 |
| \\(d_{head}\\) | 256 |
| \\(n_{ctx}\\) | 2048 |
| \\(n_{vocab}\\) | 50257/50400† (same tokenizer as GPT-2/3) |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
<figcaption><p><strong>*</strong> Each layer consists of one feedforward block and one self attention block.</p>
<p><strong>†</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure>
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
## Training data
GPT-J 6B was trained on [the Pile](https://pile.eleuther.ai), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai).
## Training procedure
This model was trained for 402 billion tokens over 383,500 steps on TPU v3-256 pod. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.
## Intended Use and Limitations
GPT-J learns an inner representation of the English language that can be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating text from a prompt.
### How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
```
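A short generation sketch building on the snippet above; the prompt and decoding settings are illustrative and not part of the original card:
```python
prompt = "The Pile is a large-scale curated dataset"
inputs = tokenizer(prompt, return_tensors="pt")
# sample a continuation; adjust max_length to taste (the card above notes this clone raises the maximum response size)
outputs = model.generate(**inputs, do_sample=True, temperature=0.9, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```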
### Limitations and Biases
The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.
GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
## Evaluation results
<figure>
| Model | Public | Training FLOPs | LAMBADA PPL ↓ | LAMBADA Acc ↑ | Winogrande ↑ | Hellaswag ↑ | PIQA ↑ | Dataset Size (GB) |
|--------------------------|-------------|----------------|--- |--- |--- |--- |--- |-------------------|
| Random Chance | ✓ | 0 | ~a lot | ~0% | 50% | 25% | 25% | 0 |
| GPT-3 Ada‡ | ✗ | ----- | 9.95 | 51.6% | 52.9% | 43.4% | 70.5% | ----- |
| GPT-2 1.5B | ✓ | ----- | 10.63 | 51.21% | 59.4% | 50.9% | 70.8% | 40 |
| GPT-Neo 1.3B‡ | ✓ | 3.0e21 | 7.50 | 57.2% | 55.0% | 48.9% | 71.1% | 825 |
| Megatron-2.5B* | ✗ | 2.4e21 | ----- | 61.7% | ----- | ----- | ----- | 174 |
| GPT-Neo 2.7B‡ | ✓ | 6.8e21 | 5.63 | 62.2% | 56.5% | 55.8% | 73.0% | 825 |
| GPT-3 1.3B*‡ | ✗ | 2.4e21 | 5.44 | 63.6% | 58.7% | 54.7% | 75.1% | ~800 |
| GPT-3 Babbage‡ | ✗ | ----- | 5.58 | 62.4% | 59.0% | 54.5% | 75.5% | ----- |
| Megatron-8.3B* | ✗ | 7.8e21 | ----- | 66.5% | ----- | ----- | ----- | 174 |
| GPT-3 2.7B*‡ | ✗ | 4.8e21 | 4.60 | 67.1% | 62.3% | 62.8% | 75.6% | ~800 |
| Megatron-11B† | ✓ | 1.0e22 | ----- | ----- | ----- | ----- | ----- | 161 |
| **GPT-J 6B‡** | **✓** | **1.5e22** | **3.99** | **69.7%** | **65.3%** | **66.1%** | **76.5%** | **825** |
| GPT-3 6.7B*‡ | ✗ | 1.2e22 | 4.00 | 70.3% | 64.5% | 67.4% | 78.0% | ~800 |
| GPT-3 Curie‡ | ✗ | ----- | 4.00 | 69.3% | 65.6% | 68.5% | 77.9% | ----- |
| GPT-3 13B*‡ | ✗ | 2.3e22 | 3.56 | 72.5% | 67.9% | 70.9% | 78.5% | ~800 |
| GPT-3 175B*‡ | ✗ | 3.1e23 | 3.00 | 76.2% | 70.2% | 78.9% | 81.0% | ~800 |
| GPT-3 Davinci‡ | ✗ | ----- | 3.0 | 75% | 72% | 78% | 80% | ----- |
<figcaption><p>Models roughly sorted by performance, or by FLOPs if not available.</p>
<p><strong>*</strong> Evaluation numbers reported by their respective authors. All other numbers are provided by
running <a href="https://github.com/EleutherAI/lm-evaluation-harness/"><code>lm-evaluation-harness</code></a> either with released
weights or with API access. Due to subtle implementation differences as well as different zero shot task framing, these
might not be directly comparable. See <a href="https://blog.eleuther.ai/gpt3-model-sizes/">this blog post</a> for more
details.</p>
<p><strong>†</strong> Megatron-11B provides no comparable metrics, and several implementations using the released weights do not
reproduce the generation quality and evaluations. (see <a href="https://github.com/huggingface/transformers/pull/10301">1</a>
<a href="https://github.com/pytorch/fairseq/issues/2358">2</a> <a href="https://github.com/pytorch/fairseq/issues/2719">3</a>)
Thus, evaluation was not attempted.</p>
<p><strong>‡</strong> These models have been trained with data which contains possible test set contamination. The OpenAI GPT-3 models
failed to deduplicate training data for certain test sets, while the GPT-Neo models as well as this one are
trained on the Pile, which has not been deduplicated against any test sets.</p></figcaption></figure>
## Citation and Related Information
### BibTeX entry
To cite this model:
```bibtex
@misc{gpt-j,
author = {Wang, Ben and Komatsuzaki, Aran},
title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
To cite the codebase that trained this model:
```bibtex
@misc{mesh-transformer-jax,
author = {Wang, Ben},
title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
If you use this model, we would love to hear about it! Reach out on [GitHub](https://github.com/kingoflolz/mesh-transformer-jax), Discord, or shoot Ben an email.
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha.
Thanks to everyone who has helped out in one way or another (listed alphabetically):
- [James Bradbury](https://twitter.com/jekbradbury) for valuable assistance with debugging JAX issues.
- [Stella Biderman](https://www.stellabiderman.com), [Eric Hallahan](https://twitter.com/erichallahan), [Kurumuz](https://github.com/kurumuz/), and [Finetune](https://github.com/finetuneanon/) for converting the model to be compatible with the `transformers` package.
- [Leo Gao](https://twitter.com/nabla_theta) for running zero shot evaluations for the baseline models for the table.
- [Laurence Golding](https://github.com/researcher2/) for adding some features to the web demo.
- [Aran Komatsuzaki](https://twitter.com/arankomatsuzaki) for advice with experiment design and writing the blog posts.
- [Janko Prester](https://github.com/jprester/) for creating the web demo frontend.
|
vukpetar/ppo-BipedalWalker-v3-v1
|
vukpetar
| 2022-05-12T17:21:23Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"BipedalWalker-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-12T17:20:30Z |
---
library_name: stable-baselines3
tags:
- BipedalWalker-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 302.93 +/- 0.82
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalker-v3
type: BipedalWalker-v3
---
# **PPO** Agent playing **BipedalWalker-v3**
This is a trained model of a **PPO** agent playing **BipedalWalker-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
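A minimal loading-and-rollout sketch, assuming the checkpoint was pushed as a Stable-Baselines3 `.zip` with the `huggingface_sb3` helper (the filename below is a guess; check the repository's file list):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# hypothetical filename -- replace with the actual .zip stored in the repo
checkpoint = load_from_hub(repo_id="vukpetar/ppo-BipedalWalker-v3-v1",
                           filename="ppo-BipedalWalker-v3.zip")
model = PPO.load(checkpoint)

env = gym.make("BipedalWalker-v3")
obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```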
|
CrispyAlbumArt/ppo-LunarLander-v4
|
CrispyAlbumArt
| 2022-05-12T16:17:26Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-12T16:17:02Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 296.41 +/- 12.56
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
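A hedged evaluation sketch, again assuming the repository hosts a Stable-Baselines3 `.zip` checkpoint (the filename is an assumption):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(repo_id="CrispyAlbumArt/ppo-LunarLander-v4",
                           filename="ppo-LunarLander-v2.zip")  # hypothetical filename
model = PPO.load(checkpoint)

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```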
|
alk/mt5-small-finetuned-cnn_dailymail-en-es
|
alk
| 2022-05-12T16:08:51Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-11T23:49:04Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: alk/mt5-small-finetuned-cnn_dailymail-en-es
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# alk/mt5-small-finetuned-cnn_dailymail-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.9490
- Validation Loss: 1.6920
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 287112, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.9445 | 1.9068 | 0 |
| 2.2439 | 1.8106 | 1 |
| 2.1301 | 1.7582 | 2 |
| 2.0643 | 1.7378 | 3 |
| 2.0191 | 1.7181 | 4 |
| 1.9870 | 1.7033 | 5 |
| 1.9646 | 1.7015 | 6 |
| 1.9490 | 1.6920 | 7 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
jgerbscheid/ppo-LunarLander-v2
|
jgerbscheid
| 2022-05-12T16:07:44Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-12T16:07:02Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 182.52 +/- 64.21
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
bansals10/wav2vec2-large-xls-r-300m-turkish-colab
|
bansals10
| 2022-05-12T15:25:20Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-11T14:43:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
mustapha/Lunar_lander_v2_gym_2
|
mustapha
| 2022-05-12T15:21:42Z | 3 | 1 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-11T10:36:28Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 284.86 +/- 16.57
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
karthiksv/vit-base-beans
|
karthiksv
| 2022-05-12T15:21:37Z | 69 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-05-10T15:08:52Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- beans
model-index:
- name: vit-base-beans
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.10.1
- Datasets 2.1.0
- Tokenizers 0.12.1
|
damianr13/ppo-LunarLander-v2
|
damianr13
| 2022-05-12T15:07:24Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-12T15:06:51Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 214.61 +/- 36.36
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
pinot/wav2vec2-base-timit-demo-colab
|
pinot
| 2022-05-12T14:37:53Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-23T05:58:32Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4548
- Wer: 0.3373
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3291 | 4.0 | 500 | 1.0403 | 0.7174 |
| 0.5336 | 8.0 | 1000 | 0.4744 | 0.4489 |
| 0.2155 | 12.0 | 1500 | 0.4476 | 0.3832 |
| 0.1256 | 16.0 | 2000 | 0.4358 | 0.3639 |
| 0.0867 | 20.0 | 2500 | 0.4634 | 0.3527 |
| 0.0608 | 24.0 | 3000 | 0.4784 | 0.3466 |
| 0.0476 | 28.0 | 3500 | 0.4548 | 0.3373 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv-earlystopping
|
theojolliffe
| 2022-05-12T14:00:24Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-12T08:17:12Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv-earlystopping
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv-earlystopping
This model is a fine-tuned version of [theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv](https://huggingface.co/theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8347
- Rouge1: 53.9049
- Rouge2: 35.5953
- Rougel: 39.788
- Rougelsum: 51.4101
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 0.31 | 125 | 1.0240 | 52.5632 | 32.977 | 34.672 | 49.9905 | 142.0 |
| No log | 0.63 | 250 | 1.0056 | 52.5508 | 32.4826 | 34.6851 | 49.835 | 141.6852 |
| No log | 0.94 | 375 | 0.8609 | 53.0475 | 32.9384 | 35.3322 | 50.272 | 141.6481 |
| 0.8255 | 1.26 | 500 | 0.9022 | 52.2493 | 31.5622 | 33.389 | 49.6612 | 142.0 |
| 0.8255 | 1.57 | 625 | 0.8706 | 53.3568 | 33.2533 | 35.7531 | 50.4568 | 141.8889 |
| 0.8255 | 1.88 | 750 | 0.8186 | 52.7375 | 33.4439 | 37.1094 | 50.5323 | 142.0 |
| 0.8255 | 2.2 | 875 | 0.8041 | 53.4992 | 34.6929 | 37.9614 | 51.091 | 142.0 |
| 0.5295 | 2.51 | 1000 | 0.7907 | 52.6185 | 33.8053 | 37.1725 | 50.4881 | 142.0 |
| 0.5295 | 2.83 | 1125 | 0.7740 | 52.7107 | 33.1023 | 36.0865 | 50.0365 | 142.0 |
| 0.5295 | 3.14 | 1250 | 0.8200 | 52.5607 | 33.7948 | 37.2312 | 50.3345 | 142.0 |
| 0.5295 | 3.45 | 1375 | 0.8188 | 53.9233 | 34.446 | 36.7566 | 51.3135 | 142.0 |
| 0.351 | 3.77 | 1500 | 0.8071 | 53.9096 | 35.5977 | 38.6832 | 51.4986 | 142.0 |
| 0.351 | 4.08 | 1625 | 0.8347 | 53.9049 | 35.5953 | 39.788 | 51.4101 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
vukpetar/ppo-MountainCar-v0
|
vukpetar
| 2022-05-12T13:59:19Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"MountainCar-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-12T12:37:22Z |
---
library_name: stable-baselines3
tags:
- MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -90.00 +/- 6.86
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MountainCar-v0
type: MountainCar-v0
---
# **PPO** Agent playing **MountainCar-v0**
This is a trained model of a **PPO** agent playing **MountainCar-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
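A minimal loading sketch, assuming a Stable-Baselines3 `.zip` checkpoint (the filename is hypothetical):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="vukpetar/ppo-MountainCar-v0",
                           filename="ppo-MountainCar-v0.zip")  # hypothetical filename
model = PPO.load(checkpoint)
print(model.policy)  # inspect the loaded policy network
```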
|
MikhailKon/TEST2ppo-LunarLander-v2
|
MikhailKon
| 2022-05-12T13:56:22Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-12T11:31:06Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 262.31 +/- 16.12
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
huggingtweets/newscollected-nickmullensgf
|
huggingtweets
| 2022-05-12T13:41:10Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-04-23T17:13:18Z |
---
language: en
thumbnail: http://www.huggingtweets.com/newscollected-nickmullensgf/1652362865457/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1522032150358511616/83U7w6rG_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1469950344918671364/-037cCwh_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">del co & kayla</div>
<div style="text-align: center; font-size: 14px;">@newscollected-nickmullensgf</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from del co & kayla.
| Data | del co | kayla |
| --- | --- | --- |
| Tweets downloaded | 366 | 3215 |
| Retweets | 30 | 946 |
| Short tweets | 67 | 362 |
| Tweets kept | 269 | 1907 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/nqg16qms/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @newscollected-nickmullensgf's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3jf63jpr) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3jf63jpr/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/newscollected-nickmullensgf')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
turhancan97/first_ppo-MountainCar-v0
|
turhancan97
| 2022-05-12T13:31:10Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"MountainCar-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-12T13:30:42Z |
---
library_name: stable-baselines3
tags:
- MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -200.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MountainCar-v0
type: MountainCar-v0
---
# **PPO** Agent playing **MountainCar-v0**
This is a trained model of a **PPO** agent playing **MountainCar-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
IljaSamoilov/MBART-estonian-subtitles-with-seconds
|
IljaSamoilov
| 2022-05-12T12:34:45Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"et",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-09T18:41:45Z |
---
language:
- et
widget:
- text: "te olete ka noh, noh, päris korralikult ka Rahvusringhäälingu teatud mõttes sellisesse keerulisse olukorda pannud,"
- text: "Et, et, et miks mitte olla siis tasakaalus, ma noh, hüpoteetiliselt viskan selle palli üles,"
---
The dataset must be processed as follows:
```python
import numpy as np

# `tokenizer` is the MBart50Tokenizer loaded below
def preprocess_function_with_seconds(ds):
    inputs = ds['generated']
    targets = ds['subtitle']
    model_inputs = tokenizer(inputs, truncation=True, max_length=128, padding=True, return_tensors="np")
    # format each utterance duration with one decimal place and tokenize it
    secs = list(map(lambda x: "{:.1f}".format(x), ds["seconds"]))
    sec_inputs = tokenizer(secs, truncation=True, max_length=128, padding=True, return_tensors="np")
    # prepend the first content token of the tokenized duration to the text inputs
    model_inputs['input_ids'] = np.concatenate((sec_inputs['input_ids'][:, 1:2], model_inputs['input_ids']), 1)
    model_inputs['attention_mask'] = np.concatenate((sec_inputs['attention_mask'][:, 1:2], model_inputs['attention_mask']), 1)
    with tokenizer.as_target_tokenizer():
        labels = tokenizer(targets, truncation=True, max_length=128, padding=True, return_tensors="np")
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs
```
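For context, a hedged sketch of applying the function with the `datasets` library (assuming `ds` is a `datasets.Dataset` with `generated`, `subtitle` and `seconds` columns):
```python
# `ds` is assumed to be a datasets.Dataset with 'generated', 'subtitle' and 'seconds' columns
tokenized = ds.map(preprocess_function_with_seconds, batched=True, remove_columns=ds.column_names)
```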
Importing the model and tokenizer:
```python
from transformers import MBart50Tokenizer, MBartForConditionalGeneration

tokenizer = MBart50Tokenizer.from_pretrained("IljaSamoilov/MBART-estonian-subtitles-with-seconds", src_lang="et_EE", tgt_lang="et_EE")
model = MBartForConditionalGeneration.from_pretrained("IljaSamoilov/MBART-estonian-subtitles-with-seconds")
```
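A minimal inference sketch that mirrors the preprocessing convention above (the example sentence and duration are illustrative; the decoding settings are assumptions, not from the original card):
```python
import torch

text = "te olete ka noh, noh, päris korralikult ka Rahvusringhäälingu teatud mõttes sellisesse keerulisse olukorda pannud,"
seconds = 5.0  # illustrative utterance duration

text_inputs = tokenizer(text, truncation=True, max_length=128, return_tensors="pt")
sec_inputs = tokenizer("{:.1f}".format(seconds), truncation=True, max_length=128, return_tensors="pt")

# prepend the duration token, mirroring preprocess_function_with_seconds
input_ids = torch.cat((sec_inputs["input_ids"][:, 1:2], text_inputs["input_ids"]), dim=1)
attention_mask = torch.cat((sec_inputs["attention_mask"][:, 1:2], text_inputs["attention_mask"]), dim=1)

outputs = model.generate(input_ids=input_ids, attention_mask=attention_mask, max_length=128,
                         forced_bos_token_id=tokenizer.lang_code_to_id["et_EE"])  # assumption: force Estonian output
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```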
|
jabot/PPO_LunarLanderV2
|
jabot
| 2022-05-12T11:59:38Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-11T21:01:39Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 293.18 +/- 13.38
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
CrispyAlbumArt/TEST2ppo-LunarLander-v2
|
CrispyAlbumArt
| 2022-05-12T11:54:18Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-12T11:24:02Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 272.13 +/- 19.88
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
driwnet/stsb-m-mt-ca-distilbert-base-uncased
|
driwnet
| 2022-05-12T11:18:25Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"sentence-similarity",
"ca",
"dataset:stsb_multi_mt",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-05-12T09:29:27Z |
---
language: ca
datasets:
- stsb_multi_mt
tags:
- sentence-similarity
- sentence-transformers
---
# distilbert-base-uncased trained for Semantic Textual Similarity in Catalan
This is a test model that was fine-tuned using the Catalan translation of the Spanish datasets from [stsb_multi_mt](https://huggingface.co/datasets/stsb_multi_mt) in order to understand and benchmark STS models.
## Model and training data description
This model was built by taking `distilbert-base-uncased` and training it on a Semantic Textual Similarity task using a modified version of the training script for STS from Sentence Transformers (the modified script is included in the repo). It was trained using the Spanish datasets from [stsb_multi_mt](https://huggingface.co/datasets/stsb_multi_mt), which are the STSBenchmark datasets automatically translated to other languages using deepl.com and salt.gva.es. Refer to the dataset repository for more details.
## Intended uses & limitations
This model was built just as a proof of concept of STS fine-tuning with Catalan data and has no specific intended use other than giving a sense of how this training works.
## How to use
You may use it like any other STS-trained model to extract sentence embeddings, as sketched below; check the Sentence Transformers documentation for details.
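A minimal embedding-and-similarity sketch with Sentence Transformers (assuming the repository contains a full sentence-transformers model; the example sentences are illustrative):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("driwnet/stsb-m-mt-ca-distilbert-base-uncased")
sentences = ["El gat dorm al sofà.", "Un felí descansa al sofà."]
embeddings = model.encode(sentences, convert_to_tensor=True)
print(util.cos_sim(embeddings[0], embeddings[1]))  # cosine similarity between the two sentences
```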
## Training procedure
Use the included script to train the base model in Catalan. You can also try to train another model by passing its reference as the first argument, or train in another of the languages included in the training dataset.
## Evaluation results
Evaluating `distilbert-base-uncased` on the Catalan test dataset before training results in:
```
Cosine-Similarity : Pearson: 0.3180 Spearman: 0.4014
```
While the fine-tuned version with the defaults of the training script and the Catalan training dataset results in:
```
Cosine-Similarity : Pearson: 0.7368 Spearman: 0.7288
```
## Resources
- Training dataset [stsb_multi_mt](https://huggingface.co/datasets/stsb_multi_mt)
- Sentence Transformers [Semantic Textual Similarity](https://www.sbert.net/examples/training/sts/README.html)
- Check [sts_eval](https://github.com/eduardofv/sts_eval) for a comparison with Tensorflow and Sentence-Transformers models
- Check the [development environment to run the scripts and evaluation](https://github.com/eduardofv/ai-denv)
|
DioLiu/distilbert-base-uncased-finetuned-sst2-shake-wiki-update-shuffle
|
DioLiu
| 2022-05-12T11:04:41Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-12T08:35:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2-shake-wiki-update-shuffle
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2-shake-wiki-update-shuffle
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0284
- Accuracy: 0.9971
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0166 | 1.0 | 7783 | 0.0135 | 0.9965 |
| 0.0091 | 2.0 | 15566 | 0.0172 | 0.9968 |
| 0.0059 | 3.0 | 23349 | 0.0223 | 0.9968 |
| 0.0 | 4.0 | 31132 | 0.0332 | 0.9962 |
| 0.0001 | 5.0 | 38915 | 0.0284 | 0.9971 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
alisonbrwn/ppo-LunarLander_doubled_steps
|
alisonbrwn
| 2022-05-12T10:59:54Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-12T10:59:24Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 266.68 +/- 13.25
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
ali-issa/FYP_ARABIZI
|
ali-issa
| 2022-05-12T10:47:21Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-12T06:34:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-Arabizi-gpu-colab-similar-to-german-param
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-Arabizi-gpu-colab-similar-to-german-param
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5609
- Wer: 0.4042
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.6416 | 2.83 | 400 | 2.8983 | 1.0 |
| 1.4951 | 5.67 | 800 | 0.6272 | 0.6097 |
| 0.6419 | 8.51 | 1200 | 0.5491 | 0.5069 |
| 0.4767 | 11.35 | 1600 | 0.5152 | 0.4553 |
| 0.3899 | 14.18 | 2000 | 0.5436 | 0.4475 |
| 0.3342 | 17.02 | 2400 | 0.5400 | 0.4431 |
| 0.2982 | 19.85 | 2800 | 0.5599 | 0.4248 |
| 0.2738 | 22.69 | 3200 | 0.5401 | 0.4103 |
| 0.2563 | 25.53 | 3600 | 0.5710 | 0.4198 |
| 0.2443 | 28.37 | 4000 | 0.5609 | 0.4042 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
eslamxm/mt5-base-finetuned-urdu-arabic
|
eslamxm
| 2022-05-12T09:18:16Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"arabic",
"ar",
"Abstractive Summarization",
"generated_from_trainer",
"dataset:xlsum",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-05-12T01:15:19Z |
---
license: apache-2.0
tags:
- summarization
- arabic
- ar
- mt5
- Abstractive Summarization
- generated_from_trainer
datasets:
- xlsum
model-index:
- name: mt5-base-finetuned-urdu-finetuned-urdu-arabic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-urdu-finetuned-urdu-arabic
This model is a fine-tuned version of [eslamxm/mt5-base-finetuned-urdu](https://huggingface.co/eslamxm/mt5-base-finetuned-urdu) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3744
- Rouge-1: 22.77
- Rouge-2: 10.15
- Rouge-l: 20.71
- Gen Len: 19.0
- Bertscore: 71.46
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 4.5155 | 1.0 | 1172 | 3.6895 | 18.81 | 6.77 | 17.01 | 19.0 | 70.27 |
| 3.8315 | 2.0 | 2344 | 3.5047 | 19.75 | 7.79 | 17.95 | 19.0 | 70.58 |
| 3.6122 | 3.0 | 3516 | 3.4231 | 20.46 | 8.44 | 18.7 | 19.0 | 70.8 |
| 3.4735 | 4.0 | 4688 | 3.3835 | 21.12 | 8.86 | 19.21 | 19.0 | 70.98 |
| 3.3855 | 5.0 | 5860 | 3.3744 | 21.48 | 9.01 | 19.57 | 19.0 | 71.17 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
Einbauch/PPO-LunarLander-v2
|
Einbauch
| 2022-05-12T08:51:04Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-12T08:50:34Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 284.81 +/- 11.11
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
Laikokwei/bert-finetuned-squad
|
Laikokwei
| 2022-05-12T08:43:19Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-05-12T05:42:28Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Laikokwei/bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Laikokwei/bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4662
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 44364, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.2206 | 0 |
| 0.7196 | 1 |
| 0.4662 | 2 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
shoyano372/test
|
shoyano372
| 2022-05-12T07:18:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-05-12T07:17:37Z |
---
license: apache-2.0
---
- Test
|
iis2009002/xlm-roberta-base-finetuned-panx-all
|
iis2009002
| 2022-05-12T07:17:40Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-05-04T11:40:11Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1752
- F1: 0.8557
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3 | 1.0 | 835 | 0.1862 | 0.8114 |
| 0.1552 | 2.0 | 1670 | 0.1758 | 0.8426 |
| 0.1002 | 3.0 | 2505 | 0.1752 | 0.8557 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
iis2009002/xlm-roberta-base-finetuned-panx-en
|
iis2009002
| 2022-05-12T07:08:50Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-05-04T11:23:48Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.692179700499168
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3921
- F1: 0.6922
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1465 | 1.0 | 50 | 0.5838 | 0.4777 |
| 0.5055 | 2.0 | 100 | 0.4477 | 0.6374 |
| 0.3713 | 3.0 | 150 | 0.3921 | 0.6922 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
iis2009002/xlm-roberta-base-finetuned-panx-de-fr
|
iis2009002
| 2022-05-12T07:03:30Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-05-04T10:18:36Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1644
- F1: 0.8617
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2891 | 1.0 | 715 | 0.1780 | 0.8288 |
| 0.1471 | 2.0 | 1430 | 0.1627 | 0.8509 |
| 0.0947 | 3.0 | 2145 | 0.1644 | 0.8617 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Vnven25/en_pipeline
|
Vnven25
| 2022-05-12T06:49:36Z | 4 | 0 |
spacy
|
[
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
] |
token-classification
| 2022-05-11T17:14:48Z |
---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_pipeline
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 1.0
- name: NER Recall
type: recall
value: 1.0
- name: NER F Score
type: f_score
value: 1.0
---
| Feature | Description |
| --- | --- |
| **Name** | `en_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.2.3,<3.3.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (6 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `COMPANY NAME`, `CONTRACT`, `EMAIL`, `EVENT`, `MODULE`, `NAME` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 100.00 |
| `ENTS_P` | 100.00 |
| `ENTS_R` | 100.00 |
| `TOK2VEC_LOSS` | 6689.73 |
| `NER_LOSS` | 483.71 |
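A hedged usage sketch, assuming the packaged pipeline from this repository has been installed locally (spaCy pipelines on the Hub are typically installed from their wheel before `spacy.load` can resolve the name); the example text is illustrative:
```python
import spacy

# assumes the en_pipeline package from this repo has been pip-installed
nlp = spacy.load("en_pipeline")
doc = nlp("Jane Doe from Acme Corp signed the contract and emailed jane@acme.example about the kickoff event.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # labels: COMPANY NAME, CONTRACT, EMAIL, EVENT, MODULE, NAME
```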
|
Jackett/subject_classifier_extended
|
Jackett
| 2022-05-12T06:09:29Z | 9 | 2 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-04T03:05:43Z |
Label mappings:
`{'LABEL_0': 'Biology', 'LABEL_1': 'Physics', 'LABEL_2': 'Chemistry', 'LABEL_3': 'Maths', 'LABEL_4': 'Social Science', 'LABEL_5': 'English'}`
Training data distribution:
- Physics - 7000
- Maths - 7000
- Biology - 7000
- Chemistry - 7000
- English - 5254
- Social Science - 7000
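A minimal classification sketch that applies the label mapping above (the example sentence is illustrative):
```python
from transformers import pipeline

id2subject = {"LABEL_0": "Biology", "LABEL_1": "Physics", "LABEL_2": "Chemistry",
              "LABEL_3": "Maths", "LABEL_4": "Social Science", "LABEL_5": "English"}

classifier = pipeline("text-classification", model="Jackett/subject_classifier_extended")
pred = classifier("Photosynthesis converts light energy into chemical energy in plant cells.")[0]
print(id2subject[pred["label"]], round(pred["score"], 3))
```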
|