| modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-02 06:30:45) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 533 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-02 06:30:39) | card (string, 11 chars to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
nataliebhuerta/wav2vec2-base-finetuned-ks
|
nataliebhuerta
| 2022-05-27T14:46:35Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-05-27T14:35:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- superb
model-index:
- name: wav2vec2-base-finetuned-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
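For reference, the list above maps directly onto `transformers.TrainingArguments`; the effective batch size of 128 is the per-device batch size (32) multiplied by the 4 gradient-accumulation steps. The sketch below is a hedged reconstruction of those arguments, not the author's actual script, and the `output_dir` name is illustrative.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-base-finetuned-ks",  # illustrative name, not from the card
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,   # 32 * 4 = total_train_batch_size of 128
    num_train_epochs=5,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,                # lr_scheduler_warmup_ratio
    seed=42,
)
```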
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.14.0
- Tokenizers 0.10.3
|
esh/q-Taxi-v3
|
esh
| 2022-05-27T14:07:28Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-27T14:07:10Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: nan +/- nan
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="esh/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
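The snippet above assumes a `gym` import plus `load_from_hub` and `evaluate_agent` helpers defined in the Deep RL course notebook rather than in a published package. A minimal sketch of what such a `load_from_hub` could look like, assuming the repository stores a pickled dict with the keys used above:
```python
import pickle

import gym
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle a Q-learning model dict (qtable, env_id, eval settings)."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="esh/q-Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])
```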
|
esh/q-FrozenLake-v1-8x8-slippery
|
esh
| 2022-05-27T14:05:27Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-22T15:32:26Z |
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-slippery
results:
- metrics:
- type: mean_reward
value: nan +/- nan
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="esh/q-FrozenLake-v1-8x8-slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
skyfox/q-FrozenLake-v1-4x4-noSlippery
|
skyfox
| 2022-05-27T14:02:09Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-27T14:02:02Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="skyfox/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
srini98/q-FrozenLake-v1-4x4-noSlippery
|
srini98
| 2022-05-27T13:21:40Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-27T13:21:34Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="srini98/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
onewithnickelcoins/roberta-base-stars
|
onewithnickelcoins
| 2022-05-27T13:15:43Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-27T12:33:44Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-stars
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-stars
This model is a fine-tuned version of [onewithnickelcoins/roberta-base-MLM](https://huggingface.co/onewithnickelcoins/roberta-base-MLM) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2914
- Accuracy: 0.6857
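The card does not include an inference snippet; a minimal, hedged example of querying this checkpoint with a text-classification pipeline (assuming the repository hosts the fine-tuned weights and tokenizer, and that the label names were saved with the model):
```python
from transformers import pipeline

# Load the fine-tuned star-rating classifier from the Hub.
classifier = pipeline("text-classification", model="onewithnickelcoins/roberta-base-stars")

print(classifier("The food was great but the service was painfully slow."))
```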
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30.0
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
jkhan447/language-detection-Bert-base-uncased-additional
|
jkhan447
| 2022-05-27T13:02:32Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-27T09:28:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: language-detection-Bert-base-uncased-additional
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# language-detection-Bert-base-uncased-additional
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2330
- Accuracy: 0.9497
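No usage example is provided; a hedged sketch of running the classifier manually (it assumes the checkpoint stores an `id2label` mapping for the language names):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "jkhan447/language-detection-Bert-base-uncased-additional"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Bonjour tout le monde", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label.get(predicted_id, predicted_id))
```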
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
YaYaB/q-Taxi-v3
|
YaYaB
| 2022-05-27T12:49:58Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-27T12:49:48Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="YaYaB/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
YaYaB/q-FrozenLake-v1-4x4-noSlippery
|
YaYaB
| 2022-05-27T12:35:29Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-27T12:35:18Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="YaYaB/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
onewithnickelcoins/roberta-base-MLM
|
onewithnickelcoins
| 2022-05-27T11:57:24Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-05-27T11:40:10Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-MLM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-MLM
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0265
- Accuracy: 0.6009
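A minimal, hedged sketch of using this checkpoint with a fill-mask pipeline (RoBERTa tokenizers use `<mask>` as the mask token):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="onewithnickelcoins/roberta-base-MLM")
for prediction in fill_mask("The service at this restaurant was <mask>."):
    print(prediction["token_str"], round(prediction["score"], 4))
```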
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30.0
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
huggingtweets/mrbean
|
huggingtweets
| 2022-05-27T11:30:30Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-27T11:14:36Z |
---
language: en
thumbnail: http://www.huggingtweets.com/mrbean/1653651025192/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/521655203011899392/pxOndDc7_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Mr Bean</div>
<div style="text-align: center; font-size: 14px;">@mrbean</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Mr Bean.
| Data | Mr Bean |
| --- | --- |
| Tweets downloaded | 2324 |
| Retweets | 156 |
| Short tweets | 271 |
| Tweets kept | 1897 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1nqdk593/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mrbean's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/27zl3ib7) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/27zl3ib7/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mrbean')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/liwenliang
|
huggingtweets
| 2022-05-27T11:26:23Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-27T11:22:47Z |
---
language: en
thumbnail: http://www.huggingtweets.com/liwenliang/1653650598585/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1197224526175784968/7n8Q3j05_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Kevin Li</div>
<div style="text-align: center; font-size: 14px;">@liwenliang</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Kevin Li.
| Data | Kevin Li |
| --- | --- |
| Tweets downloaded | 108 |
| Retweets | 21 |
| Short tweets | 5 |
| Tweets kept | 82 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/k8wvicoq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @liwenliang's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/14q55e16) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/14q55e16/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/liwenliang')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/emilythornberry
|
huggingtweets
| 2022-05-27T11:19:25Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-27T11:19:18Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1446231256052731905/octqXaR9_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Emily Thornberry</div>
<div style="text-align: center; font-size: 14px;">@emilythornberry</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Emily Thornberry.
| Data | Emily Thornberry |
| --- | --- |
| Tweets downloaded | 3234 |
| Retweets | 1153 |
| Short tweets | 274 |
| Tweets kept | 1807 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/gag2yg4r/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @emilythornberry's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2zsqk4sk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2zsqk4sk/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/emilythornberry')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/alejodorowsky
|
huggingtweets
| 2022-05-27T11:13:26Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-27T11:11:07Z |
---
language: en
thumbnail: http://www.huggingtweets.com/alejodorowsky/1653650001771/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/784393032774873088/1x6o_3ws_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Alejandro Jodorowsky</div>
<div style="text-align: center; font-size: 14px;">@alejodorowsky</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Alejandro Jodorowsky.
| Data | Alejandro Jodorowsky |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 640 |
| Short tweets | 175 |
| Tweets kept | 2430 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1vwsnx64/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @alejodorowsky's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/j8ai679x) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/j8ai679x/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/alejodorowsky')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/dlputin
|
huggingtweets
| 2022-05-27T10:48:58Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-27T10:48:51Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/535525386872832001/NQn2b8OA_400x400.jpeg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">普京</div>
<div style="text-align: center; font-size: 14px;">@dlputin</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 普京.
| Data | 普京 |
| --- | --- |
| Tweets downloaded | 3200 |
| Retweets | 0 |
| Short tweets | 586 |
| Tweets kept | 2614 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2t4wvbm9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dlputin's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2vcew5d1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2vcew5d1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dlputin')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/campbellclaret
|
huggingtweets
| 2022-05-27T10:33:36Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-27T10:32:38Z |
---
language: en
thumbnail: http://www.huggingtweets.com/campbellclaret/1653647611538/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1441638351052881920/13PTOAD0_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ALASTAIR CAMPBELL</div>
<div style="text-align: center; font-size: 14px;">@campbellclaret</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ALASTAIR CAMPBELL.
| Data | ALASTAIR CAMPBELL |
| --- | --- |
| Tweets downloaded | 3239 |
| Retweets | 1921 |
| Short tweets | 112 |
| Tweets kept | 1206 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1psic63j/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @campbellclaret's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2bq64fuz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2bq64fuz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/campbellclaret')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
YaYaB/PPO_v3_LunarLander-v2
|
YaYaB
| 2022-05-27T09:26:56Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-27T09:26:32Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 272.63 +/- 20.66
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
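Until the author fills in the TODO above, here is a hedged sketch of how such a checkpoint is typically loaded with `huggingface_sb3` and evaluated; the `filename` below is a guess and must be replaced with the actual `.zip` file stored in the repository:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Hypothetical filename; check the repo's file listing for the real one.
checkpoint = load_from_hub(repo_id="YaYaB/PPO_v3_LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```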
|
Giseok/wav2vec2-base-STTTest
|
Giseok
| 2022-05-27T09:12:19Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-26T09:01:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-STTTest
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-STTTest
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5198
- Wer: 0.3393
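A hedged sketch of transcribing audio with this checkpoint (it assumes the repository includes the processor files and that the input is 16 kHz mono audio, which is what wav2vec2-base expects):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Giseok/wav2vec2-base-STTTest")
# "sample_16khz.wav" is a placeholder path to a local audio file.
print(asr("sample_16khz.wav")["text"])
```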
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.231 | 1.0 | 500 | 0.4337 | 0.4100 |
| 0.1845 | 2.01 | 1000 | 0.4296 | 0.3931 |
| 0.1551 | 3.01 | 1500 | 0.4397 | 0.3770 |
| 0.1479 | 4.02 | 2000 | 0.4524 | 0.3827 |
| 0.1186 | 5.02 | 2500 | 0.5182 | 0.3795 |
| 0.1079 | 6.02 | 3000 | 0.4799 | 0.3737 |
| 0.0974 | 7.03 | 3500 | 0.4966 | 0.3860 |
| 0.0878 | 8.03 | 4000 | 0.4993 | 0.3699 |
| 0.0788 | 9.04 | 4500 | 0.5183 | 0.3678 |
| 0.0732 | 10.04 | 5000 | 0.5064 | 0.3635 |
| 0.0664 | 11.04 | 5500 | 0.5330 | 0.3663 |
| 0.0596 | 12.05 | 6000 | 0.5147 | 0.3516 |
| 0.0538 | 13.05 | 6500 | 0.5254 | 0.3581 |
| 0.0535 | 14.06 | 7000 | 0.4902 | 0.3534 |
| 0.0492 | 15.06 | 7500 | 0.5115 | 0.3488 |
| 0.0455 | 16.06 | 8000 | 0.5250 | 0.3472 |
| 0.0434 | 17.07 | 8500 | 0.5338 | 0.3515 |
| 0.0351 | 18.07 | 9000 | 0.5365 | 0.3444 |
| 0.0341 | 19.08 | 9500 | 0.4886 | 0.3439 |
| 0.0332 | 20.08 | 10000 | 0.5234 | 0.3475 |
| 0.0289 | 21.08 | 10500 | 0.5375 | 0.3464 |
| 0.028 | 22.09 | 11000 | 0.5395 | 0.3478 |
| 0.0225 | 23.09 | 11500 | 0.5236 | 0.3428 |
| 0.0244 | 24.1 | 12000 | 0.5122 | 0.3402 |
| 0.0246 | 25.1 | 12500 | 0.5212 | 0.3390 |
| 0.0214 | 26.1 | 13000 | 0.5198 | 0.3393 |
| 0.0179 | 27.11 | 13500 | 0.5198 | 0.3393 |
| 0.0194 | 28.11 | 14000 | 0.5198 | 0.3393 |
| 0.0193 | 29.12 | 14500 | 0.5198 | 0.3393 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.1+cu111
- Datasets 1.18.3
- Tokenizers 0.12.1
|
huggingtweets/mit_istnews
|
huggingtweets
| 2022-05-27T09:11:24Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-27T09:10:02Z |
---
language: en
thumbnail: http://www.huggingtweets.com/mit_istnews/1653642679545/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/875463526583857156/mxYzB8tm_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">MIT IS&T</div>
<div style="text-align: center; font-size: 14px;">@mit_istnews</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from MIT IS&T.
| Data | MIT IS&T |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 20 |
| Short tweets | 132 |
| Tweets kept | 3098 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1b2tikho/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mit_istnews's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/15k3tyvf) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/15k3tyvf/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mit_istnews')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/terrybroad
|
huggingtweets
| 2022-05-27T08:46:44Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-27T08:44:29Z |
---
language: en
thumbnail: http://www.huggingtweets.com/terrybroad/1653641199493/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1445695092325380098/Zk0H0J37_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Terence Broad</div>
<div style="text-align: center; font-size: 14px;">@terrybroad</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Terence Broad.
| Data | Terence Broad |
| --- | --- |
| Tweets downloaded | 2248 |
| Retweets | 1230 |
| Short tweets | 231 |
| Tweets kept | 787 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2v3f7i92/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @terrybroad's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3fxvoi41) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3fxvoi41/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/terrybroad')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
auriolar/q-Taxi-v3
|
auriolar
| 2022-05-27T08:27:18Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-27T08:04:54Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="auriolar/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
auriolar/q-FrozenLake-v1-4x4-noSlippery
|
auriolar
| 2022-05-27T08:00:20Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-27T08:00:12Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="auriolar/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Splend1dchan/t5small-squad-extractive
|
Splend1dchan
| 2022-05-27T07:48:00Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-05-27T07:32:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: t5_squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_squad
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the squad dataset, using an extractive approach that isolates and uses only the T5 encoder.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
- epoch: 3.0
- eval_exact_match: 70.06622516556291
- eval_f1: 80.02993815400357
- eval_samples: 10659
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
kurapy/t5-small-finetuned-xsum
|
kurapy
| 2022-05-27T07:08:49Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-27T04:35:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 28.2621
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4782
- Rouge1: 28.2621
- Rouge2: 7.6583
- Rougel: 22.1971
- Rougelsum: 22.2
- Gen Len: 18.8243
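A minimal, hedged sketch of summarizing text with this checkpoint (the article text is invented for illustration; XSum-style models aim for a single abstractive summary sentence):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="kurapy/t5-small-finetuned-xsum")
article = (
    "The full text of a news article goes here. Models fine-tuned on XSum "
    "are trained to produce one short, highly abstractive summary sentence."
)
print(summarizer(article, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```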
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.7138 | 1.0 | 12753 | 2.4782 | 28.2621 | 7.6583 | 22.1971 | 22.2 | 18.8243 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
tanviraumi/bert-base-uncased-issues-128
|
tanviraumi
| 2022-05-27T06:26:04Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-05-27T06:01:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-issues-128
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2337
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
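The hyperparameters above correspond to a standard masked-language-modeling run; a hedged sketch of the core pieces such a run typically wires together (the GitHub-issues corpus and the 128-token chunking implied by the model name are not shown and are assumptions):
```python
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Randomly masks 15% of tokens on the fly, the usual MLM setting.
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

training_args = TrainingArguments(
    output_dir="bert-base-uncased-issues-128",  # illustrative
    learning_rate=5e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=32,
    num_train_epochs=16,
    lr_scheduler_type="linear",
    seed=42,
)
```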
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3389 | 1.0 | 73 | 1.7400 |
| 1.8014 | 2.0 | 146 | 1.4690 |
| 1.634 | 3.0 | 219 | 1.4783 |
| 1.5461 | 4.0 | 292 | 1.3912 |
| 1.4706 | 5.0 | 365 | 1.3109 |
| 1.4161 | 6.0 | 438 | 1.3405 |
| 1.3664 | 7.0 | 511 | 1.3459 |
| 1.332 | 8.0 | 584 | 1.2745 |
| 1.3029 | 9.0 | 657 | 1.2633 |
| 1.2871 | 10.0 | 730 | 1.2336 |
| 1.2807 | 11.0 | 803 | 1.2966 |
| 1.2569 | 12.0 | 876 | 1.1508 |
| 1.2392 | 13.0 | 949 | 1.2530 |
| 1.237 | 14.0 | 1022 | 1.2485 |
| 1.2169 | 15.0 | 1095 | 1.2592 |
| 1.2272 | 16.0 | 1168 | 1.2337 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.12.0.dev20220513+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
yilye/tapas_medi_flowsheet
|
yilye
| 2022-05-27T05:09:12Z | 0 | 0 | null |
[
"tapas",
"en",
"license:apache-2.0",
"region:us"
] | null | 2022-04-25T00:35:00Z |
---
language: en
tags:
- tapas
license: apache-2.0
---
# Overview
This model is based on [Tapas](https://huggingface.co/docs/transformers/model_doc/tapas) and was fine-tuned on a medical flowsheet dataset. It is intended for doctors and nurses who currently track a patient's record by scrolling through flowsheets; instead, they can ask a question in natural language and the model will look through the table and find the answer for them.
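A minimal sketch of how the described table question answering could be invoked, assuming the repository hosts a usable TAPAS checkpoint fine-tuned for QA; the flowsheet table below is invented for illustration (TAPAS expects every cell as a string):
```python
import pandas as pd
from transformers import pipeline

table = pd.DataFrame({
    "time": ["08:00", "12:00", "16:00"],
    "heart_rate": ["72", "88", "80"],
    "blood_pressure": ["120/80", "130/85", "125/82"],
})

table_qa = pipeline("table-question-answering", model="yilye/tapas_medi_flowsheet")
print(table_qa(table=table, query="What was the heart rate at 12:00?"))
```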
|
geomos/distilbert-base-uncased-finetuned-imdb
|
geomos
| 2022-05-27T04:40:19Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-05-27T04:21:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2424
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4921 | 1.0 | 479 | 2.3047 |
| 2.3893 | 2.0 | 958 | 2.2607 |
| 2.3571 | 3.0 | 1437 | 2.2481 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 2.2.2
- Tokenizers 0.10.3
|
sabersol/bert-base-uncased-emotion
|
sabersol
| 2022-05-27T03:25:49Z | 21 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-19T15:13:05Z |
---
license: cc-by-nc-sa-4.0
---
# CITDA: Fine-tuned `bert-base-uncased` on the `emotions` dataset
Demo Notebook: https://colab.research.google.com/drive/10ZCFvlf2UV3FjU4ymf4OoipQvqHbIItG?usp=sharing
## Packages
- Install `torch`
- Also, `pip install transformers datasets scikit-learn wandb seaborn python-dotenv`
## Train
1. Rename `.env.example` to `.env` and set an API key from [wandb](https://wandb.ai/authorize).
2. You can adjust model parameters in the `explainableai.py` file.
3. The model (`pytorch_model.bin`) is based on `bert-base-uncased` and already trained on the `emotions` dataset.
To reproduce the training, run `finetune-emotions.py`. You can change the base model or the dataset by editing that file's code.
## Example
Run `example.py`
## Retraining
The model is already trained on `bert-base-uncased` with the [emotions dataset](https://huggingface.co/datasets/emotion). However, you can change parameters and re-fine-tune the model by running `finetune-emotions.py`.
|
cj-mills/q-Taxi-v3
|
cj-mills
| 2022-05-27T00:59:31Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-27T00:30:00Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="cj-mills/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
cj-mills/q-FrozenLake-v1-4x4-noSlippery
|
cj-mills
| 2022-05-27T00:58:56Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-26T23:43:50Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="cj-mills/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
sdpetrides/ppe-LunarLander-v2
|
sdpetrides
| 2022-05-26T23:08:31Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-26T23:07:53Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 214.74 +/- 27.57
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
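A minimal loading-and-evaluation sketch is shown below; it assumes the checkpoint in this repo is named `ppo-LunarLander-v2.zip` (the actual filename may differ):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (the filename is an assumption)
checkpoint = load_from_hub(repo_id="sdpetrides/ppe-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the agent on a fresh environment
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```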
|
alefarasin/q-FrozenLake-v1-4x4-noSlippery
|
alefarasin
| 2022-05-26T23:07:42Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-26T23:07:35Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="alefarasin/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
castorini/mdpr-tied-pft-msmarco-ft-all
|
castorini
| 2022-05-26T21:14:21Z | 204 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-05-26T21:05:47Z |
This checkpoint is further fine-tuned from the `castorini/mdpr-tied-pft-msmarco` checkpoint on all of the Mr. TyDi training data.
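A minimal encoding sketch with `transformers`, assuming the usual DPR-style convention of taking the [CLS] vector as the dense representation (the example query is illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("castorini/mdpr-tied-pft-msmarco-ft-all")
model = AutoModel.from_pretrained("castorini/mdpr-tied-pft-msmarco-ft-all")

query = "What is dense passage retrieval?"
inputs = tokenizer(query, return_tensors="pt", truncation=True, max_length=256)
with torch.no_grad():
    outputs = model(**inputs)

# [CLS] pooling: use the first token's hidden state as the embedding
embedding = outputs.last_hidden_state[:, 0, :]
print(embedding.shape)
```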
|
Aiyshwariya/bert-finetuned-squad
|
Aiyshwariya
| 2022-05-26T20:12:18Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-05-26T17:15:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
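A minimal usage sketch with the `transformers` question-answering pipeline (the question/context pair below is illustrative only):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Aiyshwariya/bert-finetuned-squad")
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="The model is a fine-tuned version of bert-base-cased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```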
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
actionpace/pegasus-samsum
|
actionpace
| 2022-05-26T19:11:21Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-26T17:45:33Z |
---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4841
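A minimal inference sketch with the `transformers` summarization pipeline (the dialogue below is made up for illustration):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="actionpace/pegasus-samsum")

dialogue = (
    "Anna: Are we still meeting at 6?\n"
    "Ben: Yes, see you at the cafe.\n"
    "Anna: Great, I'll bring the report."
)
print(summarizer(dialogue, max_length=60, min_length=5)[0]["summary_text"])
```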
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7073 | 0.54 | 500 | 1.4841 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
UBC-NLP/AraT5-base-title-generation
|
UBC-NLP
| 2022-05-26T18:29:45Z | 130 | 12 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"Arabic T5",
"MSA",
"Twitter",
"Arabic Dialect",
"Arabic Machine Translation",
"Arabic Text Summarization",
"Arabic News Title and Question Generation",
"Arabic Paraphrasing and Transliteration",
"Arabic Code-Switched Translation",
"ar",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language:
- ar
tags:
- Arabic T5
- MSA
- Twitter
- Arabic Dialect
- Arabic Machine Translation
- Arabic Text Summarization
- Arabic News Title and Question Generation
- Arabic Paraphrasing and Transliteration
- Arabic Code-Switched Translation
---
# AraT5-base-title-generation
# AraT5: Text-to-Text Transformers for Arabic Language Generation
<img src="https://huggingface.co/UBC-NLP/AraT5-base/resolve/main/AraT5_CR_new.png" alt="AraT5" width="45%" height="35%" align="right"/>
This is the repository accompanying our paper [AraT5: Text-to-Text Transformers for Arabic Language Understanding and Generation](https://aclanthology.org/2022.acl-long.47/). In this repository we introduce **AraT5<sub>MSA</sub>**, **AraT5<sub>Tweet</sub>**, and **AraT5**: three powerful Arabic-specific text-to-text Transformer-based models.
---
# How to use AraT5 models
Below is an example of using **AraT5-base** fine-tuned for news title generation on the Aranews dataset:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("UBC-NLP/AraT5-base-title-generation")
model = AutoModelForSeq2SeqLM.from_pretrained("UBC-NLP/AraT5-base-title-generation")
Document = "تحت رعاية صاحب السمو الملكي الأمير سعود بن نايف بن عبدالعزيز أمير المنطقة الشرقية اختتمت غرفة الشرقية مؤخرا، الثاني من مبادرتها لتأهيل وتدريب أبناء وبنات المملكة ضمن مبادرتها المجانية للعام 2019 حيث قدمت 6 برامج تدريبية نوعية. وثمن رئيس مجلس إدارة الغرفة، عبدالحكيم العمار الخالدي، رعاية سمو أمير المنطقة الشرقية للمبادرة، مؤكدا أن دعم سموه لجميع أنشطة ."
encoding = tokenizer.encode_plus(Document,pad_to_max_length=True, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"], encoding["attention_mask"]
outputs = model.generate(
input_ids=input_ids, attention_mask=attention_masks,
max_length=256,
do_sample=True,
top_k=120,
top_p=0.95,
early_stopping=True,
num_return_sequences=5
)
for id, output in enumerate(outputs):
title = tokenizer.decode(output, skip_special_tokens=True,clean_up_tokenization_spaces=True)
print("title#"+str(id), title)
```
**The input news document**
<div style="white-space : pre-wrap !important;word-break: break-word; direction:rtl; text-align: right">
تحت رعاية صاحب السمو الملكي الأمير سعود بن نايف بن عبدالعزيز أمير المنطقة الشرقية اختتمت غرفة الشرقية مؤخرا، الثاني من مبادرتها لتأهيل وتدريب أبناء وبنات المملكة ضمن مبادرتها المجانية للعام 2019 حيث قدمت 6 برامج تدريبية نوعية. وثمن رئيس مجلس إدارة الغرفة، عبدالحكيم العمار الخالدي، رعاية سمو أمير المنطقة الشرقية للمبادرة، مؤكدا أن دعم سموه لجميع أنشطة .
<br>
</div>
**The generated titles**
```
title#0 غرفة الشرقية تختتم المرحلة الثانية من مبادرتها لتأهيل وتدريب أبناء وبنات المملكة
title#1 غرفة الشرقية تختتم الثاني من مبادرة تأهيل وتأهيل أبناء وبناتنا
title#2 سعود بن نايف يختتم ثانى مبادراتها لتأهيل وتدريب أبناء وبنات المملكة
title#3 أمير الشرقية يرعى اختتام برنامج برنامج تدريب أبناء وبنات المملكة
title#4 سعود بن نايف يرعى اختتام مبادرة تأهيل وتدريب أبناء وبنات المملكة
```
# AraT5 Models Checkpoints
AraT5 Pytorch and TensorFlow checkpoints are available on the Huggingface website for direct download and use ```exclusively for research```. ```For commercial use, please contact the authors via email @ (muhammad.mageed[at]ubc[dot]ca).```
| **Model** | **Link** |
|---------|:------------------:|
| **AraT5-base** | [https://huggingface.co/UBC-NLP/AraT5-base](https://huggingface.co/UBC-NLP/AraT5-base) |
| **AraT5-msa-base** | [https://huggingface.co/UBC-NLP/AraT5-msa-base](https://huggingface.co/UBC-NLP/AraT5-msa-base) |
| **AraT5-tweet-base** | [https://huggingface.co/UBC-NLP/AraT5-tweet-base](https://huggingface.co/UBC-NLP/AraT5-tweet-base) |
| **AraT5-msa-small** | [https://huggingface.co/UBC-NLP/AraT5-msa-small](https://huggingface.co/UBC-NLP/AraT5-msa-small) |
| **AraT5-tweet-small**| [https://huggingface.co/UBC-NLP/AraT5-tweet-small](https://huggingface.co/UBC-NLP/AraT5-tweet-small) |
# BibTex
If you use our models (Arat5-base, Arat5-msa-base, Arat5-tweet-base, Arat5-msa-small, or Arat5-tweet-small ) for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows (to be updated):
```bibtex
@inproceedings{nagoudi-etal-2022-arat5,
title = "{A}ra{T}5: Text-to-Text Transformers for {A}rabic Language Generation",
author = "Nagoudi, El Moatez Billah and
Elmadany, AbdelRahim and
Abdul-Mageed, Muhammad",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.47",
pages = "628--647",
abstract = "Transfer learning with a unified Transformer framework (T5) that converts all language problems into a text-to-text format was recently proposed as a simple and effective transfer learning approach. Although a multilingual version of the T5 model (mT5) was also introduced, it is not clear how well it can fare on non-English tasks involving diverse data. To investigate this question, we apply mT5 on a language with a wide variety of dialects{--}Arabic. For evaluation, we introduce a novel benchmark for ARabic language GENeration (ARGEN), covering seven important tasks. For model comparison, we pre-train three powerful Arabic T5-style models and evaluate them on ARGEN. Although pre-trained with {\textasciitilde}49 less data, our new models perform significantly better than mT5 on all ARGEN tasks (in 52 out of 59 test sets) and set several new SOTAs. Our models also establish new SOTA on the recently-proposed, large Arabic language understanding evaluation benchmark ARLUE (Abdul-Mageed et al., 2021). Our new models are publicly available. We also link to ARGEN datasets through our repository: https://github.com/UBC-NLP/araT5.",
}
```
## Acknowledgments
We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, Canadian Foundation for Innovation, [ComputeCanada](www.computecanada.ca) and [UBC ARC-Sockeye](https://doi.org/10.14288/SOCKEYE). We also thank the [Google TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc) program for providing us with free TPU access.
|
ericntay/distilbert-base-uncased-finetuned-emotion
|
ericntay
| 2022-05-26T16:51:22Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-26T13:53:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9240722191505606
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2055
- Accuracy: 0.924
- F1: 0.9241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7795 | 1.0 | 250 | 0.2920 | 0.911 | 0.9079 |
| 0.2373 | 2.0 | 500 | 0.2055 | 0.924 | 0.9241 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Against61/q-Taxi-v3
|
Against61
| 2022-05-26T16:18:05Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-26T16:17:58Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Against61/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
thundaa/tape-fluorescence-prediction-RITA_s
|
thundaa
| 2022-05-26T15:37:58Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rita",
"text-classification",
"protein language model",
"generated_from_trainer",
"custom_code",
"dataset:train",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"region:us"
] |
text-classification
| 2022-05-25T10:59:12Z |
---
license: apache-2.0
tags:
- protein language model
- generated_from_trainer
datasets:
- train
metrics:
- spearmanr
model-index:
- name: tape-fluorescence-prediction-RITA_s
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: cradle-bio/tape-fluorescence
type: train
metrics:
- name: Spearmanr
type: spearmanr
value: 0.2955275250425323
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tape-fluorescence-prediction-RITA_s
This model is a fine-tuned version of [lightonai/RITA_s](https://huggingface.co/lightonai/RITA_s) on the cradle-bio/tape-fluorescence dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5855
- Spearmanr: 0.2955
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 4096
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| 4.3595 | 0.85 | 4 | 0.7057 | 0.0940 |
| 0.8654 | 1.85 | 8 | 0.6873 | 0.1280 |
| 0.8292 | 2.85 | 12 | 0.6835 | 0.2290 |
| 0.8212 | 3.85 | 16 | 0.6837 | 0.3110 |
| 0.8191 | 4.85 | 20 | 0.6799 | 0.3281 |
| 0.8137 | 5.85 | 24 | 0.6748 | 0.3277 |
| 0.8057 | 6.85 | 28 | 0.6592 | 0.3162 |
| 0.7769 | 7.85 | 32 | 0.6283 | 0.3065 |
| 0.7382 | 8.85 | 36 | 0.6103 | 0.2795 |
| 0.5991 | 9.85 | 40 | 0.5855 | 0.2955 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ryan1998/distilbert-base-uncased-finetuned-emotion
|
ryan1998
| 2022-05-26T14:32:56Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-26T08:09:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5280
- Accuracy: 0.2886
- F1: 0.2742
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 1316 | 2.6049 | 0.2682 | 0.2516 |
| No log | 2.0 | 2632 | 2.5280 | 0.2886 | 0.2742 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingnft/azuki
|
huggingnft
| 2022-05-26T14:22:20Z | 13 | 1 |
transformers
|
[
"transformers",
"huggingnft",
"nft",
"huggan",
"gan",
"image",
"images",
"unconditional-image-generation",
"dataset:huggingnft/azuki",
"license:mit",
"endpoints_compatible",
"region:us"
] |
unconditional-image-generation
| 2022-04-15T21:52:23Z |
---
tags:
- huggingnft
- nft
- huggan
- gan
- image
- images
- unconditional-image-generation
datasets:
- huggingnft/azuki
license: mit
---
# Hugging NFT: azuki
## Disclaimer
All rights belong to their owners. Models and datasets can be removed from the site at the request of the copyright
holder.
## Model description
LightWeight GAN model for unconditional generation.
NFT collection available [here](https://opensea.io/collection/azuki).
Dataset is available [here](https://huggingface.co/datasets/huggingnft/azuki).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
Project repository: [link](https://github.com/AlekseyKorshuk/huggingnft).
[](https://github.com/AlekseyKorshuk/huggingnft)
## Intended uses & limitations
#### How to use
Check project repository: [link](https://github.com/AlekseyKorshuk/huggingnft).
#### Limitations and bias
Check project repository: [link](https://github.com/AlekseyKorshuk/huggingnft).
## Training data
Dataset is available [here](https://huggingface.co/datasets/huggingnft/azuki).
## Training procedure
Training script is available [here](https://github.com/AlekseyKorshuk/huggingnft).
## Generated Images
Check results with Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
### BibTeX entry and citation info
```bibtex
@InProceedings{huggingnft,
    author = {Aleksey Korshuk},
    year = {2022}
}
```
|
Fra96/my-awesome-model
|
Fra96
| 2022-05-26T14:12:52Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-05-26T13:28:56Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: my-awesome-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# my-awesome-model
This model is a fine-tuned version of [dbmdz/bert-base-italian-cased](https://huggingface.co/dbmdz/bert-base-italian-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3847
- Validation Loss: 0.3267
- Epoch: 0
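A minimal fill-mask sketch, assuming the standard BERT `[MASK]` token used by the dbmdz Italian base model (the sentence is illustrative):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Fra96/my-awesome-model")
# Prints the top predictions for the masked token
print(fill_mask("Roma è la [MASK] d'Italia."))
```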
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -969, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.3847 | 0.3267 | 0 |
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.9.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
kz/mt5base-finetuned-ECC-japanese-small
|
kz
| 2022-05-26T13:50:56Z | 7 | 2 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"ja",
"arxiv:2201.11903",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: "ja"
widget:
- text: "吾輩をは猫である。を書いた作家は,夏目漱 <extra_id_0>"
- text: "吾輩をは猫である。名前えはまだない。"
- text: "translate japanese to english: 赤い花. => red flower. 青い花. => <extra_id_0>"
license: "mit"
---
Google's mt5-base fine-tuned on Japanese to solve the error detection and correction task.
# Japanese error correction (日本語誤り訂正)
- "吾輩をは猫である。名前えはまだない。"→"吾輩は猫である。名前はまだない。"
- "-small" has been trained on 20,000 text pairs only.
- dataset: [link](http://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9EWikipedia%E5%85%A5%E5%8A%9B%E8%AA%A4%E3%82%8A%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88) *used only first 20,000 text pairs.
- prefix: "correction: " (notice: single task trained.)
- text-to-textのお気持ち体験版ぐらいの感覚でどうぞ.
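A minimal correction sketch with `transformers`, using the "correction: " prefix noted above (the generation settings are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("kz/mt5base-finetuned-ECC-japanese-small")
model = AutoModelForSeq2SeqLM.from_pretrained("kz/mt5base-finetuned-ECC-japanese-small")

text = "correction: 吾輩をは猫である。名前えはまだない。"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```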
## Reference examples
- "東北大学でMASKが研究をしています。"→"東北大学でMASKの研究をしています。" ジム・キャリーを主語とした唯一のガ格が消され、ジム・キャリーは研究対象となった。易読化のために用いられる主語と動詞を近づける記法は誤り扱い?
- "東北大学でマスクが研究をしています。"→"東北大学でマスクの研究をしています。"
- "東北大学でイーロン・マスクが研究をしています。"→"東北大学でイーロン・マスクが研究をしています。"
- "東北大学で「イーロン・マスク」が研究をしています。"→"東北大学で「イーロン・マスク」の研究をしています。" 単語の意味も考慮されている?
- "東北大学でイマスクが研究をしています。"→"東北大学でイマスクの研究をしています。"
- "東北大学でクが研究をしています。"→"東北大学でコンピューターが研究をしています。" それはちょっと待って。
## Reference examples using extra_id (change <> to half-width characters)
- "東北大学で <extra_id_0> の研究をしています。"→"東北大学で化学の研究をしています。"
- "東北大学で <extra_id_0> が研究をしています。"→"東北大学で工学が研究をしています。" 工学さん。
- "吾輩は <extra_id_0> である。"→"吾輩は吾輩である。"
- "答えは猫です。吾輩は <extra_id_0> である。"→"答えは猫です。吾輩は猫である。"
- "答えは猫です。吾輩の <extra_id_0> である。"→"答えは猫です。吾輩の心は猫である。"
- "私は猫です。私は <extra_id_0>"→"私は猫です。私は猫です。"
- "私は猫です。N/A <extra_id_0>"→"猫です。"
- "あなたは女性で猫です。彼は犬です。彼女は <extra_id_0>"→"あなたは女性で猫です。彼は犬です。彼女は猫です。"
- "あなたは女性で猫です。彼は犬です。彼は <extra_id_0>"→"あなたは女性で猫です。彼は犬です。"
- "あなたは女性で猫です。彼は犬です。彼は男性で <extra_id_0>"→"あなたは女性で猫です。彼は犬です。彼は男性で猫です。"
- "あなたは女性で猫です。彼は犬です。ライオンは <extra_id_0>"→"あなたは女性で猫です。彼は犬です。ライオンは猫です。"
- "あなたがは女性で猫です。彼はが犬です。ライオンが <extra_id_0>"→"あなたが女性で猫です。彼は犬です。ライオンが犬です。"
- "Aは11、Bは9。Aは <extra_id_0> 。Bは <extra_id_1> 。"→"Aは11、Bは9。Aは11。Bは9。"
- "彼の名前はallenです。彼のnameは <extra_id_0>"→"彼の名前はallenです。彼の名前は英語です。"
- "translate japanease to english: 赤い花. => red flower. 青い花. => <extra_id_0>"→"赤い花. => red flower. 青い花. => blue flower" タスク比依存翻訳可能性の片鱗.japaneseをjapaneaseと間違えたことは秘密だ・・・と言うか間違えても動くのか
## Prompting reference
Chain of Thought Prompting Elicits Reasoning in Large Language Models
https://arxiv.org/abs/2201.11903
**check in progress**
## License
- The MIT license
|
i8pxgd2s/ppo-LunarLander-v2-version3
|
i8pxgd2s
| 2022-05-26T13:29:52Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-26T13:29:25Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 234.71 +/- 71.44
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
krotima1/mbart-ht2a-cs
|
krotima1
| 2022-05-26T12:59:01Z | 8 | 2 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"Summarization",
"abstractive summarization",
"mbart-cc25",
"Czech",
"cs",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-22T23:41:07Z |
---
language:
- cs
tags:
- Summarization
- abstractive summarization
- mbart-cc25
- Czech
license: apache-2.0
datasets:
- private Czech News Center dataset news-based
- SumeCzech dataset news-based
metrics:
- rouge
- rougeraw
---
# mBART fine-tuned model for Czech abstractive summarization (HT2A-CS)
This model is a fine-tuned checkpoint of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the Czech news dataset to produce Czech abstractive summaries.
## Task
The model deals with the task ``Headline + Text to Abstract`` (HT2A), which consists of generating a multi-sentence summary, treated as an abstract, from a Czech news text.
## Dataset
The model has been trained on a large Czech news dataset built by concatenating two datasets: the private CNC dataset provided by Czech News Center and the [SumeCzech](https://ufal.mff.cuni.cz/sumeczech) dataset. The dataset includes around 1.75M Czech news-based documents, each consisting of Headline, Abstract, and Full-text sections. Truncation and padding were set to 512 tokens for the encoder and 128 for the decoder.
## Training
The model has been trained on 1x NVIDIA Tesla A100 40GB for 60 hours and 4x NVIDIA Tesla A100 40GB for 40 hours. During training, the model has seen 12896K documents corresponding to roughly 8.4 epochs.
# Use
Assuming that you are using the provided Summarizer.ipynb file.
```python
# Note: Summarizer is the helper class defined in the provided Summarizer.ipynb
from collections import OrderedDict
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

def summ_config():
cfg = OrderedDict([
# summarization model - checkpoint from website
("model_name", "krotima1/mbart-ht2a-cs"),
("inference_cfg", OrderedDict([
("num_beams", 4),
("top_k", 40),
("top_p", 0.92),
("do_sample", True),
("temperature", 0.89),
("repetition_penalty", 1.2),
("no_repeat_ngram_size", None),
("early_stopping", True),
("max_length", 128),
("min_length", 10),
])),
#texts to summarize
("text",
[
"Input your Czech text",
]
),
])
return cfg
cfg = summ_config()
#load model
model = AutoModelForSeq2SeqLM.from_pretrained(cfg["model_name"])
tokenizer = AutoTokenizer.from_pretrained(cfg["model_name"])
# init summarizer
summarize = Summarizer(model, tokenizer, cfg["inference_cfg"])
summarize(cfg["text"])
```
|
Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned
|
Finnish-NLP
| 2022-05-26T12:42:28Z | 33 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"finnish",
"generated_from_trainer",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_9_0",
"arxiv:2006.11477",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-14T16:30:58Z |
---
license: apache-2.0
language: fi
metrics:
- wer
- cer
tags:
- automatic-speech-recognition
- fi
- finnish
- generated_from_trainer
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_9_0
model-index:
- name: wav2vec2-base-fi-voxpopuli-v2-finetuned
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 9
type: mozilla-foundation/common_voice_9_0
args: fi
metrics:
- name: Test WER
type: wer
value: 5.93
- name: Test CER
type: cer
value: 1.40
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: FLEURS ASR
type: google/fleurs
args: fi_fi
metrics:
- name: Test WER
type: wer
value: 13.99
- name: Test CER
type: cer
value: 6.07
---
# Wav2Vec2-base-fi-voxpopuli-v2 for Finnish ASR
This acoustic model is a fine-tuned version of [facebook/wav2vec2-base-fi-voxpopuli-v2](https://huggingface.co/facebook/wav2vec2-base-fi-voxpopuli-v2) for Finnish ASR. The model has been fine-tuned with 276.7 hours of Finnish transcribed speech data. Wav2Vec2 was introduced in
[this paper](https://arxiv.org/abs/2006.11477) and first released at [this page](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#wav2vec-20).
This repository also includes Finnish KenLM language model used in the decoding phase with the acoustic model.
## Model description
[Wav2vec2-base-fi-voxpopuli-v2](https://huggingface.co/facebook/wav2vec2-base-fi-voxpopuli-v2) is Facebook AI's pretrained model for Finnish speech. It is pretrained on 14.2k hours of unlabeled Finnish speech from [VoxPopuli V2 dataset](https://github.com/facebookresearch/voxpopuli/) with the wav2vec 2.0 objective.
This model is fine-tuned version of the pretrained model for Finnish ASR.
## Intended uses & limitations
You can use this model for Finnish ASR (speech-to-text) task.
### How to use
Check the [run-finnish-asr-models.ipynb](https://huggingface.co/Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned/blob/main/run-finnish-asr-models.ipynb) notebook in this repository for a detailed example of how to use this model.
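As a quick alternative, a minimal sketch with the `transformers` ASR pipeline (whether the bundled KenLM is used for decoding depends on the repository's processor configuration and on having `pyctcdecode`/`kenlm` installed):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned",
)
# Expects a path to (or array of) 16 kHz mono audio
print(asr("audio_sample.wav"))
```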
### Limitations and bias
This model was fine-tuned with audio samples whose maximum length was 20 seconds, so it most likely works best for fairly short audios of similar length. However, you can also try it on much longer audios and see how it works. If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking).
The vast majority of the data used for fine-tuning was from the Finnish Parliament dataset, so this model may not generalize well to very different domains such as everyday spoken Finnish with dialects. In addition, the audios in the datasets tend to be dominated by adult male speakers, so this model may not work as well for the speech of children and women, for example.
The Finnish KenLM language model used in the decoding phase has been trained with text data from the audio transcriptions and from a subset of Finnish Wikipedia. Thus, the decoder's language model may not generalize to very different language varieties, for example everyday spoken language with dialects (especially because Wikipedia contains mostly formal Finnish). It may be beneficial to train your own KenLM language model for your domain's language and use that in decoding.
## Training data
This model was fine-tuned with 276.7 hours of Finnish transcribed speech data from following datasets:
| Dataset | Hours | % of total hours |
|:------------------------------------------------------------------------------------------------------------------------------ |:--------:|:----------------:|
| [Common Voice 9.0 Finnish train + evaluation + other splits](https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0) | 10.80 h | 3.90 % |
| [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) | 0.24 h | 0.09 % |
| [VoxPopuli Finnish](https://github.com/facebookresearch/voxpopuli) | 21.97 h | 7.94 % |
| [CSS10 Finnish](https://github.com/kyubyong/css10) | 10.32 h | 3.73 % |
| [Aalto Finnish Parliament ASR Corpus](http://urn.fi/urn:nbn:fi:lb-2021051903) | 228.00 h | 82.40 % |
| [Finnish Broadcast Corpus](http://urn.fi/urn:nbn:fi:lb-2016042502) | 5.37 h | 1.94 % |
Datasets were filtered to include maximum length of 20 seconds long audio samples.
## Training procedure
This model was trained on a Tesla V100 GPU, sponsored by Hugging Face & OVHcloud.
Training script was provided by Hugging Face and it is available [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_bnb.py). We only modified its data loading for our custom datasets.
For the KenLM language model training, we followed the [blog post tutorial](https://huggingface.co/blog/wav2vec2-with-ngram) provided by Hugging Face. Training data for the 5-gram KenLM were text transcriptions of the audio training data and 100k random samples of cleaned [Finnish Wikipedia](https://huggingface.co/datasets/wikipedia) (August 2021) dataset.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-04
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: [8-bit Adam](https://github.com/facebookresearch/bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
The pretrained `facebook/wav2vec2-base-fi-voxpopuli-v2` model was initialized with following hyperparameters:
- attention_dropout: 0.094
- hidden_dropout: 0.047
- feat_proj_dropout: 0.04
- mask_time_prob: 0.082
- layerdrop: 0.041
- activation_dropout: 0.055
- ctc_loss_reduction: "mean"
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.575 | 0.33 | 500 | 0.7454 | 0.7048 |
| 0.5838 | 0.66 | 1000 | 0.2377 | 0.2608 |
| 0.5692 | 1.0 | 1500 | 0.2014 | 0.2244 |
| 0.5112 | 1.33 | 2000 | 0.1885 | 0.2013 |
| 0.4857 | 1.66 | 2500 | 0.1881 | 0.2120 |
| 0.4821 | 1.99 | 3000 | 0.1603 | 0.1894 |
| 0.4531 | 2.32 | 3500 | 0.1594 | 0.1865 |
| 0.4411 | 2.65 | 4000 | 0.1641 | 0.1874 |
| 0.4437 | 2.99 | 4500 | 0.1545 | 0.1874 |
| 0.4191 | 3.32 | 5000 | 0.1565 | 0.1770 |
| 0.4158 | 3.65 | 5500 | 0.1696 | 0.1867 |
| 0.4032 | 3.98 | 6000 | 0.1561 | 0.1746 |
| 0.4003 | 4.31 | 6500 | 0.1432 | 0.1749 |
| 0.4059 | 4.64 | 7000 | 0.1390 | 0.1690 |
| 0.4019 | 4.98 | 7500 | 0.1291 | 0.1646 |
| 0.3811 | 5.31 | 8000 | 0.1485 | 0.1755 |
| 0.3955 | 5.64 | 8500 | 0.1351 | 0.1659 |
| 0.3562 | 5.97 | 9000 | 0.1328 | 0.1614 |
| 0.3646 | 6.3 | 9500 | 0.1329 | 0.1584 |
| 0.351 | 6.64 | 10000 | 0.1342 | 0.1554 |
| 0.3408 | 6.97 | 10500 | 0.1422 | 0.1509 |
| 0.3562 | 7.3 | 11000 | 0.1309 | 0.1528 |
| 0.3335 | 7.63 | 11500 | 0.1305 | 0.1506 |
| 0.3491 | 7.96 | 12000 | 0.1365 | 0.1560 |
| 0.3538 | 8.29 | 12500 | 0.1293 | 0.1512 |
| 0.3338 | 8.63 | 13000 | 0.1328 | 0.1511 |
| 0.3509 | 8.96 | 13500 | 0.1304 | 0.1520 |
| 0.3431 | 9.29 | 14000 | 0.1360 | 0.1517 |
| 0.3309 | 9.62 | 14500 | 0.1328 | 0.1514 |
| 0.3252 | 9.95 | 15000 | 0.1316 | 0.1498 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu102
- Datasets 2.2.1
- Tokenizers 0.11.0
## Evaluation results
Evaluation was done with the [Common Voice 7.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0), [Common Voice 9.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0) and with the [FLEURS ASR Finnish test split](https://huggingface.co/datasets/google/fleurs).
This model's training data includes the training splits of Common Voice 9.0 but most of our previous models include the Common Voice 7.0 so we ran tests for both Common Voice versions. Note: Common Voice doesn't seem to fully preserve the test split as fixed between the dataset versions so it is possible that some of the training examples of Common Voice 9.0 are in the test split of the Common Voice 7.0 and vice versa. Thus, Common Voice test result comparisons are not fully accurate between the models trained with different Common Voice versions but the comparison should still be meaningful enough.
### Common Voice 7.0 testing
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned --dataset mozilla-foundation/common_voice_7_0 --config fi --split test
```
This model (the first row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models and their parameter counts:
| | Model parameters | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-------------------------------------------------------|------------------|---------------|------------------|---------------|------------------|
|Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned | 95 million |5.85 |13.52 |1.35 |2.44 |
|Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish | 300 million |4.13 |**9.66** |0.90 |1.66 |
|Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm | 300 million |8.16 |17.92 |1.97 |3.36 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm | 1000 million |5.65 |13.11 |1.20 |2.23 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 | 1000 million |**4.09** |9.73 |**0.88** |**1.65** |
### Common Voice 9.0 testing
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned --dataset mozilla-foundation/common_voice_9_0 --config fi --split test
```
This model (the first row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models and their parameter counts:
| | Model parameters | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-------------------------------------------------------|------------------|---------------|------------------|---------------|------------------|
|Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned | 95 million |5.93 |14.08 |1.40 |2.59 |
|Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish | 300 million |4.13 |9.83 |0.92 |1.71 |
|Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm | 300 million |7.42 |16.45 |1.79 |3.07 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm | 1000 million |5.35 |13.00 |1.14 |2.20 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 | 1000 million |**3.72** |**8.96** |**0.80** |**1.52** |
### FLEURS ASR testing
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned --dataset google/fleurs --config fi_fi --split test
```
This model (the first row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models and their parameter counts:
| | Model parameters | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-------------------------------------------------------|------------------|---------------|------------------|---------------|------------------|
|Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned | 95 million |13.99 |17.16 |6.07 |6.61 |
|Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish | 300 million |12.44 |**14.63** |5.77 |6.22 |
|Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm | 300 million |17.72 |23.30 |6.78 |7.67 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm | 1000 million |20.34 |16.67 |6.97 |6.35 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 | 1000 million |**12.11** |14.89 |**5.65** |**6.06** |
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗
|
Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish
|
Finnish-NLP
| 2022-05-26T12:37:37Z | 176 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"finnish",
"generated_from_trainer",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_9_0",
"arxiv:2006.11477",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-21T19:42:16Z |
---
license: apache-2.0
language: fi
metrics:
- wer
- cer
tags:
- automatic-speech-recognition
- fi
- finnish
- generated_from_trainer
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_9_0
model-index:
- name: wav2vec2-large-uralic-voxpopuli-v2-finnish
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 9
type: mozilla-foundation/common_voice_9_0
args: fi
metrics:
- name: Test WER
type: wer
value: 4.13
- name: Test CER
type: cer
value: 0.92
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: FLEURS ASR
type: google/fleurs
args: fi_fi
metrics:
- name: Test WER
type: wer
value: 12.44
- name: Test CER
type: cer
value: 5.77
---
# Wav2vec2-large-uralic-voxpopuli-v2 for Finnish ASR
This acoustic model is a fine-tuned version of [facebook/wav2vec2-large-uralic-voxpopuli-v2](https://huggingface.co/facebook/wav2vec2-large-uralic-voxpopuli-v2) for Finnish ASR. The model has been fine-tuned with 276.7 hours of Finnish transcribed speech data. Wav2Vec2 was introduced in
[this paper](https://arxiv.org/abs/2006.11477) and first released at [this page](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#wav2vec-20).
This repository also includes Finnish KenLM language model used in the decoding phase with the acoustic model.
## Model description
[Wav2vec2-large-uralic-voxpopuli-v2](https://huggingface.co/facebook/wav2vec2-large-uralic-voxpopuli-v2) is Facebook AI's pretrained model for uralic language family (Finnish, Estonian, Hungarian) speech. It is pretrained on 42.5k hours of unlabeled Finnish, Estonian and Hungarian speech from [VoxPopuli V2 dataset](https://github.com/facebookresearch/voxpopuli/) with the wav2vec 2.0 objective.
This model is fine-tuned version of the pretrained model for Finnish ASR.
## Intended uses & limitations
You can use this model for Finnish ASR (speech-to-text) task.
### How to use
Check the [run-finnish-asr-models.ipynb](https://huggingface.co/Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish/blob/main/run-finnish-asr-models.ipynb) notebook in this repository for a detailed example of how to use this model.
### Limitations and bias
This model was fine-tuned with audio samples whose maximum length was 20 seconds, so it most likely works best for fairly short audios of similar length. However, you can also try it on much longer audios and see how it works. If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking).
The vast majority of the data used for fine-tuning was from the Finnish Parliament dataset, so this model may not generalize well to very different domains such as everyday spoken Finnish with dialects. In addition, the audios in the datasets tend to be dominated by adult male speakers, so this model may not work as well for the speech of children and women, for example.
The Finnish KenLM language model used in the decoding phase has been trained with text data from the audio transcriptions and from a subset of Finnish Wikipedia. Thus, the decoder's language model may not generalize to very different language varieties, for example everyday spoken language with dialects (especially because Wikipedia contains mostly formal Finnish). It may be beneficial to train your own KenLM language model for your domain's language and use that in decoding.
## Training data
This model was fine-tuned with 276.7 hours of Finnish transcribed speech data from following datasets:
| Dataset | Hours | % of total hours |
|:------------------------------------------------------------------------------------------------------------------------------ |:--------:|:----------------:|
| [Common Voice 9.0 Finnish train + evaluation + other splits](https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0) | 10.80 h | 3.90 % |
| [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) | 0.24 h | 0.09 % |
| [VoxPopuli Finnish](https://github.com/facebookresearch/voxpopuli) | 21.97 h | 7.94 % |
| [CSS10 Finnish](https://github.com/kyubyong/css10) | 10.32 h | 3.73 % |
| [Aalto Finnish Parliament ASR Corpus](http://urn.fi/urn:nbn:fi:lb-2021051903) | 228.00 h | 82.40 % |
| [Finnish Broadcast Corpus](http://urn.fi/urn:nbn:fi:lb-2016042502) | 5.37 h | 1.94 % |
Datasets were filtered to include maximum length of 20 seconds long audio samples.
## Training procedure
This model was trained on a Tesla V100 GPU, sponsored by Hugging Face & OVHcloud.
Training script was provided by Hugging Face and it is available [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_bnb.py). We only modified its data loading for our custom datasets.
For the KenLM language model training, we followed the [blog post tutorial](https://huggingface.co/blog/wav2vec2-with-ngram) provided by Hugging Face. Training data for the 5-gram KenLM were text transcriptions of the audio training data and 100k random samples of cleaned [Finnish Wikipedia](https://huggingface.co/datasets/wikipedia) (August 2021) dataset.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-04
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: [8-bit Adam](https://github.com/facebookresearch/bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
The pretrained `facebook/wav2vec2-large-uralic-voxpopuli-v2` model was initialized with following hyperparameters:
- attention_dropout: 0.094
- hidden_dropout: 0.047
- feat_proj_dropout: 0.04
- mask_time_prob: 0.082
- layerdrop: 0.041
- activation_dropout: 0.055
- ctc_loss_reduction: "mean"
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.9421 | 0.17 | 500 | 0.8633 | 0.8870 |
| 0.572 | 0.33 | 1000 | 0.1650 | 0.1829 |
| 0.5149 | 0.5 | 1500 | 0.1416 | 0.1711 |
| 0.4884 | 0.66 | 2000 | 0.1265 | 0.1605 |
| 0.4729 | 0.83 | 2500 | 0.1205 | 0.1485 |
| 0.4723 | 1.0 | 3000 | 0.1108 | 0.1403 |
| 0.443 | 1.16 | 3500 | 0.1175 | 0.1439 |
| 0.4378 | 1.33 | 4000 | 0.1083 | 0.1482 |
| 0.4313 | 1.49 | 4500 | 0.1110 | 0.1398 |
| 0.4182 | 1.66 | 5000 | 0.1024 | 0.1418 |
| 0.3884 | 1.83 | 5500 | 0.1032 | 0.1395 |
| 0.4034 | 1.99 | 6000 | 0.0985 | 0.1318 |
| 0.3735 | 2.16 | 6500 | 0.1008 | 0.1355 |
| 0.4174 | 2.32 | 7000 | 0.0970 | 0.1361 |
| 0.3581 | 2.49 | 7500 | 0.0968 | 0.1297 |
| 0.3783 | 2.66 | 8000 | 0.0881 | 0.1284 |
| 0.3827 | 2.82 | 8500 | 0.0921 | 0.1352 |
| 0.3651 | 2.99 | 9000 | 0.0861 | 0.1298 |
| 0.3684 | 3.15 | 9500 | 0.0844 | 0.1270 |
| 0.3784 | 3.32 | 10000 | 0.0870 | 0.1248 |
| 0.356 | 3.48 | 10500 | 0.0828 | 0.1214 |
| 0.3524 | 3.65 | 11000 | 0.0878 | 0.1218 |
| 0.3879 | 3.82 | 11500 | 0.0874 | 0.1216 |
| 0.3521 | 3.98 | 12000 | 0.0860 | 0.1210 |
| 0.3527 | 4.15 | 12500 | 0.0818 | 0.1184 |
| 0.3529 | 4.31 | 13000 | 0.0787 | 0.1185 |
| 0.3114 | 4.48 | 13500 | 0.0852 | 0.1202 |
| 0.3495 | 4.65 | 14000 | 0.0807 | 0.1187 |
| 0.34 | 4.81 | 14500 | 0.0796 | 0.1162 |
| 0.3646 | 4.98 | 15000 | 0.0782 | 0.1149 |
| 0.3004 | 5.14 | 15500 | 0.0799 | 0.1142 |
| 0.3167 | 5.31 | 16000 | 0.0847 | 0.1123 |
| 0.3249 | 5.48 | 16500 | 0.0837 | 0.1171 |
| 0.3202 | 5.64 | 17000 | 0.0749 | 0.1109 |
| 0.3104 | 5.81 | 17500 | 0.0798 | 0.1093 |
| 0.3039 | 5.97 | 18000 | 0.0810 | 0.1132 |
| 0.3157 | 6.14 | 18500 | 0.0847 | 0.1156 |
| 0.3133 | 6.31 | 19000 | 0.0833 | 0.1140 |
| 0.3203 | 6.47 | 19500 | 0.0838 | 0.1113 |
| 0.3178 | 6.64 | 20000 | 0.0907 | 0.1141 |
| 0.3182 | 6.8 | 20500 | 0.0938 | 0.1143 |
| 0.3 | 6.97 | 21000 | 0.0854 | 0.1133 |
| 0.3151 | 7.14 | 21500 | 0.0859 | 0.1109 |
| 0.2963 | 7.3 | 22000 | 0.0832 | 0.1122 |
| 0.3099 | 7.47 | 22500 | 0.0865 | 0.1103 |
| 0.322 | 7.63 | 23000 | 0.0833 | 0.1105 |
| 0.3064 | 7.8 | 23500 | 0.0865 | 0.1078 |
| 0.2964 | 7.97 | 24000 | 0.0859 | 0.1096 |
| 0.2869 | 8.13 | 24500 | 0.0872 | 0.1100 |
| 0.315 | 8.3 | 25000 | 0.0869 | 0.1099 |
| 0.3003 | 8.46 | 25500 | 0.0878 | 0.1105 |
| 0.2947 | 8.63 | 26000 | 0.0884 | 0.1084 |
| 0.297 | 8.8 | 26500 | 0.0891 | 0.1102 |
| 0.3049 | 8.96 | 27000 | 0.0863 | 0.1081 |
| 0.2957 | 9.13 | 27500 | 0.0846 | 0.1083 |
| 0.2908 | 9.29 | 28000 | 0.0848 | 0.1059 |
| 0.2955 | 9.46 | 28500 | 0.0846 | 0.1085 |
| 0.2991 | 9.62 | 29000 | 0.0839 | 0.1081 |
| 0.3112 | 9.79 | 29500 | 0.0832 | 0.1071 |
| 0.29 | 9.96 | 30000 | 0.0828 | 0.1075 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.11.0
## Evaluation results
Evaluation was done with the [Common Voice 7.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0), [Common Voice 9.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0) and with the [FLEURS ASR Finnish test split](https://huggingface.co/datasets/google/fleurs).
This model's training data includes the training splits of Common Voice 9.0 but most of our previous models include the Common Voice 7.0 so we ran tests for both Common Voice versions. Note: Common Voice doesn't seem to fully preserve the test split as fixed between the dataset versions so it is possible that some of the training examples of Common Voice 9.0 are in the test split of the Common Voice 7.0 and vice versa. Thus, Common Voice test result comparisons are not fully accurate between the models trained with different Common Voice versions but the comparison should still be meaningful enough.
### Common Voice 7.0 testing
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish --dataset mozilla-foundation/common_voice_7_0 --config fi --split test
```
This model (the second row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models and their parameter counts:
| | Model parameters | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-------------------------------------------------------|------------------|---------------|------------------|---------------|------------------|
|Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned | 95 million |5.85 |13.52 |1.35 |2.44 |
|Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish | 300 million |4.13 |**9.66** |0.90 |1.66 |
|Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm | 300 million |8.16 |17.92 |1.97 |3.36 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm | 1000 million |5.65 |13.11 |1.20 |2.23 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 | 1000 million |**4.09** |9.73 |**0.88** |**1.65** |
### Common Voice 9.0 testing
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish --dataset mozilla-foundation/common_voice_9_0 --config fi --split test
```
This model (the second row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models and their parameter counts:
| | Model parameters | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-------------------------------------------------------|------------------|---------------|------------------|---------------|------------------|
|Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned | 95 million |5.93 |14.08 |1.40 |2.59 |
|Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish | 300 million |4.13 |9.83 |0.92 |1.71 |
|Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm | 300 million |7.42 |16.45 |1.79 |3.07 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm | 1000 million |5.35 |13.00 |1.14 |2.20 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 | 1000 million |**3.72** |**8.96** |**0.80** |**1.52** |
### FLEURS ASR testing
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish --dataset google/fleurs --config fi_fi --split test
```
This model (the second row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models and their parameter counts:
| | Model parameters | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-------------------------------------------------------|------------------|---------------|------------------|---------------|------------------|
|Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned | 95 million |13.99 |17.16 |6.07 |6.61 |
|Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish | 300 million |12.44 |**14.63** |5.77 |6.22 |
|Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm | 300 million |17.72 |23.30 |6.78 |7.67 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm | 1000 million |20.34 |16.67 |6.97 |6.35 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 | 1000 million |**12.11** |14.89 |**5.65** |**6.06** |
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗
|
redcy/FrasierBotv1
|
redcy
| 2022-05-26T12:25:09Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-26T12:15:45Z |
---
tags:
- conversational
license: afl-3.0
---
|
chrisvinsen/wav2vec2-base-timit-demo-colab
|
chrisvinsen
| 2022-05-26T12:14:11Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-04-16T01:37:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4617
- Wer: 0.3416
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4272 | 4.0 | 500 | 1.3108 | 1.0214 |
| 0.5997 | 8.0 | 1000 | 0.4324 | 0.4310 |
| 0.219 | 12.0 | 1500 | 0.4512 | 0.3864 |
| 0.1264 | 16.0 | 2000 | 0.5002 | 0.3721 |
| 0.0834 | 20.0 | 2500 | 0.4934 | 0.3550 |
| 0.0616 | 24.0 | 3000 | 0.4467 | 0.3475 |
| 0.0477 | 28.0 | 3500 | 0.4617 | 0.3416 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Kashni/damontvd
|
Kashni
| 2022-05-26T11:43:34Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-26T11:24:49Z |
---
tags:
- conversation
---
# Damon from TVD
|
sayanmandal/t5-small_6_3-hi_en-to-en
|
sayanmandal
| 2022-05-26T11:32:32Z | 14 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:cmu_hinglish_dog",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-05-26T04:44:38Z |
---
tags:
- translation
- generated_from_trainer
datasets:
- cmu_hinglish_dog
metrics:
- bleu
model-index:
- name: t5-small_6_3-hi_en-to-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cmu_hinglish_dog
type: cmu_hinglish_dog
args: hi_en-en
metrics:
- name: Bleu
type: bleu
value: 18.0863
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small_6_3-hi_en-to-en
This model was trained from scratch on the cmu_hinglish_dog dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3662
- Bleu: 18.0863
- Gen Len: 15.2708
## Model description
The student model was generated using the following command:<br />
```python make_student.py t5-small t5_small_6_3 6 3```<br />
Check this [link](https://discuss.huggingface.co/t/questions-on-distilling-from-t5/1193/9) for more information.
## Intended uses & limitations
More information needed
## Training and evaluation data
Used cmu_hinglish_dog dataset. Please check this [link](https://huggingface.co/datasets/cmu_hinglish_dog) for dataset description
## Translation:
* Source: hi_en: The text in Hinglish
* Target: en: The text in English
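Below is a minimal inference sketch (not part of the original card; the pipeline usage and example sentence are assumptions). It loads the checkpoint with the standard `text2text-generation` pipeline; whether a task prefix is needed depends on how the model was trained.
```python
from transformers import pipeline

# Assumed usage: Hinglish -> English translation with the distilled T5 checkpoint
translator = pipeline("text2text-generation", model="sayanmandal/t5-small_6_3-hi_en-to-en")

# Example Hinglish sentence (illustrative only)
result = translator("mujhe yeh movie bahut pasand aayi", max_length=64)
print(result[0]["generated_text"])
```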
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 126 | 3.0601 | 4.7146 | 11.9904 |
| No log | 2.0 | 252 | 2.8885 | 5.9584 | 12.3418 |
| No log | 3.0 | 378 | 2.7914 | 6.649 | 12.3758 |
| 3.4671 | 4.0 | 504 | 2.7347 | 7.3305 | 12.3854 |
| 3.4671 | 5.0 | 630 | 2.6832 | 8.3132 | 12.4268 |
| 3.4671 | 6.0 | 756 | 2.6485 | 8.339 | 12.3641 |
| 3.4671 | 7.0 | 882 | 2.6096 | 8.7269 | 12.414 |
| 3.0208 | 8.0 | 1008 | 2.5814 | 9.2163 | 12.2675 |
| 3.0208 | 9.0 | 1134 | 2.5542 | 9.448 | 12.3875 |
| 3.0208 | 10.0 | 1260 | 2.5339 | 9.9011 | 12.4321 |
| 3.0208 | 11.0 | 1386 | 2.5043 | 9.7529 | 12.5149 |
| 2.834 | 12.0 | 1512 | 2.4848 | 9.9606 | 12.4193 |
| 2.834 | 13.0 | 1638 | 2.4737 | 9.9368 | 12.3673 |
| 2.834 | 14.0 | 1764 | 2.4458 | 10.3182 | 12.4352 |
| 2.834 | 15.0 | 1890 | 2.4332 | 10.486 | 12.4671 |
| 2.7065 | 16.0 | 2016 | 2.4239 | 10.6921 | 12.414 |
| 2.7065 | 17.0 | 2142 | 2.4064 | 10.7426 | 12.4607 |
| 2.7065 | 18.0 | 2268 | 2.3941 | 11.0509 | 12.4087 |
| 2.7065 | 19.0 | 2394 | 2.3826 | 11.2407 | 12.3386 |
| 2.603 | 20.0 | 2520 | 2.3658 | 11.3711 | 12.3992 |
| 2.603 | 21.0 | 2646 | 2.3537 | 11.42 | 12.5032 |
| 2.603 | 22.0 | 2772 | 2.3475 | 12.0665 | 12.5074 |
| 2.603 | 23.0 | 2898 | 2.3398 | 12.0343 | 12.4342 |
| 2.5192 | 24.0 | 3024 | 2.3298 | 12.1011 | 12.5096 |
| 2.5192 | 25.0 | 3150 | 2.3216 | 12.2562 | 12.4809 |
| 2.5192 | 26.0 | 3276 | 2.3131 | 12.4585 | 12.4427 |
| 2.5192 | 27.0 | 3402 | 2.3052 | 12.7094 | 12.534 |
| 2.4445 | 28.0 | 3528 | 2.2984 | 12.7432 | 12.5053 |
| 2.4445 | 29.0 | 3654 | 2.2920 | 12.8409 | 12.4501 |
| 2.4445 | 30.0 | 3780 | 2.2869 | 12.6365 | 12.4936 |
| 2.4445 | 31.0 | 3906 | 2.2777 | 12.8523 | 12.5234 |
| 2.3844 | 32.0 | 4032 | 2.2788 | 12.9216 | 12.4204 |
| 2.3844 | 33.0 | 4158 | 2.2710 | 12.9568 | 12.5064 |
| 2.3844 | 34.0 | 4284 | 2.2643 | 12.9641 | 12.4299 |
| 2.3844 | 35.0 | 4410 | 2.2621 | 12.9787 | 12.448 |
| 2.3282 | 36.0 | 4536 | 2.2554 | 13.1264 | 12.4374 |
| 2.3282 | 37.0 | 4662 | 2.2481 | 13.1853 | 12.4416 |
| 2.3282 | 38.0 | 4788 | 2.2477 | 13.3259 | 12.4119 |
| 2.3282 | 39.0 | 4914 | 2.2448 | 13.2017 | 12.4278 |
| 2.2842 | 40.0 | 5040 | 2.2402 | 13.3772 | 12.4437 |
| 2.2842 | 41.0 | 5166 | 2.2373 | 13.2184 | 12.414 |
| 2.2842 | 42.0 | 5292 | 2.2357 | 13.5267 | 12.4342 |
| 2.2842 | 43.0 | 5418 | 2.2310 | 13.5754 | 12.4087 |
| 2.2388 | 44.0 | 5544 | 2.2244 | 13.653 | 12.4427 |
| 2.2388 | 45.0 | 5670 | 2.2243 | 13.6028 | 12.431 |
| 2.2388 | 46.0 | 5796 | 2.2216 | 13.7128 | 12.4151 |
| 2.2388 | 47.0 | 5922 | 2.2231 | 13.749 | 12.4172 |
| 2.2067 | 48.0 | 6048 | 2.2196 | 13.7256 | 12.4034 |
| 2.2067 | 49.0 | 6174 | 2.2125 | 13.8237 | 12.396 |
| 2.2067 | 50.0 | 6300 | 2.2131 | 13.6642 | 12.4416 |
| 2.2067 | 51.0 | 6426 | 2.2115 | 13.8876 | 12.4119 |
| 2.1688 | 52.0 | 6552 | 2.2091 | 14.0323 | 12.4639 |
| 2.1688 | 53.0 | 6678 | 2.2082 | 13.916 | 12.3843 |
| 2.1688 | 54.0 | 6804 | 2.2071 | 13.924 | 12.3758 |
| 2.1688 | 55.0 | 6930 | 2.2046 | 13.9563 | 12.4416 |
| 2.1401 | 56.0 | 7056 | 2.2020 | 14.0592 | 12.483 |
| 2.1401 | 57.0 | 7182 | 2.2047 | 13.8879 | 12.4076 |
| 2.1401 | 58.0 | 7308 | 2.2018 | 13.9267 | 12.3949 |
| 2.1401 | 59.0 | 7434 | 2.1964 | 14.0518 | 12.4363 |
| 2.1092 | 60.0 | 7560 | 2.1926 | 14.1518 | 12.4883 |
| 2.1092 | 61.0 | 7686 | 2.1972 | 14.132 | 12.4034 |
| 2.1092 | 62.0 | 7812 | 2.1939 | 14.2066 | 12.4151 |
| 2.1092 | 63.0 | 7938 | 2.1905 | 14.2923 | 12.4459 |
| 2.0932 | 64.0 | 8064 | 2.1932 | 14.2476 | 12.3418 |
| 2.0932 | 65.0 | 8190 | 2.1925 | 14.2057 | 12.3907 |
| 2.0932 | 66.0 | 8316 | 2.1906 | 14.2978 | 12.4055 |
| 2.0932 | 67.0 | 8442 | 2.1903 | 14.3276 | 12.4427 |
| 2.0706 | 68.0 | 8568 | 2.1918 | 14.4681 | 12.4034 |
| 2.0706 | 69.0 | 8694 | 2.1882 | 14.3751 | 12.4225 |
| 2.0706 | 70.0 | 8820 | 2.1870 | 14.5904 | 12.4204 |
| 2.0706 | 71.0 | 8946 | 2.1865 | 14.6409 | 12.4512 |
| 2.0517 | 72.0 | 9072 | 2.1831 | 14.6505 | 12.4352 |
| 2.0517 | 73.0 | 9198 | 2.1835 | 14.7485 | 12.4363 |
| 2.0517 | 74.0 | 9324 | 2.1824 | 14.7344 | 12.4586 |
| 2.0517 | 75.0 | 9450 | 2.1829 | 14.8097 | 12.4575 |
| 2.0388 | 76.0 | 9576 | 2.1822 | 14.6681 | 12.4108 |
| 2.0388 | 77.0 | 9702 | 2.1823 | 14.6421 | 12.4342 |
| 2.0388 | 78.0 | 9828 | 2.1816 | 14.7014 | 12.4459 |
| 2.0388 | 79.0 | 9954 | 2.1810 | 14.744 | 12.4565 |
| 2.0224 | 80.0 | 10080 | 2.1839 | 14.7889 | 12.4437 |
| 2.0224 | 81.0 | 10206 | 2.1793 | 14.802 | 12.4565 |
| 2.0224 | 82.0 | 10332 | 2.1776 | 14.7702 | 12.4214 |
| 2.0224 | 83.0 | 10458 | 2.1809 | 14.6772 | 12.4236 |
| 2.0115 | 84.0 | 10584 | 2.1786 | 14.709 | 12.4214 |
| 2.0115 | 85.0 | 10710 | 2.1805 | 14.7693 | 12.3981 |
| 2.0115 | 86.0 | 10836 | 2.1790 | 14.7628 | 12.4172 |
| 2.0115 | 87.0 | 10962 | 2.1785 | 14.7538 | 12.3992 |
| 2.0007 | 88.0 | 11088 | 2.1788 | 14.7493 | 12.3726 |
| 2.0007 | 89.0 | 11214 | 2.1788 | 14.8793 | 12.4045 |
| 2.0007 | 90.0 | 11340 | 2.1786 | 14.8318 | 12.3747 |
| 2.0007 | 91.0 | 11466 | 2.1769 | 14.8061 | 12.4013 |
| 1.9967 | 92.0 | 11592 | 2.1757 | 14.8108 | 12.3843 |
| 1.9967 | 93.0 | 11718 | 2.1747 | 14.8036 | 12.379 |
| 1.9967 | 94.0 | 11844 | 2.1764 | 14.7447 | 12.3737 |
| 1.9967 | 95.0 | 11970 | 2.1759 | 14.7759 | 12.3875 |
| 1.9924 | 96.0 | 12096 | 2.1760 | 14.7695 | 12.3875 |
| 1.9924 | 97.0 | 12222 | 2.1762 | 14.8022 | 12.3769 |
| 1.9924 | 98.0 | 12348 | 2.1763 | 14.7519 | 12.3822 |
| 1.9924 | 99.0 | 12474 | 2.1760 | 14.7756 | 12.3832 |
| 1.9903 | 100.0 | 12600 | 2.1761 | 14.7713 | 12.3822 |
### Evaluation results
| Data Split | Bleu |
|:----------:|:-------:|
| Validation | 17.8061 |
| Test | 18.0863 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
madatnlp/mbart
|
madatnlp
| 2022-05-26T11:25:18Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"mbart",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-26T08:26:54Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: mbart
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mbart
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5342
- Validation Loss: 0.5633
- Epoch: 35
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'SGD', 'learning_rate': 0.01, 'decay': 0.0, 'momentum': 0.9, 'nesterov': False}
- training_precision: mixed_bfloat16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.5626 | 3.7843 | 0 |
| 2.5836 | 1.9212 | 1 |
| 1.6546 | 1.2552 | 2 |
| 1.2499 | 1.0248 | 3 |
| 1.0088 | 0.8457 | 4 |
| 0.9100 | 0.7958 | 5 |
| 0.8290 | 0.8421 | 6 |
| 0.7999 | 0.7625 | 7 |
| 0.7633 | 0.7202 | 8 |
| 0.7439 | 0.7100 | 9 |
| 0.7182 | 0.6787 | 10 |
| 0.7092 | 0.6877 | 11 |
| 0.6823 | 0.6684 | 12 |
| 0.6738 | 0.6712 | 13 |
| 0.6603 | 0.6858 | 14 |
| 0.6462 | 0.6268 | 15 |
| 0.6373 | 0.6208 | 16 |
| 0.6424 | 0.6735 | 17 |
| 0.6259 | 0.6423 | 18 |
| 0.6249 | 0.6069 | 19 |
| 0.6148 | 0.6510 | 20 |
| 0.6063 | 0.6207 | 21 |
| 0.5987 | 0.5977 | 22 |
| 0.5917 | 0.6019 | 23 |
| 0.5800 | 0.5828 | 24 |
| 0.5779 | 0.5505 | 25 |
| 0.5765 | 0.5887 | 26 |
| 0.5667 | 0.5989 | 27 |
| 0.5623 | 0.5859 | 28 |
| 0.5564 | 0.5907 | 29 |
| 0.5523 | 0.5928 | 30 |
| 0.5478 | 0.5624 | 31 |
| 0.5472 | 0.5563 | 32 |
| 0.5462 | 0.5953 | 33 |
| 0.5324 | 0.5593 | 34 |
| 0.5342 | 0.5633 | 35 |
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
imohammad12/GRS-Constrained-Paraphrasing-Bart
|
imohammad12
| 2022-05-26T10:49:26Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"grs",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-19T00:14:51Z |
---
language: en
tags: grs
---
## Citation
Please star the [GRS GitHub repo](https://github.com/imohammad12/GRS) and cite the paper if you found our model useful:
```
@inproceedings{dehghan-etal-2022-grs,
title = "{GRS}: Combining Generation and Revision in Unsupervised Sentence Simplification",
author = "Dehghan, Mohammad and
Kumar, Dhruv and
Golab, Lukasz",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-acl.77",
pages = "949--960",
abstract = "We propose GRS: an unsupervised approach to sentence simplification that combines text generation and text revision. We start with an iterative framework in which an input sentence is revised using explicit edit operations, and add paraphrasing as a new edit operation. This allows us to combine the advantages of generative and revision-based approaches: paraphrasing captures complex edit operations, and the use of explicit edit operations in an iterative manner provides controllability and interpretability. We demonstrate these advantages of GRS compared to existing methods on the Newsela and ASSET datasets.",
}
```
|
imohammad12/GRS-complex-simple-classifier-DeBerta
|
imohammad12
| 2022-05-26T10:49:13Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta",
"text-classification",
"grs",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-18T22:37:49Z |
---
language: en
tags: grs
---
## Citation
Please star the [GRS GitHub repo](https://github.com/imohammad12/GRS) and cite the paper if you found our model useful:
```
@inproceedings{dehghan-etal-2022-grs,
title = "{GRS}: Combining Generation and Revision in Unsupervised Sentence Simplification",
author = "Dehghan, Mohammad and
Kumar, Dhruv and
Golab, Lukasz",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-acl.77",
pages = "949--960",
abstract = "We propose GRS: an unsupervised approach to sentence simplification that combines text generation and text revision. We start with an iterative framework in which an input sentence is revised using explicit edit operations, and add paraphrasing as a new edit operation. This allows us to combine the advantages of generative and revision-based approaches: paraphrasing captures complex edit operations, and the use of explicit edit operations in an iterative manner provides controllability and interpretability. We demonstrate these advantages of GRS compared to existing methods on the Newsela and ASSET datasets.",
}
```
|
imohammad12/GRS-Grammar-Checker-DeBerta
|
imohammad12
| 2022-05-26T10:48:39Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"deberta",
"text-classification",
"grs",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-19T01:01:25Z |
---
language: en
tags: grs
---
## Citation
Please star the [GRS GitHub repo](https://github.com/imohammad12/GRS) and cite the paper if you found our model useful:
```
@inproceedings{dehghan-etal-2022-grs,
title = "{GRS}: Combining Generation and Revision in Unsupervised Sentence Simplification",
author = "Dehghan, Mohammad and
Kumar, Dhruv and
Golab, Lukasz",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-acl.77",
pages = "949--960",
abstract = "We propose GRS: an unsupervised approach to sentence simplification that combines text generation and text revision. We start with an iterative framework in which an input sentence is revised using explicit edit operations, and add paraphrasing as a new edit operation. This allows us to combine the advantages of generative and revision-based approaches: paraphrasing captures complex edit operations, and the use of explicit edit operations in an iterative manner provides controllability and interpretability. We demonstrate these advantages of GRS compared to existing methods on the Newsela and ASSET datasets.",
}
```
|
Obaid/Test1ppo-LunarLander-v2
|
Obaid
| 2022-05-26T09:04:14Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-26T09:03:41Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 238.77 +/- 14.32
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# NOTE: the filename is an assumption; replace it with the checkpoint listed in the repo files
checkpoint = load_from_hub(repo_id="Obaid/Test1ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
GRANTHE2761/swin-tiny-patch4-window7-224-finetuned-eurosat
|
GRANTHE2761
| 2022-05-26T09:00:52Z | 71 | 0 |
transformers
|
[
"transformers",
"pytorch",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:image_folder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-05-26T08:44:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9688888888888889
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0866
- Accuracy: 0.9689
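As a rough usage sketch (not part of the original card): this is a Swin checkpoint fine-tuned for image classification, so it should load with the standard `image-classification` pipeline; the image path below is illustrative.
```python
from transformers import pipeline

# Assumed usage: image classification with the fine-tuned Swin checkpoint
classifier = pipeline("image-classification", model="GRANTHE2761/swin-tiny-patch4-window7-224-finetuned-eurosat")

# Replace with a real image path; this file name is illustrative
predictions = classifier("eurosat_sample.jpg")
print(predictions)
```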
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3046 | 1.0 | 95 | 0.1547 | 0.9452 |
| 0.191 | 2.0 | 190 | 0.1161 | 0.9559 |
| 0.1701 | 3.0 | 285 | 0.0866 | 0.9689 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
duclee9x/wav2vec2-voa-example
|
duclee9x
| 2022-05-26T08:32:06Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-25T22:33:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-voa-example
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-voa-example
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 4.296 | 4.35 | 500 | 3.7226 | 1.0 |
| 3.027 | 8.7 | 1000 | 3.7233 | 1.0 |
| 3.0376 | 13.04 | 1500 | 3.7246 | 1.0 |
| 3.0221 | 17.39 | 2000 | nan | 1.0 |
| 0.0 | 21.74 | 2500 | nan | 1.0 |
| 0.0 | 26.09 | 3000 | nan | 1.0 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
RuiqianLi/one-simple-finetune-test
|
RuiqianLi
| 2022-05-26T07:41:32Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:li_singlish",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-26T06:59:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- li_singlish
model-index:
- name: one-simple-finetune-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# one-simple-finetune-test
This model is a fine-tuned version of [RuiqianLi/wav2vec2-large-xls-r-300m-singlish-colab](https://huggingface.co/RuiqianLi/wav2vec2-large-xls-r-300m-singlish-colab) on the li_singlish dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
SusBioRes-UBC/q-FrozenLake-v1-4x4-noSlippery
|
SusBioRes-UBC
| 2022-05-26T04:39:55Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-26T04:39:47Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions from the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="SusBioRes-UBC/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Vkt/victor-hg-ptbr-2.0
|
Vkt
| 2022-05-26T04:10:53Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-24T13:07:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: victor-hg-ptbr-2.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# victor-hg-ptbr-2.0
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0240
- Wer: 0.0219
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.4069 | 0.21 | 400 | 1.1372 | 0.9140 |
| 0.8079 | 0.43 | 800 | 0.5822 | 0.5339 |
| 0.5821 | 0.64 | 1200 | 0.4226 | 0.4177 |
| 0.5159 | 0.86 | 1600 | 0.4074 | 0.3970 |
| 0.4484 | 1.07 | 2000 | 0.3144 | 0.3220 |
| 0.3937 | 1.29 | 2400 | 0.3160 | 0.3264 |
| 0.3911 | 1.5 | 2800 | 0.2863 | 0.2956 |
| 0.3761 | 1.71 | 3200 | 0.3029 | 0.3128 |
| 0.3722 | 1.93 | 3600 | 0.2771 | 0.2933 |
| 0.3193 | 2.14 | 4000 | 0.2603 | 0.2795 |
| 0.3013 | 2.36 | 4400 | 0.2682 | 0.2703 |
| 0.3039 | 2.57 | 4800 | 0.2630 | 0.2618 |
| 0.3133 | 2.79 | 5200 | 0.2578 | 0.2629 |
| 0.3173 | 3.0 | 5600 | 0.2640 | 0.2746 |
| 0.2521 | 3.22 | 6000 | 0.2797 | 0.2662 |
| 0.2654 | 3.43 | 6400 | 0.2762 | 0.2640 |
| 0.2586 | 3.64 | 6800 | 0.2642 | 0.2596 |
| 0.265 | 3.86 | 7200 | 0.2656 | 0.2794 |
| 0.2432 | 4.07 | 7600 | 0.2459 | 0.2497 |
| 0.226 | 4.29 | 8000 | 0.2533 | 0.2509 |
| 0.2385 | 4.5 | 8400 | 0.2332 | 0.2394 |
| 0.2332 | 4.72 | 8800 | 0.2500 | 0.2569 |
| 0.2358 | 4.93 | 9200 | 0.2384 | 0.2489 |
| 0.2169 | 5.14 | 9600 | 0.2410 | 0.2380 |
| 0.2038 | 5.36 | 10000 | 0.2426 | 0.2333 |
| 0.2109 | 5.57 | 10400 | 0.2480 | 0.2473 |
| 0.2147 | 5.79 | 10800 | 0.2341 | 0.2272 |
| 0.2153 | 6.0 | 11200 | 0.2402 | 0.2424 |
| 0.186 | 6.22 | 11600 | 0.2560 | 0.2489 |
| 0.1854 | 6.43 | 12000 | 0.2444 | 0.2402 |
| 0.1915 | 6.65 | 12400 | 0.2720 | 0.2531 |
| 0.1929 | 6.86 | 12800 | 0.2516 | 0.2342 |
| 0.1842 | 7.07 | 13200 | 0.2480 | 0.2304 |
| 0.1682 | 7.29 | 13600 | 0.2393 | 0.2276 |
| 0.1753 | 7.5 | 14000 | 0.2514 | 0.2263 |
| 0.1798 | 7.72 | 14400 | 0.2191 | 0.2178 |
| 0.1736 | 7.93 | 14800 | 0.2351 | 0.2197 |
| 0.1668 | 8.15 | 15200 | 0.2315 | 0.2194 |
| 0.1545 | 8.36 | 15600 | 0.2291 | 0.2079 |
| 0.1508 | 8.57 | 16000 | 0.2351 | 0.2134 |
| 0.1662 | 8.79 | 16400 | 0.2298 | 0.2197 |
| 0.1621 | 9.0 | 16800 | 0.2314 | 0.2219 |
| 0.1416 | 9.22 | 17200 | 0.2306 | 0.2192 |
| 0.1455 | 9.43 | 17600 | 0.2466 | 0.2184 |
| 0.1522 | 9.65 | 18000 | 0.2392 | 0.2255 |
| 0.1434 | 9.86 | 18400 | 0.2464 | 0.2208 |
| 0.1362 | 10.08 | 18800 | 0.2351 | 0.2095 |
| 0.127 | 10.29 | 19200 | 0.2373 | 0.2110 |
| 0.133 | 10.5 | 19600 | 0.2269 | 0.2031 |
| 0.1308 | 10.72 | 20000 | 0.2400 | 0.2096 |
| 0.1331 | 10.93 | 20400 | 0.2243 | 0.2083 |
| 0.125 | 11.15 | 20800 | 0.2334 | 0.2063 |
| 0.1236 | 11.36 | 21200 | 0.2195 | 0.2044 |
| 0.1263 | 11.58 | 21600 | 0.2263 | 0.2050 |
| 0.1235 | 11.79 | 22000 | 0.2217 | 0.2087 |
| 0.1301 | 12.0 | 22400 | 0.2332 | 0.2094 |
| 0.1123 | 12.22 | 22800 | 0.2195 | 0.2068 |
| 0.117 | 12.43 | 23200 | 0.2266 | 0.2110 |
| 0.1156 | 12.65 | 23600 | 0.2469 | 0.2063 |
| 0.1117 | 12.86 | 24000 | 0.2379 | 0.2035 |
| 0.1124 | 13.08 | 24400 | 0.2156 | 0.1963 |
| 0.106 | 13.29 | 24800 | 0.2310 | 0.1988 |
| 0.1066 | 13.5 | 25200 | 0.2334 | 0.1950 |
| 0.1069 | 13.72 | 25600 | 0.2230 | 0.2011 |
| 0.1089 | 13.93 | 26000 | 0.2233 | 0.2003 |
| 0.0977 | 14.15 | 26400 | 0.2273 | 0.1895 |
| 0.0972 | 14.36 | 26800 | 0.2265 | 0.1887 |
| 0.1005 | 14.58 | 27200 | 0.2196 | 0.1934 |
| 0.1058 | 14.79 | 27600 | 0.2213 | 0.1870 |
| 0.1027 | 15.01 | 28000 | 0.2361 | 0.1916 |
| 0.0886 | 15.22 | 28400 | 0.2275 | 0.1815 |
| 0.0885 | 15.43 | 28800 | 0.2230 | 0.1891 |
| 0.0911 | 15.65 | 29200 | 0.2237 | 0.1989 |
| 0.0923 | 15.86 | 29600 | 0.2200 | 0.1857 |
| 0.0868 | 16.08 | 30000 | 0.2248 | 0.1875 |
| 0.0812 | 16.29 | 30400 | 0.2240 | 0.1874 |
| 0.0829 | 16.51 | 30800 | 0.2198 | 0.1814 |
| 0.0832 | 16.72 | 31200 | 0.2328 | 0.1892 |
| 0.0822 | 16.93 | 31600 | 0.2283 | 0.1862 |
| 0.0828 | 17.15 | 32000 | 0.2283 | 0.1806 |
| 0.0791 | 17.36 | 32400 | 0.2197 | 0.1787 |
| 0.0801 | 17.58 | 32800 | 0.2249 | 0.1815 |
| 0.0804 | 17.79 | 33200 | 0.2304 | 0.1789 |
| 0.0833 | 18.01 | 33600 | 0.2235 | 0.1832 |
| 0.0762 | 18.22 | 34000 | 0.2358 | 0.1784 |
| 0.0688 | 18.44 | 34400 | 0.2183 | 0.1758 |
| 0.0751 | 18.65 | 34800 | 0.2169 | 0.1805 |
| 0.0729 | 18.86 | 35200 | 0.2296 | 0.1770 |
| 0.0681 | 19.08 | 35600 | 0.2380 | 0.1770 |
| 0.067 | 19.29 | 36000 | 0.2153 | 0.1777 |
| 0.0669 | 19.51 | 36400 | 0.2260 | 0.1742 |
| 0.0824 | 19.72 | 36800 | 0.0289 | 0.0310 |
| 0.0857 | 19.94 | 37200 | 0.0289 | 0.0322 |
| 0.0799 | 20.15 | 37600 | 0.0264 | 0.0298 |
| 0.0767 | 20.36 | 38000 | 0.0273 | 0.0318 |
| 0.079 | 20.58 | 38400 | 0.0274 | 0.0320 |
| 0.0791 | 20.79 | 38800 | 0.0279 | 0.0318 |
| 0.0805 | 21.01 | 39200 | 0.0285 | 0.0330 |
| 0.0622 | 21.22 | 39600 | 0.0263 | 0.0306 |
| 0.0622 | 21.44 | 40000 | 0.0290 | 0.0318 |
| 0.0672 | 21.65 | 40400 | 0.0278 | 0.0330 |
| 0.0706 | 21.86 | 40800 | 0.0270 | 0.0297 |
| 0.0619 | 22.08 | 41200 | 0.0288 | 0.0328 |
| 0.0633 | 22.29 | 41600 | 0.0256 | 0.0303 |
| 0.0618 | 22.51 | 42000 | 0.0263 | 0.0299 |
| 0.0576 | 22.72 | 42400 | 0.0273 | 0.0301 |
| 0.0583 | 22.94 | 42800 | 0.0282 | 0.0297 |
| 0.0565 | 23.15 | 43200 | 0.0256 | 0.0280 |
| 0.0557 | 23.37 | 43600 | 0.0268 | 0.0280 |
| 0.0548 | 23.58 | 44000 | 0.0266 | 0.0291 |
| 0.056 | 23.79 | 44400 | 0.0264 | 0.0290 |
| 0.0546 | 24.01 | 44800 | 0.0273 | 0.0284 |
| 0.0496 | 24.22 | 45200 | 0.0261 | 0.0279 |
| 0.0512 | 24.44 | 45600 | 0.0256 | 0.0281 |
| 0.0482 | 24.65 | 46000 | 0.0264 | 0.0285 |
| 0.0503 | 24.87 | 46400 | 0.0256 | 0.0268 |
| 0.0471 | 25.08 | 46800 | 0.0270 | 0.0282 |
| 0.0453 | 25.29 | 47200 | 0.0255 | 0.0267 |
| 0.0431 | 25.51 | 47600 | 0.0251 | 0.0264 |
| 0.0464 | 25.72 | 48000 | 0.0262 | 0.0261 |
| 0.0431 | 25.94 | 48400 | 0.0257 | 0.0265 |
| 0.0405 | 26.15 | 48800 | 0.0260 | 0.0251 |
| 0.0406 | 26.37 | 49200 | 0.0246 | 0.0250 |
| 0.0397 | 26.58 | 49600 | 0.0252 | 0.0254 |
| 0.0403 | 26.8 | 50000 | 0.0250 | 0.0256 |
| 0.0385 | 27.01 | 50400 | 0.0254 | 0.0241 |
| 0.0398 | 27.22 | 50800 | 0.0255 | 0.0242 |
| 0.0363 | 27.44 | 51200 | 0.0250 | 0.0236 |
| 0.0372 | 27.65 | 51600 | 0.0247 | 0.0232 |
| 0.0362 | 27.87 | 52000 | 0.0240 | 0.0226 |
| 0.0367 | 28.08 | 52400 | 0.0246 | 0.0224 |
| 0.0347 | 28.3 | 52800 | 0.0247 | 0.0229 |
| 0.0348 | 28.51 | 53200 | 0.0241 | 0.0229 |
| 0.0331 | 28.72 | 53600 | 0.0242 | 0.0224 |
| 0.0339 | 28.94 | 54000 | 0.0241 | 0.0220 |
| 0.0336 | 29.15 | 54400 | 0.0244 | 0.0221 |
| 0.0336 | 29.37 | 54800 | 0.0243 | 0.0215 |
| 0.0349 | 29.58 | 55200 | 0.0239 | 0.0217 |
| 0.0308 | 29.8 | 55600 | 0.0240 | 0.0219 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.8.1+cu111
- Datasets 2.2.1
- Tokenizers 0.12.1
|
sumedh/pegasus
|
sumedh
| 2022-05-26T03:41:29Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-05-22T23:23:36Z |
Work in progress. <br>
A fine-tuned model for abstractive summarization is coming soon. <br>
|
luisu0124/Amazon_review
|
luisu0124
| 2022-05-26T03:28:01Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"Text Classification",
"es",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-24T05:44:24Z |
---
language:
- es
tags:
- Text Classification
---
## language:
- es
## tags:
- amazon_reviews_multi
- Text Classification
### Dataset

### Example structure review:
| review_id (string) | product_id (string) | reviewer_id (string) | stars (int) | review_body (string) | review_title (string) | language (string) | product_category (string) |
| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |
| de_0203609|product_de_0865382|reviewer_de_0267719|1|Armband ist leider nach 1 Jahr kaputt gegangen|Leider nach 1 Jahr kaputt|de|sports|
### Model

### Model train

| Text | Classification |
| ------------- | ------------- |
| review_body | stars |
### Model test

### Classification of reviews in Spanish
The classifier uses `POS` and `NEG` labels.
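A minimal inference sketch (not part of the original card; the pipeline usage and review text are assumptions): the checkpoint should work with the standard `text-classification` pipeline and return the `POS`/`NEG` labels mentioned above.
```python
from transformers import pipeline

# Assumed usage: Spanish Amazon-review sentiment classification with POS/NEG labels
classifier = pipeline("text-classification", model="luisu0124/Amazon_review")

# Illustrative Spanish review
print(classifier("El producto llegó roto y el vendedor no respondió."))
```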
|
ENM/scibert_scivocab_cased-new-finetuned-breastcancer
|
ENM
| 2022-05-26T02:28:12Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-26T02:04:39Z |
---
tags:
- generated_from_trainer
model-index:
- name: scibert_scivocab_cased-new-finetuned-breastcancer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scibert_scivocab_cased-new-finetuned-breastcancer
This model is a fine-tuned version of [allenai/scibert_scivocab_cased](https://huggingface.co/allenai/scibert_scivocab_cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2439
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 40 | 3.1340 |
| No log | 2.0 | 80 | 1.6044 |
| No log | 3.0 | 120 | 1.2439 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
espejelomar/identify-my-cat
|
espejelomar
| 2022-05-26T02:08:56Z | 0 | 1 |
fastai
|
[
"fastai",
"image-classification",
"region:us"
] |
image-classification
| 2022-05-05T19:42:30Z |
---
tags:
- fastai
- image-classification
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
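A minimal loading sketch (an assumption, since the card does not document usage): fastai models on the Hub can typically be loaded with `from_pretrained_fastai` from `huggingface_hub`; the image path below is illustrative.
```python
from huggingface_hub import from_pretrained_fastai

# Assumed usage: load the fastai learner from the Hub and classify a local image
learner = from_pretrained_fastai("espejelomar/identify-my-cat")
prediction = learner.predict("some_photo.jpg")  # illustrative path
print(prediction)
```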
|
PontifexMaximus/ArabicTranslator
|
PontifexMaximus
| 2022-05-26T01:25:24Z | 33 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:opus_infopankki",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-25T08:25:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus_infopankki
metrics:
- bleu
model-index:
- name: opus-mt-ar-en-finetuned-ar-to-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus_infopankki
type: opus_infopankki
args: ar-en
metrics:
- name: Bleu
type: bleu
value: 51.6508
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ar-en-finetuned-ar-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on the opus_infopankki dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7269
- Bleu: 51.6508
- Gen Len: 15.0812
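As a usage sketch (not part of the original card; the pipeline usage and example sentence are assumptions): the underlying Helsinki-NLP/opus-mt-ar-en architecture works with the standard `translation` pipeline, so the fine-tuned checkpoint should load the same way.
```python
from transformers import pipeline

# Assumed usage: Arabic -> English translation with the fine-tuned MarianMT checkpoint
translator = pipeline("translation", model="PontifexMaximus/ArabicTranslator")

# Illustrative Arabic sentence ("Where is the nearest health centre?")
print(translator("أين يقع أقرب مركز صحي؟")[0]["translation_text"])
```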
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.4974 | 1.0 | 1587 | 1.3365 | 36.9061 | 15.3385 |
| 1.3768 | 2.0 | 3174 | 1.2139 | 39.5476 | 15.2079 |
| 1.2887 | 3.0 | 4761 | 1.1265 | 41.2771 | 15.2034 |
| 1.2076 | 4.0 | 6348 | 1.0556 | 42.6907 | 15.2687 |
| 1.1512 | 5.0 | 7935 | 0.9975 | 43.9498 | 15.2072 |
| 1.0797 | 6.0 | 9522 | 0.9491 | 45.224 | 15.2034 |
| 1.0499 | 7.0 | 11109 | 0.9101 | 46.1387 | 15.1651 |
| 1.0095 | 8.0 | 12696 | 0.8778 | 47.0586 | 15.1788 |
| 0.9833 | 9.0 | 14283 | 0.8501 | 47.8083 | 15.162 |
| 0.9601 | 10.0 | 15870 | 0.8267 | 48.5236 | 15.1784 |
| 0.9457 | 11.0 | 17457 | 0.8059 | 49.1717 | 15.095 |
| 0.9233 | 12.0 | 19044 | 0.7883 | 49.7742 | 15.1126 |
| 0.8964 | 13.0 | 20631 | 0.7736 | 50.2168 | 15.0917 |
| 0.8849 | 14.0 | 22218 | 0.7606 | 50.5583 | 15.0913 |
| 0.8751 | 15.0 | 23805 | 0.7504 | 50.8481 | 15.1108 |
| 0.858 | 16.0 | 25392 | 0.7417 | 51.1841 | 15.0989 |
| 0.8673 | 17.0 | 26979 | 0.7353 | 51.4271 | 15.0939 |
| 0.8548 | 18.0 | 28566 | 0.7306 | 51.535 | 15.0911 |
| 0.8483 | 19.0 | 30153 | 0.7279 | 51.6102 | 15.078 |
| 0.8614 | 20.0 | 31740 | 0.7269 | 51.6508 | 15.0812 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.7.1+cu110
- Datasets 2.2.2
- Tokenizers 0.12.1
|
fastai/fastbook_04_mnist_basics
|
fastai
| 2022-05-26T00:39:12Z | 54 | 2 |
fastai
|
[
"fastai",
"image-classification",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
tags:
- fastai
- image-classification
---
# Amazing!
Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (template below and [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using the 🤗Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join our fastai community on the Hugging Face Discord!
Greetings fellow fastlearner 🤝!
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
bhaswara/ppo-MountainCar-v0
|
bhaswara
| 2022-05-26T00:15:29Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"MountainCar-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-25T23:00:06Z |
---
library_name: stable-baselines3
tags:
- MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -200.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MountainCar-v0
type: MountainCar-v0
---
# **PPO** Agent playing **MountainCar-v0**
This is a trained model of a **PPO** agent playing **MountainCar-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# NOTE: the filename is an assumption; replace it with the checkpoint listed in the repo files
checkpoint = load_from_hub(repo_id="bhaswara/ppo-MountainCar-v0", filename="ppo-MountainCar-v0.zip")
model = PPO.load(checkpoint)
```
|
Felix92/doctr-dummy-torch-crnn-vgg16-bn
|
Felix92
| 2022-05-25T21:34:04Z | 166 | 0 |
transformers
|
[
"transformers",
"pytorch",
"image-to-text",
"en",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2022-04-14T09:24:21Z |
---
language: en
pipeline_tag: image-to-text
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: recognition
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
Felix92/doctr-dummy-torch-crnn-mobilenet-v3-small
|
Felix92
| 2022-05-25T21:33:45Z | 165 | 2 |
transformers
|
[
"transformers",
"pytorch",
"image-to-text",
"en",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2022-04-14T09:26:33Z |
---
language: en
pipeline_tag: image-to-text
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: recognition
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
Felix92/doctr-dummy-tf-crnn-vgg16-bn
|
Felix92
| 2022-05-25T21:33:21Z | 5 | 1 |
transformers
|
[
"transformers",
"image-to-text",
"en",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2022-04-14T11:42:26Z |
---
language: en
pipeline_tag: image-to-text
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: recognition
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv-v3-e8
|
theojolliffe
| 2022-05-25T20:10:05Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-25T18:58:59Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv-v3-e8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv-v3-e8
This model is a fine-tuned version of [theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv](https://huggingface.co/theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8063
- Rouge1: 54.9922
- Rouge2: 38.7265
- Rougel: 41.9288
- Rougelsum: 52.8766
- Gen Len: 142.0
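A minimal inference sketch (assumed; the card itself does not document usage): the checkpoint is a BART summarization model, so the standard `summarization` pipeline should work; the input text and length settings are illustrative.
```python
from transformers import pipeline

# Assumed usage: abstractive summarization with the fine-tuned BART checkpoint
summarizer = pipeline("summarization", model="theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv-v3-e8")

long_text = "..."  # replace with the document to summarize
print(summarizer(long_text, max_length=142, min_length=30, do_sample=False)[0]["summary_text"])
```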
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 398 | 0.8651 | 53.3185 | 33.3722 | 35.8852 | 50.5929 | 142.0 |
| 0.8268 | 2.0 | 796 | 0.8063 | 53.5267 | 34.3205 | 36.9783 | 51.0289 | 142.0 |
| 0.5331 | 3.0 | 1194 | 0.8155 | 53.5409 | 34.9962 | 38.078 | 51.2038 | 142.0 |
| 0.3588 | 4.0 | 1592 | 0.7883 | 53.7055 | 35.0869 | 38.1521 | 51.3094 | 141.4815 |
| 0.3588 | 5.0 | 1990 | 0.7770 | 54.4542 | 37.5817 | 39.8734 | 52.1947 | 141.7778 |
| 0.2447 | 6.0 | 2388 | 0.7929 | 55.1571 | 38.8425 | 41.4301 | 53.3049 | 141.4444 |
| 0.1765 | 7.0 | 2786 | 0.7909 | 55.5838 | 38.6226 | 42.0453 | 53.543 | 142.0 |
| 0.13 | 8.0 | 3184 | 0.8063 | 54.9922 | 38.7265 | 41.9288 | 52.8766 | 142.0 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
aakorolyova/reported_outcome_extraction
|
aakorolyova
| 2022-05-25T19:31:52Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-05-18T08:32:05Z |
<h1>Model description</h1>
This is a fine-tuned BioBERT model for extracting reported outcomes (i.e. those for which results are presented) from articles reporting clinical trials.
This is the second version of the model; the original model development was reported in:
Anna Koroleva, Sanjay Kamath, Patrick Paroubek. Extracting primary and reported outcomes from articles reporting randomized controlled trials using pre-trained deep language representations. Preprint: https://easychair.org/publications/preprint/qpml
The original work was conducted within the scope of the "Assisted authoring for avoiding inadequate claims in scientific reporting" PhD project of the Methods for Research on Research (MiRoR, http://miror-ejd.eu/) program.
Model creator: Anna Koroleva
<h1>Intended uses & limitations</h1>
The model is intended to be used for extracting reported outcomes from texts of clinical trials.
The main limitation is that the model was trained on a fairly small sample of data annotated by a single annotator. Annotating more data or involving more annotators was not possible within the PhD project.
<h1>How to use</h1>
The model should be used with the BioBERT tokeniser. Sample code for getting model predictions is shown below:
```
import numpy as np
from transformers import AutoTokenizer
from transformers import AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained('dmis-lab/biobert-v1.1')
model = AutoModelForTokenClassification.from_pretrained(r'aakorolyova/reported_outcome_extraction')
text = """Compared with placebo plus chemotherapy, pembrolizumab plus chemotherapy improved overall survival in patients with previously untreated, advanced oesophageal squamous cell carcinoma and PD-L1 CPS of 10 or more, and overall survival and progression-free survival in patients with oesophageal squamous cell carcinoma, PD-L1 CPS of 10 or more, and in all randomised patients regardless of histology, and had a manageable safety profile in the total as-treated population."""
encoded_input = tokenizer(text, padding=True, truncation=True, max_length=2000, return_tensors='pt')
output = model(**encoded_input)['logits']
output = np.argmax(output.detach().numpy(), axis=2)
print(output)
```
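The snippet above prints raw label IDs. As a small follow-up sketch (not part of the original card; it continues from the variables defined above), the IDs can be mapped back to tag names via `model.config.id2label`; note that the stored names may be generic (`LABEL_0`, `LABEL_1`, ...) depending on how the config was saved.
```python
# Map predicted label IDs back to tag names and align them with the input tokens
tokens = tokenizer.convert_ids_to_tokens(encoded_input["input_ids"][0].tolist())
labels = [model.config.id2label[int(i)] for i in output[0]]
for token, label in zip(tokens, labels):
    print(f"{token}\t{label}")
```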
Some more useful functions can be found in our GitHub repository: https://github.com/aakorolyova/DeSpin-2.0
<h1>Training data</h1>
Training data can be found in https://github.com/aakorolyova/DeSpin-2.0/tree/main/data/Reported_Outcomes
<h1>Training procedure</h1>
The model was fine-tuned using Huggingface Trainer API. Training scripts can be found in https://github.com/aakorolyova/DeSpin-2.0
<h1>Evaluation</h1>
- Precision: 65.57%
- Recall: 74.77%
- F1: 69.87%
|
aakorolyova/primary_and_secondary_outcome_extraction
|
aakorolyova
| 2022-05-25T19:30:56Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-05-18T08:04:32Z |
<h1>Model description</h1>
This is a fine-tuned BioBERT model for extracting primary and secondary outcomes from articles reporting clinical trials.
This model is a version of https://huggingface.co/aakorolyova/primary_outcome_extraction. No secondary outcomes were annotated during the related PhD project. To enable secondary outcome extraction, we manually annotated secondary outcomes in the sentences already annotated with primary outcomes (only a small percentage of sentences contain secondary outcomes) and performed automatic data augmentation by replacing "primary"/"main"/"principal" with "secondary" and changing the tags from B/I-Prim to B/I-Sec in the primary outcomes data.
Model creator: Anna Koroleva
<h1>Intended uses & limitations</h1>
The model is intended to be used for extracting primary and secondary outcomes from texts of clinical trials.
The main limitation is that the model was trained on a mix of manually annotated and automatically augmented data, which might lead to inaccuracies in prediction.
<h1>How to use</h1>
The model should be used with the BioBERT tokeniser. Sample code for getting model predictions is shown below:
```
import numpy as np
from transformers import AutoTokenizer
from transformers import AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained('dmis-lab/biobert-v1.1')
model = AutoModelForTokenClassification.from_pretrained(r'aakorolyova/primary_and_secondary_outcome_extraction')
text = 'Primary endpoint was overall survival in patients with oesophageal squamous cell carcinoma and PD-L1 combined positive score (CPS) of 10 or more, secondary endpoints were overall survival and progression-free survival in patients with oesophageal squamous cell carcinoma, PD-L1 CPS of 10 or more, and in all randomised patients.'
encoded_input = tokenizer(text, padding=True, truncation=True, max_length=2000, return_tensors='pt')
output = model(**encoded_input)['logits']
output = np.argmax(output.detach().numpy(), axis=2)
print(output)
```
Some more useful functions can be found in our GitHub repository: https://github.com/aakorolyova/DeSpin-2.0
<h1>Training data</h1>
Training data can be found in https://github.com/aakorolyova/DeSpin-2.0/tree/main/data/Primary_Secondary_Outcomes
<h1>Training procedure</h1>
The model was fine-tuned using Huggingface Trainer API. Training scripts can be found in https://github.com/aakorolyova/DeSpin-2.0
<h1>Evaluation</h1>
Primary outcomes:
- Precision: 92.22
- Recall: 94.86
- F1: 93.52

Secondary outcomes:
- Precision: 91.43
- Recall: 91.87
- F1: 91.65

Overall:
- Precision: 91.79
- Recall: 93.23
- F1: 92.51
|
tbosse/bert-base-german-cased-finetuned-subj_v6_7Epoch_v3
|
tbosse
| 2022-05-25T19:01:02Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-05-25T18:16:21Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-german-cased-finetuned-subj_v6_7Epoch_v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-finetuned-subj_v6_7Epoch_v3
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2732
- Precision: 0.7654
- Recall: 0.7829
- F1: 0.7740
- Accuracy: 0.9119
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 33 | 0.3281 | 0.6656 | 0.5914 | 0.6263 | 0.8623 |
| No log | 2.0 | 66 | 0.2623 | 0.7440 | 0.7057 | 0.7243 | 0.8940 |
| No log | 3.0 | 99 | 0.2460 | 0.7536 | 0.7514 | 0.7525 | 0.9067 |
| No log | 4.0 | 132 | 0.2440 | 0.7778 | 0.76 | 0.7688 | 0.9124 |
| No log | 5.0 | 165 | 0.2582 | 0.7723 | 0.7657 | 0.7690 | 0.9107 |
| No log | 6.0 | 198 | 0.2681 | 0.7690 | 0.78 | 0.7745 | 0.9119 |
| No log | 7.0 | 231 | 0.2732 | 0.7654 | 0.7829 | 0.7740 | 0.9119 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv-v3-e2
|
theojolliffe
| 2022-05-25T18:51:56Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-25T17:53:40Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv-v3-e2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv-v3-e2
This model is a fine-tuned version of [theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv](https://huggingface.co/theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8604
- Rouge1: 53.7901
- Rouge2: 34.5052
- Rougel: 36.6399
- Rougelsum: 51.2331
- Gen Len: 141.7593
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 398 | 0.8776 | 53.3731 | 34.1946 | 36.4438 | 50.7369 | 142.0 |
| 0.8266 | 2.0 | 796 | 0.8604 | 53.7901 | 34.5052 | 36.6399 | 51.2331 | 141.7593 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
pritam18/swadeshi_bhojpuriwav2vec2asr
|
pritam18
| 2022-05-25T18:35:33Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-25T11:59:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: swadeshi_bhojpuriwav2vec2asr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swadeshi_bhojpuriwav2vec2asr
This model is a fine-tuned version of [theainerd/Wav2Vec2-large-xlsr-hindi](https://huggingface.co/theainerd/Wav2Vec2-large-xlsr-hindi) on an unspecified dataset.

It achieves the following results on the evaluation set:
- Loss: 0.2155
- Wer: 0.2931
## Model description
More information needed
## Intended uses & limitations
More information needed
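In the absence of documented usage, a minimal transcription sketch (the audio file is a placeholder) is:
```python
from transformers import pipeline

# Minimal transcription sketch; "sample.wav" is a placeholder for a
# 16 kHz mono recording (decoding audio files also requires ffmpeg).
asr = pipeline(
    "automatic-speech-recognition",
    model="pritam18/swadeshi_bhojpuriwav2vec2asr",
)
print(asr("sample.wav")["text"])
```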
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6928 | 3.2 | 400 | 2.4820 | 0.9925 |
| 1.6981 | 6.4 | 800 | 0.8053 | 0.6320 |
| 0.975 | 9.6 | 1200 | 0.5420 | 0.4980 |
| 0.7672 | 12.8 | 1600 | 0.4224 | 0.4233 |
| 0.636 | 16.0 | 2000 | 0.3481 | 0.3774 |
| 0.5562 | 19.2 | 2400 | 0.2861 | 0.3409 |
| 0.4973 | 22.4 | 2800 | 0.2450 | 0.3211 |
| 0.4616 | 25.6 | 3200 | 0.2230 | 0.3004 |
| 0.4264 | 28.8 | 3600 | 0.2155 | 0.2931 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.3.dev0
- Tokenizers 0.12.1
|
arcAman07/distilbert-base-uncased-finetuned-emotion
|
arcAman07
| 2022-05-25T17:08:01Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-25T17:00:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9240598378254522
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2222
- Accuracy: 0.924
- F1: 0.9241
## Model description
More information needed
## Intended uses & limitations
More information needed
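In the absence of documented usage, a minimal classification sketch is:
```python
from transformers import pipeline

# Minimal sketch: predict the emotion label of a sentence.
classifier = pipeline(
    "text-classification",
    model="arcAman07/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see how this turns out!"))
```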
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8294 | 1.0 | 250 | 0.3209 | 0.9025 | 0.9001 |
| 0.2536 | 2.0 | 500 | 0.2222 | 0.924 | 0.9241 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/sickziii
|
huggingtweets
| 2022-05-25T16:18:04Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-25T16:17:55Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/701052820754190336/OwxAZ9ES_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">sickzee</div>
<div style="text-align: center; font-size: 14px;">@sickziii</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from sickzee.
| Data | sickzee |
| --- | --- |
| Tweets downloaded | 3214 |
| Retweets | 2499 |
| Short tweets | 224 |
| Tweets kept | 491 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2hmehe5f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sickziii's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/drajr5oy) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/drajr5oy/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/sickziii')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
mikeadimech/pegasus-qmsum-meeting-summarization
|
mikeadimech
| 2022-05-25T16:15:41Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:yawnick/QMSum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-02T17:05:40Z |
---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: pegasus-qmsum-meeting-summarization
results: []
datasets:
- yawnick/QMSum
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-qmsum-meeting-summarization
This model is a fine-tuned version of [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum) on the QMSum dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2331
- Rouge1: 32.7156
- Rouge2: 10.5699
- Rougel: 23.2759
- Rougelsum: 29.7903
- Gen Len: 61.65
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 300
- label_smoothing_factor: 0.1
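As a rough guide, the bullets above correspond to `Seq2SeqTrainingArguments` along the following lines (the output directory and evaluation cadence are assumptions, not documented here):
```python
from transformers import Seq2SeqTrainingArguments

# Sketch of how the hyperparameters above map onto Seq2SeqTrainingArguments;
# output_dir and logging/eval cadence are assumptions, not documented here.
training_args = Seq2SeqTrainingArguments(
    output_dir="pegasus-qmsum-meeting-summarization",
    learning_rate=3e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=300,
    label_smoothing_factor=0.1,
    predict_with_generate=True,  # needed to compute the ROUGE numbers reported below
)
```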
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:------:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 5.5746 | 1.09 | 100 | 5.1739 | 9.4941 | 1.7868 | 7.2455 | 8.4302 | 29.825 |
| 5.5784 | 2.17 | 200 | 5.0939 | 9.113 | 1.7887 | 6.9741 | 8.0457 | 26.85 |
| 5.3777 | 3.26 | 300 | 4.9723 | 9.6387 | 1.9301 | 7.349 | 8.7941 | 25.325 |
| 5.1884 | 4.35 | 400 | 4.8423 | 10.6045 | 2.4008 | 7.8423 | 9.4593 | 22.625 |
| 5.0795 | 5.43 | 500 | 4.7313 | 13.7621 | 3.1231 | 9.6944 | 12.2204 | 32.175 |
| 4.9369 | 6.52 | 600 | 4.6555 | 19.5696 | 4.9121 | 14.2603 | 16.9622 | 46.45 |
| 4.8926 | 7.61 | 700 | 4.6038 | 22.8411 | 5.9791 | 17.2227 | 20.1173 | 51.825 |
| 4.7502 | 8.7 | 800 | 4.5659 | 24.0555 | 6.1971 | 18.967 | 20.9143 | 54.25 |
| 4.6876 | 9.78 | 900 | 4.5379 | 24.7066 | 6.0317 | 19.542 | 21.5774 | 57.575 |
| 4.6266 | 10.87 | 1000 | 4.5160 | 26.128 | 6.5089 | 20.5573 | 22.5338 | 58.0 |
| 4.6303 | 11.96 | 1100 | 4.4983 | 26.6639 | 7.1208 | 20.5222 | 23.5783 | 57.925 |
| 4.6263 | 13.04 | 1200 | 4.4815 | 26.8262 | 7.1029 | 20.5172 | 23.6216 | 57.575 |
| 4.577 | 14.13 | 1300 | 4.4667 | 27.7952 | 7.8331 | 21.1111 | 24.6086 | 56.95 |
| 4.5797 | 15.22 | 1400 | 4.4559 | 27.728 | 7.8144 | 21.1519 | 24.4858 | 56.6 |
| 4.4923 | 16.3 | 1500 | 4.4448 | 28.0998 | 8.1346 | 21.4004 | 25.3769 | 55.975 |
| 4.4583 | 17.39 | 1600 | 4.4335 | 28.9003 | 8.6135 | 22.0139 | 26.0409 | 56.55 |
| 4.5036 | 18.48 | 1700 | 4.4246 | 29.2187 | 8.8301 | 22.3569 | 26.1964 | 58.125 |
| 4.4383 | 19.57 | 1800 | 4.4144 | 28.8424 | 8.9131 | 22.0398 | 25.9214 | 56.75 |
| 4.4797 | 20.65 | 1900 | 4.4054 | 28.9285 | 8.9298 | 22.222 | 26.0316 | 56.225 |
| 4.4264 | 21.74 | 2000 | 4.3989 | 29.7184 | 9.0477 | 22.2885 | 26.7439 | 56.225 |
| 4.3615 | 22.83 | 2100 | 4.3902 | 29.1538 | 8.9529 | 22.0076 | 26.4925 | 57.175 |
| 4.329 | 23.91 | 2200 | 4.3839 | 29.5186 | 9.2777 | 21.9025 | 26.3141 | 55.5 |
| 4.3578 | 25.0 | 2300 | 4.3766 | 28.4309 | 8.9423 | 21.0945 | 25.8191 | 53.975 |
| 4.3748 | 26.09 | 2400 | 4.3707 | 28.3 | 9.0625 | 21.4946 | 25.1966 | 53.0 |
| 4.3233 | 27.17 | 2500 | 4.3639 | 28.2325 | 8.9889 | 21.6226 | 25.3677 | 54.6 |
| 4.339 | 28.26 | 2600 | 4.3578 | 28.0744 | 8.774 | 21.2509 | 25.2901 | 54.1 |
| 4.2798 | 29.35 | 2700 | 4.3532 | 27.772 | 8.7096 | 21.1687 | 25.3345 | 54.025 |
| 4.2964 | 30.43 | 2800 | 4.3465 | 27.7827 | 8.1597 | 20.8139 | 25.0152 | 54.45 |
| 4.3365 | 31.52 | 2900 | 4.3423 | 28.2039 | 8.4661 | 21.3546 | 25.6381 | 55.5 |
| 4.2385 | 32.61 | 3000 | 4.3380 | 28.1098 | 8.6483 | 21.5279 | 25.2009 | 53.95 |
| 4.2451 | 33.7 | 3100 | 4.3331 | 28.2745 | 8.5024 | 21.4456 | 25.3363 | 52.6 |
| 4.2393 | 34.78 | 3200 | 4.3289 | 28.7597 | 9.0881 | 21.6532 | 25.8954 | 52.65 |
| 4.2116 | 35.87 | 3300 | 4.3252 | 29.0463 | 9.1218 | 21.8026 | 26.2037 | 53.65 |
| 4.2175 | 36.96 | 3400 | 4.3210 | 28.8009 | 9.0188 | 21.8368 | 25.8678 | 52.85 |
| 4.2071 | 38.04 | 3500 | 4.3169 | 28.9313 | 8.9787 | 21.3554 | 26.0628 | 54.325 |
| 4.1775 | 39.13 | 3600 | 4.3132 | 28.837 | 8.9621 | 21.6342 | 26.0569 | 54.025 |
| 4.1962 | 40.22 | 3700 | 4.3086 | 28.9265 | 9.0701 | 21.588 | 26.0702 | 53.075 |
| 4.1452 | 41.3 | 3800 | 4.3060 | 29.7968 | 9.366 | 22.1712 | 26.8461 | 54.925 |
| 4.1912 | 42.39 | 3900 | 4.3018 | 29.1488 | 9.1631 | 21.6566 | 26.1476 | 54.25 |
| 4.1356 | 43.48 | 4000 | 4.2984 | 30.0138 | 9.2456 | 22.2547 | 27.2714 | 54.85 |
| 4.1272 | 44.57 | 4100 | 4.2949 | 29.8858 | 9.1498 | 22.1221 | 27.0798 | 55.65 |
| 4.1174 | 45.65 | 4200 | 4.2895 | 30.0427 | 9.2297 | 22.2602 | 27.4219 | 56.175 |
| 4.1029 | 46.74 | 4300 | 4.2885 | 29.9443 | 9.4293 | 22.1229 | 27.3496 | 56.45 |
| 4.157 | 47.83 | 4400 | 4.2851 | 30.3693 | 9.406 | 22.471 | 27.7511 | 56.775 |
| 4.1105 | 48.91 | 4500 | 4.2827 | 30.6193 | 9.7082 | 22.6169 | 27.8044 | 57.225 |
| 4.083 | 50.0 | 4600 | 4.2796 | 30.8083 | 9.9211 | 22.5228 | 28.1236 | 57.575 |
| 4.0891 | 51.09 | 4700 | 4.2764 | 30.4201 | 9.6192 | 22.4747 | 27.7514 | 57.475 |
| 4.0603 | 52.17 | 4800 | 4.2741 | 30.7777 | 9.7432 | 22.6705 | 27.5956 | 57.1 |
| 4.0472 | 53.26 | 4900 | 4.2731 | 30.8093 | 9.7916 | 22.5533 | 27.7858 | 56.15 |
| 4.0712 | 54.35 | 5000 | 4.2703 | 29.9667 | 9.5645 | 22.113 | 26.647 | 56.525 |
| 4.0658 | 55.43 | 5100 | 4.2674 | 29.5415 | 9.4291 | 21.6862 | 26.7816 | 56.55 |
| 4.059 | 56.52 | 5200 | 4.2659 | 30.2032 | 9.8875 | 22.2539 | 27.1801 | 56.925 |
| 4.0257 | 57.61 | 5300 | 4.2629 | 30.3181 | 9.8187 | 22.4266 | 27.4318 | 56.925 |
| 4.0002 | 58.7 | 5400 | 4.2608 | 29.6641 | 9.9252 | 22.1725 | 27.0764 | 56.6 |
| 4.0978 | 59.78 | 5500 | 4.2591 | 30.653 | 10.087 | 22.6956 | 27.7481 | 56.25 |
| 3.9978 | 60.87 | 5600 | 4.2568 | 29.5473 | 9.5653 | 21.6367 | 26.391 | 55.825 |
| 3.9832 | 61.96 | 5700 | 4.2552 | 30.6368 | 10.1624 | 22.7204 | 27.5866 | 57.425 |
| 3.9841 | 63.04 | 5800 | 4.2525 | 30.3045 | 9.7966 | 22.2939 | 27.0978 | 57.725 |
| 4.002 | 64.13 | 5900 | 4.2507 | 30.4468 | 9.9323 | 22.6572 | 27.0761 | 57.5 |
| 3.9705 | 65.22 | 6000 | 4.2491 | 30.1218 | 9.6921 | 22.465 | 26.3835 | 57.55 |
| 3.9863 | 66.3 | 6100 | 4.2477 | 31.3982 | 9.9901 | 22.8762 | 27.6169 | 58.975 |
| 3.9308 | 67.39 | 6200 | 4.2454 | 30.2673 | 9.5804 | 22.4474 | 26.6111 | 59.2 |
| 3.9794 | 68.48 | 6300 | 4.2449 | 30.8612 | 9.8254 | 22.8444 | 27.4979 | 58.075 |
| 3.9499 | 69.57 | 6400 | 4.2412 | 30.8366 | 9.7 | 22.4469 | 27.1621 | 59.025 |
| 3.9722 | 70.65 | 6500 | 4.2414 | 30.9625 | 9.8251 | 22.4089 | 27.4342 | 59.1 |
| 3.9125 | 71.74 | 6600 | 4.2394 | 30.5777 | 9.5514 | 22.1581 | 26.8665 | 58.75 |
| 3.9184 | 72.83 | 6700 | 4.2396 | 30.8306 | 9.5469 | 22.6571 | 27.4302 | 59.725 |
| 3.9337 | 73.91 | 6800 | 4.2377 | 30.8688 | 9.6733 | 22.3073 | 27.2943 | 58.975 |
| 3.9145 | 75.0 | 6900 | 4.2358 | 30.467 | 9.6393 | 22.225 | 27.0127 | 58.45 |
| 3.9038 | 76.09 | 7000 | 4.2353 | 30.6344 | 9.3676 | 22.1945 | 27.1871 | 59.275 |
| 3.893 | 77.17 | 7100 | 4.2335 | 31.4486 | 9.8839 | 22.735 | 27.7854 | 59.025 |
| 3.885 | 78.26 | 7200 | 4.2318 | 30.7118 | 9.8568 | 22.2546 | 27.3983 | 58.5 |
| 3.9266 | 79.35 | 7300 | 4.2304 | 31.6171 | 9.8817 | 22.6145 | 27.6888 | 59.25 |
| 3.8826 | 80.43 | 7400 | 4.2299 | 31.0976 | 9.4662 | 22.2285 | 27.817 | 58.95 |
| 3.8775 | 81.52 | 7500 | 4.2286 | 31.1379 | 10.0975 | 22.5686 | 27.883 | 59.8 |
| 3.8455 | 82.61 | 7600 | 4.2292 | 32.076 | 10.0214 | 22.8866 | 28.3828 | 59.15 |
| 3.8838 | 83.7 | 7700 | 4.2269 | 31.5696 | 9.7812 | 22.7619 | 28.2236 | 58.6 |
| 3.8425 | 84.78 | 7800 | 4.2266 | 31.1731 | 9.97 | 22.4203 | 27.4956 | 59.1 |
| 3.8766 | 85.87 | 7900 | 4.2260 | 32.3221 | 10.6243 | 23.079 | 28.9008 | 58.45 |
| 3.8217 | 86.96 | 8000 | 4.2258 | 31.9956 | 10.4201 | 23.083 | 28.4945 | 58.5 |
| 3.8319 | 88.04 | 8100 | 4.2245 | 32.0272 | 10.4673 | 23.3471 | 28.9845 | 58.35 |
| 3.8283 | 89.13 | 8200 | 4.2231 | 32.2943 | 10.2594 | 23.1819 | 29.1345 | 60.5 |
| 3.8394 | 90.22 | 8300 | 4.2221 | 31.3976 | 10.3085 | 22.6581 | 28.2494 | 59.25 |
| 3.8258 | 91.3 | 8400 | 4.2203 | 31.4433 | 10.1184 | 22.672 | 28.1236 | 58.85 |
| 3.7981 | 92.39 | 8500 | 4.2205 | 31.1313 | 10.0056 | 22.677 | 27.7409 | 59.075 |
| 3.8349 | 93.48 | 8600 | 4.2215 | 31.5779 | 10.0303 | 22.6155 | 28.0566 | 59.2 |
| 3.8225 | 94.57 | 8700 | 4.2201 | 31.9646 | 10.0643 | 22.7808 | 28.67 | 58.925 |
| 3.8145 | 95.65 | 8800 | 4.2193 | 32.0347 | 10.5103 | 23.095 | 28.6056 | 57.225 |
| 3.7771 | 96.74 | 8900 | 4.2180 | 30.8138 | 9.602 | 22.2649 | 27.7948 | 57.875 |
| 3.823 | 97.83 | 9000 | 4.2168 | 31.3785 | 9.7046 | 22.3877 | 28.2578 | 58.675 |
| 3.7701 | 98.91 | 9100 | 4.2169 | 31.4511 | 9.9183 | 22.6645 | 28.1932 | 59.0 |
| 3.773 | 100.0 | 9200 | 4.2169 | 31.7392 | 9.9669 | 22.5894 | 28.218 | 58.15 |
| 3.7661 | 101.09 | 9300 | 4.2161 | 31.5507 | 9.8992 | 22.4602 | 28.3357 | 58.375 |
| 3.7875 | 102.17 | 9400 | 4.2163 | 31.5145 | 9.5173 | 22.321 | 27.8613 | 58.375 |
| 3.7659 | 103.26 | 9500 | 4.2152 | 31.2967 | 9.8797 | 22.6247 | 28.0317 | 57.925 |
| 3.7576 | 104.35 | 9600 | 4.2139 | 31.5739 | 9.8376 | 22.7561 | 28.2318 | 58.4 |
| 3.7784 | 105.43 | 9700 | 4.2144 | 32.2269 | 10.2299 | 22.6582 | 28.6249 | 58.425 |
| 3.7356 | 106.52 | 9800 | 4.2139 | 32.3031 | 10.1505 | 22.7079 | 28.9052 | 58.475 |
| 3.7799 | 107.61 | 9900 | 4.2124 | 31.1334 | 9.1481 | 22.1297 | 27.5951 | 58.6 |
| 3.7269 | 108.7 | 10000 | 4.2122 | 31.6957 | 9.2874 | 22.4867 | 28.225 | 58.4 |
| 3.719 | 109.78 | 10100 | 4.2108 | 31.477 | 10.0245 | 22.4703 | 28.1316 | 58.075 |
| 3.7411 | 110.87 | 10200 | 4.2112 | 31.4165 | 9.9791 | 22.4396 | 28.3068 | 58.275 |
| 3.7135 | 111.96 | 10300 | 4.2122 | 31.4924 | 9.9864 | 22.496 | 28.2414 | 57.8 |
| 3.7317 | 113.04 | 10400 | 4.2120 | 31.6599 | 10.1605 | 22.5322 | 28.3045 | 59.075 |
| 3.7113 | 114.13 | 10500 | 4.2127 | 31.6814 | 10.106 | 22.4311 | 28.5808 | 59.5 |
| 3.7063 | 115.22 | 10600 | 4.2132 | 31.2448 | 10.0006 | 22.5549 | 28.4686 | 57.775 |
| 3.681 | 116.3 | 10700 | 4.2123 | 31.1739 | 10.0533 | 22.2954 | 28.0822 | 58.35 |
| 3.7369 | 117.39 | 10800 | 4.2118 | 31.8541 | 10.1452 | 22.7607 | 28.9501 | 58.8 |
| 3.6645 | 118.48 | 10900 | 4.2122 | 31.7128 | 9.8554 | 22.4464 | 28.5888 | 58.375 |
| 3.6766 | 119.57 | 11000 | 4.2118 | 31.1492 | 9.8058 | 22.0978 | 28.1827 | 58.725 |
| 3.6915 | 120.65 | 11100 | 4.2110 | 31.1679 | 9.5755 | 22.1391 | 28.0886 | 58.375 |
| 3.6702 | 121.74 | 11200 | 4.2129 | 31.0682 | 9.7375 | 22.0118 | 28.2189 | 59.15 |
| 3.6946 | 122.83 | 11300 | 4.2118 | 31.6134 | 9.5918 | 22.2506 | 28.5343 | 59.175 |
| 3.6713 | 123.91 | 11400 | 4.2110 | 31.3585 | 9.4211 | 22.1884 | 27.8744 | 59.05 |
| 3.6694 | 125.0 | 11500 | 4.2126 | 32.0058 | 9.6453 | 22.3911 | 28.6928 | 59.55 |
| 3.6585 | 126.09 | 11600 | 4.2123 | 31.7679 | 9.7101 | 22.2378 | 28.4985 | 59.2 |
| 3.6857 | 127.17 | 11700 | 4.2118 | 31.7766 | 10.0375 | 22.5097 | 28.8104 | 59.6 |
| 3.6338 | 128.26 | 11800 | 4.2126 | 32.2508 | 10.2617 | 22.6745 | 29.0714 | 59.075 |
| 3.6412 | 129.35 | 11900 | 4.2135 | 32.0515 | 10.0905 | 22.7015 | 29.0028 | 58.9 |
| 3.6594 | 130.43 | 12000 | 4.2122 | 32.7784 | 10.351 | 23.0969 | 29.6672 | 59.525 |
| 3.6571 | 131.52 | 12100 | 4.2120 | 32.3165 | 10.329 | 22.8445 | 29.2886 | 59.5 |
| 3.6002 | 132.61 | 12200 | 4.2120 | 32.5553 | 10.0875 | 22.6064 | 29.1046 | 59.425 |
| 3.6621 | 133.7 | 12300 | 4.2126 | 31.7637 | 9.9785 | 22.5716 | 28.7173 | 59.275 |
| 3.6651 | 134.78 | 12400 | 4.2122 | 31.7568 | 9.7503 | 22.3876 | 28.6015 | 59.6 |
| 3.6127 | 135.87 | 12500 | 4.2123 | 31.5708 | 9.5203 | 21.9951 | 28.2082 | 58.75 |
| 3.6544 | 136.96 | 12600 | 4.2124 | 32.0767 | 9.8955 | 22.2724 | 28.4755 | 59.5 |
| 3.5994 | 138.04 | 12700 | 4.2125 | 31.8523 | 9.9159 | 22.2978 | 28.8159 | 59.175 |
| 3.6174 | 139.13 | 12800 | 4.2114 | 32.2165 | 9.784 | 22.4377 | 28.5603 | 59.1 |
| 3.6122 | 140.22 | 12900 | 4.2115 | 32.0247 | 9.6881 | 22.3116 | 28.61 | 58.9 |
| 3.6174 | 141.3 | 13000 | 4.2116 | 31.9549 | 9.5924 | 22.3997 | 28.9145 | 59.15 |
| 3.5965 | 142.39 | 13100 | 4.2113 | 32.6173 | 10.4241 | 22.8644 | 29.3928 | 60.9 |
| 3.6076 | 143.48 | 13200 | 4.2112 | 33.0058 | 10.6417 | 23.0297 | 29.8375 | 61.0 |
| 3.6013 | 144.57 | 13300 | 4.2105 | 33.005 | 10.5398 | 22.9758 | 29.7266 | 60.325 |
| 3.6181 | 145.65 | 13400 | 4.2117 | 31.0558 | 9.4714 | 21.9025 | 27.9627 | 60.025 |
| 3.6288 | 146.74 | 13500 | 4.2107 | 32.7196 | 10.4991 | 22.9182 | 29.6586 | 60.25 |
| 3.5879 | 147.83 | 13600 | 4.2091 | 32.6755 | 10.3936 | 22.9559 | 29.5314 | 60.425 |
| 3.591 | 148.91 | 13700 | 4.2101 | 33.2956 | 10.6616 | 22.8509 | 29.5237 | 60.4 |
| 3.5658 | 150.0 | 13800 | 4.2116 | 33.4712 | 10.3725 | 23.1449 | 30.0987 | 60.2 |
| 3.574 | 151.09 | 13900 | 4.2115 | 33.5427 | 10.5852 | 22.9671 | 29.8456 | 60.175 |
| 3.5795 | 152.17 | 14000 | 4.2115 | 33.4387 | 10.5744 | 23.4785 | 30.0494 | 60.15 |
| 3.5728 | 153.26 | 14100 | 4.2119 | 33.1244 | 10.0308 | 22.8377 | 29.7725 | 60.775 |
| 3.5441 | 154.35 | 14200 | 4.2121 | 32.9226 | 9.9625 | 22.9013 | 29.6004 | 59.7 |
| 3.5236 | 155.43 | 14300 | 4.2114 | 32.3717 | 9.9122 | 22.78 | 28.8305 | 59.725 |
| 3.5679 | 156.52 | 14400 | 4.2120 | 33.6347 | 10.7457 | 23.5191 | 30.1966 | 60.65 |
| 3.5574 | 157.61 | 14500 | 4.2119 | 33.4821 | 10.986 | 23.3567 | 30.1972 | 60.1 |
| 3.5935 | 158.7 | 14600 | 4.2115 | 32.7255 | 10.2639 | 23.1617 | 29.8065 | 60.35 |
| 3.5316 | 159.78 | 14700 | 4.2118 | 32.8033 | 10.0216 | 22.7099 | 29.3968 | 60.525 |
| 3.5618 | 160.87 | 14800 | 4.2118 | 32.6244 | 10.7228 | 22.8601 | 29.3613 | 60.8 |
| 3.545 | 161.96 | 14900 | 4.2132 | 32.6231 | 10.0711 | 22.4686 | 29.5341 | 59.675 |
| 3.5466 | 163.04 | 15000 | 4.2129 | 32.7601 | 10.3376 | 22.2373 | 29.3588 | 59.4 |
| 3.5594 | 164.13 | 15100 | 4.2127 | 32.4645 | 10.5106 | 22.6804 | 29.6229 | 60.375 |
| 3.4839 | 165.22 | 15200 | 4.2130 | 32.1799 | 10.0462 | 22.5474 | 29.1419 | 59.75 |
| 3.5492 | 166.3 | 15300 | 4.2133 | 32.6831 | 10.5307 | 22.8539 | 29.6406 | 59.875 |
| 3.5053 | 167.39 | 15400 | 4.2133 | 32.8614 | 10.0344 | 23.0577 | 29.5848 | 60.975 |
| 3.5427 | 168.48 | 15500 | 4.2140 | 32.7897 | 10.178 | 22.6287 | 29.4839 | 60.1 |
| 3.5495 | 169.57 | 15600 | 4.2126 | 33.1428 | 10.2866 | 22.9377 | 29.6883 | 60.525 |
| 3.5245 | 170.65 | 15700 | 4.2116 | 32.9892 | 10.1082 | 23.1528 | 29.576 | 60.675 |
| 3.5121 | 171.74 | 15800 | 4.2131 | 33.2677 | 10.5916 | 23.3002 | 29.8222 | 59.975 |
| 3.5559 | 172.83 | 15900 | 4.2126 | 32.5155 | 9.9557 | 22.6846 | 29.1171 | 60.85 |
| 3.4758 | 173.91 | 16000 | 4.2133 | 32.374 | 9.9127 | 22.4816 | 29.2839 | 60.9 |
| 3.5148 | 175.0 | 16100 | 4.2125 | 32.5611 | 9.8266 | 22.5993 | 28.9821 | 61.1 |
| 3.5093 | 176.09 | 16200 | 4.2132 | 32.1092 | 9.6761 | 22.3612 | 28.7771 | 60.05 |
| 3.5248 | 177.17 | 16300 | 4.2143 | 32.2696 | 9.6471 | 22.2791 | 28.9759 | 60.925 |
| 3.4807 | 178.26 | 16400 | 4.2139 | 31.9593 | 9.3878 | 22.0643 | 28.5392 | 61.3 |
| 3.5138 | 179.35 | 16500 | 4.2144 | 32.0284 | 9.8303 | 22.5724 | 29.0168 | 59.95 |
| 3.4834 | 180.43 | 16600 | 4.2153 | 32.3203 | 9.5741 | 22.4998 | 28.8014 | 60.5 |
| 3.4701 | 181.52 | 16700 | 4.2156 | 31.7243 | 9.544 | 22.1355 | 28.2238 | 61.275 |
| 3.5501 | 182.61 | 16800 | 4.2152 | 32.519 | 9.9372 | 22.3881 | 28.8347 | 61.45 |
| 3.4789 | 183.7 | 16900 | 4.2148 | 32.3324 | 9.7556 | 22.2474 | 28.7559 | 61.575 |
| 3.5172 | 184.78 | 17000 | 4.2156 | 32.161 | 9.4847 | 22.2358 | 28.8895 | 60.95 |
| 3.4681 | 185.87 | 17100 | 4.2167 | 32.6524 | 9.7116 | 22.8415 | 29.0798 | 60.575 |
| 3.4936 | 186.96 | 17200 | 4.2173 | 32.533 | 9.9478 | 22.7379 | 29.1301 | 61.575 |
| 3.4664 | 188.04 | 17300 | 4.2165 | 32.4549 | 10.1094 | 22.7097 | 28.7992 | 61.4 |
| 3.4599 | 189.13 | 17400 | 4.2164 | 32.6665 | 10.3463 | 22.7678 | 29.308 | 61.575 |
| 3.4724 | 190.22 | 17500 | 4.2175 | 32.4146 | 10.1782 | 22.7414 | 29.3546 | 60.75 |
| 3.4923 | 191.3 | 17600 | 4.2163 | 32.3624 | 9.8306 | 22.7311 | 28.7497 | 59.825 |
| 3.4771 | 192.39 | 17700 | 4.2161 | 33.1427 | 10.429 | 23.462 | 29.6967 | 60.35 |
| 3.4737 | 193.48 | 17800 | 4.2168 | 31.6894 | 9.7073 | 22.527 | 28.3711 | 60.65 |
| 3.4307 | 194.57 | 17900 | 4.2182 | 32.4769 | 10.1673 | 22.8356 | 29.4565 | 60.75 |
| 3.4843 | 195.65 | 18000 | 4.2168 | 32.5461 | 10.2855 | 22.8587 | 29.1242 | 60.825 |
| 3.4479 | 196.74 | 18100 | 4.2170 | 32.9284 | 10.2293 | 23.2679 | 29.8067 | 61.075 |
| 3.489 | 197.83 | 18200 | 4.2180 | 32.9561 | 10.481 | 23.2807 | 29.5499 | 61.25 |
| 3.4596 | 198.91 | 18300 | 4.2179 | 33.1418 | 10.2768 | 22.8762 | 30.0241 | 61.2 |
| 3.4552 | 200.0 | 18400 | 4.2171 | 33.5524 | 10.5969 | 23.5734 | 30.1587 | 61.525 |
| 3.4699 | 201.09 | 18500 | 4.2176 | 33.1941 | 10.3296 | 23.1962 | 30.1624 | 61.45 |
| 3.4281 | 202.17 | 18600 | 4.2187 | 33.3715 | 10.1919 | 23.1843 | 30.3192 | 61.55 |
| 3.4561 | 203.26 | 18700 | 4.2186 | 32.5288 | 9.9299 | 22.6515 | 29.2853 | 61.575 |
| 3.446 | 204.35 | 18800 | 4.2188 | 33.4268 | 10.7152 | 23.6525 | 30.4668 | 61.575 |
| 3.4259 | 205.43 | 18900 | 4.2189 | 33.1715 | 10.198 | 22.9264 | 29.8387 | 61.25 |
| 3.4497 | 206.52 | 19000 | 4.2192 | 33.3472 | 10.5372 | 23.0833 | 30.2925 | 61.25 |
| 3.4674 | 207.61 | 19100 | 4.2192 | 32.7581 | 10.2502 | 23.0554 | 29.6639 | 61.175 |
| 3.4521 | 208.7 | 19200 | 4.2186 | 33.7883 | 10.8639 | 23.4038 | 30.6114 | 61.475 |
| 3.443 | 209.78 | 19300 | 4.2194 | 33.029 | 10.6622 | 22.9009 | 29.9762 | 61.675 |
| 3.4356 | 210.87 | 19400 | 4.2199 | 32.7229 | 9.9204 | 22.5445 | 29.5517 | 61.3 |
| 3.4198 | 211.96 | 19500 | 4.2208 | 33.5216 | 10.3836 | 22.9423 | 29.9006 | 61.625 |
| 3.4417 | 213.04 | 19600 | 4.2210 | 32.7772 | 10.3206 | 22.9031 | 29.3774 | 61.625 |
| 3.4348 | 214.13 | 19700 | 4.2214 | 31.9959 | 10.0821 | 22.2012 | 28.6722 | 61.375 |
| 3.4528 | 215.22 | 19800 | 4.2213 | 32.5434 | 10.2807 | 22.6512 | 29.1705 | 61.65 |
| 3.3955 | 216.3 | 19900 | 4.2220 | 32.9148 | 10.5869 | 22.8107 | 29.4975 | 61.675 |
| 3.4437 | 217.39 | 20000 | 4.2227 | 32.8879 | 10.4334 | 22.6863 | 29.6794 | 61.125 |
| 3.4374 | 218.48 | 20100 | 4.2225 | 32.1453 | 9.9115 | 22.2936 | 28.9428 | 61.1 |
| 3.429 | 219.57 | 20200 | 4.2230 | 33.0805 | 10.5792 | 22.9417 | 29.9572 | 61.55 |
| 3.4089 | 220.65 | 20300 | 4.2239 | 32.0499 | 10.1613 | 22.6264 | 28.9217 | 61.65 |
| 3.418 | 221.74 | 20400 | 4.2237 | 32.6069 | 10.5032 | 22.8024 | 29.5804 | 61.275 |
| 3.4274 | 222.83 | 20500 | 4.2235 | 31.8624 | 10.2513 | 22.2816 | 28.8234 | 61.2 |
| 3.4156 | 223.91 | 20600 | 4.2242 | 32.2666 | 10.4604 | 22.5607 | 29.0666 | 61.025 |
| 3.4135 | 225.0 | 20700 | 4.2247 | 31.3445 | 10.0898 | 22.0664 | 28.5988 | 60.5 |
| 3.4283 | 226.09 | 20800 | 4.2245 | 31.47 | 10.0171 | 21.9423 | 28.4329 | 61.175 |
| 3.4048 | 227.17 | 20900 | 4.2242 | 31.93 | 10.4874 | 22.5287 | 29.1292 | 60.7 |
| 3.3925 | 228.26 | 21000 | 4.2243 | 32.3618 | 10.0902 | 22.6176 | 29.2689 | 60.775 |
| 3.4371 | 229.35 | 21100 | 4.2245 | 32.174 | 10.0424 | 22.516 | 28.9855 | 60.775 |
| 3.3789 | 230.43 | 21200 | 4.2239 | 33.0237 | 10.8644 | 23.3016 | 29.916 | 61.275 |
| 3.4109 | 231.52 | 21300 | 4.2248 | 32.88 | 10.6969 | 22.8426 | 30.0468 | 60.8 |
| 3.4128 | 232.61 | 21400 | 4.2257 | 32.6551 | 10.6032 | 22.6787 | 29.5307 | 60.725 |
| 3.3941 | 233.7 | 21500 | 4.2266 | 31.9296 | 10.0718 | 22.5 | 28.9451 | 60.75 |
| 3.3734 | 234.78 | 21600 | 4.2266 | 32.4862 | 10.0754 | 22.9705 | 29.2087 | 61.225 |
| 3.4144 | 235.87 | 21700 | 4.2269 | 32.1757 | 10.1225 | 22.6842 | 29.1731 | 60.75 |
| 3.3986 | 236.96 | 21800 | 4.2273 | 32.3403 | 10.481 | 22.7186 | 29.3236 | 60.725 |
| 3.3898 | 238.04 | 21900 | 4.2275 | 32.4957 | 10.4595 | 22.8682 | 29.6414 | 60.8 |
| 3.4031 | 239.13 | 22000 | 4.2275 | 32.4625 | 10.3807 | 22.7121 | 29.5187 | 60.725 |
| 3.3836 | 240.22 | 22100 | 4.2274 | 31.8107 | 10.2075 | 22.4437 | 28.9719 | 60.725 |
| 3.4084 | 241.3 | 22200 | 4.2272 | 32.3374 | 10.1027 | 22.5784 | 29.2192 | 61.2 |
| 3.3805 | 242.39 | 22300 | 4.2276 | 32.2783 | 10.375 | 22.7825 | 29.3762 | 61.2 |
| 3.3815 | 243.48 | 22400 | 4.2277 | 32.3337 | 10.3561 | 22.8489 | 29.4485 | 61.15 |
| 3.418 | 244.57 | 22500 | 4.2273 | 32.333 | 10.2841 | 22.8481 | 29.403 | 61.125 |
| 3.369 | 245.65 | 22600 | 4.2277 | 32.038 | 10.3555 | 22.6939 | 29.242 | 60.7 |
| 3.4305 | 246.74 | 22700 | 4.2276 | 32.7594 | 10.6867 | 23.0632 | 29.5852 | 61.575 |
| 3.3928 | 247.83 | 22800 | 4.2282 | 32.4979 | 10.5013 | 22.7875 | 29.4793 | 61.55 |
| 3.3676 | 248.91 | 22900 | 4.2286 | 32.6014 | 10.5697 | 22.8526 | 29.7876 | 61.6 |
| 3.3918 | 250.0 | 23000 | 4.2288 | 32.4746 | 10.6321 | 22.586 | 29.6323 | 60.675 |
| 3.395 | 251.09 | 23100 | 4.2294 | 32.4704 | 10.5456 | 22.6785 | 29.5769 | 60.725 |
| 3.363 | 252.17 | 23200 | 4.2296 | 32.2721 | 10.2554 | 22.5303 | 29.4554 | 60.725 |
| 3.3884 | 253.26 | 23300 | 4.2298 | 32.2746 | 10.434 | 22.6686 | 29.4486 | 60.725 |
| 3.3891 | 254.35 | 23400 | 4.2296 | 32.5382 | 10.5112 | 23.0243 | 29.8106 | 61.125 |
| 3.3679 | 255.43 | 23500 | 4.2296 | 32.4656 | 10.5631 | 22.9952 | 29.6832 | 61.125 |
| 3.4078 | 256.52 | 23600 | 4.2297 | 32.3377 | 10.4791 | 22.8362 | 29.6212 | 60.7 |
| 3.3642 | 257.61 | 23700 | 4.2302 | 32.2519 | 10.5551 | 22.6957 | 29.3763 | 61.075 |
| 3.3745 | 258.7 | 23800 | 4.2300 | 31.9413 | 10.4752 | 22.7447 | 29.1 | 61.175 |
| 3.3844 | 259.78 | 23900 | 4.2305 | 32.237 | 10.5492 | 23.0342 | 29.4079 | 61.65 |
| 3.3501 | 260.87 | 24000 | 4.2302 | 31.9797 | 10.4631 | 22.9089 | 29.332 | 61.65 |
| 3.4259 | 261.96 | 24100 | 4.2304 | 31.7515 | 10.3564 | 22.5923 | 29.1275 | 61.175 |
| 3.3578 | 263.04 | 24200 | 4.2309 | 32.0462 | 10.3883 | 22.9083 | 29.3591 | 61.65 |
| 3.39 | 264.13 | 24300 | 4.2308 | 31.9307 | 10.3057 | 22.8501 | 29.2547 | 61.65 |
| 3.3805 | 265.22 | 24400 | 4.2312 | 32.1836 | 10.3577 | 23.1293 | 29.4325 | 61.65 |
| 3.3667 | 266.3 | 24500 | 4.2309 | 32.1545 | 10.301 | 23.0613 | 29.343 | 61.65 |
| 3.3977 | 267.39 | 24600 | 4.2313 | 31.9549 | 10.2824 | 23.0397 | 29.2684 | 61.65 |
| 3.3434 | 268.48 | 24700 | 4.2314 | 31.9432 | 10.167 | 23.098 | 29.2669 | 61.65 |
| 3.3577 | 269.57 | 24800 | 4.2316 | 31.9679 | 10.3075 | 23.0715 | 29.3077 | 61.65 |
| 3.3781 | 270.65 | 24900 | 4.2317 | 32.2292 | 10.2988 | 23.0879 | 29.4171 | 61.65 |
| 3.3514 | 271.74 | 25000 | 4.2321 | 32.1653 | 10.4198 | 23.0554 | 29.3574 | 61.65 |
| 3.3935 | 272.83 | 25100 | 4.2320 | 32.134 | 10.2884 | 22.9444 | 29.2272 | 61.65 |
| 3.3447 | 273.91 | 25200 | 4.2324 | 32.3498 | 10.4505 | 23.0734 | 29.4438 | 61.65 |
| 3.3872 | 275.0 | 25300 | 4.2323 | 32.1743 | 10.4152 | 22.9462 | 29.3187 | 61.65 |
| 3.3755 | 276.09 | 25400 | 4.2324 | 32.2311 | 10.372 | 22.9563 | 29.3285 | 61.65 |
| 3.3832 | 277.17 | 25500 | 4.2323 | 32.0289 | 10.2105 | 22.9636 | 29.1449 | 61.65 |
| 3.3367 | 278.26 | 25600 | 4.2321 | 32.3053 | 10.2512 | 23.0834 | 29.4111 | 61.65 |
| 3.3767 | 279.35 | 25700 | 4.2323 | 32.4099 | 10.2793 | 23.0137 | 29.4049 | 61.65 |
| 3.3989 | 280.43 | 25800 | 4.2324 | 32.3471 | 10.4356 | 23.0179 | 29.4453 | 61.65 |
| 3.3625 | 281.52 | 25900 | 4.2325 | 32.2213 | 10.4363 | 22.9573 | 29.2886 | 61.65 |
| 3.3352 | 282.61 | 26000 | 4.2328 | 32.713 | 10.7489 | 23.2367 | 29.8725 | 61.65 |
| 3.3899 | 283.7 | 26100 | 4.2328 | 32.2145 | 10.2347 | 22.7896 | 29.2107 | 61.65 |
| 3.359 | 284.78 | 26200 | 4.2327 | 32.2466 | 10.4236 | 22.916 | 29.4227 | 61.65 |
| 3.3866 | 285.87 | 26300 | 4.2327 | 32.2466 | 10.4236 | 22.916 | 29.4227 | 61.65 |
| 3.3845 | 286.96 | 26400 | 4.2328 | 32.2466 | 10.4236 | 22.916 | 29.4227 | 61.65 |
| 3.3486 | 288.04 | 26500 | 4.2328 | 32.595 | 10.5041 | 23.1214 | 29.69 | 61.65 |
| 3.3807 | 289.13 | 26600 | 4.2328 | 32.759 | 10.566 | 23.3108 | 29.8555 | 61.65 |
| 3.3676 | 290.22 | 26700 | 4.2330 | 32.759 | 10.566 | 23.3108 | 29.8555 | 61.65 |
| 3.3361 | 291.3 | 26800 | 4.2332 | 32.759 | 10.566 | 23.3108 | 29.8555 | 61.65 |
| 3.3897 | 292.39 | 26900 | 4.2331 | 32.7251 | 10.566 | 23.3108 | 29.7958 | 61.65 |
| 3.3579 | 293.48 | 27000 | 4.2331 | 32.759 | 10.566 | 23.3108 | 29.8555 | 61.65 |
| 3.3809 | 294.57 | 27100 | 4.2331 | 32.759 | 10.566 | 23.3108 | 29.8555 | 61.65 |
| 3.3885 | 295.65 | 27200 | 4.2331 | 32.759 | 10.566 | 23.3108 | 29.8555 | 61.65 |
| 3.3173 | 296.74 | 27300 | 4.2331 | 32.7156 | 10.5699 | 23.2759 | 29.7903 | 61.65 |
| 3.3648 | 297.83 | 27400 | 4.2331 | 32.7156 | 10.5699 | 23.2759 | 29.7903 | 61.65 |
| 3.3793 | 298.91 | 27500 | 4.2331 | 32.7156 | 10.5699 | 23.2759 | 29.7903 | 61.65 |
| 3.3604 | 300.0 | 27600 | 4.2331 | 32.7156 | 10.5699 | 23.2759 | 29.7903 | 61.65 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
vai6hav/wav2vec2-large-xls-r-300m-turkish-colab
|
vai6hav
| 2022-05-25T16:14:45Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-06T18:30:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
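These bullets translate roughly into the following `TrainingArguments` (a sketch; the output directory and save/eval cadence are assumptions):
```python
from transformers import TrainingArguments

# Sketch of how the hyperparameters above map onto TrainingArguments.
training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-turkish-colab",  # assumed
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size: 16 * 2 = 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=30,
    fp16=True,                       # "Native AMP" mixed precision
)
```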
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
mikeadimech/bart-qmsum-meeting-summarization
|
mikeadimech
| 2022-05-25T16:14:18Z | 6 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:yawnick/QMSum",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-27T11:54:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-qmsum-meeting-summarization
results: []
datasets:
- yawnick/QMSum
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-qmsum-meeting-summarization
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on the QMSum dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3354
- Rouge1: 39.5539
- Rouge2: 12.1134
- Rougel: 23.9163
- Rougelsum: 36.0299
- Gen Len: 117.225
## Model description
More information needed
## Intended uses & limitations
More information needed
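A minimal generation sketch with the lower-level API follows; the generation settings here are illustrative, not necessarily those used to produce the scores below:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "mikeadimech/bart-qmsum-meeting-summarization"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

transcript = "Replace this with the meeting transcript to summarise."  # placeholder
inputs = tokenizer(transcript, truncation=True, max_length=1024, return_tensors="pt")
summary_ids = model.generate(**inputs, num_beams=4, max_length=256)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```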
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 200
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 5.5573 | 2.17 | 100 | 5.4074 | 23.6282 | 4.1122 | 14.584 | 21.2263 | 84.75 |
| 5.4721 | 4.35 | 200 | 5.2899 | 24.61 | 4.272 | 15.2096 | 22.2997 | 87.2 |
| 5.3407 | 6.52 | 300 | 5.1360 | 25.8272 | 4.3314 | 15.9926 | 23.3416 | 87.95 |
| 5.1527 | 8.7 | 400 | 4.9751 | 27.7207 | 5.31 | 16.7055 | 24.8357 | 88.35 |
| 5.0058 | 10.87 | 500 | 4.8372 | 30.1847 | 6.8615 | 18.934 | 27.2424 | 89.95 |
| 4.8807 | 13.04 | 600 | 4.7488 | 33.1208 | 9.1784 | 20.655 | 30.1198 | 101.3 |
| 4.7931 | 15.22 | 700 | 4.6891 | 33.2266 | 8.4253 | 20.0334 | 30.4093 | 108.925 |
| 4.7272 | 17.39 | 800 | 4.6467 | 35.0475 | 9.326 | 21.0655 | 31.8413 | 111.7 |
| 4.6904 | 19.57 | 900 | 4.6102 | 34.869 | 9.6046 | 21.395 | 32.4346 | 115.05 |
| 4.6547 | 21.74 | 1000 | 4.5829 | 36.3392 | 10.9936 | 22.1524 | 33.6863 | 119.875 |
| 4.594 | 23.91 | 1100 | 4.5602 | 35.9717 | 10.3827 | 21.6118 | 32.8302 | 119.5 |
| 4.5714 | 26.09 | 1200 | 4.5424 | 36.3656 | 10.6282 | 22.2187 | 33.6494 | 118.0 |
| 4.542 | 28.26 | 1300 | 4.5256 | 36.7386 | 10.615 | 22.2487 | 34.1927 | 115.675 |
| 4.5092 | 30.43 | 1400 | 4.5116 | 37.1597 | 10.7751 | 22.6747 | 34.396 | 118.55 |
| 4.5031 | 32.61 | 1500 | 4.4981 | 37.6108 | 10.9732 | 22.8342 | 34.6833 | 117.125 |
| 4.4682 | 34.78 | 1600 | 4.4875 | 37.5057 | 11.1328 | 22.8973 | 34.7114 | 117.65 |
| 4.4387 | 36.96 | 1700 | 4.4775 | 38.1278 | 11.3597 | 23.1307 | 35.1869 | 115.65 |
| 4.4085 | 39.13 | 1800 | 4.4682 | 37.9578 | 11.4355 | 23.1149 | 35.4961 | 119.6 |
| 4.4166 | 41.3 | 1900 | 4.4592 | 38.1467 | 11.3208 | 23.045 | 35.0824 | 120.05 |
| 4.3971 | 43.48 | 2000 | 4.4517 | 37.9922 | 11.5071 | 23.3983 | 34.6918 | 114.425 |
| 4.3638 | 45.65 | 2100 | 4.4438 | 38.1666 | 11.4985 | 23.5518 | 35.1484 | 117.2 |
| 4.3522 | 47.83 | 2200 | 4.4377 | 37.7572 | 11.3984 | 23.4437 | 35.0453 | 113.725 |
| 4.3398 | 50.0 | 2300 | 4.4320 | 38.5833 | 11.4575 | 23.6411 | 35.3437 | 116.125 |
| 4.3341 | 52.17 | 2400 | 4.4247 | 38.2705 | 12.0374 | 23.5807 | 34.9985 | 110.8 |
| 4.3024 | 54.35 | 2500 | 4.4201 | 39.0206 | 12.2041 | 23.4394 | 35.6291 | 114.5 |
| 4.3117 | 56.52 | 2600 | 4.4147 | 38.6555 | 12.1079 | 23.5655 | 35.5287 | 111.325 |
| 4.2659 | 58.7 | 2700 | 4.4107 | 39.2235 | 12.025 | 23.934 | 36.2243 | 113.3 |
| 4.2946 | 60.87 | 2800 | 4.4055 | 39.0301 | 12.1833 | 23.8999 | 36.0487 | 110.325 |
| 4.2431 | 63.04 | 2900 | 4.4009 | 39.0498 | 12.3215 | 23.9686 | 36.0277 | 112.775 |
| 4.2439 | 65.22 | 3000 | 4.3968 | 38.8786 | 12.0985 | 23.8308 | 35.8575 | 115.175 |
| 4.2244 | 67.39 | 3100 | 4.3922 | 38.7614 | 12.1721 | 23.7736 | 35.6744 | 113.55 |
| 4.235 | 69.57 | 3200 | 4.3895 | 38.6858 | 11.3994 | 23.6392 | 35.3456 | 114.125 |
| 4.2064 | 71.74 | 3300 | 4.3859 | 39.0258 | 12.0435 | 24.2528 | 35.8378 | 113.5 |
| 4.1934 | 73.91 | 3400 | 4.3835 | 39.0467 | 11.5556 | 23.6704 | 35.5643 | 111.5 |
| 4.1859 | 76.09 | 3500 | 4.3800 | 38.776 | 11.729 | 24.1254 | 35.3894 | 112.9 |
| 4.1762 | 78.26 | 3600 | 4.3775 | 38.9465 | 11.9112 | 23.8123 | 35.5453 | 114.125 |
| 4.1848 | 80.43 | 3700 | 4.3744 | 39.2783 | 11.6539 | 23.8236 | 35.8465 | 110.225 |
| 4.1386 | 82.61 | 3800 | 4.3730 | 38.8894 | 11.4784 | 23.7534 | 35.5464 | 113.15 |
| 4.1483 | 84.78 | 3900 | 4.3710 | 39.2734 | 12.0285 | 23.8171 | 35.6884 | 115.95 |
| 4.1428 | 86.96 | 4000 | 4.3688 | 39.6134 | 12.0616 | 23.7454 | 36.0363 | 113.375 |
| 4.133 | 89.13 | 4100 | 4.3663 | 38.935 | 11.4781 | 23.8766 | 35.4061 | 114.15 |
| 4.1211 | 91.3 | 4200 | 4.3648 | 39.1488 | 11.8399 | 23.9935 | 35.3107 | 113.975 |
| 4.1076 | 93.48 | 4300 | 4.3650 | 38.9764 | 11.9963 | 23.4994 | 35.7214 | 116.25 |
| 4.121 | 95.65 | 4400 | 4.3597 | 38.9418 | 11.8416 | 24.0272 | 35.6597 | 111.325 |
| 4.0936 | 97.83 | 4500 | 4.3602 | 39.266 | 12.5616 | 24.2046 | 36.1883 | 114.275 |
| 4.0841 | 100.0 | 4600 | 4.3588 | 39.4659 | 12.2132 | 24.0521 | 36.249 | 115.475 |
| 4.0768 | 102.17 | 4700 | 4.3578 | 39.4167 | 12.0587 | 24.025 | 35.9668 | 114.375 |
| 4.0711 | 104.35 | 4800 | 4.3541 | 39.6943 | 12.1095 | 24.0925 | 36.3496 | 115.65 |
| 4.072 | 106.52 | 4900 | 4.3539 | 40.2024 | 12.4618 | 24.2863 | 36.8844 | 113.475 |
| 4.0646 | 108.7 | 5000 | 4.3540 | 39.4299 | 11.8085 | 23.686 | 36.0454 | 113.975 |
| 4.0508 | 110.87 | 5100 | 4.3517 | 39.9217 | 11.9379 | 24.2299 | 36.6362 | 115.5 |
| 4.0549 | 113.04 | 5200 | 4.3498 | 40.3496 | 12.2558 | 24.0271 | 36.9715 | 112.5 |
| 4.0428 | 115.22 | 5300 | 4.3497 | 40.1349 | 12.0628 | 24.0622 | 36.9169 | 113.95 |
| 4.0391 | 117.39 | 5400 | 4.3480 | 40.1209 | 12.3587 | 24.3456 | 36.8411 | 116.025 |
| 4.0195 | 119.57 | 5500 | 4.3474 | 39.5209 | 12.1325 | 24.2622 | 36.4357 | 111.975 |
| 4.0054 | 121.74 | 5600 | 4.3468 | 40.2885 | 12.4453 | 24.2373 | 36.932 | 117.375 |
| 4.0286 | 123.91 | 5700 | 4.3465 | 39.3943 | 11.8399 | 23.9786 | 35.991 | 116.475 |
| 4.005 | 126.09 | 5800 | 4.3459 | 38.7442 | 11.7408 | 23.8948 | 35.3673 | 117.625 |
| 3.991 | 128.26 | 5900 | 4.3444 | 39.6276 | 12.1549 | 23.9542 | 36.3832 | 115.675 |
| 4.0137 | 130.43 | 6000 | 4.3427 | 39.8331 | 12.2687 | 24.187 | 36.6144 | 115.475 |
| 3.9755 | 132.61 | 6100 | 4.3438 | 39.1907 | 12.1033 | 24.2339 | 35.9126 | 114.525 |
| 4.0134 | 134.78 | 6200 | 4.3422 | 39.4298 | 11.862 | 24.0847 | 35.5744 | 115.025 |
| 3.9935 | 136.96 | 6300 | 4.3416 | 39.4158 | 11.6968 | 23.9636 | 35.8155 | 114.35 |
| 3.9606 | 139.13 | 6400 | 4.3409 | 39.1239 | 11.7046 | 23.6846 | 36.0431 | 114.775 |
| 3.9834 | 141.3 | 6500 | 4.3404 | 39.6375 | 12.2746 | 24.2636 | 36.1425 | 116.175 |
| 3.9687 | 143.48 | 6600 | 4.3409 | 39.1494 | 12.1404 | 24.0778 | 35.4932 | 118.05 |
| 3.9861 | 145.65 | 6700 | 4.3394 | 39.6258 | 12.2497 | 23.9662 | 36.4054 | 116.8 |
| 3.9755 | 147.83 | 6800 | 4.3400 | 39.3121 | 11.7831 | 23.6584 | 35.9636 | 118.125 |
| 3.9591 | 150.0 | 6900 | 4.3390 | 39.6957 | 11.9406 | 24.0599 | 36.3021 | 114.9 |
| 3.9599 | 152.17 | 7000 | 4.3389 | 39.4271 | 11.4159 | 24.1437 | 35.9056 | 115.8 |
| 3.9456 | 154.35 | 7100 | 4.3384 | 39.4862 | 11.726 | 23.883 | 35.9839 | 116.375 |
| 3.9341 | 156.52 | 7200 | 4.3386 | 39.6915 | 11.8028 | 24.346 | 36.406 | 116.425 |
| 3.9648 | 158.7 | 7300 | 4.3383 | 39.9311 | 11.7135 | 23.985 | 36.2617 | 118.075 |
| 3.9486 | 160.87 | 7400 | 4.3372 | 39.8375 | 12.0014 | 24.0969 | 36.5902 | 118.8 |
| 3.9533 | 163.04 | 7500 | 4.3371 | 40.2678 | 12.3137 | 24.1916 | 37.1632 | 118.075 |
| 3.9344 | 165.22 | 7600 | 4.3369 | 39.5588 | 11.6805 | 24.1474 | 36.2021 | 114.875 |
| 3.9314 | 167.39 | 7700 | 4.3368 | 39.8649 | 11.9824 | 24.5459 | 36.3921 | 113.65 |
| 3.9558 | 169.57 | 7800 | 4.3363 | 39.8428 | 12.0892 | 24.0175 | 36.67 | 112.7 |
| 3.928 | 171.74 | 7900 | 4.3364 | 39.2281 | 11.8456 | 23.7212 | 36.2005 | 113.95 |
| 3.9351 | 173.91 | 8000 | 4.3363 | 39.9798 | 12.4387 | 23.7687 | 36.6472 | 115.45 |
| 3.9326 | 176.09 | 8100 | 4.3363 | 39.9772 | 12.1193 | 24.1518 | 36.5791 | 117.4 |
| 3.9387 | 178.26 | 8200 | 4.3363 | 39.8629 | 12.1719 | 23.9446 | 36.345 | 115.075 |
| 3.9204 | 180.43 | 8300 | 4.3358 | 39.9738 | 12.3072 | 23.8641 | 36.4802 | 116.3 |
| 3.9418 | 182.61 | 8400 | 4.3357 | 40.1451 | 12.4144 | 24.1553 | 36.4251 | 116.025 |
| 3.9289 | 184.78 | 8500 | 4.3357 | 39.7241 | 12.0543 | 24.0752 | 36.0847 | 115.8 |
| 3.9176 | 186.96 | 8600 | 4.3358 | 39.7969 | 12.0967 | 24.123 | 36.2664 | 118.6 |
| 3.9097 | 189.13 | 8700 | 4.3356 | 39.4096 | 11.9872 | 24.0609 | 35.8662 | 117.2 |
| 3.938 | 191.3 | 8800 | 4.3354 | 39.4695 | 11.9343 | 24.0295 | 35.9372 | 117.025 |
| 3.9239 | 193.48 | 8900 | 4.3352 | 39.3231 | 12.0965 | 23.9131 | 35.9555 | 117.275 |
| 3.91 | 195.65 | 9000 | 4.3354 | 39.5932 | 12.1808 | 23.9233 | 36.0864 | 116.925 |
| 3.9234 | 197.83 | 9100 | 4.3354 | 39.5539 | 12.1134 | 23.9163 | 36.0299 | 117.225 |
| 3.9263 | 200.0 | 9200 | 4.3354 | 39.5539 | 12.1134 | 23.9163 | 36.0299 | 117.225 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
castorini/monot5-small-msmarco-100k
|
castorini
| 2022-05-25T15:08:56Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-25T15:04:22Z |
This model is a T5-small reranker fine-tuned on the MS MARCO passage dataset for 100k steps (or 1 epoch).
For more details on how to use it, check the following links:
- [A simple reranking example](https://github.com/castorini/pygaggle#a-simple-reranking-example)
- [Rerank MS MARCO passages](https://github.com/castorini/pygaggle/blob/master/docs/experiments-msmarco-passage-subset.md)
- [Rerank Robust04 documents](https://github.com/castorini/pygaggle/blob/master/docs/experiments-robust04-monot5-gpu.md)
Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/)
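As a quick sanity check without pyserini, the monoT5 scoring recipe from that paper can be reproduced directly with `transformers`: the model is asked whether a passage is relevant to a query, and the probability of the word "true" at the first decoding step is read off as the relevance score. The sketch below follows that recipe; the query and passage are placeholders.
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_id = "castorini/monot5-small-msmarco-100k"
tokenizer = T5Tokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id).eval()

query = "how do solar panels work"  # placeholder query
passage = "Solar panels convert sunlight into electricity using photovoltaic cells."  # placeholder passage

# monoT5 input format: "Query: ... Document: ... Relevant:"
inputs = tokenizer(f"Query: {query} Document: {passage} Relevant:",
                   return_tensors="pt", truncation=True, max_length=512)
decoder_input_ids = torch.full((1, 1), model.config.decoder_start_token_id, dtype=torch.long)

with torch.no_grad():
    logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits[0, -1]

true_id = tokenizer.encode("true")[0]    # first sentencepiece token of "true"
false_id = tokenizer.encode("false")[0]  # first sentencepiece token of "false"
score = torch.softmax(logits[[true_id, false_id]], dim=0)[0].item()
print(f"relevance score: {score:.3f}")
```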
|
vai6hav/wav2vec2-large-xls-r-300m-hindi-colab
|
vai6hav
| 2022-05-25T15:01:42Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-25T13:59:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-hindi-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hindi-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
DuboiJ/finetuning-sentiment-model-3000-samples
|
DuboiJ
| 2022-05-25T13:48:07Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-23T13:20:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8633333333333333
- name: F1
type: f1
value: 0.8637873754152824
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3211
- Accuracy: 0.8633
- F1: 0.8638
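The accuracy and F1 above were presumably produced by a `compute_metrics` callback along these lines (a sketch, not the exact code used):
```python
import numpy as np
from datasets import load_metric

accuracy = load_metric("accuracy")
f1 = load_metric("f1")

def compute_metrics(eval_pred):
    # Take the argmax over the logits and score against the reference labels.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy.compute(predictions=predictions, references=labels)["accuracy"],
        "f1": f1.compute(predictions=predictions, references=labels)["f1"],
    }
```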
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Monsia/afrilang-bci-tts
|
Monsia
| 2022-05-25T12:46:34Z | 2 | 0 |
espnet
|
[
"espnet",
"audio",
"text-to-speech",
"bci",
"dataset:afrilang-bci",
"arxiv:1804.00015",
"license:apache-2.0",
"region:us"
] |
text-to-speech
| 2022-05-24T12:40:18Z |
---
tags:
- espnet
- audio
- text-to-speech
language:
- bci
datasets:
- afrilang-bci
license: apache-2.0
metrics:
- mos
---
## ESPnet2 TTS model
### ``
This model was trained using the `afrilang-bci` recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
pip install -e .
cd egs2/afrilang-bci/tts1
./run.sh --skip_data_prep false --skip_train true --download_model
```
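For inference from Python, ESPnet2's `Text2Speech` interface can load the trained model; the sketch below assumes local config/checkpoint paths matching the experiment directory shown in the config that follows.
```python
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Sketch: run inference with the trained VITS model. The config/checkpoint
# paths are assumptions based on the output_dir in the config below;
# point them at the actual exported files.
tts = Text2Speech.from_pretrained(
    train_config="exp/44k/tts_train_vits_raw_char_tacotron/config.yaml",
    model_file="exp/44k/tts_train_vits_raw_char_tacotron/train.total_count.ave.pth",
)
out = tts("text to synthesise, in the training orthography")
sf.write("out.wav", out["wav"].numpy(), tts.fs)
```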
## TTS config
<details><summary>expand</summary>
```
config: ./conf/train_vits.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/44k/tts_train_vits_raw_char_tacotron
ngpu: 1
seed: 777
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: false
collect_stats: false
write_collected_feats: false
max_epoch: 20
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - train
- total_count
- max
keep_nbest_models: 2
nbest_averaging_interval: 0
grad_clip: -1
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: 5
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: 20
batch_size: 20
valid_batch_size: null
batch_bins: 500
valid_batch_bins: null
train_shape_file:
- exp/44k/tts_stats_raw_linear_spectrogram_char_tacotron/train/text_shape.char
- exp/44k/tts_stats_raw_linear_spectrogram_char_tacotron/train/speech_shape
valid_shape_file:
- exp/44k/tts_stats_raw_linear_spectrogram_char_tacotron/valid/text_shape.char
- exp/44k/tts_stats_raw_linear_spectrogram_char_tacotron/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/org/train/text
- text
- text
- - dump/raw/org/train/wav.scp
- speech
- sound
valid_data_path_and_name_and_type:
- - dump/raw/org/test/text
- text
- text
- - dump/raw/org/test/wav.scp
- speech
- sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adamw
optim_conf:
lr: 0.0002
betas:
- 0.8
- 0.99
eps: 1.0e-09
weight_decay: 0.0
scheduler: exponentiallr
scheduler_conf:
gamma: 0.999875
optim2: adamw
optim2_conf:
lr: 0.0002
betas:
- 0.8
- 0.99
eps: 1.0e-09
weight_decay: 0.0
scheduler2: exponentiallr
scheduler2_conf:
gamma: 0.999875
generator_first: false
token_list:
- <blank>
- <unk>
- <space>
- N
- E
- A
- I
- O
- U
- L
- K
- M
- S
- B
- W
- T
- F
- R
- Y
- Z
- D
- G
- J
- P
- C
- V
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: null
cleaner: tacotron
g2p: g2p_en
feats_extract: linear_spectrogram
feats_extract_conf:
n_fft: 1024
hop_length: 256
win_length: null
normalize: null
normalize_conf: {}
tts: vits
tts_conf:
generator_type: vits_generator
generator_params:
hidden_channels: 192
spks: -1
global_channels: -1
segment_size: 32
text_encoder_attention_heads: 2
text_encoder_ffn_expand: 4
text_encoder_blocks: 6
text_encoder_positionwise_layer_type: conv1d
text_encoder_positionwise_conv_kernel_size: 3
text_encoder_positional_encoding_layer_type: rel_pos
text_encoder_self_attention_layer_type: rel_selfattn
text_encoder_activation_type: swish
text_encoder_normalize_before: true
text_encoder_dropout_rate: 0.1
text_encoder_positional_dropout_rate: 0.0
text_encoder_attention_dropout_rate: 0.1
use_macaron_style_in_text_encoder: true
use_conformer_conv_in_text_encoder: false
text_encoder_conformer_kernel_size: -1
decoder_kernel_size: 7
decoder_channels: 512
decoder_upsample_scales:
- 8
- 8
- 2
- 2
decoder_upsample_kernel_sizes:
- 16
- 16
- 4
- 4
decoder_resblock_kernel_sizes:
- 3
- 7
- 11
decoder_resblock_dilations:
- - 1
- 3
- 5
- - 1
- 3
- 5
- - 1
- 3
- 5
use_weight_norm_in_decoder: true
posterior_encoder_kernel_size: 5
posterior_encoder_layers: 16
posterior_encoder_stacks: 1
posterior_encoder_base_dilation: 1
posterior_encoder_dropout_rate: 0.0
use_weight_norm_in_posterior_encoder: true
flow_flows: 4
flow_kernel_size: 5
flow_base_dilation: 1
flow_layers: 4
flow_dropout_rate: 0.0
use_weight_norm_in_flow: true
use_only_mean_in_flow: true
stochastic_duration_predictor_kernel_size: 3
stochastic_duration_predictor_dropout_rate: 0.5
stochastic_duration_predictor_flows: 4
stochastic_duration_predictor_dds_conv_layers: 3
vocabs: 27
aux_channels: 513
discriminator_type: hifigan_multi_scale_multi_period_discriminator
discriminator_params:
scales: 1
scale_downsample_pooling: AvgPool1d
scale_downsample_pooling_params:
kernel_size: 4
stride: 2
padding: 2
scale_discriminator_params:
in_channels: 1
out_channels: 1
kernel_sizes:
- 15
- 41
- 5
- 3
channels: 128
max_downsample_channels: 1024
max_groups: 16
bias: true
downsample_scales:
- 2
- 2
- 4
- 4
- 1
nonlinear_activation: LeakyReLU
nonlinear_activation_params:
negative_slope: 0.1
use_weight_norm: true
use_spectral_norm: false
follow_official_norm: false
periods:
- 2
- 3
- 5
- 7
- 11
period_discriminator_params:
in_channels: 1
out_channels: 1
kernel_sizes:
- 5
- 3
channels: 32
downsample_scales:
- 3
- 3
- 3
- 3
- 1
max_downsample_channels: 1024
bias: true
nonlinear_activation: LeakyReLU
nonlinear_activation_params:
negative_slope: 0.1
use_weight_norm: true
use_spectral_norm: false
generator_adv_loss_params:
average_by_discriminators: false
loss_type: mse
discriminator_adv_loss_params:
average_by_discriminators: false
loss_type: mse
feat_match_loss_params:
average_by_discriminators: false
average_by_layers: false
include_final_outputs: true
mel_loss_params:
fs: 44100
n_fft: 1024
hop_length: 256
win_length: null
window: hann
n_mels: 80
fmin: 0
fmax: null
log_base: null
lambda_adv: 1.0
lambda_mel: 45.0
lambda_feat_match: 2.0
lambda_dur: 1.0
lambda_kl: 1.0
sampling_rate: 44100
cache_generator_outputs: true
pitch_extract: null
pitch_extract_conf: {}
pitch_normalize: null
pitch_normalize_conf: {}
energy_extract: null
energy_extract_conf: {}
energy_normalize: null
energy_normalize_conf: {}
required:
- output_dir
- token_list
version: '202204'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
comodoro/ppo-CartPole-v1
|
comodoro
| 2022-05-25T12:10:46Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-25T12:10:20Z |
---
library_name: stable-baselines3
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **PPO** Agent playing **CartPole-v1**
This is a trained model of a **PPO** agent playing **CartPole-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch for loading the trained agent (the checkpoint filename is an assumption; check the files in this repository):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (the filename is assumed; check the repo files)
checkpoint = load_from_hub(repo_id="comodoro/ppo-CartPole-v1", filename="ppo-CartPole-v1.zip")
model = PPO.load(checkpoint)
```
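The loaded policy (`model` above) can then be sanity-checked locally. This is a minimal sketch assuming a standard `gym` installation, not the evaluation setup used for the reported score:
```python
import gym
from stable_baselines3.common.evaluation import evaluate_policy

# Run a few evaluation episodes with the loaded agent
env = gym.make("CartPole-v1")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```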
|
jimypbr/t5-base-test
|
jimypbr
| 2022-05-25T12:02:55Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"optimum_graphcore",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-23T09:03:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: t5-base-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-summarization
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the cnn_dailymail 3.0.0 dataset.
## Model description
More information needed
## Intended uses & limitations
This is a work in progress. Please don't use these weights. :)
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 256
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.15
- num_epochs: 5.0
- training precision: Mixed Precision
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cpu
- Datasets 2.1.0
- Tokenizers 0.12.1
|
arimboux/q-Taxi-v4
|
arimboux
| 2022-05-25T11:56:11Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-25T11:50:42Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v4
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub and evaluate_agent are assumed to be defined (e.g. helpers from the accompanying notebook)
model = load_from_hub(repo_id="arimboux/q-Taxi-v4", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
arimboux/q-Taxi-v3
|
arimboux
| 2022-05-25T11:41:04Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-25T11:40:58Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub and evaluate_agent are assumed to be defined (e.g. helpers from the accompanying notebook)
model = load_from_hub(repo_id="arimboux/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
arimboux/q-FrozenLake-v1-4x4-noSlippery
|
arimboux
| 2022-05-25T11:37:59Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-25T11:37:52Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# load_from_hub and evaluate_agent are assumed to be defined (e.g. helpers from the accompanying notebook)
model = load_from_hub(repo_id="arimboux/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
dsavich/LunarLander-v2
|
dsavich
| 2022-05-25T11:05:56Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-25T10:44:10Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 279.89 +/- 20.45
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch for loading the trained agent (the checkpoint filename is an assumption; check the files in this repository):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (the filename is assumed; check the repo files)
checkpoint = load_from_hub(repo_id="dsavich/LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
morahil/wav2vec2-hindi-new-3
|
morahil
| 2022-05-25T11:00:05Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-25T08:37:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-hindi-new-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-hindi-new-3
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unspecified dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.1206
- eval_wer: 0.8949
- eval_runtime: 20.2358
- eval_samples_per_second: 19.767
- eval_steps_per_second: 2.471
- epoch: 25.8
- step: 1600
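For quick experimentation, here is a minimal sketch of running the checkpoint with the 🤗 Transformers ASR pipeline. The audio path is a placeholder, and given the work-in-progress WER the transcripts should be treated as rough:
```python
from transformers import pipeline

# Load the fine-tuned Hindi checkpoint for speech recognition
asr = pipeline("automatic-speech-recognition", model="morahil/wav2vec2-hindi-new-3")

# Transcribe a local audio file (the path is a placeholder)
print(asr("sample_hindi.wav")["text"])
```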
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.3.dev0
- Tokenizers 0.12.1
|
theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv-v3-e16
|
theojolliffe
| 2022-05-25T10:47:47Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-25T08:50:11Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv-v3-e16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv-v3-e16
This model is a fine-tuned version of [theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv](https://huggingface.co/theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8960
- Rouge1: 57.7198
- Rouge2: 44.5711
- Rougel: 47.6281
- Rougelsum: 56.2372
- Gen Len: 142.0
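A minimal inference sketch with the 🤗 Transformers summarization pipeline (the generation settings below are illustrative assumptions, not the configuration used to produce the scores above):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv-v3-e16",
)

text = "Replace this with the article or report to be summarised."
print(summarizer(text, max_length=142, min_length=30, do_sample=False)[0]["summary_text"])
```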
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 398 | 0.8634 | 53.7416 | 34.3731 | 37.1193 | 51.3075 | 142.0 |
| 0.8276 | 2.0 | 796 | 0.8001 | 53.9975 | 35.1019 | 38.2722 | 51.7878 | 142.0 |
| 0.5311 | 3.0 | 1194 | 0.7988 | 53.409 | 34.3201 | 37.5443 | 50.738 | 142.0 |
| 0.3538 | 4.0 | 1592 | 0.7698 | 53.679 | 34.7209 | 37.7895 | 51.2497 | 142.0 |
| 0.3538 | 5.0 | 1990 | 0.7863 | 54.2493 | 36.0643 | 39.1249 | 51.9758 | 142.0 |
| 0.2367 | 6.0 | 2388 | 0.7810 | 54.4042 | 37.4276 | 41.529 | 52.1544 | 142.0 |
| 0.164 | 7.0 | 2786 | 0.8055 | 56.0408 | 39.6744 | 42.8323 | 54.163 | 142.0 |
| 0.1146 | 8.0 | 3184 | 0.8098 | 55.2046 | 38.5399 | 41.9178 | 53.0001 | 142.0 |
| 0.089 | 9.0 | 3582 | 0.8199 | 57.1523 | 41.7614 | 44.5914 | 55.1602 | 142.0 |
| 0.089 | 10.0 | 3980 | 0.8644 | 56.943 | 41.5063 | 44.4929 | 54.9515 | 142.0 |
| 0.0647 | 11.0 | 4378 | 0.8413 | 57.0321 | 41.964 | 45.3971 | 55.0957 | 142.0 |
| 0.0485 | 12.0 | 4776 | 0.8735 | 56.7275 | 41.8577 | 44.3911 | 54.9824 | 142.0 |
| 0.0365 | 13.0 | 5174 | 0.8858 | 57.6103 | 43.8831 | 47.0374 | 56.0675 | 142.0 |
| 0.0271 | 14.0 | 5572 | 0.8974 | 57.39 | 42.8693 | 45.9344 | 55.7404 | 142.0 |
| 0.0271 | 15.0 | 5970 | 0.8990 | 57.9433 | 44.7301 | 47.843 | 56.5407 | 142.0 |
| 0.0232 | 16.0 | 6368 | 0.8960 | 57.7198 | 44.5711 | 47.6281 | 56.2372 | 142.0 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ksmcg/q-Taxi-v3
|
ksmcg
| 2022-05-25T10:43:37Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-25T10:43:30Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub and evaluate_agent are assumed to be defined (e.g. helpers from the accompanying notebook)
model = load_from_hub(repo_id="ksmcg/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
ksmcg/q-FrozenLake-v1-4x4-noSlippery
|
ksmcg
| 2022-05-25T10:39:53Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-25T10:39:45Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# load_from_hub and evaluate_agent are assumed to be defined (e.g. helpers from the accompanying notebook)
model = load_from_hub(repo_id="ksmcg/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
dinalzein/xlm-roberta-base-finetuned-language-identification
|
dinalzein
| 2022-05-25T09:52:27Z | 8 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-24T19:22:24Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-finetuned-language-identification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-language-detection-new
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [Language Identification dataset](https://huggingface.co/datasets/papluca/language-identification).
It achieves the following results on the evaluation set:
- Loss: 0.0436
- Accuracy: 0.9959
## Model description
The model used in this task is XLM-RoBERTa, a transformer model with a classification head on top.
## Intended uses & limitations
It identifies the language a document is written in, supporting 20 different languages:
Arabic (ar), Bulgarian (bg), German (de), Modern greek (el), English (en), Spanish (es), French (fr), Hindi (hi), Italian (it), Japanese (ja), Dutch (nl), Polish (pl), Portuguese (pt), Russian (ru), Swahili (sw), Thai (th), Turkish (tr), Urdu (ur), Vietnamese (vi), Chinese (zh)
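A minimal usage sketch with the 🤗 Transformers text-classification pipeline (the exact label strings returned depend on the model's label mapping, so the sample output is an assumption):
```python
from transformers import pipeline

language_id = pipeline(
    "text-classification",
    model="dinalzein/xlm-roberta-base-finetuned-language-identification",
)

print(language_id("Bonjour, comment allez-vous ?"))
# e.g. [{'label': 'fr', 'score': 0.99}]  -- label format is an assumption
```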
## Training and evaluation data
The model is fine-tuned on the [Language Identification dataset](https://huggingface.co/datasets/papluca/language-identification), a corpus consisting of text from 20 different languages. The dataset is split into 70,000 sentences for training, 10,000 for validation, and 10,000 for testing. The accuracy on the test set is 99.5%.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0493 | 1.0 | 35000 | 0.0407 | 0.9955 |
| 0.018 | 2.0 | 70000 | 0.0436 | 0.9959 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
XGBooster/q-Taxi-v3
|
XGBooster
| 2022-05-25T09:14:10Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-25T09:14:03Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub and evaluate_agent are assumed to be defined (e.g. helpers from the accompanying notebook)
model = load_from_hub(repo_id="XGBooster/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
AswiN037/sentence-t-roberta-large-wechsel-tamil
|
AswiN037
| 2022-05-25T08:55:45Z | 2 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-05-24T11:00:44Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sent-Roberta-wechsel-tamil
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('AswiN037/sentence-t-roberta-large-wechsel-tamil')
embeddings = model.encode(sentences)
print(embeddings)
```
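Since the card mentions clustering and semantic search, here is a short follow-up sketch scoring sentence similarity with the resulting embeddings (the model name is taken from this repository):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('AswiN037/sentence-t-roberta-large-wechsel-tamil')
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])

# Cosine similarity between the two sentence embeddings
print(util.cos_sim(embeddings[0], embeddings[1]))
```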
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('AswiN037/sentence-t-roberta-large-wechsel-tamil')
model = AutoModel.from_pretrained('AswiN037/sentence-t-roberta-large-wechsel-tamil')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
XGBooster/q-FrozenLake-v1-8x8-noSlippery
|
XGBooster
| 2022-05-25T08:43:46Z | 0 | 0 | null |
[
"FrozenLake-v1-8x8-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-25T08:43:38Z |
---
tags:
- FrozenLake-v1-8x8-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8-no_slippery
type: FrozenLake-v1-8x8-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# load_from_hub and evaluate_agent are assumed to be defined (e.g. helpers from the accompanying notebook)
model = load_from_hub(repo_id="XGBooster/q-FrozenLake-v1-8x8-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|