| Column | Type | Range / values |
|---|---|---|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-01 18:27:28 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 532 classes |
| tags | list | lengths 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-01 18:27:19 |
| card | string | lengths 11 to 1.01M |
ish97/bert-finetuned-chunking-for-echo-reading
|
ish97
| 2022-08-29T19:27:28Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-29T18:07:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-chunking-for-echo-reading
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-chunking-for-echo-reading
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3411
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.875
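The card does not include a usage example; below is a minimal sketch for trying the checkpoint with the Transformers token-classification pipeline (the example sentence is arbitrary, and it assumes the tokenizer was pushed together with the model):
```python
from transformers import pipeline

# Minimal usage sketch; not part of the original card.
chunker = pipeline(
    "token-classification",
    model="ish97/bert-finetuned-chunking-for-echo-reading",
    aggregation_strategy="simple",  # merge word pieces into whole predicted chunks
)
print(chunker("She read the sentence aloud and the class echoed it back."))
```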
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 2 | 0.4490 | 0.0 | 0.0 | 0.0 | 0.875 |
| No log | 2.0 | 4 | 0.3668 | 0.0 | 0.0 | 0.0 | 0.875 |
| No log | 3.0 | 6 | 0.3411 | 0.0 | 0.0 | 0.0 | 0.875 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ntinosmg/dqn-SpaceInvadersNoFrameskip-v4
|
ntinosmg
| 2022-08-29T19:21:48Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-29T19:21:07Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 555.50 +/- 234.83
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ntinosmg -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ntinosmg
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
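Beyond the RL Zoo commands above, the checkpoint can also be loaded directly with `huggingface_sb3`; the sketch below is an illustration rather than part of the card, and the `.zip` filename is an assumption (check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.vec_env import VecFrameStack

# Hypothetical archive name; check the repo's file list for the actual filename.
checkpoint = load_from_hub(
    repo_id="ntinosmg/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)

# Recreate the training-time preprocessing: Atari wrappers plus a 4-frame stack.
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1), n_stack=4)
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=5)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```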
|
huggingtweets/lustfulliberal-pg13scottwatson
|
huggingtweets
| 2022-08-29T19:11:34Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-04-26T02:13:36Z |
---
language: en
thumbnail: http://www.huggingtweets.com/lustfulliberal-pg13scottwatson/1661800282918/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1114620037300654082/KcWDPQsE_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1231999409916764162/mo9U0uNT_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">The Loony Liberal - Tweets or GTFO & (18+ ONLY) - The Lustful Liberal - Scorny on Main</div>
<div style="text-align: center; font-size: 14px;">@lustfulliberal-pg13scottwatson</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from The Loony Liberal - Tweets or GTFO & (18+ ONLY) - The Lustful Liberal - Scorny on Main.
| Data | The Loony Liberal - Tweets or GTFO | (18+ ONLY) - The Lustful Liberal - Scorny on Main |
| --- | --- | --- |
| Tweets downloaded | 3234 | 3228 |
| Retweets | 1055 | 893 |
| Short tweets | 235 | 336 |
| Tweets kept | 1944 | 1999 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/20f7h18q/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lustfulliberal-pg13scottwatson's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1y0wr0ip) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1y0wr0ip/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/lustfulliberal-pg13scottwatson')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jonaskoenig/xtremedistil-l6-h256-uncased-future-time-references-D1
|
jonaskoenig
| 2022-08-29T18:44:10Z | 9 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"dataset:jonaskoenig/trump_administration_statement",
"dataset:jonaskoenig/future-time-references-static-filter-D1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-15T10:48:03Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: xtremedistil-l6-h256-uncased-future-time-references-D1
results: []
datasets:
- jonaskoenig/trump_administration_statement
- jonaskoenig/future-time-references-static-filter-D1
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# xtremedistil-l6-h256-uncased-future-time-references-D1
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) on the [jonaskoenig/trump_administration_statement](https://huggingface.co/datasets/jonaskoenig/trump_administration_statement) and [jonaskoenig/future-time-refernces-static-filter](https://huggingface.co/datasets/jonaskoenig/future-time-refernces-static-filter) datasets.
It achieves the following results on the evaluation set:
- Train Loss: 0.0099
- Train Sparse Categorical Accuracy: 0.9977
- Validation Loss: 0.0128
- Validation Sparse Categorical Accuracy: 0.9976
- Epoch: 3
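The card ships without a usage snippet; a minimal TensorFlow sketch is given below (the example sentence is arbitrary, and the meaning of each class id is an assumption to be checked against `id2label` in the model's `config.json`):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "jonaskoenig/xtremedistil-l6-h256-uncased-future-time-references-D1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

# Score a single sentence; see config.json (id2label) for what each class id means.
inputs = tokenizer("The committee will publish its report next year.", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs.numpy())
```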
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 0.0276 | 0.9932 | 0.0156 | 0.9968 | 0 |
| 0.0138 | 0.9969 | 0.0125 | 0.9972 | 1 |
| 0.0117 | 0.9974 | 0.0126 | 0.9974 | 2 |
| 0.0099 | 0.9977 | 0.0128 | 0.9976 | 3 |
The test accuracy is 99.77%.
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.9.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Dizzykong/Aristotle-8-29
|
Dizzykong
| 2022-08-29T17:46:28Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-29T16:31:34Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Aristotle-8-29
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Aristotle-8-29
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
huggingtweets/chrishildabrant
|
huggingtweets
| 2022-08-29T17:19:30Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-29T17:19:20Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1367991702523437062/x5beyUQ-_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Chris Hildabrant</div>
<div style="text-align: center; font-size: 14px;">@chrishildabrant</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Chris Hildabrant.
| Data | Chris Hildabrant |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 0 |
| Short tweets | 243 |
| Tweets kept | 3007 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3dagd4ww/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @chrishildabrant's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ctoe6ys) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ctoe6ys/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/chrishildabrant')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/actbrigitte
|
huggingtweets
| 2022-08-29T16:46:55Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-29T16:45:33Z |
---
language: en
thumbnail: http://www.huggingtweets.com/actbrigitte/1661791610963/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1001845274476797954/TbklBZ1r_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Brigitte Gabriel</div>
<div style="text-align: center; font-size: 14px;">@actbrigitte</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Brigitte Gabriel.
| Data | Brigitte Gabriel |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 716 |
| Short tweets | 105 |
| Tweets kept | 2429 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/w0rkndg8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @actbrigitte's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2jtfv41h) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2jtfv41h/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/actbrigitte')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
cemilcelik/distilgpt2_pubmed
|
cemilcelik
| 2022-08-29T16:34:51Z | 157 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-29T13:16:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2_pubmed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2_pubmed
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8745
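No usage example is given in the card; the following is a minimal, illustrative generation sketch (the prompt is arbitrary):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "cemilcelik/distilgpt2_pubmed"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Sample a PubMed-style continuation for an arbitrary prompt.
inputs = tokenizer("The patients were randomized to receive", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```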
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7569 | 1.0 | 528 | 2.0859 |
| 2.1098 | 2.0 | 1056 | 1.9187 |
| 2.0058 | 3.0 | 1584 | 1.8745 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cpu
- Datasets 2.1.0
- Tokenizers 0.12.1
|
merve/20newsgroups
|
merve
| 2022-08-29T16:04:55Z | 0 | 0 |
sklearn
|
[
"sklearn",
"skops",
"text-classification",
"license:mit",
"region:us"
] |
text-classification
| 2022-08-29T16:04:53Z |
---
license: mit
library_name: sklearn
tags:
- sklearn
- skops
- text-classification
---
# Model description
This is a multinomial naive Bayes model trained on the 20 newsgroups dataset. A count vectorizer and a TF-IDF transformer are applied to the text before the classifier.
## Intended uses & limitations
This model is not ready to be used in production.
## Training Procedure
### Hyperparameters
The model is trained with the hyperparameters below.
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|---------------------|----------------------------------------------------------------------------------------|
| memory | |
| steps | [('vect', CountVectorizer()), ('tfidf', TfidfTransformer()), ('clf', MultinomialNB())] |
| verbose | False |
| vect | CountVectorizer() |
| tfidf | TfidfTransformer() |
| clf | MultinomialNB() |
| vect__analyzer | word |
| vect__binary | False |
| vect__decode_error | strict |
| vect__dtype | <class 'numpy.int64'> |
| vect__encoding | utf-8 |
| vect__input | content |
| vect__lowercase | True |
| vect__max_df | 1.0 |
| vect__max_features | |
| vect__min_df | 1 |
| vect__ngram_range | (1, 1) |
| vect__preprocessor | |
| vect__stop_words | |
| vect__strip_accents | |
| vect__token_pattern | (?u)\b\w\w+\b |
| vect__tokenizer | |
| vect__vocabulary | |
| tfidf__norm | l2 |
| tfidf__smooth_idf | True |
| tfidf__sublinear_tf | False |
| tfidf__use_idf | True |
| clf__alpha | 1.0 |
| clf__class_prior | |
| clf__fit_prior | True |
</details>
### Model Plot
The model plot is below.
`Pipeline(steps=[('vect', CountVectorizer()), ('tfidf', TfidfTransformer()), ('clf', MultinomialNB())])`
## Evaluation Results
You can find the details about the evaluation process and the evaluation results below.
| Metric | Value |
|----------|---------|
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
import pickle

# pkl_filename is the path to the pickled pipeline file downloaded from this repository
with open(pkl_filename, 'rb') as file:
    clf = pickle.load(file)
```
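Because the pickled object is the full scikit-learn pipeline (count vectorizer, TF-IDF transform, and the naive Bayes classifier), it can score raw strings directly; a small follow-up sketch with an arbitrary example sentence:
```python
# The pipeline vectorizes raw text internally, so predict() accepts plain strings.
print(clf.predict(["The rocket launch was delayed because of bad weather."]))
```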
</details>
# Model Card Authors
This model card is written by following authors:
merve
# Model Card Contact
You can contact the model card authors through following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```bibtex
@inproceedings{...,year={2020}}
```
|
Atharvgarg/distilbart-xsum-6-6-finetuned-bbc-news-on-abstractive
|
Atharvgarg
| 2022-08-29T15:47:39Z | 49 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"summarisation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-29T15:10:50Z |
---
license: apache-2.0
tags:
- summarisation
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distilbart-xsum-6-6-finetuned-bbc-news-on-abstractive
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-xsum-6-6-finetuned-bbc-news-on-abstractive
This model is a fine-tuned version of [sshleifer/distilbart-xsum-6-6](https://huggingface.co/sshleifer/distilbart-xsum-6-6) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6549
- Rouge1: 38.9186
- Rouge2: 30.2223
- Rougel: 32.6201
- Rougelsum: 37.7502
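The card does not show how to run the model; a minimal summarization sketch follows (the input text is an arbitrary stand-in for a BBC-style article):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="Atharvgarg/distilbart-xsum-6-6-finetuned-bbc-news-on-abstractive",
)
article = (
    "The local council has approved plans for a new cycle path linking the city "
    "centre to the railway station, with construction expected to start next spring."
)
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```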
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 1.3838 | 1.0 | 445 | 1.4841 | 39.1621 | 30.4379 | 32.6613 | 37.9963 |
| 1.0077 | 2.0 | 890 | 1.5173 | 39.388 | 30.9125 | 33.099 | 38.2442 |
| 0.7983 | 3.0 | 1335 | 1.5726 | 38.7913 | 30.0766 | 32.6092 | 37.5953 |
| 0.6681 | 4.0 | 1780 | 1.6549 | 38.9186 | 30.2223 | 32.6201 | 37.7502 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Atharvgarg/distilbart-xsum-6-6-finetuned-bbc-news
|
Atharvgarg
| 2022-08-29T12:38:44Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"summarisation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-29T11:36:02Z |
---
license: apache-2.0
tags:
- summarisation
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distilbart-xsum-6-6-finetuned-bbc-news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-xsum-6-6-finetuned-bbc-news
This model is a fine-tuned version of [sshleifer/distilbart-xsum-6-6](https://huggingface.co/sshleifer/distilbart-xsum-6-6) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2624
- Rouge1: 62.1927
- Rouge2: 54.4754
- Rougel: 55.868
- Rougelsum: 60.9345
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 0.4213 | 1.0 | 445 | 0.2005 | 59.4886 | 51.7791 | 53.5126 | 58.3405 |
| 0.1355 | 2.0 | 890 | 0.1887 | 61.7474 | 54.2823 | 55.7324 | 60.5787 |
| 0.0891 | 3.0 | 1335 | 0.1932 | 61.1312 | 53.103 | 54.6992 | 59.8923 |
| 0.0571 | 4.0 | 1780 | 0.2141 | 60.8797 | 52.6195 | 54.4402 | 59.5298 |
| 0.0375 | 5.0 | 2225 | 0.2318 | 61.7875 | 53.8753 | 55.5068 | 60.5448 |
| 0.0251 | 6.0 | 2670 | 0.2484 | 62.3535 | 54.6029 | 56.2804 | 61.031 |
| 0.0175 | 7.0 | 3115 | 0.2542 | 61.6351 | 53.8248 | 55.6399 | 60.3765 |
| 0.0133 | 8.0 | 3560 | 0.2624 | 62.1927 | 54.4754 | 55.868 | 60.9345 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
mayjul/t5-small-finetuned-xsum
|
mayjul
| 2022-08-29T11:52:46Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-28T14:36:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
config: default
split: train
args: default
metrics:
- name: Rouge1
type: rouge
value: 28.2727
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4789
- Rouge1: 28.2727
- Rouge2: 7.7068
- Rougel: 22.1993
- Rougelsum: 22.2071
- Gen Len: 18.8238
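For completeness, here is a minimal sketch that loads the checkpoint without the pipeline helper; the `summarize:` prefix follows the usual T5 convention and the input document is arbitrary:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "mayjul/t5-small-finetuned-xsum"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

document = (
    "summarize: Heavy rain caused flooding across the region overnight, closing "
    "several roads and delaying trains during the morning commute."
)
inputs = tokenizer(document, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, num_beams=4, max_length=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```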
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.7189 | 1.0 | 12753 | 2.4789 | 28.2727 | 7.7068 | 22.1993 | 22.2071 | 18.8238 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
PKM230/Lunar_lander
|
PKM230
| 2022-08-29T11:32:51Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-29T11:31:18Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 14.50 +/- 141.88
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
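A minimal loading sketch to go with the TODO above; the `.zip` filename inside the repository is an assumption, and the snippet uses the classic Gym API that was current in 2022:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Hypothetical archive name; check the repository's file list for the actual filename.
checkpoint = load_from_hub(repo_id="PKM230/Lunar_lander", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()
```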
|
StefanSteib/Photographer
|
StefanSteib
| 2022-08-29T11:27:39Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-08-29T11:26:32Z |
Carry plenty of cameras
black clothes
|
hhffxx/pegasus-samsum
|
hhffxx
| 2022-08-29T10:52:44Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-29T06:48:07Z |
---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [stas/pegasus-cnn_dailymail-tiny-random](https://huggingface.co/stas/pegasus-cnn_dailymail-tiny-random) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 7.5735
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.6148 | 0.54 | 500 | 7.5735 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.11.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
autoevaluate/summarization
|
autoevaluate
| 2022-08-29T10:12:08Z | 26 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"summarization",
"dataset:xsum",
"dataset:autoevaluate/xsum-sample",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-05-28T12:27:47Z |
---
license: apache-2.0
tags:
- generated_from_trainer
- summarization
datasets:
- xsum
- autoevaluate/xsum-sample
metrics:
- rouge
model-index:
- name: summarization
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 23.9405
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summarization
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6690
- Rouge1: 23.9405
- Rouge2: 5.0879
- Rougel: 18.4981
- Rougelsum: 18.5032
- Gen Len: 18.7376
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.9249 | 0.08 | 1000 | 2.6690 | 23.9405 | 5.0879 | 18.4981 | 18.5032 | 18.7376 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
autoevaluate/translation
|
autoevaluate
| 2022-08-29T10:08:28Z | 25 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"dataset:autoevaluate/wmt16-sample",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-28T14:14:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
- autoevaluate/wmt16-sample
metrics:
- bleu
model-index:
- name: translation
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 28.5866
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# translation
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3170
- Bleu: 28.5866
- Gen Len: 33.9575
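Not part of the original card, but a minimal English-to-Romanian sketch for trying the checkpoint:
```python
from transformers import pipeline

translator = pipeline("translation_en_to_ro", model="autoevaluate/translation")
result = translator("The model was fine-tuned on English-Romanian data from WMT16.")
print(result[0]["translation_text"])
```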
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.8302 | 0.03 | 1000 | 1.3170 | 28.5866 | 33.9575 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
artfrontier/ddpm-butterflies-128
|
artfrontier
| 2022-08-29T09:07:51Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-08-29T07:14:18Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
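Until the TODO above is filled in, here is a plausible minimal sketch using the 🤗 Diffusers `DDPMPipeline`; this is an assumption based on the pipeline tag, not code verified by the author:
```python
from diffusers import DDPMPipeline

# Unconditional 128x128 butterfly generation.
pipeline = DDPMPipeline.from_pretrained("artfrontier/ddpm-butterflies-128")
image = pipeline().images[0]
image.save("ddpm_generated_butterfly.png")
```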
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/artfrontier/ddpm-butterflies-128/tensorboard?#scalars)
|
kingabzpro/Reinforce-CartPole-v1
|
kingabzpro
| 2022-08-29T08:58:15Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-29T08:56:09Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train your own, check out Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
hieule/bert-finetuned-ner
|
hieule
| 2022-08-29T07:32:11Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-29T06:30:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Recall
type: recall
value: 0.9522046449007069
- name: F1
type: f1
value: 0.9441802252816022
- name: Accuracy
type: accuracy
value: 0.9866221227997881
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0858
- Precision: 0.9363
- Recall: 0.9522
- F1: 0.9442
- Accuracy: 0.9866
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0081 | 1.0 | 1756 | 0.0914 | 0.9273 | 0.9446 | 0.9359 | 0.9848 |
| 0.012 | 2.0 | 3512 | 0.0852 | 0.9321 | 0.9478 | 0.9399 | 0.9857 |
| 0.0036 | 3.0 | 5268 | 0.0858 | 0.9363 | 0.9522 | 0.9442 | 0.9866 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
pinot/wav2vec2-large-xls-r-300m-ja-colab-new
|
pinot
| 2022-08-29T07:21:29Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_10_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-08-28T16:18:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_10_0
model-index:
- name: wav2vec2-large-xls-r-300m-ja-colab-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-ja-colab-new
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_10_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1931
- Wer: 0.2584
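No usage example is provided; a minimal transcription sketch follows (the audio path is a placeholder for any 16 kHz Japanese speech clip, and decoding local files requires ffmpeg):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="pinot/wav2vec2-large-xls-r-300m-ja-colab-new",
)
# "sample_ja.wav" is a hypothetical file path.
print(asr("sample_ja.wav"))
```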
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 637 | 5.3089 | 0.9670 |
| No log | 2.0 | 1274 | 3.2716 | 0.6123 |
| No log | 3.0 | 1911 | 2.1797 | 0.4708 |
| No log | 4.0 | 2548 | 1.8331 | 0.4113 |
| 6.3938 | 5.0 | 3185 | 1.5111 | 0.3460 |
| 6.3938 | 6.0 | 3822 | 1.3575 | 0.3132 |
| 6.3938 | 7.0 | 4459 | 1.2946 | 0.2957 |
| 6.3938 | 8.0 | 5096 | 1.2346 | 0.2762 |
| 1.023 | 9.0 | 5733 | 1.2053 | 0.2653 |
| 1.023 | 10.0 | 6370 | 1.1931 | 0.2584 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.10.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Shengyu/Evaluation_of_NER_models
|
Shengyu
| 2022-08-29T03:03:59Z | 0 | 1 | null |
[
"region:us"
] | null | 2022-08-29T02:58:44Z |
# **Evaluation of NER models on a medical dataset**
The goal of the whole project is to compare NER models and evaluate features on a medical dataset; the model-comparison program needs to be executed in a GPU environment. Here are the instructions for the two parts.
## 1. Model Comparison
### 1.1 Environment setting:
(1) Python 3 environment (Python 3.6 or above)
Users can follow the link (https://www.python.org/) to select and download an appropriate Python version.
(2) Related Python packages
The versions of the packages we used are as follows:
```shell
Transformers: 4.8.2
NERDA: 0.9.5
Pytorch: 1.8.1+cu101
Tensorflow: 2.3.0
```
Users can execute the following commands in a Python environment.
```shell
pip install tensorflow-gpu==2.3.0 -i https://pypi.doubanio.com/simple
pip install transformers==4.8.2
pip install NERDA
pip install sentencepiece
pip install torch==1.8.1+cu101 torchvision==0.9.1+cu101 torchaudio===0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
```
### 1.2 The process of implementation
(1) Training and testing
Users can check the "training&testing.ipynb" file. The models to be trained can either be downloaded and loaded from a local path, or loaded directly by name from the Hugging Face (transformers) Hub.
For example:
```python
# Model loading in the "training&testing.ipynb" file
transformer = '../../Model/bigbird-roberta-base/'
or
transformer = 'google/bigbird-roberta-base'
```
Model download addresses:
```http
https://huggingface.co/dmis-lab/biobert-base-cased-v1.1
https://huggingface.co/roberta-base
https://huggingface.co/google/bigbird-roberta-base
https://huggingface.co/microsoft/deberta-base
```
Users can download the models from the above websites and put them in the "Model" folder.
(2) Prediction program
Users can load the trained models and input new text so that the models recognize the entities in it. We provide five trained models with the best training performance for the RoBERTa, BigBird, DeBERTa, and BioBERT NER models (the filenames of these models end with ".bin"). These models are saved in the "trained_model" folder.
For example:
```python
import torch
model = torch.load('../../trained_model/trained_models_by_Revised_JNLPBA_dataset/deberta.bin')
model.predict_text('Number of glucocorticoid receptors in lymphocytes and their sensitivity to hormone action.')
->> ([['Number', 'of', 'glucocorticoid', 'receptors', 'in', 'lymphocytes', 'and', 'their', 'sensitivity', 'to', 'hormone','action','.']],
[['O', 'O', 'B-protein','I-protein','o','B-cell_type','O','O','O','O','O','O','O']])
```
## 2. Feature Evaluation
### 2.1 Environment setting:
(1) Related Python packages
The packages we used are as follows; users can install the latest versions with the "pip install <package name>" command.
```shell
1. warnings
2. matplotlib
3. pandas
4. seaborn
5. statsmodels
6. sklearn
```
### 2.2 The process of implementation
Users can check the "feature_selection.ipynb" and "feature_evaluation.ipynb"file. Due to the privacy of the data, we did not upload the feature data, so users can view different methods of feature selection in this file.
## 3. Contact
If users have any questions, please contact us.
(1) Sizhu Wu - [[email protected]]
(2) Shengyu Liu - [[email protected]]
|
rajistics/layoutlmv3-finetuned-cord_300
|
rajistics
| 2022-08-28T22:32:36Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"dataset:cord-layoutlmv3",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-28T21:38:54Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- cord-layoutlmv3
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-cord_300
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: cord-layoutlmv3
type: cord-layoutlmv3
config: cord
split: train
args: cord
metrics:
- name: Precision
type: precision
value: 0.9325426241660489
- name: Recall
type: recall
value: 0.9416167664670658
- name: F1
type: f1
value: 0.9370577281191806
- name: Accuracy
type: accuracy
value: 0.9363327674023769
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-cord_300
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the cord-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3434
- Precision: 0.9325
- Recall: 0.9416
- F1: 0.9371
- Accuracy: 0.9363
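A hedged usage sketch for document token classification: it assumes the processor files were pushed with the model (otherwise load the processor from `microsoft/layoutlmv3-base`), that Tesseract is installed for the built-in OCR, and that `receipt.png` is a placeholder image:
```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForTokenClassification

model_id = "rajistics/layoutlmv3-finetuned-cord_300"
processor = AutoProcessor.from_pretrained(model_id)  # OCR is on by default (needs pytesseract)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# "receipt.png" is a hypothetical receipt image; the processor extracts words and boxes via OCR.
image = Image.open("receipt.png").convert("RGB")
encoding = processor(image, return_tensors="pt")
outputs = model(**encoding)
pred_ids = outputs.logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[i] for i in pred_ids])
```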
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 4.17 | 250 | 1.0379 | 0.7204 | 0.7829 | 0.7504 | 0.7941 |
| 1.4162 | 8.33 | 500 | 0.5642 | 0.8462 | 0.8772 | 0.8614 | 0.8820 |
| 1.4162 | 12.5 | 750 | 0.3836 | 0.9055 | 0.9184 | 0.9119 | 0.9206 |
| 0.3211 | 16.67 | 1000 | 0.3209 | 0.9139 | 0.9296 | 0.9217 | 0.9334 |
| 0.3211 | 20.83 | 1250 | 0.2962 | 0.9275 | 0.9386 | 0.9330 | 0.9435 |
| 0.1191 | 25.0 | 1500 | 0.2979 | 0.9254 | 0.9379 | 0.9316 | 0.9402 |
| 0.1191 | 29.17 | 1750 | 0.3079 | 0.9282 | 0.9386 | 0.9334 | 0.9355 |
| 0.059 | 33.33 | 2000 | 0.3039 | 0.9232 | 0.9364 | 0.9298 | 0.9325 |
| 0.059 | 37.5 | 2250 | 0.3254 | 0.9248 | 0.9386 | 0.9316 | 0.9355 |
| 0.0342 | 41.67 | 2500 | 0.3404 | 0.9246 | 0.9364 | 0.9305 | 0.9334 |
| 0.0342 | 45.83 | 2750 | 0.3386 | 0.9354 | 0.9431 | 0.9392 | 0.9355 |
| 0.0226 | 50.0 | 3000 | 0.3274 | 0.9354 | 0.9431 | 0.9392 | 0.9359 |
| 0.0226 | 54.17 | 3250 | 0.3282 | 0.9341 | 0.9446 | 0.9393 | 0.9393 |
| 0.017 | 58.33 | 3500 | 0.3475 | 0.9319 | 0.9424 | 0.9371 | 0.9363 |
| 0.017 | 62.5 | 3750 | 0.3367 | 0.9340 | 0.9431 | 0.9385 | 0.9372 |
| 0.0145 | 66.67 | 4000 | 0.3434 | 0.9325 | 0.9416 | 0.9371 | 0.9363 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ChaoLi/xlm-roberta-base-finetuned-panx-it
|
ChaoLi
| 2022-08-28T19:55:33Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-28T19:52:28Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8224755700325732
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2521
- F1: 0.8225
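As a quick reference, the checkpoint can be used directly with the generic `token-classification` pipeline; this is a minimal sketch and the Italian example sentence is purely illustrative.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ChaoLi/xlm-roberta-base-finetuned-panx-it",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Giuseppe Verdi è nato a Busseto, in Italia."))
```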
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8088 | 1.0 | 70 | 0.3423 | 0.7009 |
| 0.2844 | 2.0 | 140 | 0.2551 | 0.8027 |
| 0.1905 | 3.0 | 210 | 0.2521 | 0.8225 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ChaoLi/xlm-roberta-base-finetuned-panx-fr
|
ChaoLi
| 2022-08-28T19:52:12Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-28T19:47:35Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8325761399966348
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2978
- F1: 0.8326
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.574 | 1.0 | 191 | 0.3495 | 0.7889 |
| 0.2649 | 2.0 | 382 | 0.2994 | 0.8242 |
| 0.1716 | 3.0 | 573 | 0.2978 | 0.8326 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ChaoLi/xlm-roberta-base-finetuned-panx-de-fr
|
ChaoLi
| 2022-08-28T19:46:37Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-28T19:37:01Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1643
- F1: 0.8626
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2891 | 1.0 | 715 | 0.1780 | 0.8288 |
| 0.1472 | 2.0 | 1430 | 0.1633 | 0.8488 |
| 0.0948 | 3.0 | 2145 | 0.1643 | 0.8626 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
baudm/trba
|
baudm
| 2022-08-28T19:03:01Z | 0 | 0 | null |
[
"pytorch",
"image-to-text",
"en",
"license:apache-2.0",
"region:us"
] |
image-to-text
| 2022-08-28T19:01:11Z |
---
language:
- en
license: apache-2.0
tags:
- image-to-text
---
# TRBA v1.0
TRBA model pre-trained on various real [STR datasets](https://github.com/baudm/parseq/blob/main/Datasets.md) at image size 128x32.
Disclaimer: this model card was not written by the original authors.
## Model description
*TODO*
## Intended uses & limitations
You can use the model for STR on images containing Latin characters (62 case-sensitive alphanumeric + 32 punctuation marks).
### How to use
*TODO*
### BibTeX entry and citation info
```bibtex
@InProceedings{Baek_2019_ICCV,
author = {Baek, Jeonghun and Kim, Geewook and Lee, Junyeop and Park, Sungrae and Han, Dongyoon and Yun, Sangdoo and Oh, Seong Joon and Lee, Hwalsuk},
title = {What Is Wrong With Scene Text Recognition Model Comparisons? Dataset and Model Analysis},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {10},
year = {2019}
}
```
|
baudm/abinet-lv
|
baudm
| 2022-08-28T19:00:28Z | 0 | 0 | null |
[
"pytorch",
"image-to-text",
"en",
"license:apache-2.0",
"region:us"
] |
image-to-text
| 2022-08-28T18:55:28Z |
---
language:
- en
license: apache-2.0
tags:
- image-to-text
---
# ABINet-LV v1.0
ABINet model pre-trained on various real [STR datasets](https://github.com/baudm/parseq/blob/main/Datasets.md) at image size 128x32.
Disclaimer: this model card was not written by the original authors.
## Model description
*TODO*
## Intended uses & limitations
You can use the model for STR on images containing Latin characters (62 case-sensitive alphanumeric + 32 punctuation marks).
### How to use
*TODO*
### BibTeX entry and citation info
```bibtex
@InProceedings{Fang_2021_CVPR,
author = {Fang, Shancheng and Xie, Hongtao and Wang, Yuxin and Mao, Zhendong and Zhang, Yongdong},
title = {Read Like Humans: Autonomous, Bidirectional and Iterative Language Modeling for Scene Text Recognition},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {6},
year = {2021},
pages = {7098-7107}
}
```
|
baudm/vitstr-small-patch16-224
|
baudm
| 2022-08-28T18:53:19Z | 0 | 0 | null |
[
"pytorch",
"image-to-text",
"en",
"license:apache-2.0",
"region:us"
] |
image-to-text
| 2022-08-28T18:52:01Z |
---
language:
- en
license: apache-2.0
tags:
- image-to-text
---
# ViTSTR small v1.0
ViTSTR model pre-trained on various real [STR datasets](https://github.com/baudm/parseq/blob/main/Datasets.md) at image size 224x224 with a patch size of 16x16.
Disclaimer: this model card was not written by the original author.
## Model description
*TODO*
## Intended uses & limitations
You can use the model for STR on images containing Latin characters (62 case-sensitive alphanumeric + 32 punctuation marks).
### How to use
*TODO*
### BibTeX entry and citation info
```bibtex
@InProceedings{atienza2021vision,
title={Vision transformer for fast and efficient scene text recognition},
author={Atienza, Rowel},
booktitle={International Conference on Document Analysis and Recognition},
pages={319--334},
year={2021},
organization={Springer}
}
```
|
caffsean/t5-base-finetuned-keyword-to-text-generation
|
caffsean
| 2022-08-28T18:36:02Z | 11 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-27T23:29:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-base-finetuned-keyword-to-text-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-keyword-to-text-generation
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4643
- Rouge1: 2.1108
- Rouge2: 0.3331
- Rougel: 1.7368
- Rougelsum: 1.7391
- Gen Len: 16.591
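Since the card does not document the exact prompt format, the sketch below is only an assumption: it feeds a comma-separated keyword string to the generic `text2text-generation` pipeline.
```python
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="caffsean/t5-base-finetuned-keyword-to-text-generation",
)
# Hypothetical keyword-style prompt; adjust to whatever format was used during fine-tuning.
print(generator("keywords: mountain, sunrise, hike", max_length=64))
```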
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 375 | 3.4862 | 2.0718 | 0.326 | 1.7275 | 1.7308 | 16.7995 |
| 3.5928 | 2.0 | 750 | 3.4761 | 2.0829 | 0.3253 | 1.7192 | 1.7224 | 16.773 |
| 3.5551 | 3.0 | 1125 | 3.4701 | 2.1028 | 0.3272 | 1.7274 | 1.7296 | 16.6505 |
| 3.5225 | 4.0 | 1500 | 3.4671 | 2.11 | 0.3305 | 1.7343 | 1.7362 | 16.699 |
| 3.5225 | 5.0 | 1875 | 3.4653 | 2.1134 | 0.3319 | 1.7418 | 1.7437 | 16.5485 |
| 3.4987 | 6.0 | 2250 | 3.4643 | 2.1108 | 0.3331 | 1.7368 | 1.7391 | 16.591 |
| 3.4939 | 7.0 | 2625 | 3.4643 | 2.1108 | 0.3331 | 1.7368 | 1.7391 | 16.591 |
| 3.498 | 8.0 | 3000 | 3.4643 | 2.1108 | 0.3331 | 1.7368 | 1.7391 | 16.591 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
vikram71198/roberta-base-finetuned-irony
|
vikram71198
| 2022-08-28T18:19:31Z | 106 | 1 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"Irony Detection",
"Text Classification",
"tweet_eval",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-28T17:36:41Z |
---
license: apache-2.0
tags:
- Irony Detection
- Text Classification
- tweet_eval
#metrics:
#- accuracy
model-index:
- name: roberta-base-finetuned-irony
results: []
---
# roberta-base-finetuned-irony
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the Irony Dataset from [Tweet_Eval](https://huggingface.co/datasets/tweet_eval).
This is the classification report after training for 10 full epochs:
| | Precision | Recall | F-1 Score | Support |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| Not Irony (0) | 0.73 | 0.78| 0.75 | 473 |
| Irony (1) | 0.62 | 0.56 | 0.59 | 311 |
| accuracy | | | 0.69 | 784 |
| macro avg | 0.68 | 0.67 | 0.67 | 784 |
| weighted avg | 0.69 | 0.69 | 0.69 | 784 |
## Training and evaluation data
All of the process to train this model is available in [this](https://github.com/vikram71198/Transformers/tree/main/Irony%20Detection) repository. The dataset has been split into 2,862 examples for training, 955 for validation & 784 for testing.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- optimizer: default AdamW Optimizer
- num_epochs: 10
- warmup_steps: 500
- weight_decay: 0.01
- random seed: 42
I also trained for 10 full epochs on Colab's Tesla P100-PCIE-16GB GPU.
### Training results
| Epoch | Training Loss | Validation Loss |
|:-------------:|:----:|:---------------:|
| 1 | 0.691600 |0.6738196 |
| 2 | 0.621800 | 0.611911 |
| 3 | 0.510800 | 0.516174 |
| 4 | 0.384700 | 0.574607 |
| 5 | 0.273900 | 0.644613 |
| 6 | 0.162300 | 0.846262 |
| 7 | 0.119000 | 0.869178 |
| 8 | 0.079700 | 1.131574 |
| 9 | 0.035800 | 1.5123457 |
| 10 | 0.013600 |1.5706617 |
## Model in Action 🚀
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch.nn as nn
tokenizer = AutoTokenizer.from_pretrained("vikram71198/roberta-base-finetuned-irony")
model = AutoModelForSequenceClassification.from_pretrained("vikram71198/roberta-base-finetuned-irony")
#Following the same truncation & padding strategy used while training
encoded_input = tokenizer("Enter any text/tweet to be classified. Can input a list of tweets too.", padding = True, return_tensors='pt')
output = model(**encoded_input)["logits"]
#detaching the output from the computation graph
detached_output = output.detach()
#Applying softmax here for single label classification
softmax = nn.Softmax(dim = 1)
prediction_probabilities = list(softmax(detached_output).detach().numpy())
predictions = []
for x,y in prediction_probabilities:
predictions.append("not_irony") if x > y else predictions.append("irony")
print(predictions)
```
Please note that if you're performing inference on a lengthy dataset, split it into multiple batches; otherwise your memory may overflow unless you're using a really high-end GPU/TPU setup. I'd recommend a batch size of 50 if you're working with a vanilla GPU setup, as shown in the sketch below.
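Building on the snippet above, the batching advice can be followed with a plain loop. This is a minimal sketch: `tweets` is a hypothetical list of input strings, and `tokenizer`, `model` and `softmax` are the objects created earlier.
```python
batch_size = 50
predictions = []
for i in range(0, len(tweets), batch_size):
    batch = tweets[i:i + batch_size]
    encoded_batch = tokenizer(batch, padding=True, truncation=True, return_tensors="pt")
    # detach logits from the computation graph before converting to NumPy
    logits = model(**encoded_batch)["logits"].detach()
    for x, y in softmax(logits).numpy():
        predictions.append("not_irony" if x > y else "irony")
```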
### Framework versions
- Transformers 4.12.5
- Pytorch 1.11.0
- Datasets 1.17.0
- Tokenizers 0.10.3
|
silviacamplani/distilbert-finetuned-tapt-lm-music
|
silviacamplani
| 2022-08-28T16:28:36Z | 7 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-28T16:24:24Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert-finetuned-tapt-lm-music
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert-finetuned-tapt-lm-music
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -1000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
aware-ai/wav2vec2-xls-r-300m-english
|
aware-ai
| 2022-08-28T16:15:04Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_10_0",
"generated_from_trainer",
"de",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-08-26T12:31:54Z |
---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_10_0
- generated_from_trainer
model-index:
- name: wav2vec2-xls-r-300m-english
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-english
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_10_0 - DE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5577
- Wer: 0.3864
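For reference, a minimal sketch using the `automatic-speech-recognition` pipeline; `sample_de.wav` is a placeholder path to a 16 kHz German recording, and decoding a local audio file requires ffmpeg.
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="aware-ai/wav2vec2-xls-r-300m-english",
)
print(asr("sample_de.wav")["text"])  # placeholder German audio file
```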
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.317 | 1.0 | 7194 | 0.5577 | 0.3864 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.11.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
silviacamplani/distilbert-finetuned-dapt-lm-music
|
silviacamplani
| 2022-08-28T15:42:41Z | 65 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-28T11:31:06Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert-finetuned-dapt-lm-music
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert-finetuned-dapt-lm-music
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 32911, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
buddhist-nlp/mbart-buddhist-chinese-to-eng
|
buddhist-nlp
| 2022-08-28T15:27:25Z | 10 | 2 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"translation",
"zh",
"en",
"autotrain_compatible",
"region:us"
] |
translation
| 2022-08-28T10:39:38Z |
---
language:
- zh
- en
tags:
- translation
widget:
- text: "如是我闻:一时,佛在舍卫国只树花林窟,与大比丘众千二百五十人俱。"
inference: false
---
This model is based on MBART and translates Buddhist Chinese to English. It is optimized for a sequence length of 300 (Buddhist Chinese input sequences shouldn't exceed 150 characters). This model uses "#" with a space before and after as a delimiter between sentences (in addition to the normal Chinese punctuation). Input should be converted to simplified Chinese before running. The model also doesn't handle short sequences very well; for best results, supply input sequences between 100 and 150 characters in length.
The model shows good performance on Sūtra texts and performs reasonably on Abhidharma and Yogācāra. However, it has the usual problems that NMT systems have with named entities (names of persons and places). It also tends to hallucinate at times, i.e. it can generate a translation that has no direct relationship with the input.
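A minimal usage sketch, assuming the standard seq2seq generation API; the exact tokenizer configuration (e.g. MBART language codes) may differ from what the authors used, and the example sentence is the one from the widget above.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("buddhist-nlp/mbart-buddhist-chinese-to-eng")
model = AutoModelForSeq2SeqLM.from_pretrained("buddhist-nlp/mbart-buddhist-chinese-to-eng")

text = "如是我闻:一时,佛在舍卫国只树花林窟,与大比丘众千二百五十人俱。"  # simplified Chinese input
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=300)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```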
|
huggingtweets/giorgiameloni
|
huggingtweets
| 2022-08-28T15:17:42Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-28T15:16:16Z |
---
language: en
thumbnail: http://www.huggingtweets.com/giorgiameloni/1661699858331/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1134047615354646528/KqlMwvCx_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Giorgia Meloni 🇮🇹 ن</div>
<div style="text-align: center; font-size: 14px;">@giorgiameloni</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Giorgia Meloni 🇮🇹 ن.
| Data | Giorgia Meloni 🇮🇹 ن |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 438 |
| Short tweets | 12 |
| Tweets kept | 2753 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/28rrt6ee/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @giorgiameloni's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2g0ixwv5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2g0ixwv5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/giorgiameloni')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
tanvirkhan/distilbert-base-uncased-finetuned-imdb
|
tanvirkhan
| 2022-08-28T14:59:47Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-28T11:50:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
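As a quick reference, the model can be queried with the `fill-mask` pipeline; this is a minimal sketch and the example sentence is illustrative.
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="tanvirkhan/distilbert-base-uncased-finetuned-imdb")
for prediction in fill_mask("This movie was an absolute [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```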
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4898 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
yirmibesogluz/t2t-ner-ade-balanced
|
yirmibesogluz
| 2022-08-28T12:59:14Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"adverse-drug-events",
"twitter",
"social-media-mining-for-health",
"SMM4H",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-28T12:30:48Z |
---
license: mit
language: en
tags:
- adverse-drug-events
- twitter
- social-media-mining-for-health
- SMM4H
widget:
- text: "ner ade: i'm so irritable when my vyvanse wears off"
example_title: "ADE"
- text: "ner ade: bout to have a kick ass summer then it's time to get serious fer school #vyvanse #geekmode"
example_title: "noADE"
---
## t2t-ner-ade-balanced
t2t-ner-ade-balanced is a text-to-text (**t2t**) adverse drug event (**ade**) extraction (NER) model trained with over- and undersampled (balanced) English tweets reporting adverse drug events. It is trained as part of BOUN-TABI system for the Social Media Mining for Health (SMM4H) 2022 shared task. The system description paper has been accepted for publication in *Proceedings of the Seventh Social Media Mining for Health (#SMM4H) Workshop and Shared Task* and will be available soon. The source code has been released on GitHub at [https://github.com/gokceuludogan/boun-tabi-smm4h22](https://github.com/gokceuludogan/boun-tabi-smm4h22).
The model utilizes the T5 model and its text-to-text formulation. The inputs are fed to the model with the task prefix "ner ade:", followed with a sentence/tweet. In turn, either the extracted adverse event span is returned, or "none".
## Requirements
```
sentencepiece
transformers
```
## Usage
```python
from transformers import pipeline, AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("yirmibesogluz/t2t-ner-ade-balanced")
model = AutoModelForSeq2SeqLM.from_pretrained("yirmibesogluz/t2t-ner-ade-balanced")
predictor = pipeline("text2text-generation", model=model, tokenizer=tokenizer)
predictor("ner ade: i'm so irritable when my vyvanse wears off")
```
## Citation
```bibtex
@inproceedings{uludogan-gokce-yirmibesoglu-zeynep-2022-boun-tabi-smm4h22,
title = "{BOUN}-{TABI}@{SMM4H}'22: Text-to-{T}ext {A}dverse {D}rug {E}vent {E}xtraction with {D}ata {B}alancing and {P}rompting",
author = "Uludo{\u{g}}an, G{\"{o}}k{\c{c}}e and Yirmibe{\c{s}}o{\u{g}}lu, Zeynep",
booktitle = "Proceedings of the Seventh Social Media Mining for Health ({\#}SMM4H) Workshop and Shared Task",
year = "2022",
}
```
|
Mcy/t5-small-finetuned-xsum
|
Mcy
| 2022-08-28T12:40:36Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-26T08:59:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 178 | 1.9530 | 9.1314 | 1.226 | 9.1213 | 9.1047 | 14.4473 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
huggingtweets/bmrf_alerts
|
huggingtweets
| 2022-08-28T11:57:30Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-25T15:42:06Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/947480106469023744/dxcygpaz_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Black Mesa Announcement System</div>
<div style="text-align: center; font-size: 14px;">@bmrf_alerts</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Black Mesa Announcement System.
| Data | Black Mesa Announcement System |
| --- | --- |
| Tweets downloaded | 3251 |
| Retweets | 0 |
| Short tweets | 2 |
| Tweets kept | 3249 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/c177htj1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bmrf_alerts's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/19dwnb8u) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/19dwnb8u/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bmrf_alerts')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Shivus/q-FrozenLake-v1-4x4-noSlippery
|
Shivus
| 2022-08-28T11:25:26Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-28T11:25:18Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Shivus/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
silviacamplani/distilbert-finetuned-ner-music
|
silviacamplani
| 2022-08-28T10:44:38Z | 4 | 1 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-28T10:40:37Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: silviacamplani/distilbert-finetuned-ner-music
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# silviacamplani/distilbert-finetuned-ner-music
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6767
- Validation Loss: 0.7802
- Train Precision: 0.5256
- Train Recall: 0.5824
- Train F1: 0.5525
- Train Accuracy: 0.8017
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 370, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 2.6671 | 2.0032 | 0.0 | 0.0 | 0.0 | 0.5482 | 0 |
| 1.7401 | 1.5194 | 0.1820 | 0.0693 | 0.1004 | 0.5902 | 1 |
| 1.3487 | 1.2627 | 0.2628 | 0.2952 | 0.2781 | 0.6766 | 2 |
| 1.1390 | 1.0990 | 0.4018 | 0.4527 | 0.4257 | 0.7181 | 3 |
| 0.9823 | 0.9837 | 0.4575 | 0.4887 | 0.4726 | 0.7311 | 4 |
| 0.8741 | 0.9022 | 0.5008 | 0.5338 | 0.5168 | 0.7544 | 5 |
| 0.7904 | 0.8449 | 0.5085 | 0.5626 | 0.5342 | 0.7776 | 6 |
| 0.7327 | 0.8097 | 0.5211 | 0.5779 | 0.5480 | 0.7917 | 7 |
| 0.7000 | 0.7872 | 0.5281 | 0.5842 | 0.5547 | 0.7975 | 8 |
| 0.6767 | 0.7802 | 0.5256 | 0.5824 | 0.5525 | 0.8017 | 9 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
flair/ner-german-large
|
flair
| 2022-08-28T09:08:06Z | 221,703 | 39 |
flair
|
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"dataset:conll2003",
"arxiv:2011.06993",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
tags:
- flair
- token-classification
- sequence-tagger-model
language: de
datasets:
- conll2003
widget:
- text: "George Washington ging nach Washington"
---
## German NER in Flair (large model)
This is the large 4-class NER model for German that ships with [Flair](https://github.com/flairNLP/flair/).
F1-Score: **92,31** (CoNLL-03 German revised)
Predicts 4 tags:
| **tag** | **meaning** |
|---------------------------------|-----------|
| PER | person name |
| LOC | location name |
| ORG | organization name |
| MISC | other name |
Based on document-level XLM-R embeddings and [FLERT](https://arxiv.org/pdf/2011.06993v1.pdf).
---
### Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("flair/ner-german-large")
# make example sentence
sentence = Sentence("George Washington ging nach Washington")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
This yields the following output:
```
Span [1,2]: "George Washington" [− Labels: PER (1.0)]
Span [5]: "Washington" [− Labels: LOC (1.0)]
```
So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington ging nach Washington*".
---
### Training: Script to train this model
The following Flair script was used to train this model:
```python
import torch
# 1. get the corpus
from flair.datasets import CONLL_03_GERMAN
corpus = CONLL_03_GERMAN()
# 2. what tag do we want to predict?
tag_type = 'ner'
# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
# 4. initialize fine-tuneable transformer embeddings WITH document context
from flair.embeddings import TransformerWordEmbeddings
embeddings = TransformerWordEmbeddings(
model='xlm-roberta-large',
layers="-1",
subtoken_pooling="first",
fine_tune=True,
use_context=True,
)
# 5. initialize bare-bones sequence tagger (no CRF, no RNN, no reprojection)
from flair.models import SequenceTagger
tagger = SequenceTagger(
hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type='ner',
use_crf=False,
use_rnn=False,
reproject_embeddings=False,
)
# 6. initialize trainer with AdamW optimizer
from flair.trainers import ModelTrainer
trainer = ModelTrainer(tagger, corpus, optimizer=torch.optim.AdamW)
# 7. run training with XLM parameters (20 epochs, small LR)
from torch.optim.lr_scheduler import OneCycleLR
trainer.train('resources/taggers/ner-german-large',
learning_rate=5.0e-6,
mini_batch_size=4,
mini_batch_chunk_size=1,
max_epochs=20,
scheduler=OneCycleLR,
embeddings_storage_mode='none',
weight_decay=0.,
)
```
---
### Cite
Please cite the following paper when using this model.
```
@misc{schweter2020flert,
title={FLERT: Document-Level Features for Named Entity Recognition},
author={Stefan Schweter and Alan Akbik},
year={2020},
eprint={2011.06993},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
---
### Issues?
The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
|
paola-md/recipe-lr1e05-wd0.02-bs32
|
paola-md
| 2022-08-28T08:41:28Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-28T08:13:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr1e05-wd0.02-bs32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr1e05-wd0.02-bs32
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2756
- Rmse: 0.5250
- Mse: 0.2756
- Mae: 0.4181
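Since the card reports regression metrics (RMSE/MSE/MAE), the sketch below assumes a single-output regression head; the recipe text and the interpretation of the returned score are assumptions, not documented behaviour.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "paola-md/recipe-lr1e05-wd0.02-bs32"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("Mix flour, sugar and butter, then bake for 20 minutes.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # assumes num_labels == 1 (regression)
print(score)
```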
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2769 | 1.0 | 623 | 0.2768 | 0.5261 | 0.2768 | 0.4281 |
| 0.2743 | 2.0 | 1246 | 0.2739 | 0.5234 | 0.2739 | 0.4152 |
| 0.2732 | 3.0 | 1869 | 0.2760 | 0.5253 | 0.2760 | 0.4229 |
| 0.2719 | 4.0 | 2492 | 0.2749 | 0.5243 | 0.2749 | 0.4041 |
| 0.271 | 5.0 | 3115 | 0.2761 | 0.5255 | 0.2761 | 0.4238 |
| 0.2699 | 6.0 | 3738 | 0.2756 | 0.5250 | 0.2756 | 0.4181 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
paola-md/recipe-lr1e05-wd0.1-bs32
|
paola-md
| 2022-08-28T08:13:25Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-28T07:45:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr1e05-wd0.1-bs32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr1e05-wd0.1-bs32
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2756
- Rmse: 0.5250
- Mse: 0.2756
- Mae: 0.4181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2769 | 1.0 | 623 | 0.2768 | 0.5261 | 0.2768 | 0.4281 |
| 0.2743 | 2.0 | 1246 | 0.2739 | 0.5234 | 0.2739 | 0.4152 |
| 0.2732 | 3.0 | 1869 | 0.2760 | 0.5253 | 0.2760 | 0.4229 |
| 0.2719 | 4.0 | 2492 | 0.2749 | 0.5243 | 0.2749 | 0.4041 |
| 0.271 | 5.0 | 3115 | 0.2761 | 0.5255 | 0.2761 | 0.4238 |
| 0.2699 | 6.0 | 3738 | 0.2756 | 0.5250 | 0.2756 | 0.4181 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
yoyoyo1118/xlm-roberta-base-finetuned-panx-de-fr
|
yoyoyo1118
| 2022-08-28T07:53:58Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-28T07:31:23Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1654
- F1: 0.8590
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2845 | 1.0 | 715 | 0.1831 | 0.8249 |
| 0.1449 | 2.0 | 1430 | 0.1643 | 0.8479 |
| 0.0929 | 3.0 | 2145 | 0.1654 | 0.8590 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Minds/rare-puppers
|
Minds
| 2022-08-28T06:54:12Z | 45 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-08-28T06:54:01Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8888888955116272
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
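For reference, a minimal sketch using the `image-classification` pipeline; `leaf.jpg` is a placeholder path to a local image.
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Minds/rare-puppers")
print(classifier("leaf.jpg"))  # placeholder image path
```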
## Example Images
#### fresh leaf of plant

#### plant diseases

|
paola-md/recipe-lr8e06-wd0.02-bs32
|
paola-md
| 2022-08-28T06:49:07Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-28T06:21:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr8e06-wd0.02-bs32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr8e06-wd0.02-bs32
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2752
- Rmse: 0.5246
- Mse: 0.2752
- Mae: 0.4184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2769 | 1.0 | 623 | 0.2773 | 0.5266 | 0.2773 | 0.4296 |
| 0.2745 | 2.0 | 1246 | 0.2739 | 0.5233 | 0.2739 | 0.4144 |
| 0.2733 | 3.0 | 1869 | 0.2752 | 0.5246 | 0.2752 | 0.4215 |
| 0.2722 | 4.0 | 2492 | 0.2744 | 0.5238 | 0.2744 | 0.4058 |
| 0.2714 | 5.0 | 3115 | 0.2758 | 0.5251 | 0.2758 | 0.4232 |
| 0.2705 | 6.0 | 3738 | 0.2752 | 0.5246 | 0.2752 | 0.4184 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
paola-md/recipe-lr8e06-wd0.1-bs32
|
paola-md
| 2022-08-28T06:21:06Z | 167 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-28T05:53:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr8e06-wd0.1-bs32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr8e06-wd0.1-bs32
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2752
- Rmse: 0.5246
- Mse: 0.2752
- Mae: 0.4184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2769 | 1.0 | 623 | 0.2773 | 0.5266 | 0.2773 | 0.4297 |
| 0.2745 | 2.0 | 1246 | 0.2739 | 0.5233 | 0.2739 | 0.4144 |
| 0.2733 | 3.0 | 1869 | 0.2752 | 0.5246 | 0.2752 | 0.4215 |
| 0.2722 | 4.0 | 2492 | 0.2744 | 0.5238 | 0.2744 | 0.4058 |
| 0.2714 | 5.0 | 3115 | 0.2758 | 0.5252 | 0.2758 | 0.4233 |
| 0.2705 | 6.0 | 3738 | 0.2752 | 0.5246 | 0.2752 | 0.4184 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
yoyoyo1118/xlm-roberta-base-finetuned-panx-de
|
yoyoyo1118
| 2022-08-28T06:05:49Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-28T05:45:44Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.863677639046538
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1343
- F1: 0.8637
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2578 | 1.0 | 525 | 0.1562 | 0.8273 |
| 0.1297 | 2.0 | 1050 | 0.1330 | 0.8474 |
| 0.0809 | 3.0 | 1575 | 0.1343 | 0.8637 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
paola-md/recipe-lr8e06-wd0.005-bs32
|
paola-md
| 2022-08-28T05:53:02Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-28T05:25:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr8e06-wd0.005-bs32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr8e06-wd0.005-bs32
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2752
- Rmse: 0.5246
- Mse: 0.2752
- Mae: 0.4184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2769 | 1.0 | 623 | 0.2773 | 0.5266 | 0.2773 | 0.4296 |
| 0.2745 | 2.0 | 1246 | 0.2739 | 0.5233 | 0.2739 | 0.4144 |
| 0.2733 | 3.0 | 1869 | 0.2752 | 0.5246 | 0.2752 | 0.4215 |
| 0.2722 | 4.0 | 2492 | 0.2744 | 0.5238 | 0.2744 | 0.4058 |
| 0.2714 | 5.0 | 3115 | 0.2758 | 0.5251 | 0.2758 | 0.4232 |
| 0.2705 | 6.0 | 3738 | 0.2752 | 0.5246 | 0.2752 | 0.4184 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
rebolforces/Reinforce-CartPole-v1-exp2
|
rebolforces
| 2022-08-28T05:35:42Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-28T05:35:26Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1-exp2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
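The course implementation itself is not reproduced on this card; as a generic, non-authoritative sketch of the kind of policy network a REINFORCE agent for CartPole-v1 typically uses (all layer sizes and names are assumptions):
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Policy(nn.Module):
    """Generic REINFORCE policy for CartPole-v1 (4 observations, 2 discrete actions)."""
    def __init__(self, state_size=4, action_size=2, hidden_size=16):
        super().__init__()
        self.fc1 = nn.Linear(state_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, action_size)

    def forward(self, state):
        return F.softmax(self.fc2(F.relu(self.fc1(state))), dim=1)

    def act(self, state):
        state = torch.as_tensor(state, dtype=torch.float32).unsqueeze(0)
        dist = torch.distributions.Categorical(self.forward(state))
        action = dist.sample()
        return action.item(), dist.log_prob(action)
```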
|
paola-md/recipe-lr8e06-wd0.01-bs32
|
paola-md
| 2022-08-28T05:25:05Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-28T04:57:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr8e06-wd0.01-bs32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr8e06-wd0.01-bs32
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2753
- Rmse: 0.5246
- Mse: 0.2753
- Mae: 0.4184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2769 | 1.0 | 623 | 0.2774 | 0.5266 | 0.2774 | 0.4296 |
| 0.2745 | 2.0 | 1246 | 0.2739 | 0.5233 | 0.2739 | 0.4145 |
| 0.2733 | 3.0 | 1869 | 0.2752 | 0.5246 | 0.2752 | 0.4215 |
| 0.2722 | 4.0 | 2492 | 0.2744 | 0.5238 | 0.2744 | 0.4058 |
| 0.2714 | 5.0 | 3115 | 0.2758 | 0.5251 | 0.2758 | 0.4232 |
| 0.2705 | 6.0 | 3738 | 0.2753 | 0.5246 | 0.2753 | 0.4184 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
rebolforces/Reinforce-CartPole-v1-exp1
|
rebolforces
| 2022-08-28T05:11:04Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-28T05:10:50Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1-exp1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 458.90 +/- 80.57
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
paola-md/recipe-lr1e05-wd0.1-bs8
|
paola-md
| 2022-08-28T03:18:14Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-28T02:53:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr1e05-wd0.1-bs8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr1e05-wd0.1-bs8
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2779
- Rmse: 0.5271
- Mse: 0.2779
- Mae: 0.4280
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2766 | 1.0 | 2490 | 0.2740 | 0.5235 | 0.2740 | 0.4175 |
| 0.2738 | 2.0 | 4980 | 0.2785 | 0.5277 | 0.2785 | 0.4296 |
| 0.2724 | 3.0 | 7470 | 0.2779 | 0.5271 | 0.2779 | 0.4280 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
anas-awadalla/distilroberta-base-task-specific-distilation-on-squad
|
anas-awadalla
| 2022-08-28T01:17:22Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-08-27T23:50:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilroberta-base-task-specific-distilation-on-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-task-specific-distilation-on-squad
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
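Pending fuller documentation, a minimal sketch with the question-answering pipeline (the question and context are invented):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="anas-awadalla/distilroberta-base-task-specific-distilation-on-squad",
)
result = qa(
    question="Which dataset was used for fine-tuning?",
    context="The distilled model was fine-tuned on the SQuAD dataset for two epochs.",
)
print(result["answer"], result["score"])
```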
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
paola-md/recipe-lr8e06-wd0.005-bs8
|
paola-md
| 2022-08-28T01:12:20Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-28T00:48:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr8e06-wd0.005-bs8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr8e06-wd0.005-bs8
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2782
- Rmse: 0.5274
- Mse: 0.2782
- Mae: 0.4298
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2766 | 1.0 | 2490 | 0.2739 | 0.5234 | 0.2739 | 0.4154 |
| 0.2739 | 2.0 | 4980 | 0.2768 | 0.5261 | 0.2768 | 0.4273 |
| 0.2725 | 3.0 | 7470 | 0.2782 | 0.5274 | 0.2782 | 0.4298 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
paola-md/recipe-lr8e06-wd0.01-bs8
|
paola-md
| 2022-08-28T00:47:15Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-28T00:22:58Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr8e06-wd0.01-bs8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr8e06-wd0.01-bs8
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2782
- Rmse: 0.5274
- Mse: 0.2782
- Mae: 0.4299
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2766 | 1.0 | 2490 | 0.2739 | 0.5234 | 0.2739 | 0.4152 |
| 0.2739 | 2.0 | 4980 | 0.2769 | 0.5262 | 0.2769 | 0.4274 |
| 0.2725 | 3.0 | 7470 | 0.2782 | 0.5274 | 0.2782 | 0.4299 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
paola-md/recipe-lr2e05-wd0.02-bs8
|
paola-md
| 2022-08-28T00:22:11Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-27T23:57:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr2e05-wd0.02-bs8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr2e05-wd0.02-bs8
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2767
- Rmse: 0.5260
- Mse: 0.2767
- Mae: 0.4245
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2771 | 1.0 | 2490 | 0.2746 | 0.5240 | 0.2746 | 0.4201 |
| 0.2739 | 2.0 | 4980 | 0.2810 | 0.5301 | 0.2810 | 0.4329 |
| 0.2723 | 3.0 | 7470 | 0.2767 | 0.5260 | 0.2767 | 0.4245 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
paola-md/recipe-lr2e05-wd0.1-bs8
|
paola-md
| 2022-08-27T23:57:08Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-27T23:32:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr2e05-wd0.1-bs8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr2e05-wd0.1-bs8
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2768
- Rmse: 0.5262
- Mse: 0.2768
- Mae: 0.4258
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.277 | 1.0 | 2490 | 0.2745 | 0.5239 | 0.2745 | 0.4180 |
| 0.2739 | 2.0 | 4980 | 0.2814 | 0.5304 | 0.2814 | 0.4321 |
| 0.2723 | 3.0 | 7470 | 0.2768 | 0.5262 | 0.2768 | 0.4258 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
caffsean/t5-small-finetuned-keyword-to-text-generation
|
caffsean
| 2022-08-27T23:15:01Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-27T20:39:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-finetuned-keyword-to-text-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-keyword-to-text-generation
This model is a fine-tuned version of [caffsean/t5-small-finetuned-keyword-to-text-generation](https://huggingface.co/caffsean/t5-small-finetuned-keyword-to-text-generation) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
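Pending fuller documentation, a hedged sketch with the text2text-generation pipeline; the comma-separated keyword format is an assumption, since the expected input format is not documented:
```python
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="caffsean/t5-small-finetuned-keyword-to-text-generation",
)
# Assumed input format: plain comma-separated keywords.
print(generator("coffee, morning, rain", max_length=32))
```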
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 188 | 3.8742 | 0.5567 | 0.0851 | 0.4968 | 0.4972 | 16.243 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
paola-md/recipe-lr2e05-wd0.01-bs8
|
paola-md
| 2022-08-27T23:07:05Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-27T22:42:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr2e05-wd0.01-bs8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr2e05-wd0.01-bs8
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2765
- Rmse: 0.5259
- Mse: 0.2765
- Mae: 0.4240
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2771 | 1.0 | 2490 | 0.2743 | 0.5237 | 0.2743 | 0.4175 |
| 0.2739 | 2.0 | 4980 | 0.2801 | 0.5292 | 0.2801 | 0.4307 |
| 0.2723 | 3.0 | 7470 | 0.2765 | 0.5259 | 0.2765 | 0.4240 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
paola-md/recipe-lr1e05-wd0.02-bs16
|
paola-md
| 2022-08-27T22:42:16Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-27T22:25:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr1e05-wd0.02-bs16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr1e05-wd0.02-bs16
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2793
- Rmse: 0.5285
- Mse: 0.2793
- Mae: 0.4342
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2767 | 1.0 | 1245 | 0.2744 | 0.5239 | 0.2744 | 0.4125 |
| 0.2739 | 2.0 | 2490 | 0.2757 | 0.5250 | 0.2757 | 0.4212 |
| 0.2727 | 3.0 | 3735 | 0.2793 | 0.5285 | 0.2793 | 0.4342 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
theojolliffe/bart-paraphrase-v4-e1-feedback
|
theojolliffe
| 2022-08-27T22:37:46Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-26T22:26:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-paraphrase-v4-e1-feedback
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-paraphrase-v4-e1-feedback
This model is a fine-tuned version of [theojolliffe/bart-paraphrase-v4-e1](https://huggingface.co/theojolliffe/bart-paraphrase-v4-e1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 27 | 3.9313 | 67.6687 | 57.1881 | 66.7507 | 66.2643 | 20.0 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0
- Datasets 1.18.0
- Tokenizers 0.10.3
|
paola-md/recipe-lr1e05-wd0.1-bs16
|
paola-md
| 2022-08-27T22:24:30Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-27T22:07:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr1e05-wd0.1-bs16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr1e05-wd0.1-bs16
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2794
- Rmse: 0.5286
- Mse: 0.2794
- Mae: 0.4343
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2767 | 1.0 | 1245 | 0.2744 | 0.5239 | 0.2744 | 0.4124 |
| 0.2739 | 2.0 | 2490 | 0.2757 | 0.5250 | 0.2757 | 0.4211 |
| 0.2727 | 3.0 | 3735 | 0.2794 | 0.5286 | 0.2794 | 0.4343 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
paola-md/recipe-lr8e06-wd0.02-bs16
|
paola-md
| 2022-08-27T21:31:07Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-27T21:13:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr8e06-wd0.02-bs16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr8e06-wd0.02-bs16
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2795
- Rmse: 0.5287
- Mse: 0.2795
- Mae: 0.4342
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2767 | 1.0 | 1245 | 0.2745 | 0.5239 | 0.2745 | 0.4140 |
| 0.2741 | 2.0 | 2490 | 0.2760 | 0.5254 | 0.2760 | 0.4222 |
| 0.2729 | 3.0 | 3735 | 0.2795 | 0.5287 | 0.2795 | 0.4342 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Bahushruth/distilbert-base-uncased-distilled-clinc
|
Bahushruth
| 2022-08-27T21:15:24Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-27T20:55:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
model-index:
- name: distilbert-base-uncased-distilled-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
paola-md/recipe-lr8e06-wd0.1-bs16
|
paola-md
| 2022-08-27T21:13:06Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-27T20:55:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr8e06-wd0.1-bs16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr8e06-wd0.1-bs16
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2795
- Rmse: 0.5287
- Mse: 0.2795
- Mae: 0.4342
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2767 | 1.0 | 1245 | 0.2745 | 0.5239 | 0.2745 | 0.4140 |
| 0.2741 | 2.0 | 2490 | 0.2760 | 0.5253 | 0.2760 | 0.4222 |
| 0.2729 | 3.0 | 3735 | 0.2795 | 0.5287 | 0.2795 | 0.4342 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
paola-md/recipe-lr8e06-wd0.005-bs16
|
paola-md
| 2022-08-27T20:55:19Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-27T20:38:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr8e06-wd0.005-bs16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr8e06-wd0.005-bs16
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2794
- Rmse: 0.5286
- Mse: 0.2794
- Mae: 0.4342
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2767 | 1.0 | 1245 | 0.2745 | 0.5239 | 0.2745 | 0.4140 |
| 0.2741 | 2.0 | 2490 | 0.2760 | 0.5253 | 0.2760 | 0.4222 |
| 0.2729 | 3.0 | 3735 | 0.2794 | 0.5286 | 0.2794 | 0.4342 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
paola-md/recipe-lr2e05-wd0.02-bs16
|
paola-md
| 2022-08-27T20:19:45Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-27T20:02:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr2e05-wd0.02-bs16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr2e05-wd0.02-bs16
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2780
- Rmse: 0.5272
- Mse: 0.2780
- Mae: 0.4313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.277 | 1.0 | 1245 | 0.2743 | 0.5237 | 0.2743 | 0.4111 |
| 0.2738 | 2.0 | 2490 | 0.2814 | 0.5305 | 0.2814 | 0.4294 |
| 0.2725 | 3.0 | 3735 | 0.2780 | 0.5272 | 0.2780 | 0.4313 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
paola-md/recipe-lr2e05-wd0.1-bs16
|
paola-md
| 2022-08-27T20:01:59Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-27T19:44:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr2e05-wd0.1-bs16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr2e05-wd0.1-bs16
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2783
- Rmse: 0.5275
- Mse: 0.2783
- Mae: 0.4319
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2771 | 1.0 | 1245 | 0.2744 | 0.5238 | 0.2744 | 0.4105 |
| 0.2738 | 2.0 | 2490 | 0.2819 | 0.5309 | 0.2819 | 0.4298 |
| 0.2724 | 3.0 | 3735 | 0.2783 | 0.5275 | 0.2783 | 0.4319 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
paola-md/recipe-lr2e05-wd0.01-bs16
|
paola-md
| 2022-08-27T19:26:37Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-27T19:08:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr2e05-wd0.01-bs16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr2e05-wd0.01-bs16
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2792
- Rmse: 0.5284
- Mse: 0.2792
- Mae: 0.4332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2768 | 1.0 | 1245 | 0.2747 | 0.5241 | 0.2747 | 0.4081 |
| 0.2737 | 2.0 | 2490 | 0.2793 | 0.5285 | 0.2793 | 0.4288 |
| 0.2722 | 3.0 | 3735 | 0.2792 | 0.5284 | 0.2792 | 0.4332 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
BigSalmon/Infill2
|
BigSalmon
| 2022-08-27T19:24:38Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-27T19:08:51Z |
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/Infill2")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/Infill2")
```
```
Demo:
https://huggingface.co/spaces/BigSalmon/FormalInformalConciseWordy
```
```
prompt = """few sights are as [blank] new york city as the colorful, flashing signage of its bodegas [sep]"""
input_ids = tokenizer.encode(prompt, return_tensors='pt')
outputs = model.generate(input_ids=input_ids,
max_length=10 + len(prompt),
temperature=1.0,
top_k=50,
top_p=0.95,
do_sample=True,
num_return_sequences=5,
early_stopping=True)
for i in range(5):
print(tokenizer.decode(outputs[i]))
```
Most likely outputs (Disclaimer: I highly recommend using this over just generating):
```
prompt = """few sights are as [blank] new york city as the colorful, flashing signage of its bodegas [sep]"""
text = tokenizer.encode(prompt)
myinput, past_key_values = torch.tensor([text]), None
myinput = myinput
myinput= myinput.to(device)
logits, past_key_values = model(myinput, past_key_values = past_key_values, return_dict=False)
logits = logits[0,-1]
probabilities = torch.nn.functional.softmax(logits)
best_logits, best_indices = logits.topk(250)
best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
text.append(best_indices[0].item())
best_probabilities = probabilities[best_indices].tolist()
words = []
print(best_words)
```
Infill / Infilling / Masking / Phrase Masking
```
his contention [blank] by the evidence [sep] was refuted [answer]
***
few sights are as [blank] new york city as the colorful, flashing signage of its bodegas [sep] synonymous with [answer]
***
when rick won the lottery, all of his distant relatives [blank] his winnings [sep] clamored for [answer]
***
the library’s quiet atmosphere encourages visitors to [blank] in their work [sep] immerse themselves [answer]
***
```
```
original: Other film stars to have appeared in Scrubs include Heather Graham, while Friends actor Matthew Perry has guest-starred and directed an episode of the [MASK] star, who recently played the title role in historical blockbuster Alexander, will make a cameo appearance as an unruly Irishman. Its leading star, Zach Braff, has recently [MASK] the big screen in Garden State, which he also directed. Farrell is pencilled in to [MASK] of Crockett in a film version of 1980s police [MASK] Farrell's appearance is said to be a result of his friendship with Zach Braff, who stars in the programme.
infill: Other film stars to have appeared in Scrubs include Heather Graham, while Friends actor Matthew Perry has guest-starred and directed an episode of the show. The film star, who recently played the title role in historical blockbuster Alexander, will make a cameo appearance as an unruly Irishman. Its leading star, Zach Braff, has recently been seen on the big screen in Garden State, which he also directed. Farrell is pencilled in to play the role of Crockett in a film version of 1980s police drama Miami Vice. Farrell's appearance is said to be a result of his friendship with Zach Braff, who stars in the programme.
```
|
Bahushruth/distilbert-base-uncased-finetuned-clinc
|
Bahushruth
| 2022-08-27T19:19:43Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-27T18:37:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9174193548387096
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7711
- Accuracy: 0.9174
## Model description
More information needed
## Intended uses & limitations
More information needed
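Pending fuller documentation, a minimal intent-classification sketch (the utterance is invented; the label set comes from the clinc_oos `plus` configuration):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Bahushruth/distilbert-base-uncased-finetuned-clinc",
)
print(classifier("Please transfer 100 dollars from checking to savings."))
```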
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2892 | 1.0 | 318 | 3.2830 | 0.7426 |
| 2.627 | 2.0 | 636 | 1.8728 | 0.8410 |
| 1.5429 | 3.0 | 954 | 1.1555 | 0.8913 |
| 1.0089 | 4.0 | 1272 | 0.8530 | 0.9126 |
| 0.7939 | 5.0 | 1590 | 0.7711 | 0.9174 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ChaoLi/nlp_for_transformer_book_distilbert-base-uncased-finetuned-emotion
|
ChaoLi
| 2022-08-27T19:17:37Z | 103 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-27T19:01:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: nlp_for_transformer_book_distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9242101664142519
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nlp_for_transformer_book_distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2189
- Accuracy: 0.9245
- F1: 0.9242
## Model description
More information needed
## Intended uses & limitations
More information needed
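Pending fuller documentation, a minimal sketch with the text-classification pipeline (the example sentence is invented; the labels are the six classes of the emotion dataset):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ChaoLi/nlp_for_transformer_book_distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't believe how wonderful this day turned out!"))
```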
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8191 | 1.0 | 250 | 0.3159 | 0.9065 | 0.9046 |
| 0.2411 | 2.0 | 500 | 0.2189 | 0.9245 | 0.9242 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
danieladejumo/Pong-PLE-v0
|
danieladejumo
| 2022-08-27T18:24:35Z | 0 | 0 | null |
[
"Pong-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-27T18:24:26Z |
---
tags:
- Pong-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pong-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pong-PLE-v0
type: Pong-PLE-v0
metrics:
- type: mean_reward
value: -16.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pong-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pong-PLE-v0**.
To learn to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
wannaphong/khanomtan-tts-v1.1
|
wannaphong
| 2022-08-27T16:41:51Z | 10 | 3 |
transformers
|
[
"transformers",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-08-26T15:17:07Z |
---
license: apache-2.0
---
# KhanomTan TTS v1.1
KhanomTan TTS (ขนมตาล) is an open-source Thai text-to-speech model that supports multiple languages and speakers, including Thai and English.
KhanomTan TTS v1.1 is a multilingual YourTTS model with Thai support. It was trained with code from 🐸 Coqui-TTS on the Thai speech corpora TSync 1* and TSync 2* together with the [mbarnig/lb-de-fr-en-pt-12800-TTS-CORPUS](https://huggingface.co/datasets/mbarnig/lb-de-fr-en-pt-12800-TTS-CORPUS) dataset. Voices with licensing problems (any voice not released under CC-0 or another public license) were removed from the model, so the model is licensed under apache-2.0.
## Speakers
- Linda (English, female, [LJSpeech](https://keithito.com/LJ-Speech-Dataset/))
- Bernard (fr-fr, male, [m-ailabs](https://www.caito.de/2019/01/03/the-m-ailabs-speech-dataset/))
- Kerstin (x-de, female, [Rhasspy](https://github.com/rhasspy/dataset-voice-kerstin))
- Thorsten (x-de, male, [Thorsten](https://www.thorsten-voice.de/))
## Language
- th-th: Thai
- en: English
- fr-fr: French
- pt-br: Portuguese
- x-de: German
- x-lb: Luxembourgish
*Note: These are not the complete corpora; only the publicly accessible portions were used.
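Inference code is not shown on this card; a rough sketch with the 🐸 Coqui-TTS Python API, assuming the model checkpoint and config have been downloaded from this repository (the file names below are placeholders):
```python
from TTS.api import TTS

# Placeholder paths: download the checkpoint and config files from this repository first.
tts = TTS(model_path="best_model.pth", config_path="config.json")
tts.tts_to_file(
    text="สวัสดีครับ",
    speaker="Linda",    # one of the speakers listed above
    language="th-th",   # one of the language tags listed above
    file_path="output.wav",
)
```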
|
espnet/americasnlp22-asr-tav
|
espnet
| 2022-08-27T16:12:23Z | 4 | 0 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"tav",
"dataset:americasnlp22",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-06-06T19:08:34Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: tav
datasets:
- americasnlp22
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/americasnlp22-asr-tav`
This model was trained by Pavel Denisov using the americasnlp22 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 66ca5df9f08b6084dbde4d9f312fa8ba0a47ecfc
pip install -e .
cd egs2/americasnlp22/asr1
./run.sh \
--skip_data_prep false \
--skip_train true \
--download_model espnet/americasnlp22-asr-tav \
--lang tav \
--local_data_opts "--lang tav" \
--train_set train_tav \
--valid_set dev_tav \
--test_sets dev_tav \
--gpu_inference false \
--inference_nj 8 \
--lm_train_text data/train_tav/text \
--bpe_train_text data/train_tav/text
```
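Alternatively (not part of the recipe above), ESPnet2 ASR models on the Hub can usually be loaded for inference from Python via `espnet_model_zoo`; a hedged sketch, assuming a 16 kHz mono input file:
```python
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

# Requires `pip install espnet espnet_model_zoo`; downloads the model from the Hub.
speech2text = Speech2Text.from_pretrained("espnet/americasnlp22-asr-tav")
speech, rate = sf.read("utterance.wav")  # expected: 16 kHz, single channel
text, tokens, token_ids, hyp = speech2text(speech)[0]
print(text)
```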
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sun Jun 5 02:36:59 CEST 2022`
- python version: `3.9.13 (main, May 18 2022, 00:00:00) [GCC 11.3.1 20220421 (Red Hat 11.3.1-2)]`
- espnet version: `espnet 202204`
- pytorch version: `pytorch 1.11.0+cu115`
- Git hash: `d55704daa36d3dd2ca24ae3162ac40d81957208c`
- Commit date: `Wed Jun 1 02:33:09 2022 +0200`
## asr_train_asr_transformer_raw_tav_bpe100_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.cer_ctc.best/dev_tav|250|1201|3.0|83.1|13.9|17.0|114.0|99.6|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.cer_ctc.best/dev_tav|250|8606|57.5|19.9|22.7|12.0|54.5|99.6|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.cer_ctc.best/dev_tav|250|6741|49.2|28.5|22.3|12.6|63.4|99.6|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_transformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_transformer_raw_tav_bpe100_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 15
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- cer_ctc
- min
keep_nbest_models: 1
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream.model.feature_extractor
- frontend.upstream.model.encoder.layers.0
- frontend.upstream.model.encoder.layers.1
- frontend.upstream.model.encoder.layers.2
- frontend.upstream.model.encoder.layers.3
- frontend.upstream.model.encoder.layers.4
- frontend.upstream.model.encoder.layers.5
- frontend.upstream.model.encoder.layers.6
- frontend.upstream.model.encoder.layers.7
- frontend.upstream.model.encoder.layers.8
- frontend.upstream.model.encoder.layers.9
- frontend.upstream.model.encoder.layers.10
- frontend.upstream.model.encoder.layers.11
- frontend.upstream.model.encoder.layers.12
- frontend.upstream.model.encoder.layers.13
- frontend.upstream.model.encoder.layers.14
- frontend.upstream.model.encoder.layers.15
- frontend.upstream.model.encoder.layers.16
- frontend.upstream.model.encoder.layers.17
- frontend.upstream.model.encoder.layers.18
- frontend.upstream.model.encoder.layers.19
- frontend.upstream.model.encoder.layers.20
- frontend.upstream.model.encoder.layers.21
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 200000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_tav_bpe100_sp/train/speech_shape
- exp/asr_stats_raw_tav_bpe100_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_tav_bpe100_sp/valid/speech_shape
- exp/asr_stats_raw_tav_bpe100_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_tav_sp/wav.scp
- speech
- sound
- - dump/raw/train_tav_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev_tav/wav.scp
- speech
- sound
- - dump/raw/dev_tav/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adamw
optim_conf:
lr: 0.0001
scheduler: warmuplr
scheduler_conf:
warmup_steps: 300
token_list:
- <blank>
- <unk>
- ▁
- a
- ''''
- i
- h
- o
- e
- u
- U
- do
- ':'
- li
- na
- sa
- ▁ti
- n
- k
- ','
- '~'
- p
- ye
- le
- ka
- ta
- pe
- ▁ni
- ti
- ▁ihi
- ▁ma
- ▁~
- 'no'
- ya
- s
- ▁wa
- aye
- t
- .
- y
- m
- g
- d
- r
- ã
- '"'
- õ
- (
- )
- l
- '!'
- c
- '0'
- I
- '['
- ']'
- '2'
- '-'
- ç
- M
- '6'
- f
- A
- D
- '?'
- J
- j
- Y
- z
- Õ
- K
- '`'
- Ã
- O
- N
- F
- C
- '1'
- S
- P
- L
- T
- G
- v
- ñ
- b
- H
- E
- '3'
- '4'
- '5'
- '7'
- B
- W
- é
- ó
- ́
- w
- í
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
use_preprocessor: true
token_type: bpe
bpemodel: data/tav_token_list/bpe_unigram100/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: wav2vec2_url
upstream_ckpt: https://dl.fbaipublicfiles.com/fairseq/wav2vec/xlsr2_300m.pt
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: null
specaug_conf: {}
normalize: utterance_mvn
normalize_conf: {}
model: espnet
model_conf:
ctc_weight: 1.0
lsm_weight: 0.0
length_normalized_loss: false
extract_feats_in_collect_stats: false
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 80
encoder: transformer
encoder_conf:
input_layer: conv2d2
num_blocks: 1
linear_units: 2048
dropout_rate: 0.2
output_size: 256
attention_heads: 8
attention_dropout_rate: 0.2
postencoder: null
postencoder_conf: {}
decoder: rnn
decoder_conf: {}
required:
- output_dir
- token_list
version: '202204'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
espnet/americasnlp22-asr-quy
|
espnet
| 2022-08-27T16:07:06Z | 1 | 0 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"quy",
"dataset:americasnlp22",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-06-13T17:12:18Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: quy
datasets:
- americasnlp22
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/americasnlp22-asr-quy`
This model was trained by Pavel Denisov using the americasnlp22 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout fc62b1ce3e50c5ef8a2ac8cedb0d92ac41df54ca
pip install -e .
cd egs2/americasnlp22/asr1
./run.sh \
--skip_data_prep false \
--skip_train true \
--download_model espnet/americasnlp22-asr-quy \
--lang quy \
--local_data_opts "--lang quy" \
--train_set train_quy \
--valid_set dev_quy \
--test_sets dev_quy \
--gpu_inference false \
--inference_nj 8 \
--lm_train_text data/train_quy/text \
--bpe_train_text data/train_quy/text
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sun Jun 5 04:51:42 CEST 2022`
- python version: `3.9.13 (main, May 18 2022, 00:00:00) [GCC 11.3.1 20220421 (Red Hat 11.3.1-2)]`
- espnet version: `espnet 202204`
- pytorch version: `pytorch 1.11.0+cu115`
- Git hash: `d55704daa36d3dd2ca24ae3162ac40d81957208c`
- Commit date: `Wed Jun 1 02:33:09 2022 +0200`
## asr_train_asr_transformer_raw_quy_bpe100_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.cer_ctc.best/dev_quy|250|11465|18.7|67.0|14.3|4.3|85.6|100.0|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.cer_ctc.best/dev_quy|250|95334|78.6|8.0|13.4|10.1|31.5|100.0|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.cer_ctc.best/dev_quy|250|51740|64.7|18.6|16.7|9.7|45.0|100.0|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_transformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_transformer_raw_quy_bpe100_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 15
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- cer_ctc
- min
keep_nbest_models: 1
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream.model.feature_extractor
- frontend.upstream.model.encoder.layers.0
- frontend.upstream.model.encoder.layers.1
- frontend.upstream.model.encoder.layers.2
- frontend.upstream.model.encoder.layers.3
- frontend.upstream.model.encoder.layers.4
- frontend.upstream.model.encoder.layers.5
- frontend.upstream.model.encoder.layers.6
- frontend.upstream.model.encoder.layers.7
- frontend.upstream.model.encoder.layers.8
- frontend.upstream.model.encoder.layers.9
- frontend.upstream.model.encoder.layers.10
- frontend.upstream.model.encoder.layers.11
- frontend.upstream.model.encoder.layers.12
- frontend.upstream.model.encoder.layers.13
- frontend.upstream.model.encoder.layers.14
- frontend.upstream.model.encoder.layers.15
- frontend.upstream.model.encoder.layers.16
- frontend.upstream.model.encoder.layers.17
- frontend.upstream.model.encoder.layers.18
- frontend.upstream.model.encoder.layers.19
- frontend.upstream.model.encoder.layers.20
- frontend.upstream.model.encoder.layers.21
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 200000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_quy_bpe100_sp/train/speech_shape
- exp/asr_stats_raw_quy_bpe100_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_quy_bpe100_sp/valid/speech_shape
- exp/asr_stats_raw_quy_bpe100_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_quy_sp/wav.scp
- speech
- sound
- - dump/raw/train_quy_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev_quy/wav.scp
- speech
- sound
- - dump/raw/dev_quy/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adamw
optim_conf:
lr: 0.0001
scheduler: warmuplr
scheduler_conf:
warmup_steps: 300
token_list:
- <blank>
- <unk>
- ▁
- a
- n
- y
- u
- qa
- s
- ta
- q
- ri
- ku
- i
- kuna
- r
- m
- e
- cha
- pi
- pa
- o
- lla
- na
- ▁kay
- ▁ka
- ▁chay
- c
- chu
- ki
- ▁wa
- ña
- w
- ▁pa
- ra
- si
- man
- pas
- sqa
- l
- tu
- nku
- ▁ma
- yku
- taq
- ▁a
- ▁ima
- d
- ti
- chi
- manta
- ya
- ka
- mi
- h
- p
- wan
- nchik
- ll
- chkan
- spa
- ▁ha
- ▁ni
- pu
- yta
- chik
- mun
- ni
- paq
- sun
- ▁mana
- ▁wi
- k
- ▁allin
- ▁ancha
- ▁hina
- rí
- ▁punchaw
- ▁yacha
- ▁llaqta
- ñ
- ynin
- ▁rima
- b
- ▁huk
- skan
- ''''
- g
- j
- z
- á
- ó
- í
- ú
- f
- v
- t
- x
- é
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
use_preprocessor: true
token_type: bpe
bpemodel: data/quy_token_list/bpe_unigram100/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: wav2vec2_url
upstream_ckpt: https://dl.fbaipublicfiles.com/fairseq/wav2vec/xlsr2_300m.pt
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: null
specaug_conf: {}
normalize: utterance_mvn
normalize_conf: {}
model: espnet
model_conf:
ctc_weight: 1.0
lsm_weight: 0.0
length_normalized_loss: false
extract_feats_in_collect_stats: false
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 80
encoder: transformer
encoder_conf:
input_layer: conv2d2
num_blocks: 1
linear_units: 2048
dropout_rate: 0.2
output_size: 256
attention_heads: 8
attention_dropout_rate: 0.2
postencoder: null
postencoder_conf: {}
decoder: rnn
decoder_conf: {}
required:
- output_dir
- token_list
version: '202204'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
danieladejumo/Reinforce-Pixelcopter-PLE-v0
|
danieladejumo
| 2022-08-27T16:05:55Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-27T16:05:49Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 9.30 +/- 8.66
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
muhtasham/tajroberto-ner
|
muhtasham
| 2022-08-27T15:37:05Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:wikiann",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-27T15:27:16Z |
---
tags:
- generated_from_trainer
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: tajroberto-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann
type: wikiann
config: tg
split: train+test
args: tg
metrics:
- name: Precision
type: precision
value: 0.3155080213903743
- name: Recall
type: recall
value: 0.5673076923076923
- name: F1
type: f1
value: 0.4054982817869416
- name: Accuracy
type: accuracy
value: 0.83597621407334
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tajroberto-ner
This model is a fine-tuned version of [muhtasham/RoBERTa-tg](https://huggingface.co/muhtasham/RoBERTa-tg) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9408
- Precision: 0.3155
- Recall: 0.5673
- F1: 0.4055
- Accuracy: 0.8360
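As a rough usage sketch (not part of the original card), the checkpoint can be loaded with the standard `transformers` token-classification pipeline; the example sentence is a placeholder:
```python
from transformers import pipeline

# Group word pieces into entity spans (WikiANN uses PER/ORG/LOC labels)
ner = pipeline(
    "token-classification",
    model="muhtasham/tajroberto-ner",
    aggregation_strategy="simple",
)
print(ner("Душанбе пойтахти Тоҷикистон аст."))
```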
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 2.0 | 50 | 0.7710 | 0.0532 | 0.1827 | 0.0824 | 0.6933 |
| No log | 4.0 | 100 | 0.5901 | 0.0847 | 0.25 | 0.1265 | 0.7825 |
| No log | 6.0 | 150 | 0.5226 | 0.2087 | 0.4615 | 0.2874 | 0.8186 |
| No log | 8.0 | 200 | 0.5041 | 0.2585 | 0.5096 | 0.3430 | 0.8449 |
| No log | 10.0 | 250 | 0.5592 | 0.2819 | 0.5096 | 0.3630 | 0.8499 |
| No log | 12.0 | 300 | 0.5725 | 0.3032 | 0.5481 | 0.3904 | 0.8558 |
| No log | 14.0 | 350 | 0.6433 | 0.3122 | 0.5673 | 0.4027 | 0.8508 |
| No log | 16.0 | 400 | 0.6744 | 0.3543 | 0.5962 | 0.4444 | 0.8553 |
| No log | 18.0 | 450 | 0.7617 | 0.3353 | 0.5577 | 0.4188 | 0.8335 |
| 0.2508 | 20.0 | 500 | 0.7608 | 0.3262 | 0.5865 | 0.4192 | 0.8419 |
| 0.2508 | 22.0 | 550 | 0.8483 | 0.3224 | 0.5673 | 0.4111 | 0.8494 |
| 0.2508 | 24.0 | 600 | 0.8370 | 0.3275 | 0.5385 | 0.4073 | 0.8439 |
| 0.2508 | 26.0 | 650 | 0.8652 | 0.3410 | 0.5673 | 0.4260 | 0.8394 |
| 0.2508 | 28.0 | 700 | 0.9441 | 0.3409 | 0.5769 | 0.4286 | 0.8216 |
| 0.2508 | 30.0 | 750 | 0.9228 | 0.3333 | 0.5577 | 0.4173 | 0.8439 |
| 0.2508 | 32.0 | 800 | 0.9175 | 0.3430 | 0.5673 | 0.4275 | 0.8355 |
| 0.2508 | 34.0 | 850 | 0.9603 | 0.3073 | 0.5288 | 0.3887 | 0.8340 |
| 0.2508 | 36.0 | 900 | 0.9417 | 0.3240 | 0.5577 | 0.4099 | 0.8370 |
| 0.2508 | 38.0 | 950 | 0.9408 | 0.3155 | 0.5673 | 0.4055 | 0.8360 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
fractalego/conversation-qa
|
fractalego
| 2022-08-27T14:25:41Z | 35 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"doi:10.57967/hf/0010",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-21T10:26:36Z |
# Conversational QA
This framework is trained on the [CoQA dataset](https://stanfordnlp.github.io/coqa/).
# Install
pip install conversation-qa
# Example
```python
from conversation_qa import QA, Dialogue
qa = QA("fractalego/conversation-qa")
dialogue = Dialogue()
dialogue.add_dialogue_pair("Where was the cat?", "The fence.")
text = "A white cat is on the fence."
query = "What color is it?"
qa.get_answer(text, dialogue.get_text(), query)
```
|
theojolliffe/T5-model-1-d-4
|
theojolliffe
| 2022-08-27T14:20:07Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-26T21:54:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: T5-model-1-d-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5-model-1-d-4
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0456
- Rouge1: 93.3486
- Rouge2: 82.1873
- Rougel: 92.8611
- Rougelsum: 92.7768
- Gen Len: 14.9953
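Because the training data is not documented here, the snippet below is only a generic loading sketch for this `t5-base` fine-tune; the input string is a placeholder and the expected input format is unknown:
```python
from transformers import pipeline

# Generic seq2seq inference; replace the input with text in the format the model was trained on
generator = pipeline("text2text-generation", model="theojolliffe/T5-model-1-d-4")
print(generator("Replace this with an input in the model's training format."))
```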
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.0873 | 1.0 | 8043 | 0.0456 | 93.3486 | 82.1873 | 92.8611 | 92.7768 | 14.9953 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Tokenizers 0.12.1
|
nrazavi/xlm-roberta-base-finetuned-panx-all
|
nrazavi
| 2022-08-27T14:19:11Z | 126 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-27T14:01:42Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1727
- F1: 0.8560
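A hedged loading sketch (not from the original card) using the generic `transformers` auto classes; the example sentence is a placeholder:
```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_id = "nrazavi/xlm-roberta-base-finetuned-panx-all"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

inputs = tokenizer("Angela Merkel besuchte Paris.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class of each subword to its label name from the model config
labels = [model.config.id2label[i] for i in logits.argmax(-1)[0].tolist()]
print(list(zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), labels)))
```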
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3057 | 1.0 | 835 | 0.1901 | 0.8135 |
| 0.1565 | 2.0 | 1670 | 0.1727 | 0.8436 |
| 0.1021 | 3.0 | 2505 | 0.1727 | 0.8560 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1
|
UKI001/ddpm-butterflies-128
|
UKI001
| 2022-08-27T14:10:15Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-08-27T13:35:30Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# Minimal sampling sketch (not in the original card); assumes a recent diffusers release
from diffusers import DDPMPipeline
pipe = DDPMPipeline.from_pretrained("UKI001/ddpm-butterflies-128")
pipe().images[0].save("butterfly.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/UKI001/ddpm-butterflies-128/tensorboard?#scalars)
|
danieladejumo/Reinforce-CartPole-v1
|
danieladejumo
| 2022-08-27T14:05:13Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-27T14:03:47Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 83.20 +/- 44.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
huggingtweets/nickelodeon-nickjr-sesamestreet
|
huggingtweets
| 2022-08-27T13:55:01Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-27T13:54:55Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1326222819248791552/u6HtLEIV_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1478805340212838413/YAJM_fei_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1516077327981109259/Z4JJ2Pey_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Sesame Street & Nick Jr. & Nickelodeon</div>
<div style="text-align: center; font-size: 14px;">@nickelodeon-nickjr-sesamestreet</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Sesame Street & Nick Jr. & Nickelodeon.
| Data | Sesame Street | Nick Jr. | Nickelodeon |
| --- | --- | --- | --- |
| Tweets downloaded | 3250 | 3250 | 3250 |
| Retweets | 746 | 51 | 54 |
| Short tweets | 41 | 754 | 658 |
| Tweets kept | 2463 | 2445 | 2538 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2en4utsq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nickelodeon-nickjr-sesamestreet's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/6x3fqezt) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/6x3fqezt/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/nickelodeon-nickjr-sesamestreet')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Shamus/mBART_skr-en_longerrun
|
Shamus
| 2022-08-27T11:28:03Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-27T07:38:38Z |
---
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mBART_skr-en_longerrun
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBART_skr-en_longerrun
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4577
- Bleu: 30.8071
- Gen Len: 34.548
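A rough loading sketch (not part of the original card); whether source/target language codes must be set on the tokenizer is not documented here, so none are forced, and the input sentence (presumably Saraiki, given the `skr` code) is a placeholder:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Shamus/mBART_skr-en_longerrun"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder source sentence; decode the generated English hypothesis
inputs = tokenizer("Replace this with a source-language sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```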
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.5444 | 0.72 | 500 | 1.3416 | 28.7505 | 34.228 |
| 0.8576 | 1.45 | 1000 | 1.3411 | 30.1776 | 34.328 |
| 0.6422 | 2.18 | 1500 | 1.3882 | 30.2815 | 34.164 |
| 0.532 | 2.9 | 2000 | 1.3716 | 30.8947 | 34.556 |
| 0.4473 | 3.63 | 2500 | 1.4577 | 30.8071 | 34.548 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
rangacharysrinivasan/electra-small-discriminator-finetuned-squad
|
rangacharysrinivasan
| 2022-08-27T10:31:10Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-08-26T08:33:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: electra-small-discriminator-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-small-discriminator-finetuned-squad
This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1658
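A minimal usage sketch (not part of the original card) with the `transformers` question-answering pipeline; question and context are placeholders:
```python
from transformers import pipeline

# Extractive QA: the answer is a span copied out of the supplied context
qa = pipeline(
    "question-answering",
    model="rangacharysrinivasan/electra-small-discriminator-finetuned-squad",
)
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
print(result["answer"], result["score"])
```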
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.3825 | 1.0 | 5533 | 1.2656 |
| 1.1783 | 2.0 | 11066 | 1.1815 |
| 1.0474 | 3.0 | 16599 | 1.1658 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
pinot/wav2vec2-large-xls-r-300m-ja-colab-3
|
pinot
| 2022-08-27T06:14:51Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_10_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-08-26T23:39:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_10_0
model-index:
- name: wav2vec2-large-xls-r-300m-ja-colab-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-ja-colab-3
This model is a fine-tuned version of [pinot/wav2vec2-large-xls-r-300m-ja-colab-2](https://huggingface.co/pinot/wav2vec2-large-xls-r-300m-ja-colab-2) on the common_voice_10_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2696
- Wer: 0.2299
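A minimal inference sketch (not part of the original card); it assumes 16 kHz audio and uses a placeholder file path:
```python
from transformers import pipeline

# CTC decoding of Japanese speech; ffmpeg is needed for on-the-fly audio decoding
asr = pipeline(
    "automatic-speech-recognition",
    model="pinot/wav2vec2-large-xls-r-300m-ja-colab-3",
)
print(asr("japanese_sample.wav"))
```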
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 637 | 1.4666 | 0.2862 |
| No log | 2.0 | 1274 | 1.4405 | 0.2866 |
| No log | 3.0 | 1911 | 1.4162 | 0.2762 |
| No log | 4.0 | 2548 | 1.4128 | 0.2709 |
| 0.2814 | 5.0 | 3185 | 1.3927 | 0.2613 |
| 0.2814 | 6.0 | 3822 | 1.3629 | 0.2536 |
| 0.2814 | 7.0 | 4459 | 1.3349 | 0.2429 |
| 0.2814 | 8.0 | 5096 | 1.3116 | 0.2356 |
| 0.1624 | 9.0 | 5733 | 1.2774 | 0.2307 |
| 0.1624 | 10.0 | 6370 | 1.2696 | 0.2299 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.10.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
bnsh/ddpm-butterflies-128
|
bnsh
| 2022-08-27T05:56:30Z | 5 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-08-27T04:43:24Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# Minimal sampling sketch (not in the original card); assumes a recent diffusers release
from diffusers import DDPMPipeline
pipe = DDPMPipeline.from_pretrained("bnsh/ddpm-butterflies-128")
pipe().images[0].save("butterfly.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/bnsh/ddpm-butterflies-128/tensorboard?#scalars)
|
JNK789/distilbert-base-uncased-finetuned-emotion
|
JNK789
| 2022-08-27T03:55:45Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-31T18:53:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9305
- name: F1
type: f1
value: 0.9307950942842982
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1712
- Accuracy: 0.9305
- F1: 0.9308
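A minimal usage sketch (not part of the original card); the exact label names depend on the exported config, and the input sentence is a placeholder:
```python
from transformers import pipeline

# Single-label emotion classification over short English texts
classifier = pipeline(
    "text-classification",
    model="JNK789/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I'm thrilled that this worked on the first try!"))
```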
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7721 | 1.0 | 250 | 0.2778 | 0.9145 | 0.9131 |
| 0.2103 | 2.0 | 500 | 0.1818 | 0.925 | 0.9249 |
| 0.1446 | 3.0 | 750 | 0.1712 | 0.9305 | 0.9308 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
huggingtweets/noagencynewyork
|
huggingtweets
| 2022-08-27T03:15:02Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-27T03:03:46Z |
---
language: en
thumbnail: http://www.huggingtweets.com/noagencynewyork/1661570097601/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1486361303165947905/nUHbxq9z_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">No Agency New York</div>
<div style="text-align: center; font-size: 14px;">@noagencynewyork</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from No Agency New York.
| Data | No Agency New York |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 396 |
| Short tweets | 709 |
| Tweets kept | 2141 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2loewb7b/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @noagencynewyork's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/32oryfuk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/32oryfuk/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/noagencynewyork')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
mindofmadness/faces01
|
mindofmadness
| 2022-08-27T02:11:32Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-08-27T02:08:30Z |
short narrow face, mid size lips, light freckles on upper cheeks, light grey eyes, brunette hair, nerd glasses
|
caffsean/distilbert-base-uncased-finetuned-for-tweet-sentiment
|
caffsean
| 2022-08-27T02:07:47Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-27T01:57:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-for-tweet-sentiment
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9249379397708433
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-for-tweet-sentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2161
- Accuracy: 0.925
- F1: 0.9249
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3561 | 1.0 | 250 | 0.3072 | 0.9115 | 0.9098 |
| 0.2195 | 2.0 | 500 | 0.2161 | 0.925 | 0.9249 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
caffsean/distilbert-base-uncased-finetuned-emotion
|
caffsean
| 2022-08-27T01:27:28Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-27T00:35:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9225
- name: F1
type: f1
value: 0.9223304536402763
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2111
- Accuracy: 0.9225
- F1: 0.9223
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8274 | 1.0 | 250 | 0.3054 | 0.912 | 0.9096 |
| 0.2409 | 2.0 | 500 | 0.2111 | 0.9225 | 0.9223 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
moyix/csrc_774m
|
moyix
| 2022-08-26T23:42:27Z | 9 | 6 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"programming",
"causal-lm",
"code",
"license:cc0-1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: code
thumbnail: https://doesnotexist.codes/messlab.png
tags:
- programming
- gpt2
- causal-lm
license: cc0-1.0
---
# GPT-CSRC
This is a GPT2 774M model trained on the C/C++ code of the top 10,000 most popular packages in Debian, according to the [Debian Popularity Contest](https://popcon.debian.org/). The source files were deduplicated using a process similar to the OpenWebText preprocessing (basically a locality-sensitive hash to detect near-duplicates). The model was originally trained using [NVIDIA's Megatron-LM](https://github.com/nvidia/Megatron-LM) but has been converted to Huggingface. Note that the tokenizer is *not* the standard GPT2 BPE vocab, but one that has been trained for this dataset; the tokenizer is also available from this repository.
The processed dataset (in JSON format) can be found here: [csrc\_dataset\_large.json.gz](https://moyix.net/~moyix/csrc_dataset_large.json.gz).
This model was used to generate snippets for the web site [This Code Does Not Exist](https://doesnotexist.codes/).
# Usage
```
>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> model = AutoModelForCausalLM.from_pretrained("moyix/csrc_774m")
>>> device = torch.device("cuda")
>>> model.to(device)
>>> tokenizer = AutoTokenizer.from_pretrained("moyix/csrc_774m")
>>> prompt = tokenizer.encode('// say hello\nvoid hello() {', return_tensors="pt")
>>> output = model.generate(input_ids=prompt.to(device), max_length=32, num_return_sequences=1, do_sample=True, num_beams=4)
>>> print(tokenizer.decode(output[0].tolist(),clean_up_tokenization_spaces=True))
// say hello
void hello() {
std::cout << "hello" << std::endl;
}
int main() {
```
|
nrazavi/xlm-roberta-base-finetuned-panx-de
|
nrazavi
| 2022-08-26T22:31:10Z | 128 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-26T22:12:51Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8609504366564591
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1359
- F1: 0.8610
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2594 | 1.0 | 525 | 0.1734 | 0.8095 |
| 0.1305 | 2.0 | 1050 | 0.1414 | 0.8462 |
| 0.0818 | 3.0 | 1575 | 0.1359 | 0.8610 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 2.4.0
- Tokenizers 0.10.3
|
hhffxx/xlm-roberta-base-finetuned-panx-en
|
hhffxx
| 2022-08-26T20:52:39Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-26T20:08:33Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.en
split: train
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.6307099614749588
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7589
- F1: 0.6307
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.9453 | 1.0 | 1180 | 0.7589 | 0.6307 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.11.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
daviddaubner/ppo-LunarLander-v2
|
daviddaubner
| 2022-08-26T20:32:54Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-26T20:32:26Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 196.29 +/- 79.87
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the zip filename inside the repo is an assumption, following the usual `<algo>-<env>.zip` convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the archive name is an assumption
checkpoint = load_from_hub("daviddaubner/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
# model.predict(obs, deterministic=True) can now be used in a LunarLander-v2 rollout loop
```
|