| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
| jonfreak/dbvvPinata | jonfreak | 2022-11-11T14:29:38Z | 0 | 3 | null | ["license:creativeml-openrail-m", "region:us"] | null | 2022-11-11T13:14:26Z |
---
license: creativeml-openrail-m
---
# Pinata DreamBooth model for Stable Diffusion
Trained on 30 creatures for 2,000 steps with TheLastBen's fast-stable-diffusion (https://github.com/TheLastBen/fast-stable-diffusion).
Use the token **dbvvpinata** in your prompts.
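A minimal generation sketch with 🤗 Diffusers (hedged: it assumes the weights are published in diffusers format under this repo id, which the card does not confirm):
```python
from diffusers import StableDiffusionPipeline

# Hedged sketch, not an official snippet; assumes diffusers-format weights.
pipe = StableDiffusionPipeline.from_pretrained("jonfreak/dbvvPinata")
pipe = pipe.to("cuda")  # optional, if a GPU is available

image = pipe("a photo of dbvvpinata, soft lighting").images[0]
image.save("pinata.png")
```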


| Terence3927/Reinforce-CartPole-v1 | Terence3927 | 2022-11-11T13:54:07Z | 0 | 0 | null | ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us"] | reinforcement-learning | 2022-11-11T13:49:22Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 92.90 +/- 34.69
      name: mean_reward
      verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
| epec254/my-awesome-setfit-model | epec254 | 2022-11-11T13:48:01Z | 1 | 0 | sentence-transformers | ["sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us"] | sentence-similarity | 2022-11-11T13:47:55Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# epec254/my-awesome-setfit-model
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('epec254/my-awesome-setfit-model')
embeddings = model.encode(sentences)
print(embeddings)
```
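Since the model was trained with a cosine-similarity objective (see Training below), a natural follow-up is scoring sentence pairs; a small sketch:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('epec254/my-awesome-setfit-model')
emb = model.encode(["This is an example sentence", "Each sentence is converted"], convert_to_tensor=True)
print(util.cos_sim(emb[0], emb[1]))  # cosine similarity of the two embeddings
```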
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('epec254/my-awesome-setfit-model')
model = AutoModel.from_pretrained('epec254/my-awesome-setfit-model')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=epec254/my-awesome-setfit-model)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 40 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the `fit()` method:
```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": 40,
    "warmup_steps": 4,
    "weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
| Marre-Barre/bubblydubbly | Marre-Barre | 2022-11-11T13:23:07Z | 0 | 7 | null | ["region:us"] | null | 2022-11-10T12:51:24Z |
Prompt: {{replace this with subject}}, soft colors, art by bubblydubbly
Negative prompt: heavy contrast, red eyes, blue hair (keep "blue hair" in the negative prompt if you want normal hair)
Guidance scale: 8, steps: 50
"art by bubblydubbly" is the trigger keyword.
bubblydubbly_7k is trained for 7,000 steps; honestly I like the designs from 11.5k steps more, but the style is better at 7k. Pick one and decide for yourself. :)
Note: the main version is trained for 11,500 steps on 115 images, but it seems a bit overtrained and does some funky stuff with teeth. I will make a new version at 10k steps to see whether there is any difference while still retaining the style.
| internetoftim/gpt2-finetuned-wikitext2 | internetoftim | 2022-11-11T12:40:27Z | 196 | 0 | transformers | ["transformers", "pytorch", "optimum_graphcore", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-11-11T12:16:29Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-finetuned-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
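A minimal text-generation sketch with 🤗 Transformers (hedged: a generic pipeline call, not the Graphcore/IPU setup used for training; note the reported eval loss is nan, so the checkpoint may not generate sensible text):
```python
from transformers import pipeline

# Hedged sketch; repo id from this card.
generator = pipeline("text-generation", model="internetoftim/gpt2-finetuned-wikitext2")
print(generator("The history of the valley", max_new_tokens=40)[0]["generated_text"])
```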
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- total_eval_batch_size: 18
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- training precision: Mixed Precision
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| nan | 1.0 | 291 | nan |
| nan | 2.0 | 582 | nan |
| nan | 3.0 | 873 | nan |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.10.0+cpu
- Datasets 2.6.1
- Tokenizers 0.13.2
| bigmorning/whisper_havest_0035 | bigmorning | 2022-11-11T11:30:44Z | 60 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-11-11T11:30:35Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: whisper_havest_0035
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_havest_0035
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.5897
- Train Accuracy: 0.0150
- Train Do Wer: 1.0
- Validation Loss: 4.5822
- Validation Accuracy: 0.0130
- Validation Do Wer: 1.0
- Epoch: 34
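A hedged inference sketch for this TensorFlow checkpoint (this is an early training snapshot, so output quality will be poor; the dummy audio below is a placeholder to keep the example self-contained):
```python
import numpy as np
from transformers import WhisperProcessor, TFWhisperForConditionalGeneration

# Hedged sketch; repo id from this card.
processor = WhisperProcessor.from_pretrained("bigmorning/whisper_havest_0035")
model = TFWhisperForConditionalGeneration.from_pretrained("bigmorning/whisper_havest_0035")

audio = np.zeros(16000, dtype=np.float32)  # replace with a real 16 kHz waveform
inputs = processor(audio, sampling_rate=16000, return_tensors="tf")
ids = model.generate(inputs.input_features)
print(processor.batch_decode(ids, skip_special_tokens=True))
```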
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Do Wer | Validation Loss | Validation Accuracy | Validation Do Wer | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 9.9191 | 0.0046 | 1.0 | 8.5836 | 0.0067 | 1.0 | 0 |
| 8.0709 | 0.0083 | 1.0 | 7.4667 | 0.0089 | 1.0 | 1 |
| 7.1652 | 0.0100 | 1.0 | 6.8204 | 0.0112 | 1.0 | 2 |
| 6.7196 | 0.0114 | 1.0 | 6.5192 | 0.0114 | 1.0 | 3 |
| 6.4115 | 0.0115 | 1.0 | 6.2357 | 0.0115 | 1.0 | 4 |
| 6.1085 | 0.0115 | 1.0 | 5.9657 | 0.0115 | 1.0 | 5 |
| 5.8206 | 0.0115 | 1.0 | 5.7162 | 0.0115 | 1.0 | 6 |
| 5.5567 | 0.0115 | 1.0 | 5.4963 | 0.0115 | 1.0 | 7 |
| 5.3223 | 0.0116 | 1.0 | 5.3096 | 0.0116 | 1.0 | 8 |
| 5.1222 | 0.0117 | 1.0 | 5.1600 | 0.0117 | 1.0 | 9 |
| 4.9580 | 0.0117 | 1.0 | 5.0391 | 0.0118 | 1.0 | 10 |
| 4.8251 | 0.0119 | 1.0 | 4.9427 | 0.0118 | 1.0 | 11 |
| 4.7171 | 0.0119 | 1.0 | 4.8691 | 0.0119 | 1.0 | 12 |
| 4.6284 | 0.0121 | 1.0 | 4.8123 | 0.0120 | 1.0 | 13 |
| 4.5508 | 0.0121 | 1.0 | 4.7620 | 0.0121 | 1.0 | 14 |
| 4.4855 | 0.0123 | 1.0 | 4.7260 | 0.0121 | 1.0 | 15 |
| 4.4305 | 0.0124 | 1.0 | 4.7018 | 0.0123 | 1.0 | 16 |
| 4.3788 | 0.0125 | 1.0 | 4.6738 | 0.0123 | 1.0 | 17 |
| 4.3305 | 0.0127 | 1.0 | 4.6525 | 0.0124 | 1.0 | 18 |
| 4.2860 | 0.0128 | 1.0 | 4.6401 | 0.0125 | 1.0 | 19 |
| 4.2451 | 0.0130 | 1.0 | 4.6234 | 0.0126 | 1.0 | 20 |
| 4.1994 | 0.0132 | 1.0 | 4.6077 | 0.0128 | 1.0 | 21 |
| 4.1521 | 0.0133 | 1.0 | 4.6098 | 0.0129 | 1.0 | 22 |
| 4.1148 | 0.0134 | 1.0 | 4.5919 | 0.0129 | 1.0 | 23 |
| 4.0701 | 0.0135 | 1.0 | 4.6038 | 0.0128 | 1.0 | 24 |
| 4.0199 | 0.0137 | 1.0 | 4.5777 | 0.0130 | 1.0 | 25 |
| 3.9631 | 0.0138 | 1.0 | 4.5734 | 0.0131 | 1.0 | 26 |
| 3.9175 | 0.0140 | 1.0 | 4.5866 | 0.0129 | 1.0 | 27 |
| 3.8690 | 0.0142 | 1.0 | 4.5900 | 0.0129 | 1.0 | 28 |
| 3.8276 | 0.0143 | 1.0 | 4.5602 | 0.0131 | 1.0 | 29 |
| 3.7499 | 0.0145 | 1.0 | 4.5619 | 0.0132 | 1.0 | 30 |
| 3.6968 | 0.0147 | 1.0 | 4.6203 | 0.0133 | 1.0 | 31 |
| 3.6714 | 0.0149 | 1.0 | 4.7075 | 0.0133 | 1.0 | 32 |
| 3.6318 | 0.0149 | 1.0 | 4.6638 | 0.0125 | 1.0 | 33 |
| 3.5897 | 0.0150 | 1.0 | 4.5822 | 0.0130 | 1.0 | 34 |
### Framework versions
- Transformers 4.25.0.dev0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
| bigmorning/whisper_havest_0025 | bigmorning | 2022-11-11T11:28:40Z | 67 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-11-09T19:36:22Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: whisper_havest_0025
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_havest_0025
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.0701
- Train Accuracy: 0.0135
- Train Do Wer: 1.0
- Validation Loss: 4.6038
- Validation Accuracy: 0.0128
- Validation Do Wer: 1.0
- Epoch: 24
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Do Wer | Validation Loss | Validation Accuracy | Validation Do Wer | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 9.9191 | 0.0046 | 1.0 | 8.5836 | 0.0067 | 1.0 | 0 |
| 8.0709 | 0.0083 | 1.0 | 7.4667 | 0.0089 | 1.0 | 1 |
| 7.1652 | 0.0100 | 1.0 | 6.8204 | 0.0112 | 1.0 | 2 |
| 6.7196 | 0.0114 | 1.0 | 6.5192 | 0.0114 | 1.0 | 3 |
| 6.4115 | 0.0115 | 1.0 | 6.2357 | 0.0115 | 1.0 | 4 |
| 6.1085 | 0.0115 | 1.0 | 5.9657 | 0.0115 | 1.0 | 5 |
| 5.8206 | 0.0115 | 1.0 | 5.7162 | 0.0115 | 1.0 | 6 |
| 5.5567 | 0.0115 | 1.0 | 5.4963 | 0.0115 | 1.0 | 7 |
| 5.3223 | 0.0116 | 1.0 | 5.3096 | 0.0116 | 1.0 | 8 |
| 5.1222 | 0.0117 | 1.0 | 5.1600 | 0.0117 | 1.0 | 9 |
| 4.9580 | 0.0117 | 1.0 | 5.0391 | 0.0118 | 1.0 | 10 |
| 4.8251 | 0.0119 | 1.0 | 4.9427 | 0.0118 | 1.0 | 11 |
| 4.7171 | 0.0119 | 1.0 | 4.8691 | 0.0119 | 1.0 | 12 |
| 4.6284 | 0.0121 | 1.0 | 4.8123 | 0.0120 | 1.0 | 13 |
| 4.5508 | 0.0121 | 1.0 | 4.7620 | 0.0121 | 1.0 | 14 |
| 4.4855 | 0.0123 | 1.0 | 4.7260 | 0.0121 | 1.0 | 15 |
| 4.4305 | 0.0124 | 1.0 | 4.7018 | 0.0123 | 1.0 | 16 |
| 4.3788 | 0.0125 | 1.0 | 4.6738 | 0.0123 | 1.0 | 17 |
| 4.3305 | 0.0127 | 1.0 | 4.6525 | 0.0124 | 1.0 | 18 |
| 4.2860 | 0.0128 | 1.0 | 4.6401 | 0.0125 | 1.0 | 19 |
| 4.2451 | 0.0130 | 1.0 | 4.6234 | 0.0126 | 1.0 | 20 |
| 4.1994 | 0.0132 | 1.0 | 4.6077 | 0.0128 | 1.0 | 21 |
| 4.1521 | 0.0133 | 1.0 | 4.6098 | 0.0129 | 1.0 | 22 |
| 4.1148 | 0.0134 | 1.0 | 4.5919 | 0.0129 | 1.0 | 23 |
| 4.0701 | 0.0135 | 1.0 | 4.6038 | 0.0128 | 1.0 | 24 |
### Framework versions
- Transformers 4.25.0.dev0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
| bigmorning/whisper_havest_0015 | bigmorning | 2022-11-11T11:26:27Z | 62 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-11-09T18:52:58Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: whisper_havest_0015
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_havest_0015
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.5508
- Train Accuracy: 0.0121
- Train Do Wer: 1.0
- Validation Loss: 4.7620
- Validation Accuracy: 0.0121
- Validation Do Wer: 1.0
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Do Wer | Validation Loss | Validation Accuracy | Validation Do Wer | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 9.9191 | 0.0046 | 1.0 | 8.5836 | 0.0067 | 1.0 | 0 |
| 8.0709 | 0.0083 | 1.0 | 7.4667 | 0.0089 | 1.0 | 1 |
| 7.1652 | 0.0100 | 1.0 | 6.8204 | 0.0112 | 1.0 | 2 |
| 6.7196 | 0.0114 | 1.0 | 6.5192 | 0.0114 | 1.0 | 3 |
| 6.4115 | 0.0115 | 1.0 | 6.2357 | 0.0115 | 1.0 | 4 |
| 6.1085 | 0.0115 | 1.0 | 5.9657 | 0.0115 | 1.0 | 5 |
| 5.8206 | 0.0115 | 1.0 | 5.7162 | 0.0115 | 1.0 | 6 |
| 5.5567 | 0.0115 | 1.0 | 5.4963 | 0.0115 | 1.0 | 7 |
| 5.3223 | 0.0116 | 1.0 | 5.3096 | 0.0116 | 1.0 | 8 |
| 5.1222 | 0.0117 | 1.0 | 5.1600 | 0.0117 | 1.0 | 9 |
| 4.9580 | 0.0117 | 1.0 | 5.0391 | 0.0118 | 1.0 | 10 |
| 4.8251 | 0.0119 | 1.0 | 4.9427 | 0.0118 | 1.0 | 11 |
| 4.7171 | 0.0119 | 1.0 | 4.8691 | 0.0119 | 1.0 | 12 |
| 4.6284 | 0.0121 | 1.0 | 4.8123 | 0.0120 | 1.0 | 13 |
| 4.5508 | 0.0121 | 1.0 | 4.7620 | 0.0121 | 1.0 | 14 |
### Framework versions
- Transformers 4.25.0.dev0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
| bigmorning/whisper_havest_0010 | bigmorning | 2022-11-11T11:25:21Z | 59 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-11-09T18:31:05Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: whisper_havest_0010
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_havest_0010
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.1222
- Train Accuracy: 0.0117
- Train Do Wer: 1.0
- Validation Loss: 5.1600
- Validation Accuracy: 0.0117
- Validation Do Wer: 1.0
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Do Wer | Validation Loss | Validation Accuracy | Validation Do Wer | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 9.9191 | 0.0046 | 1.0 | 8.5836 | 0.0067 | 1.0 | 0 |
| 8.0709 | 0.0083 | 1.0 | 7.4667 | 0.0089 | 1.0 | 1 |
| 7.1652 | 0.0100 | 1.0 | 6.8204 | 0.0112 | 1.0 | 2 |
| 6.7196 | 0.0114 | 1.0 | 6.5192 | 0.0114 | 1.0 | 3 |
| 6.4115 | 0.0115 | 1.0 | 6.2357 | 0.0115 | 1.0 | 4 |
| 6.1085 | 0.0115 | 1.0 | 5.9657 | 0.0115 | 1.0 | 5 |
| 5.8206 | 0.0115 | 1.0 | 5.7162 | 0.0115 | 1.0 | 6 |
| 5.5567 | 0.0115 | 1.0 | 5.4963 | 0.0115 | 1.0 | 7 |
| 5.3223 | 0.0116 | 1.0 | 5.3096 | 0.0116 | 1.0 | 8 |
| 5.1222 | 0.0117 | 1.0 | 5.1600 | 0.0117 | 1.0 | 9 |
### Framework versions
- Transformers 4.25.0.dev0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
| bigmorning/whisper_havest_0005 | bigmorning | 2022-11-11T11:24:14Z | 59 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-11-09T18:09:11Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: whisper_havest_0005
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_havest_0005
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 6.4115
- Train Accuracy: 0.0115
- Train Do Wer: 1.0
- Validation Loss: 6.2357
- Validation Accuracy: 0.0115
- Validation Do Wer: 1.0
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Do Wer | Validation Loss | Validation Accuracy | Validation Do Wer | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 9.9191 | 0.0046 | 1.0 | 8.5836 | 0.0067 | 1.0 | 0 |
| 8.0709 | 0.0083 | 1.0 | 7.4667 | 0.0089 | 1.0 | 1 |
| 7.1652 | 0.0100 | 1.0 | 6.8204 | 0.0112 | 1.0 | 2 |
| 6.7196 | 0.0114 | 1.0 | 6.5192 | 0.0114 | 1.0 | 3 |
| 6.4115 | 0.0115 | 1.0 | 6.2357 | 0.0115 | 1.0 | 4 |
### Framework versions
- Transformers 4.25.0.dev0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
| shamim237/en-it-model | shamim237 | 2022-11-11T11:19:07Z | 60 | 0 | transformers | ["transformers", "tf", "marian", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-11-11T11:17:21Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: en-it-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# en-it-model
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-it](https://huggingface.co/Helsinki-NLP/opus-mt-en-it) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1962
- Train Acc: 0.4204
- Validation Loss: 0.2883
- Validation Acc: 0.4046
- Epoch: 8
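A minimal usage sketch (hedged: the repo holds TensorFlow weights, hence `framework="tf"`; the example sentence is illustrative):
```python
from transformers import pipeline

# Hedged sketch; repo id from this card.
translator = pipeline("translation", model="shamim237/en-it-model", framework="tf")
print(translator("The weather is lovely today.")[0]["translation_text"])
```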
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Acc | Validation Loss | Validation Acc | Epoch |
|:----------:|:---------:|:---------------:|:--------------:|:-----:|
| 0.3407 | 0.4068 | 0.2950 | 0.4038 | 0 |
| 0.2827 | 0.4128 | 0.2846 | 0.4052 | 1 |
| 0.2563 | 0.4147 | 0.2787 | 0.4054 | 2 |
| 0.2389 | 0.4166 | 0.2777 | 0.4056 | 3 |
| 0.2262 | 0.4185 | 0.2800 | 0.4051 | 4 |
| 0.2161 | 0.4186 | 0.2817 | 0.4049 | 5 |
| 0.2082 | 0.4199 | 0.2829 | 0.4051 | 6 |
| 0.2014 | 0.4213 | 0.2860 | 0.4047 | 7 |
| 0.1962 | 0.4204 | 0.2883 | 0.4046 | 8 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
| ctu-aic/m2m100-418M-multilingual-summarization-multilarge-cs | ctu-aic | 2022-11-11T11:17:40Z | 110 | 1 | transformers | ["transformers", "pytorch", "m2m_100", "text2text-generation", "Summarization", "abstractive summarization", "multilingual summarization", "m2m100_418M", "Czech", "text2text generation", "text generation", "cs", "en", "de", "fr", "tu", "zh", "es", "ru", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-10-23T21:38:12Z |
---
language:
- cs
- en
- de
- fr
- tu
- zh
- es
- ru
tags:
- Summarization
- abstractive summarization
- multilingual summarization
- m2m100_418M
- Czech
- text2text generation
- text generation
license: cc-by-sa-4.0
datasets:
- Multilingual_large_dataset_(multilarge)
- cnc/dm
- xsum
- mlsum
- cnewsum
- cnc
- sumeczech
metrics:
- rouge
- rougeraw
- MemesCS
---
# m2m100-418M-multilingual-summarization-multilarge-cs
This model is a fine-tuned checkpoint of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the Multilingual large summarization dataset focused on Czech texts to produce multilingual summaries.
## Task
The model produces multi-sentence summaries in eight different languages. By adding documents in other foreign languages to a considerable amount of Czech documents, we aimed to improve the model's summarization of Czech. Supported languages: 'cs', 'en', 'de', 'es', 'fr', 'ru', 'tu', 'zh'.
## Usage
Assume you are using the provided MultilingualSummarizer.ipynb notebook and the files included in the git repository.
```python
## Configuration of summarization pipeline
#
from collections import OrderedDict
# MultiSummarizer is defined in the files shipped with the git repository.

def summ_config():
    cfg = OrderedDict([
        ## summarization model - checkpoint
        # ctu-aic/m2m100-418M-multilingual-summarization-multilarge-cs
        # ctu-aic/mt5-base-multilingual-summarization-multilarge-cs
        # ctu-aic/mbart25-multilingual-summarization-multilarge-cs
        ("model_name", "ctu-aic/mbart25-multilingual-summarization-multilarge-cs"),
        ## language of summarization task
        # language : string : cs, en, de, fr, es, tr, ru, zh
        ("language", "en"),
        ## generation method parameters in dictionary
        #
        ("inference_cfg", OrderedDict([
            ("num_beams", 4),
            ("top_k", 40),
            ("top_p", 0.92),
            ("do_sample", True),
            ("temperature", 0.95),
            ("repetition_penalty", 1.23),
            ("no_repeat_ngram_size", None),
            ("early_stopping", True),
            ("max_length", 128),
            ("min_length", 10),
        ])),
        # texts to summarize; values = (list of strings, string, dataset)
        ("texts",
            [
                "english text1 to summarize",
                "english text2 to summarize",
            ]
        ),
        # OPTIONAL: target summaries; values = (list of strings, string, None)
        ("golds",
            [
                "target english text1",
                "target english text2",
            ]),
        # ("golds", None),
    ])
    return cfg

cfg = summ_config()
mSummarize = MultiSummarizer(**cfg)
summaries, scores = mSummarize(**cfg)
```
## Dataset
The Multilingual large summarization dataset consists of 10 sub-datasets, mainly based on news and daily mail articles. Training used the entire training set plus 72% of the validation set.
```
Train set: 3 464 563 docs
Validation set: 121 260 docs
```
| dataset | fragment: compression | fragment: density | fragment: coverage | avg doc nsent | avg doc nwords | avg summary nsent | avg summary nwords | documents |
|---|---|---|---|---|---|---|---|---|
| cnc | 7.388 | 0.303 | 0.088 | 16.121 | 316.912 | 3.272 | 46.805 | 750K |
| sumeczech | 11.769 | 0.471 | 0.115 | 27.857 | 415.711 | 2.765 | 38.644 | 1M |
| cnndm | 13.688 | 2.983 | 0.538 | 32.783 | 676.026 | 4.134 | 54.036 | 300K |
| xsum | 18.378 | 0.479 | 0.194 | 18.607 | 369.134 | 1.000 | 21.127 | 225K|
| mlsum/tu | 8.666 | 5.418 | 0.461 | 14.271 | 214.496 | 1.793 | 25.675 | 274K |
| mlsum/de | 24.741 | 8.235 | 0.469 | 32.544 | 539.653 | 1.951 | 23.077 | 243K|
| mlsum/fr | 24.388 | 2.688 | 0.424 | 24.533 | 612.080 | 1.320 | 26.93 | 425K |
| mlsum/es | 36.185 | 3.705 | 0.510 | 31.914 | 746.927 | 1.142 | 21.671 | 291K |
| mlsum/ru | 78.909 | 1.194 | 0.246 | 62.141 | 948.079 | 1.012 | 11.976 | 27K|
| cnewsum | 20.183 | 0.000 | 0.000 | 16.834 | 438.271 | 1.109 | 21.926 | 304K |
#### Tokenization
Truncation and padding were set to 512 tokens for the encoder (input text) and 128 for the decoder (summary).
## Training
Trained based on cross-entropy loss.
```
Time: 3 days 10 hours
Epochs: 1072K steps = 10 (from 10)
GPUs: 4x NVIDIA A100-SXM4-40GB
eloss: 2.824 - 1.745
tloss: 4.559 - 1.615
```
### ROUGE results per individual dataset test set:
| dataset | ROUGE-1 Precision | ROUGE-1 Recall | ROUGE-1 F-score | ROUGE-2 Precision | ROUGE-2 Recall | ROUGE-2 F-score | ROUGE-L Precision | ROUGE-L Recall | ROUGE-L F-score |
|---|---|---|---|---|---|---|---|---|---|
| cnc | 30.13 | 22.56 | 25.21 | 10.53 | 8.01 | 8.9 | 22.47 | 16.92 | 18.86 |
| sumeczech | 26.6 | 19.66 | 22.01 | 8.17 | 6.12 | 6.82 | 19.93 | 14.81 | 16.54 |
| cnndm | 41.8 | 38.41 | 38.94 | 18.74 | 17.14 | 17.4 | 29.69 | 27.33 | 27.68 |
| xsum | 38.27 | 33.62 | 35.16 | 14.39 | 12.69 | 13.25 | 30.77 | 27.05 | 28.29 |
| mlsum-tu | 52.44 | 44.36 | 46.39 | 36.98 | 31.51 | 32.86 | 46.04 | 39.04 | 40.8 |
| mlsum-de | 42.19 | 40.5 | 40.7 | 28.8 | 28.51 | 28.37 | 38.95 | 37.7 | 37.79 |
| mlsum-fr | 34.57 | 27.74 | 29.95 | 16.27 | 13.04 | 14.08 | 27.18 | 21.89 | 23.6 |
| mlsum-es | 30.93 | 26.41 | 27.66 | 11.42 | 9.85 | 10.28 | 25.12 | 21.59 | 22.55 |
| mlsum-ru | 0.65 | 0.52 | 0.56 | 0.15 | 0.15 | 0.15 | 0.65 | 0.52 | 0.56 |
| cnewsum | 25.14 | 26.56 | 24.45 | 6.89 | 7.54 | 6.78 | 24.77 | 26.15 | 24.08 |
# USAGE
```
soon
```
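Until that snippet lands, here is a hedged generic 🤗 Transformers sketch (my assumptions: the checkpoint follows the standard M2M100 API and the tokenizer carries the card's language codes; this is not the authors' official pipeline):
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

# Hedged sketch, not the authors' official pipeline.
model_id = "ctu-aic/m2m100-418M-multilingual-summarization-multilarge-cs"
tokenizer = M2M100Tokenizer.from_pretrained(model_id)
model = M2M100ForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "cs"  # language of the input document
inputs = tokenizer("Dlouhý český text ke shrnutí.", return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.get_lang_id("cs"),  # language of the summary
    num_beams=4, max_length=128, min_length=10,
)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```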
| horie-t/distilbert-base-uncased-finetuned-emotion | horie-t | 2022-11-11T10:55:23Z | 105 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-11-11T10:32:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.924
    - name: F1
      type: f1
      value: 0.923910566982731
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2236
- Accuracy: 0.924
- F1: 0.9239
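A minimal usage sketch (hedged; repo id from this card, example sentence illustrative; the labels come from the emotion dataset):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="horie-t/distilbert-base-uncased-finetuned-emotion")
print(classifier("I'm thrilled the model finally converged!"))
```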
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3321 | 0.9055 | 0.9023 |
| No log | 2.0 | 500 | 0.2236 | 0.924 | 0.9239 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| amitjohn007/xlm-roberta-base-finetuned-squad | amitjohn007 | 2022-11-11T10:38:17Z | 60 | 0 | transformers | ["transformers", "tf", "xlm-roberta", "question-answering", "generated_from_keras_callback", "license:cc-by-4.0", "endpoints_compatible", "region:us"] | question-answering | 2022-11-11T08:43:38Z |
---
license: cc-by-4.0
tags:
- generated_from_keras_callback
model-index:
- name: amitjohn007/xlm-roberta-base-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# amitjohn007/xlm-roberta-base-finetuned-squad
This model is a fine-tuned version of [deepset/xlm-roberta-large-squad2](https://huggingface.co/deepset/xlm-roberta-large-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2888
- Epoch: 2
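A minimal usage sketch (hedged: the repo holds TensorFlow weights, hence `framework="tf"`; the question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="amitjohn007/xlm-roberta-base-finetuned-squad", framework="tf")
print(qa(question="Where does Amit live?", context="My name is Amit and I live in Chennai."))
```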
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16800, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 0.6587 | 0 |
| 0.4550 | 1 |
| 0.2888 | 2 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
| DeividasM/whisper-small-lt | DeividasM | 2022-11-11T09:32:22Z | 88 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "lt-asr-leaderboard", "generated_from_trainer", "lt", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-11-07T11:26:58Z |
---
language:
- lt
license: apache-2.0
tags:
- lt-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small LT - Lithuanian Whisper
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 11.0
      type: mozilla-foundation/common_voice_11_0
      config: lt
      split: train+validation
      args: lt
    metrics:
    - name: Wer
      type: wer
      value: 32.65614439629468
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small LT - Lithuanian Whisper
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3871
- Wer: 32.6561
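A minimal usage sketch (hedged; `speech.wav` stands in for any 16 kHz Lithuanian recording and is not shipped with the repo):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="DeividasM/whisper-small-lt")
print(asr("speech.wav")["text"])
```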
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2419 | 1.8 | 1000 | 0.3749 | 38.7707 |
| 0.0425 | 3.6 | 2000 | 0.3591 | 34.2345 |
| 0.0062 | 5.4 | 3000 | 0.3779 | 32.7555 |
| 0.0034 | 7.19 | 4000 | 0.3871 | 32.6561 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
| thisisHJLee/wav2vec2-large-xls-r-1b-korean-convsen5 | thisisHJLee | 2022-11-11T09:29:43Z | 3 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-11-11T02:32:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-1b-korean-convsen5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-1b-korean-convsen5
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0655
- Cer: 0.0105
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Cer | Validation Loss |
|:-------------:|:-----:|:----:|:------:|:---------------:|
| 0.2312 | 1.0 | 1408 | 0.0869 | 0.4450 |
| 0.109 | 2.0 | 2816 | 0.0789 | 0.4756 |
| 0.0457 | 3.0 | 4224 | 0.0696 | 0.5013 |
| 0.0334 | 4.0 | 5632 | 0.0628 | 0.4815 |
| 0.0222 | 5.0 | 7040 | 0.0105 | 0.0655 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.11.0
| ntsema/wav2vec2-xlsr-53-espeak-cv-ft-xas-ntsema-colab | ntsema | 2022-11-11T09:24:42Z | 103 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:audiofolder", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-11-08T04:52:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- audiofolder
model-index:
- name: wav2vec2-xlsr-53-espeak-cv-ft-xas-ntsema-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-53-espeak-cv-ft-xas-ntsema-colab
This model is a fine-tuned version of [facebook/wav2vec2-xlsr-53-espeak-cv-ft](https://huggingface.co/facebook/wav2vec2-xlsr-53-espeak-cv-ft) on the audiofolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
| Terence3927/testpyramidsrnd | Terence3927 | 2022-11-11T08:56:50Z | 7 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us"] | reinforcement-learning | 2022-11-11T08:56:41Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: Terence3927/testpyramidsrnd
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
| reza-aditya/q-Taxi-v3 | reza-aditya | 2022-11-11T08:32:14Z | 0 | 0 | null | ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2022-11-11T08:32:06Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helper functions defined in the
# Hugging Face Deep RL Course (Unit 2) notebook; they are not a packaged API.
model = load_from_hub(repo_id="reza-aditya/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
| geek1024/prompt-extend | geek1024 | 2022-11-11T08:09:54Z | 106 | 1 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-11-11T07:21:00Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: prompt-extend
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# prompt-extend
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1502
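A minimal usage sketch (hedged; judging by the name, the model extends Stable Diffusion prompts, so the seed prompt below is illustrative):
```python
from transformers import pipeline

extend = pipeline("text-generation", model="geek1024/prompt-extend")
print(extend("a portrait of a cat", max_new_tokens=32)[0]["generated_text"])
```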
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.3823 | 0.35 | 100 | 4.2083 |
| 3.72 | 0.69 | 200 | 3.2991 |
| 3.1185 | 1.04 | 300 | 2.8394 |
| 2.7284 | 1.39 | 400 | 2.5546 |
| 2.4932 | 1.74 | 500 | 2.3679 |
| 2.3408 | 2.08 | 600 | 2.2430 |
| 2.1997 | 2.43 | 700 | 2.1748 |
| 2.1631 | 2.78 | 800 | 2.1502 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
| BearlyWorkingYT/OPT-125M-Kaggle-Creepypasta | BearlyWorkingYT | 2022-11-11T07:39:02Z | 113 | 0 | transformers | ["transformers", "pytorch", "opt", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-11-11T07:19:36Z |
---
license: other
widget:
- text: "There was a ghost"
example_title: "First Prompt used in video"
- text: "I was playing Terraria but then"
example_title: "Second prompt used in video"
inference:
parameters:
temperature: 0.6
repetition_penalty: 1.15
min_length: 128
max_length: 468
---
This is the model trained for this video:
https://www.youtube.com/watch?v=OEPL5Tm3mmQ
Due to hardware limitations, I trained this model with a batch size of only 2 (I know this isn't ideal), which may affect its quality.
After training, I kept the best checkpoint according to a held-out set.
This model was trained using a filtered version of this dataset:
https://www.kaggle.com/datasets/thomaskonstantin/3500-popular-creepypastas
The original dataset had a lot of blank entries and missing text, hence the filtering.
Please subscribe to my YouTube Channel for bad quality videos and poorly trained models.
https://www.youtube.com/channel/UCLXxfueCPZRZnyGFWJ07uqA
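For completeness, a minimal generation sketch mirroring the widget settings in the metadata above (hypothetical, not from the author):
```python
from transformers import pipeline

# Hedged sketch; sampling parameters copied from the card's widget config.
gen = pipeline("text-generation", model="BearlyWorkingYT/OPT-125M-Kaggle-Creepypasta")
out = gen("There was a ghost", do_sample=True, temperature=0.6,
          repetition_penalty=1.15, min_length=128, max_length=468)
print(out[0]["generated_text"])
```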
| amitjohn007/roberta-base-finetuned-squad | amitjohn007 | 2022-11-11T07:26:14Z | 59 | 0 | transformers | ["transformers", "tf", "roberta", "question-answering", "generated_from_keras_callback", "license:cc-by-4.0", "endpoints_compatible", "region:us"] | question-answering | 2022-11-11T06:35:49Z |
---
license: cc-by-4.0
tags:
- generated_from_keras_callback
model-index:
- name: amitjohn007/roberta-base-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# amitjohn007/roberta-base-finetuned-squad
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4173
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16608, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 0.7396 | 0 |
| 0.5461 | 1 |
| 0.4173 | 2 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
| studio-ousia/luke-large-finetuned-conll-2003 | studio-ousia | 2022-11-11T06:57:23Z | 1,240 | 3 | transformers | ["transformers", "pytorch", "luke", "arxiv:2010.01057", "arxiv:1906.08237", "arxiv:1903.07785", "arxiv:2002.01808", "arxiv:1910.09700", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2022-03-02T23:29:05Z |
---
license: apache-2.0
---
# Model Card for luke-large-finetuned-conll-2003
# Model Details
## Model Description
LUKE (Language Understanding with Knowledge-based Embeddings) is a pretrained contextualized representation of words and entities based on the transformer architecture.
- **Developed by:** Studio Ousia
- **Shared by [Optional]:** More information needed
- **Model type:** EntitySpanClassification
- **Language(s) (NLP):** More information needed
- **License:** Apache-2.0
- **Related Models:** [Luke-large](https://huggingface.co/studio-ousia/luke-large?text=Paris+is+the+%3Cmask%3E+of+France.)
- **Parent Model:** Luke
- **Resources for more information:**
- [GitHub Repo](https://github.com/studio-ousia/luke)
- [Associated Paper](https://arxiv.org/abs/2010.01057)
# Uses
## Direct Use
More information needed
## Downstream Use [Optional]
This model can also be used for named entity recognition, cloze-style question answering, fine-grained entity typing, and extractive question answering.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
More information needed
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
More information needed
### Factors
### Metrics
LUKE achieves state-of-the-art results on five popular NLP benchmarks:
* **[SQuAD v1.1](https://rajpurkar.github.io/SQuAD-explorer/)** (extractive question answering),
* **[CoNLL-2003](https://www.clips.uantwerpen.be/conll2003/ner/)** (named entity recognition),
* **[ReCoRD](https://sheng-z.github.io/ReCoRD-explorer/)** (cloze-style question answering),
* **[TACRED](https://nlp.stanford.edu/projects/tacred/)** (relation classification), and
* **[Open Entity](https://www.cs.utexas.edu/~eunsol/html_pages/open_entity.html)** (entity typing).
## Results
The experimental results are provided as follows:
| Task | Dataset | Metric | LUKE-large | luke-base | Previous SOTA |
| ------------------------------ | ---------------------------------------------------------------------------- | ------ | ----------------- | --------- | ------------------------------------------------------------------------- |
| Extractive Question Answering | [SQuAD v1.1](https://rajpurkar.github.io/SQuAD-explorer/) | EM/F1 | **90.2**/**95.4** | 86.1/92.3 | 89.9/95.1 ([Yang et al., 2019](https://arxiv.org/abs/1906.08237)) |
| Named Entity Recognition | [CoNLL-2003](https://www.clips.uantwerpen.be/conll2003/ner/) | F1 | **94.3** | 93.3 | 93.5 ([Baevski et al., 2019](https://arxiv.org/abs/1903.07785)) |
| Cloze-style Question Answering | [ReCoRD](https://sheng-z.github.io/ReCoRD-explorer/) | EM/F1 | **90.6**/**91.2** | - | 83.1/83.7 ([Li et al., 2019](https://www.aclweb.org/anthology/D19-6011/)) |
| Relation Classification | [TACRED](https://nlp.stanford.edu/projects/tacred/) | F1 | **72.7** | - | 72.0 ([Wang et al. , 2020](https://arxiv.org/abs/2002.01808)) |
| Fine-grained Entity Typing | [Open Entity](https://www.cs.utexas.edu/~eunsol/html_pages/open_entity.html) | F1 | **78.2** | - | 77.6 ([Wang et al. , 2020](https://arxiv.org/abs/2002.01808)) |
Please check the [Github repository](https://github.com/studio-ousia/luke) for more details and updates.
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
* transformers_version: 4.6.0.dev0
### Software
More information needed
# Citation
**BibTeX:**
```
@inproceedings{yamada2020luke,
title={LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention},
author={Ikuya Yamada and Akari Asai and Hiroyuki Shindo and Hideaki Takeda and Yuji Matsumoto},
booktitle={EMNLP},
year={2020}
}
```
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Studio Ousia in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, LukeForEntitySpanClassification
tokenizer = AutoTokenizer.from_pretrained("studio-ousia/luke-large-finetuned-conll-2003")
model = LukeForEntitySpanClassification.from_pretrained("studio-ousia/luke-large-finetuned-conll-2003")
```
</details>
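Continuing from the snippet above, named entity spans can then be scored like this (this mirrors the documented LUKE API; the example text and spans are illustrative):
```python
text = "Beyoncé lives in Los Angeles"
entity_spans = [(0, 7), (17, 28)]  # character-level spans of candidate entities

inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
outputs = model(**inputs)
predicted_class_indices = outputs.logits.argmax(-1).squeeze().tolist()
for span, idx in zip(entity_spans, predicted_class_indices):
    print(text[span[0]:span[1]], "->", model.config.id2label[idx])
```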
| zhenyueyu/distilbert-base-uncased-finetuned-squad | zhenyueyu | 2022-11-11T04:12:56Z | 105 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2022-11-11T03:44:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7093
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2755 | 1.0 | 553 | 2.1210 |
| 1.8766 | 2.0 | 1106 | 1.7363 |
| 1.5381 | 3.0 | 1659 | 1.7093 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
| kakaobrain/coyo-align-b7-base | kakaobrain | 2022-11-11T03:42:56Z | 0 | 0 | null | ["align", "clip", "en", "dataset:kakaobrain/coyo-700m", "arxiv:2102.05918", "license:apache-2.0", "region:us"] | null | 2022-11-09T07:13:12Z |
---
language:
- en
tags:
- align
- clip
license: apache-2.0
datasets:
- kakaobrain/coyo-700m
inference: false
---
# Model Details
This is an unofficial implementation of [ALIGN](https://arxiv.org/abs/2102.05918) trained on [COYO-700M](https://github.com/kakaobrain/coyo-dataset). The official ALIGN was trained on a proprietary dataset of 1.8B samples that has not been released to the public, so we trained our implementation on COYO-700M instead.
It was developed by Kakao Brain to validate the performance of the COYO-700M dataset on a large-scale model.
Training took about 8 days on a TPU v3-512.
## Model Date
April 2022
## Model Type
This is a dual-encoder model where:
- the image encoder uses the EfficientNet-B7 architecture
- the text encoder uses the BERT-base architecture
# Training data
This model was trained on the [COYO-700M](https://github.com/kakaobrain/coyo-dataset) dataset.
# Evaluation results
| Model | Dataset | ImageNet KNN | Flickr30k I2T R@1 | Flickr30k T2I R@1 | MsCOCO I2T R@1 | MsCOCO T2I R@1 |
|---|---|---|---|---|---|---|
| ALIGN-L2-Large(Google) | ALIGN 1.8B | 76.4 | 88.6 | 75.7 | 58.6 | 45.6 |
| ALIGN-B7-Base(Google) | ALIGN 1.8B | 69.3 | - | - | 55.4 | 41.7 |
| COYO-ALIGN-B7-Base(Kakao Brain) | COYO-700M | 68.6 | 88.1 | 73.2 | 61.2 | 43.1 |
| wilcomply/xlm-roberta-base-finetuned-panx-all | wilcomply | 2022-11-11T03:15:09Z | 121 | 0 | transformers | ["transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-11-11T02:43:49Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1731
- F1: 0.8525
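A minimal usage sketch (hedged; repo id from this card, example sentence illustrative):
```python
from transformers import pipeline

ner = pipeline("token-classification", model="wilcomply/xlm-roberta-base-finetuned-panx-all", aggregation_strategy="simple")
print(ner("Angela Merkel besuchte Paris im Mai."))
```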
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2992 | 1.0 | 835 | 0.1936 | 0.8164 |
| 0.1588 | 2.0 | 1670 | 0.1711 | 0.8466 |
| 0.1022 | 3.0 | 2505 | 0.1731 | 0.8525 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
alextoyment/ppo-LunarLander-v2
|
alextoyment
| 2022-11-11T03:03:46Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-11T03:03:14Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 165.65 +/- 21.36
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch, assuming the checkpoint filename follows the usual huggingface_sb3 convention:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (the filename is an assumption)
checkpoint = load_from_hub("alextoyment/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
sd-concepts-library/obama-self-2
|
sd-concepts-library
| 2022-11-11T02:57:39Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-11-11T02:57:29Z |
---
license: mit
---
### obama_self_2 on Stable Diffusion
This is the `<Obama>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
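A minimal sketch of using the concept with `diffusers` (this assumes a diffusers version that provides `load_textual_inversion`, and a base checkpoint chosen for illustration):

```python
from diffusers import StableDiffusionPipeline

# Base model is an assumption; any SD 1.x checkpoint compatible with the embedding should work
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_textual_inversion("sd-concepts-library/obama-self-2")
image = pipe("a photo of <Obama> wearing a suit").images[0]
image.save("obama_concept.png")
```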
Here is the new concept you will be able to use as an `object`:




|
jjjj-j/distilbert-base-uncased-finetuned-cola
|
jjjj-j
| 2022-11-11T02:39:27Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-06T22:10:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0884
- Matthews Correlation: 0.2439
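The card leaves usage unspecified; a minimal sketch with the standard text-classification pipeline (the label names come from the checkpoint's config, which the card does not document):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="jjjj-j/distilbert-base-uncased-finetuned-cola")
print(clf("The book was read by the student."))
```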
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 23 | 1.1535 | 0.0 |
| No log | 2.0 | 46 | 1.1430 | 0.0 |
| No log | 3.0 | 69 | 1.1438 | 0.0 |
| No log | 4.0 | 92 | 1.0995 | 0.1890 |
| No log | 5.0 | 115 | 1.1155 | 0.0509 |
| No log | 6.0 | 138 | 1.0881 | 0.1554 |
| No log | 7.0 | 161 | 1.1095 | 0.2136 |
| No log | 8.0 | 184 | 1.0884 | 0.2439 |
| No log | 9.0 | 207 | 1.1145 | 0.2155 |
| No log | 10.0 | 230 | 1.1092 | 0.1897 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
krishnateja/wav2vec2-large-xls-r-300m-tr-colab
|
krishnateja
| 2022-11-11T01:59:27Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-10T02:53:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-tr-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-tr-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
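A minimal usage sketch (the "tr" in the model id suggests Turkish speech; wav2vec2 models expect 16 kHz audio):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="krishnateja/wav2vec2-large-xls-r-300m-tr-colab")
print(asr("sample.wav"))  # path to a 16 kHz mono audio file; the filename is illustrative
```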
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.2
|
huggingtweets/queenofbithynia
|
huggingtweets
| 2022-11-11T00:37:45Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/queenofbithynia/1668126937466/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1010627358879932416/0xVVQg3X_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">the needle-felted head of joyce carol oates</div>
<div style="text-align: center; font-size: 14px;">@queenofbithynia</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from the needle-felted head of joyce carol oates.
| Data | the needle-felted head of joyce carol oates |
| --- | --- |
| Tweets downloaded | 3186 |
| Retweets | 1 |
| Short tweets | 64 |
| Tweets kept | 3121 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1pdmfti8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @queenofbithynia's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/hmbsp4tx) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/hmbsp4tx/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/queenofbithynia')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
robertbogdon/model_tuning_mindalle9_jsy6zj-labels-classification
|
robertbogdon
| 2022-11-11T00:34:27Z | 0 | 0 |
sklearn
|
[
"sklearn",
"tabular-classification",
"baseline-trainer",
"license:apache-2.0",
"region:us"
] |
tabular-classification
| 2022-11-11T00:34:25Z |
---
license: apache-2.0
library_name: sklearn
tags:
- tabular-classification
- baseline-trainer
---
## Baseline Model trained on model_tuning_mindalle9_jsy6zj to apply classification on labels
**Metrics of the best model** (LogisticRegression(C=0.1, class_weight='balanced', max_iter=1000)):

| Metric | Value |
|:----------------|---------:|
| accuracy | 0.735922 |
| recall_macro | 0.631737 |
| precision_macro | 0.440117 |
| f1_macro | 0.457940 |
**Model pipeline** (plain-text summary; the original card embeds this as an interactive HTML diagram):

    Pipeline(steps=[('easypreprocessor',
                     EasyPreprocessor(types=<feature-type table, 771 rows x 7 columns>)),
                    ('logisticregression',
                     LogisticRegression(C=0.1, class_weight='balanced', max_iter=1000))])

The preprocessor treats `superconditions` and `feature_0` through `feature_767` as continuous features; `temperatures` and `is_megas` are not flagged as continuous.
**Disclaimer:** This model was trained with the dabl library as a baseline; for better results, use [AutoTrain](https://huggingface.co/autotrain).
**Training logs**, including the models tried in the process, can be found in logs.txt.
|
EhtashamNQ/mt5-small-finetuned-amazon-en-es
|
EhtashamNQ
| 2022-11-11T00:24:55Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-10T16:33:31Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: EhtashamNQ/mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# EhtashamNQ/mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.5396
- Validation Loss: 2.8061
- Epoch: 5
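The card does not name the task; the model id suggests review summarization (a minimal sketch; `framework="tf"` is assumed because the repo ships TensorFlow weights):

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="EhtashamNQ/mt5-small-finetuned-amazon-en-es",
    framework="tf",  # assumption based on the card's "tf" tag
)
print(summarizer("I loved this product. It arrived quickly and works exactly as described."))
```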
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 5208, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 20.5300 | 7.1775 | 0 |
| 5.5220 | 3.7545 | 1 |
| 3.4137 | 3.5929 | 2 |
| 2.9827 | 3.0892 | 3 |
| 2.7228 | 2.8718 | 4 |
| 2.5396 | 2.8061 | 5 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
evelynerhuan/distilbert-base-uncased-model-1
|
evelynerhuan
| 2022-11-11T00:05:42Z | 120 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-10T23:31:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-model-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-model-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6472
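The card leaves usage unspecified; as a SQuAD-style extractive QA model it can be used with the question-answering pipeline (a minimal sketch):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="evelynerhuan/distilbert-base-uncased-model-1")
print(qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
))
```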
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.0114 | 1.0 | 554 | 1.9485 |
| 1.6658 | 2.0 | 1108 | 1.6325 |
| 1.2555 | 3.0 | 1662 | 1.6071 |
| 1.038 | 4.0 | 2216 | 1.6472 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
evelynerhuan/distilbert-base-uncased-original-finetuned-squad
|
evelynerhuan
| 2022-11-10T22:29:41Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-10T22:01:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-original-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-original-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6427
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.965 | 1.0 | 554 | 1.8076 |
| 1.6215 | 2.0 | 1108 | 1.6230 |
| 1.298 | 3.0 | 1662 | 1.6427 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
codefactory4791/distilbert-base-uncased-finetuned-emotion
|
codefactory4791
| 2022-11-10T22:21:33Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-10T15:27:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.934
- name: F1
type: f1
value: 0.9340438701286115
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1618
- Accuracy: 0.934
- F1: 0.9340
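The card leaves usage unspecified; a minimal sketch with the text-classification pipeline (the emotion labels come from the checkpoint's config):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="codefactory4791/distilbert-base-uncased-finetuned-emotion")
print(clf("I can't believe how happy this makes me!"))
```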
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1749 | 1.0 | 250 | 0.1700 | 0.9325 | 0.9321 |
| 0.1128 | 2.0 | 500 | 0.1618 | 0.934 | 0.9340 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.11.0
|
ntsema/wav2vec2-xlsr-53-espeak-cv-ft-evn2-ntsema-colab
|
ntsema
| 2022-11-10T22:12:45Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:audiofolder",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-08T06:34:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- wer
model-index:
- name: wav2vec2-xlsr-53-espeak-cv-ft-evn2-ntsema-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: audiofolder
type: audiofolder
config: default
split: train
args: default
metrics:
- name: Wer
type: wer
value: 0.9866666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-53-espeak-cv-ft-evn2-ntsema-colab
This model is a fine-tuned version of [facebook/wav2vec2-xlsr-53-espeak-cv-ft](https://huggingface.co/facebook/wav2vec2-xlsr-53-espeak-cv-ft) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0299
- Wer: 0.9867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.2753 | 6.15 | 400 | 1.6106 | 0.99 |
| 0.8472 | 12.3 | 800 | 1.6731 | 0.99 |
| 0.4462 | 18.46 | 1200 | 1.8516 | 0.99 |
| 0.2556 | 24.61 | 1600 | 2.0299 | 0.9867 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
amitjohn007/albert-finetuned-squad
|
amitjohn007
| 2022-11-10T22:07:05Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"albert",
"question-answering",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-10T18:32:51Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: amitjohn007/albert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# amitjohn007/albert-finetuned-squad
This model is a fine-tuned version of [ahotrod/albert_xxlargev1_squad2_512](https://huggingface.co/ahotrod/albert_xxlargev1_squad2_512) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0498
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16620, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 0.3775 | 0 |
| 0.1702 | 1 |
| 0.0498 | 2 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
AlekseyKorshuk/dalio-all-io-1.3b-3-epoch
|
AlekseyKorshuk
| 2022-11-10T21:12:47Z | 97 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"dataset:AlekseyKorshuk/dalio-all-io",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-10T20:52:55Z |
---
license: other
tags:
- generated_from_trainer
datasets:
- AlekseyKorshuk/dalio-all-io
metrics:
- accuracy
model-index:
- name: dalio-all-io-1.3b-3-epoch
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: AlekseyKorshuk/dalio-all-io
type: AlekseyKorshuk/dalio-all-io
metrics:
- name: Accuracy
type: accuracy
value: 0.05841094794583167
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dalio-all-io-1.3b-3-epoch
This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the AlekseyKorshuk/dalio-all-io dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3008
- Accuracy: 0.0584
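The card leaves usage unspecified; as an OPT fine-tune it can presumably be used with the text-generation pipeline (a minimal sketch; the prompt is illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="AlekseyKorshuk/dalio-all-io-1.3b-3-epoch")
print(generator("The most important principle is", max_new_tokens=40))
```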
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.6543 | 0.03 | 1 | 2.6113 | 0.0513 |
| 2.6077 | 0.07 | 2 | 2.6113 | 0.0513 |
| 2.5964 | 0.1 | 3 | 2.5605 | 0.0519 |
| 2.7302 | 0.14 | 4 | 2.5234 | 0.0526 |
| 2.7004 | 0.17 | 5 | 2.5078 | 0.0529 |
| 2.5681 | 0.21 | 6 | 2.4941 | 0.0532 |
| 2.6404 | 0.24 | 7 | 2.4883 | 0.0534 |
| 2.5325 | 0.28 | 8 | 2.4805 | 0.0536 |
| 2.7205 | 0.31 | 9 | 2.4746 | 0.0536 |
| 2.5149 | 0.34 | 10 | 2.4648 | 0.0533 |
| 2.5017 | 0.38 | 11 | 2.4512 | 0.0535 |
| 2.7026 | 0.41 | 12 | 2.4395 | 0.0539 |
| 2.5259 | 0.45 | 13 | 2.4316 | 0.0543 |
| 2.563 | 0.48 | 14 | 2.4219 | 0.0546 |
| 2.5679 | 0.52 | 15 | 2.4141 | 0.0550 |
| 2.3701 | 0.55 | 16 | 2.4082 | 0.0551 |
| 2.4739 | 0.59 | 17 | 2.4082 | 0.0551 |
| 2.481 | 0.62 | 18 | 2.4023 | 0.0548 |
| 2.5795 | 0.66 | 19 | 2.3945 | 0.0549 |
| 2.4902 | 0.69 | 20 | 2.3867 | 0.0549 |
| 2.4509 | 0.72 | 21 | 2.3809 | 0.0551 |
| 2.6052 | 0.76 | 22 | 2.3730 | 0.0553 |
| 2.3323 | 0.79 | 23 | 2.3633 | 0.0555 |
| 2.5994 | 0.83 | 24 | 2.3555 | 0.0556 |
| 2.3347 | 0.86 | 25 | 2.3477 | 0.0556 |
| 2.421 | 0.9 | 26 | 2.3398 | 0.0559 |
| 2.5337 | 0.93 | 27 | 2.3359 | 0.0560 |
| 2.4102 | 0.97 | 28 | 2.3320 | 0.0563 |
| 2.4309 | 1.0 | 29 | 2.3262 | 0.0564 |
| 1.9305 | 1.03 | 30 | 2.3223 | 0.0564 |
| 1.8601 | 1.07 | 31 | 2.3203 | 0.0567 |
| 1.8682 | 1.1 | 32 | 2.3281 | 0.0564 |
| 1.8657 | 1.14 | 33 | 2.3535 | 0.0564 |
| 2.063 | 1.17 | 34 | 2.3398 | 0.0567 |
| 1.6443 | 1.21 | 35 | 2.3242 | 0.0568 |
| 1.7592 | 1.24 | 36 | 2.3164 | 0.0569 |
| 1.8981 | 1.28 | 37 | 2.3105 | 0.0569 |
| 1.9379 | 1.31 | 38 | 2.3047 | 0.0573 |
| 1.6008 | 1.34 | 39 | 2.3027 | 0.0574 |
| 1.595 | 1.38 | 40 | 2.3027 | 0.0575 |
| 1.7096 | 1.41 | 41 | 2.3027 | 0.0575 |
| 1.7245 | 1.45 | 42 | 2.3027 | 0.0576 |
| 1.795 | 1.48 | 43 | 2.3008 | 0.0577 |
| 1.7241 | 1.52 | 44 | 2.3008 | 0.0576 |
| 1.6356 | 1.55 | 45 | 2.2988 | 0.0576 |
| 1.77 | 1.59 | 46 | 2.2969 | 0.0576 |
| 1.6675 | 1.62 | 47 | 2.2930 | 0.0577 |
| 1.6929 | 1.66 | 48 | 2.2910 | 0.0577 |
| 1.6635 | 1.69 | 49 | 2.2910 | 0.0576 |
| 1.6093 | 1.72 | 50 | 2.2910 | 0.0578 |
| 1.7362 | 1.76 | 51 | 2.2891 | 0.0580 |
| 1.7015 | 1.79 | 52 | 2.2852 | 0.0581 |
| 1.9515 | 1.83 | 53 | 2.2812 | 0.0582 |
| 1.6494 | 1.86 | 54 | 2.2773 | 0.0580 |
| 1.7522 | 1.9 | 55 | 2.2734 | 0.0580 |
| 1.7369 | 1.93 | 56 | 2.2676 | 0.0581 |
| 1.6528 | 1.97 | 57 | 2.2637 | 0.0581 |
| 1.51 | 2.0 | 58 | 2.2617 | 0.0583 |
| 1.4579 | 2.03 | 59 | 2.2637 | 0.0585 |
| 1.2645 | 2.07 | 60 | 2.2695 | 0.0585 |
| 1.2424 | 2.1 | 61 | 2.2773 | 0.0584 |
| 1.2117 | 2.14 | 62 | 2.2891 | 0.0584 |
| 1.4059 | 2.17 | 63 | 2.3008 | 0.0580 |
| 1.328 | 2.21 | 64 | 2.3145 | 0.0581 |
| 1.3436 | 2.24 | 65 | 2.3281 | 0.0580 |
| 1.389 | 2.28 | 66 | 2.3379 | 0.0580 |
| 1.2127 | 2.31 | 67 | 2.3398 | 0.0580 |
| 1.3645 | 2.34 | 68 | 2.3418 | 0.0581 |
| 1.3389 | 2.38 | 69 | 2.3379 | 0.0581 |
| 1.2549 | 2.41 | 70 | 2.3320 | 0.0581 |
| 1.2193 | 2.45 | 71 | 2.3281 | 0.0582 |
| 1.3617 | 2.48 | 72 | 2.3223 | 0.0583 |
| 1.2336 | 2.52 | 73 | 2.3184 | 0.0583 |
| 1.179 | 2.55 | 74 | 2.3145 | 0.0583 |
| 1.2468 | 2.59 | 75 | 2.3125 | 0.0583 |
| 1.3325 | 2.62 | 76 | 2.3086 | 0.0583 |
| 1.1471 | 2.66 | 77 | 2.3066 | 0.0583 |
| 1.3123 | 2.69 | 78 | 2.3066 | 0.0583 |
| 1.3285 | 2.72 | 79 | 2.3047 | 0.0585 |
| 1.3232 | 2.76 | 80 | 2.3027 | 0.0584 |
| 1.1228 | 2.79 | 81 | 2.3027 | 0.0584 |
| 1.3524 | 2.83 | 82 | 2.3027 | 0.0584 |
| 1.2042 | 2.86 | 83 | 2.3027 | 0.0583 |
| 1.3588 | 2.9 | 84 | 2.3008 | 0.0583 |
| 1.2982 | 2.93 | 85 | 2.3008 | 0.0584 |
| 1.4373 | 2.97 | 86 | 2.3008 | 0.0585 |
| 1.3562 | 3.0 | 87 | 2.3008 | 0.0584 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
mrojas/roberta-clinical-wl-es-finetuned-ner
|
mrojas
| 2022-11-10T20:38:26Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:wl",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-10T20:16:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wl
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-clinical-wl-es-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wl
type: wl
config: WL
split: train
args: WL
metrics:
- name: Precision
type: precision
value: 0.6865079365079365
- name: Recall
type: recall
value: 0.7355442176870748
- name: F1
type: f1
value: 0.7101806239737274
- name: Accuracy
type: accuracy
value: 0.8267950260730044
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-clinical-wl-es-finetuned-ner
This model is a fine-tuned version of [plncmm/roberta-clinical-wl-es](https://huggingface.co/plncmm/roberta-clinical-wl-es) on the wl dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6227
- Precision: 0.6865
- Recall: 0.7355
- F1: 0.7102
- Accuracy: 0.8268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.028 | 1.0 | 500 | 0.6870 | 0.6558 | 0.6855 | 0.6703 | 0.8035 |
| 0.5923 | 2.0 | 1000 | 0.6248 | 0.6851 | 0.7235 | 0.7038 | 0.8244 |
| 0.4928 | 3.0 | 1500 | 0.6227 | 0.6865 | 0.7355 | 0.7102 | 0.8268 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
AlekseyKorshuk/dalio-all-io-1.3b
|
AlekseyKorshuk
| 2022-11-10T20:11:48Z | 96 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"dataset:AlekseyKorshuk/dalio-all-io",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-10T19:59:40Z |
---
license: other
tags:
- generated_from_trainer
datasets:
- AlekseyKorshuk/dalio-all-io
metrics:
- accuracy
model-index:
- name: dalio-all-io-1.3b
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: AlekseyKorshuk/dalio-all-io
type: AlekseyKorshuk/dalio-all-io
metrics:
- name: Accuracy
type: accuracy
value: 0.05582538140677676
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dalio-all-io-1.3b
This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the AlekseyKorshuk/dalio-all-io dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3652
- Accuracy: 0.0558
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.6543 | 0.03 | 1 | 2.6113 | 0.0513 |
| 2.6077 | 0.07 | 2 | 2.6113 | 0.0513 |
| 2.5964 | 0.1 | 3 | 2.5605 | 0.0519 |
| 2.7302 | 0.14 | 4 | 2.5234 | 0.0527 |
| 2.7 | 0.17 | 5 | 2.5078 | 0.0528 |
| 2.5674 | 0.21 | 6 | 2.4941 | 0.0532 |
| 2.6406 | 0.24 | 7 | 2.4883 | 0.0534 |
| 2.5315 | 0.28 | 8 | 2.4805 | 0.0536 |
| 2.7202 | 0.31 | 9 | 2.4727 | 0.0537 |
| 2.5144 | 0.34 | 10 | 2.4648 | 0.0536 |
| 2.4983 | 0.38 | 11 | 2.4512 | 0.0537 |
| 2.7029 | 0.41 | 12 | 2.4414 | 0.0539 |
| 2.5198 | 0.45 | 13 | 2.4336 | 0.0540 |
| 2.5706 | 0.48 | 14 | 2.4258 | 0.0545 |
| 2.5688 | 0.52 | 15 | 2.4180 | 0.0548 |
| 2.3793 | 0.55 | 16 | 2.4102 | 0.0552 |
| 2.4785 | 0.59 | 17 | 2.4043 | 0.0554 |
| 2.4688 | 0.62 | 18 | 2.3984 | 0.0553 |
| 2.5674 | 0.66 | 19 | 2.3984 | 0.0553 |
| 2.5054 | 0.69 | 20 | 2.3945 | 0.0554 |
| 2.452 | 0.72 | 21 | 2.3887 | 0.0555 |
| 2.5999 | 0.76 | 22 | 2.3828 | 0.0556 |
| 2.3665 | 0.79 | 23 | 2.3789 | 0.0556 |
| 2.6223 | 0.83 | 24 | 2.375 | 0.0557 |
| 2.3562 | 0.86 | 25 | 2.3711 | 0.0557 |
| 2.429 | 0.9 | 26 | 2.3691 | 0.0557 |
| 2.563 | 0.93 | 27 | 2.3672 | 0.0558 |
| 2.4573 | 0.97 | 28 | 2.3652 | 0.0558 |
| 2.4883 | 1.0 | 29 | 2.3652 | 0.0558 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
lmvasque/prompt-ls-es-1
|
lmvasque
| 2022-11-10T19:27:28Z | 107 | 1 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-08T14:02:07Z |
---
license: cc-by-4.0
---
## UoM&MMU at TSAR-2022 Shared Task - Prompt Learning for Lexical Simplification: prompt-ls-es-1
We present **PromptLS**, a method for fine-tuning large pre-trained masked language models to perform the task of Lexical Simplification.
This model is part of a series of models presented at the [TSAR-2022 Shared Task](https://taln.upf.edu/pages/tsar2022-st/)
by the University of Manchester and Manchester Metropolitan University (UoM&MMU) Team in English, Spanish and Portuguese.
You can find more details about the project in our [GitHub](https://github.com/lmvasque/ls-prompt-tsar2022).
## Models
Our models were fine-tuned using prompt-learning for **Lexical Simplification**. These are the available models you can use (current model page in bold):
| Model Name | Run # | Language | Setting |
|----------------------------------------------------------------------|-------|:-----------:|---------------|
| [prompt-ls-en-1](https://huggingface.co/lmvasque/prompt-ls-en-1) | 1 | English | fine-tune |
| [prompt-ls-en-2](https://huggingface.co/lmvasque/prompt-ls-en-2) | 2 | English | fine-tune |
| [roberta-large](https://huggingface.co/roberta-large) | 3 | English | zero-shot |
| **[prompt-ls-es-1](https://huggingface.co/lmvasque/prompt-ls-es-1)** | **1** | **Spanish** | **fine-tune** |
| [prompt-ls-es-2](https://huggingface.co/lmvasque/prompt-ls-es-2) | 2 | Spanish | fine-tune |
| [prompt-ls-es-3](https://huggingface.co/lmvasque/prompt-ls-es-3) | 3 | Spanish | fine-tune |
| [prompt-ls-pt-1](https://huggingface.co/lmvasque/prompt-ls-pt-1) | 1 | Portuguese | fine-tune |
| [prompt-ls-pt-2](https://huggingface.co/lmvasque/prompt-ls-pt-2) | 2 | Portuguese | fine-tune |
| [prompt-ls-pt-3](https://huggingface.co/lmvasque/prompt-ls-pt-3) | 3 | Portuguese | fine-tune |
For the zero-shot setting, we used the original models with no further training. Links to these models are also updated in the table above.
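A minimal fill-mask sketch for this checkpoint (the sentence below is only illustrative, built from the run-1 Spanish prompt words "sinónimo" / "fácil"; see the paper for the exact templates used at training time):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="lmvasque/prompt-ls-es-1")
# RoBERTa-style models use "<mask>" as the mask token
print(fill("un sinónimo fácil de la palabra complicado es <mask>"))
```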
## Results
We include the [official results](https://github.com/LaSTUS-TALN-UPF/TSAR-2022-Shared-Task/tree/main/results/official) from the competition test set as a reference. However, we encourage users to also check our results on the development set, which show increased performance for Spanish and Portuguese.
You can find more details in our [paper](https://drive.google.com/file/d/1x5dRxgcSGAaCCrjsgpCHnYek9G-TmZff/view?usp=share_link).
| Language | # | Model | Setting | Prompt1 | Prompt2 | w | k | Acc@1 | A@3 | M@3 | P@3 |
|------------|---|-------|--------------|---------|---------|---|---|-------|-----|-----|-------------|
| English | 1 | RoBERTa-L | fine-tune | simple | word | 5 | 5 | **0.6353** | **0.5308** | **0.4244** | **0.8739** |
| English | 2 | mBERT | multilingual | easier | word | 10 | 10 | 0.4959 | 0.4235 | 0.3273 | 0.7560 |
| English | 3 | RoBERTa-L | zero-shot | easier | word | 5 | - | 0.2654 | 0.268 | 0.1820 | 0.4906 |
| Spanish | 1 | BERTIN | fine-tune | sinónimo | fácil | - | 3 | 0.3451 | **0.2907** | **0.2238** | **0.5543** |
| Spanish | 2 | BERTIN | fine-tune | palabra | simple | - | 10 | 0.3614 | **0.2907**| 0.2225 | 0.538 |
| Spanish | 3 | BERTIN | fine-tune | sinónimo | fácil | 10 | 10 | **0.3668** | 0.269 | 0.2128 | 0.5326 |
| Portuguese | 1 | BR_BERTo | fine-tune | palavra | simples | - | 8 | **0.1711** | 0.1096 | 0.1011 | 0.2486 |
| Portuguese | 2 | BR_BERTo | fine-tune | sinônimo | fácil | - | 10 | 0.1363 | 0.0962 | 0.0944 | 0.2379 |
| Portuguese | 3 | BR_BERTo | fine-tune | sinônimo | simples | 5 | 10 | 0.1577 | **0.1283**| **0.1071**| **0.2834**|
## Citation
If you use our results and scripts in your research, please cite our work:
"[UoM&MMU at TSAR-2022 Shared Task: Prompt Learning for Lexical Simplification](https://drive.google.com/file/d/1x5dRxgcSGAaCCrjsgpCHnYek9G-TmZff/view?usp=share_link)".
```
@inproceedings{vasquez-rodriguez-etal-2022-prompt-ls,
title = "UoM\&MMU at TSAR-2022 Shared Task: Prompt Learning for Lexical Simplification",
author = "V{\'a}squez-Rodr{\'\i}guez, Laura and
Nguyen, Nhung T. H. and
Shardlow, Matthew and
Ananiadou, Sophia",
booktitle = "Shared Task on Text Simplification, Accessibility, and Readability (TSAR-2022), EMNLP 2022",
month = dec,
year = "2022",
}
```
|
lmvasque/prompt-ls-pt-2
|
lmvasque
| 2022-11-10T19:11:39Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-08T14:05:58Z |
---
license: cc-by-4.0
---
## UoM&MMU at TSAR-2022 Shared Task - Prompt Learning for Lexical Simplification: prompt-ls-pt-2
We present **PromptLS**, a method for fine-tuning large pre-trained masked language models to perform the task of Lexical Simplification.
This model is part of a series of models presented at the [TSAR-2022 Shared Task](https://taln.upf.edu/pages/tsar2022-st/)
by the University of Manchester and Manchester Metropolitan University (UoM&MMU) Team in English, Spanish and Portuguese.
You can find more details about the project in our [GitHub](https://github.com/lmvasque/ls-prompt-tsar2022).
## Models
Our models were fine-tuned using prompt-learning for **Lexical Simplification**. These are the available models you can use (current model page in bold):
| Model Name | Run # | Language | Setting |
|----------------------------------------------------------------------|-------|:--------------:|---------------|
| [prompt-ls-en-1](https://huggingface.co/lmvasque/prompt-ls-en-1) | 1 | English | fine-tune |
| [prompt-ls-en-2](https://huggingface.co/lmvasque/prompt-ls-en-2) | 2 | English | fine-tune |
| [roberta-large](https://huggingface.co/roberta-large) | 3 | English | zero-shot |
| [prompt-ls-es-1](https://huggingface.co/lmvasque/prompt-ls-es-1) | 1 | Spanish | fine-tune |
| [prompt-ls-es-2](https://huggingface.co/lmvasque/prompt-ls-es-2) | 2 | Spanish | fine-tune |
| [prompt-ls-es-3](https://huggingface.co/lmvasque/prompt-ls-es-3) | 3 | Spanish | fine-tune |
| [prompt-ls-pt-1](https://huggingface.co/lmvasque/prompt-ls-pt-1) | 1 | Portuguese | fine-tune |
| **[prompt-ls-pt-2](https://huggingface.co/lmvasque/prompt-ls-pt-2)** | **2** | **Portuguese** | **fine-tune** |
| [prompt-ls-pt-3](https://huggingface.co/lmvasque/prompt-ls-pt-3) | 3 | Portuguese | fine-tune |
For the zero-shot setting, we used the original models with no further training. Links to these models are also updated in the table above.
## Results
We include the [official results](https://github.com/LaSTUS-TALN-UPF/TSAR-2022-Shared-Task/tree/main/results/official) from the competition test set as a reference. However, we encourage users to also check our results on the development set, which show increased performance for Spanish and Portuguese.
You can find more details in our [paper](https://drive.google.com/file/d/1x5dRxgcSGAaCCrjsgpCHnYek9G-TmZff/view?usp=share_link).
| Language | # | Model | Setting | Prompt1 | Prompt2 | w | k | Acc@1 | A@3 | M@3 | P@3 |
|------------|---|-------|--------------|---------|---------|---|---|-------|-----|-----|-------------|
| English | 1 | RoBERTa-L | fine-tune | simple | word | 5 | 5 | **0.6353** | **0.5308** | **0.4244** | **0.8739** |
| English | 2 | mBERT | multilingual | easier | word | 10 | 10 | 0.4959 | 0.4235 | 0.3273 | 0.7560 |
| English | 3 | RoBERTa-L | zero-shot | easier | word | 5 | - | 0.2654 | 0.268 | 0.1820 | 0.4906 |
| Spanish | 1 | BERTIN | fine-tune | sinónimo | fácil | - | 3 | 0.3451 | **0.2907** | **0.2238** | **0.5543** |
| Spanish | 2 | BERTIN | fine-tune | palabra | simple | - | 10 | 0.3614 | **0.2907**| 0.2225 | 0.538 |
| Spanish | 3 | BERTIN | fine-tune | sinónimo | fácil | 10 | 10 | **0.3668** | 0.269 | 0.2128 | 0.5326 |
| Portuguese | 1 | BR_BERTo | fine-tune | palavra | simples | - | 8 | **0.1711** | 0.1096 | 0.1011 | 0.2486 |
| Portuguese | 2 | BR_BERTo | fine-tune | sinônimo | fácil | - | 10 | 0.1363 | 0.0962 | 0.0944 | 0.2379 |
| Portuguese | 3 | BR_BERTo | fine-tune | sinônimo | simples | 5 | 10 | 0.1577 | **0.1283**| **0.1071**| **0.2834**|
## Citation
If you use our results and scripts in your research, please cite our work:
"[UoM&MMU at TSAR-2022 Shared Task: Prompt Learning for Lexical Simplification](https://drive.google.com/file/d/1x5dRxgcSGAaCCrjsgpCHnYek9G-TmZff/view?usp=share_link)".
```
@inproceedings{vasquez-rodriguez-etal-2022-prompt-ls,
title = "UoM\&MMU at TSAR-2022 Shared Task: Prompt Learning for Lexical Simplification",
author = "V{\'a}squez-Rodr{\'\i}guez, Laura and
Nguyen, Nhung T. H. and
Shardlow, Matthew and
Ananiadou, Sophia",
booktitle = "Shared Task on Text Simplification, Accessibility, and Readability (TSAR-2022), EMNLP 2022",
month = dec,
year = "2022",
}
```
|
lmvasque/prompt-ls-es-3
|
lmvasque
| 2022-11-10T19:11:28Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-08T14:03:09Z |
---
license: cc-by-4.0
---
## UoM&MMU at TSAR-2022 Shared Task - Prompt Learning for Lexical Simplification: prompt-ls-es-3
We present **PromptLS**, a method for fine-tuning large pre-trained masked language models to perform the task of Lexical Simplification.
This model is part of a series of models presented at the [TSAR-2022 Shared Task](https://taln.upf.edu/pages/tsar2022-st/)
by the University of Manchester and Manchester Metropolitan University (UoM&MMU) Team in English, Spanish and Portuguese.
You can find more details about the project in our [GitHub](https://github.com/lmvasque/ls-prompt-tsar2022).
## Models
Our models were fine-tuned using prompt-learning for **Lexical Simplification**. These are the available models you can use (current model page in bold):
| Model Name | Run # | Language | Setting |
|--------------------------------------------------------------------|----|:-----------:|-----------|
| [prompt-ls-en-1](https://huggingface.co/lmvasque/prompt-ls-en-1) | 1 | English | fine-tune |
| [prompt-ls-en-2](https://huggingface.co/lmvasque/prompt-ls-en-2) | 2 | English | fine-tune |
| [roberta-large](https://huggingface.co/roberta-large) | 3 | English | zero-shot |
| [prompt-ls-es-1](https://huggingface.co/lmvasque/prompt-ls-es-1) | 1 | Spanish | fine-tune |
| [prompt-ls-es-2](https://huggingface.co/lmvasque/prompt-ls-es-2) | 2 | Spanish | fine-tune |
| **[prompt-ls-es-3](https://huggingface.co/lmvasque/prompt-ls-es-3)** | **3** | **Spanish** | **fine-tune** |
| [prompt-ls-pt-1](https://huggingface.co/lmvasque/prompt-ls-pt-1) | 1 | Portuguese | fine-tune |
| [prompt-ls-pt-2](https://huggingface.co/lmvasque/prompt-ls-pt-2) | 2 | Portuguese | fine-tune |
| [prompt-ls-pt-3](https://huggingface.co/lmvasque/prompt-ls-pt-3) | 3 | Portuguese | fine-tune |
For the zero-shot setting, we used the original models with no further training. Links to these models are also updated in the table above.
## Results
We include the [official results](https://github.com/LaSTUS-TALN-UPF/TSAR-2022-Shared-Task/tree/main/results/official) from the competition test set as a reference. However, we encourage users to also check our results on the development set, which show increased performance for Spanish and Portuguese.
You can find more details in our [paper](https://drive.google.com/file/d/1x5dRxgcSGAaCCrjsgpCHnYek9G-TmZff/view?usp=share_link).
| Language | # | Model | Setting | Prompt1 | Prompt2 | w | k | Acc@1 | A@3 | M@3 | P@3 |
|------------|---|-------|--------------|---------|---------|---|---|-------|-----|-----|-------------|
| English | 1 | RoBERTa-L | fine-tune | simple | word | 5 | 5 | **0.6353** | **0.5308** | **0.4244** | **0.8739** |
| English | 2 | mBERT | multilingual | easier | word | 10 | 10 | 0.4959 | 0.4235 | 0.3273 | 0.7560 |
| English | 3 | RoBERTa-L | zero-shot | easier | word | 5 | - | 0.2654 | 0.268 | 0.1820 | 0.4906 |
| Spanish | 1 | BERTIN | fine-tune | sinónimo | fácil | - | 3 | 0.3451 | **0.2907** | **0.2238** | **0.5543** |
| Spanish | 2 | BERTIN | fine-tune | palabra | simple | - | 10 | 0.3614 | **0.2907**| 0.2225 | 0.538 |
| Spanish | 3 | BERTIN | fine-tune | sinónimo | fácil | 10 | 10 | **0.3668** | 0.269 | 0.2128 | 0.5326 |
| Portuguese | 1 | BR_BERTo | fine-tune | palavra | simples | - | 8 | **0.1711** | 0.1096 | 0.1011 | 0.2486 |
| Portuguese | 2 | BR_BERTo | fine-tune | sinônimo | fácil | - | 10 | 0.1363 | 0.0962 | 0.0944 | 0.2379 |
| Portuguese | 3 | BR_BERTo | fine-tune | sinônimo | simples | 5 | 10 | 0.1577 | **0.1283**| **0.1071**| **0.2834**|
## Citation
If you use our results and scripts in your research, please cite our work:
"[UoM&MMU at TSAR-2022 Shared Task: Prompt Learning for Lexical Simplification](https://drive.google.com/file/d/1x5dRxgcSGAaCCrjsgpCHnYek9G-TmZff/view?usp=share_link)".
```
@inproceedings{vasquez-rodriguez-etal-2022-prompt-ls,
title = "UoM\&MMU at TSAR-2022 Shared Task: Prompt Learning for Lexical Simplification",
author = "V{\'a}squez-Rodr{\'\i}guez, Laura and
Nguyen, Nhung T. H. and
Shardlow, Matthew and
Ananiadou, Sophia",
booktitle = "Shared Task on Text Simplification, Accessibility, and Readability (TSAR-2022), EMNLP 2022",
month = dec,
year = "2022",
}
```
|
yunocchi/swin-tiny-patch4-window7-224-finetuned-respirator
|
yunocchi
| 2022-11-10T19:03:05Z | 208 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-11-10T17:18:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-respirator
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9082397003745318
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-respirator
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2124
- Accuracy: 0.9082
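The card leaves usage unspecified; a minimal sketch with the image-classification pipeline (the class labels come from the checkpoint's config):

```python
from transformers import pipeline

clf = pipeline("image-classification", model="yunocchi/swin-tiny-patch4-window7-224-finetuned-respirator")
print(clf("example.jpg"))  # local path or URL; the filename is illustrative
```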
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4872 | 0.98 | 37 | 0.2124 | 0.9082 |
| 0.4828 | 1.98 | 74 | 0.2124 | 0.9082 |
| 0.4772 | 2.98 | 111 | 0.2124 | 0.9082 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
nateraw/videomae-base-finetuned-ucf101
|
nateraw
| 2022-11-10T18:54:58Z | 158 | 1 |
transformers
|
[
"transformers",
"pytorch",
"videomae",
"video-classification",
"vision",
"en",
"arxiv:1910.09700",
"license:mit",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2022-11-10T18:46:46Z |
---
language: en
license: mit
library_name: transformers
tags:
- video-classification
- videomae
- vision
---
# Model Card for videomae-base-finetuned-ucf101
See the [WandB report](https://wandb.ai/nateraw/videomae-finetune-ucf101/reports/Fine-Tuning-VideoMAE-Base-on-UCF101--VmlldzoyOTUwMjk4) for metrics.
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Model Examination](#model-examination-optional)
7. [Environmental Impact](#environmental-impact)
8. [Technical Specifications](#technical-specifications-optional)
9. [Citation](#citation-optional)
10. [Glossary](#glossary-optional)
11. [More Information](#more-information-optional)
12. [Model Card Authors](#model-card-authors-optional)
13. [Model Card Contact](#model-card-contact)
14. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
VideoMAE Base model fine-tuned on UCF101
- **Developed by:** [@nateraw](https://huggingface.co/nateraw)
- **Shared by [optional]:** [More Information Needed]
- **Model type:** fine-tuned
- **Language(s) (NLP):** en
- **License:** mit
- **Related Models [optional]:** [More Information Needed]
- **Parent Model [optional]:** [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base)
- **Resources for more information:** [More Information Needed]
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This model can be used for Video Action Recognition
## Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
## Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
We sampled 64-frame clips from the videos, then took a uniform sample of those frames to get 16-frame inputs for the model. During training, we used PyTorchVideo's [`MixVideo`](https://github.com/facebookresearch/pytorchvideo/blob/main/pytorchvideo/transforms/mix.py) to apply mixup/cutmix.
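As a small illustration (not the training code itself), a uniform 16-of-64 frame sample can be computed like this:

```python
import numpy as np

# Illustrative sketch: 16 evenly spaced frame indices from a 64-frame clip
clip_len, num_frames = 64, 16
indices = np.linspace(0, clip_len - 1, num=num_frames).astype(np.int64)
print(indices)  # [ 0  4  8 12 16 21 25 29 33 37 42 46 50 54 58 63]
```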
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
We only trained/evaluated one fold from the UCF101 annotations. Unlike in the VideoMAE paper, we did not perform inference over multiple crops/segments of validation videos, so the results are likely slightly lower than what you would get if you did that too.
- Eval Accuracy: 0.758209764957428
- Eval Accuracy Top 5: 0.8983050584793091
# Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
[More Information Needed]
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
# More Information [optional]
[More Information Needed]
# Model Card Authors [optional]
[@nateraw](https://huggingface.co/nateraw)
# Model Card Contact
[@nateraw](https://huggingface.co/nateraw)
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from decord import VideoReader, cpu
import torch
import numpy as np
from transformers import VideoMAEFeatureExtractor, VideoMAEForVideoClassification
from huggingface_hub import hf_hub_download
np.random.seed(0)
def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
converted_len = int(clip_len * frame_sample_rate)
end_idx = np.random.randint(converted_len, seg_len)
start_idx = end_idx - converted_len
indices = np.linspace(start_idx, end_idx, num=clip_len)
indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
return indices
# video clip consists of 300 frames (10 seconds at 30 FPS)
file_path = hf_hub_download(
repo_id="nateraw/dino-clips", filename="archery.mp4", repo_type="space"
)
videoreader = VideoReader(file_path, num_threads=1, ctx=cpu(0))
# sample 16 frames
videoreader.seek(0)
indices = sample_frame_indices(clip_len=16, frame_sample_rate=4, seg_len=len(videoreader))
video = videoreader.get_batch(indices).asnumpy()
feature_extractor = VideoMAEFeatureExtractor.from_pretrained("nateraw/videomae-base-finetuned-ucf101")
model = VideoMAEForVideoClassification.from_pretrained("nateraw/videomae-base-finetuned-ucf101")
inputs = feature_extractor(list(video), return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 101 UCF101 classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
</details>
|
robertbogdon/model_tuning_mindallee8kmcfjz-labels-classification
|
robertbogdon
| 2022-11-10T18:46:33Z | 0 | 0 |
sklearn
|
[
"sklearn",
"tabular-classification",
"baseline-trainer",
"license:apache-2.0",
"region:us"
] |
tabular-classification
| 2022-11-10T18:46:30Z |
---
license: apache-2.0
library_name: sklearn
tags:
- tabular-classification
- baseline-trainer
---
## Baseline Model trained on model_tuning_mindallee8kmcfjz to apply classification on labels
**Metrics of the best model** (`LogisticRegression(C=0.1, class_weight='balanced', max_iter=1000)`):

| Metric          |    Value |
|:----------------|---------:|
| accuracy        | 0.806000 |
| recall_macro    | 0.416887 |
| precision_macro | 0.391691 |
| f1_macro        | 0.397991 |
**See model plot below:**

```
Pipeline(steps=[('easypreprocessor',
                 EasyPreprocessor(types=           continuous  dirty_float  ...  free_string  useless
                                  temperatures          False        False  ...        False    False
                                  superconditions        True        False  ...        False    False
                                  is_megas              False        False  ...        False    False
                                  feature_0              True        False  ...        False    False
                                  feature_1              True        False  ...        False    False
                                  ...                     ...          ...  ...          ...      ...
                                  feature_763            True        False  ...        False    False
                                  feature_764            True        False  ...        False    False
                                  feature_765            True        False  ...        False    False
                                  feature_766            True        False  ...        False    False
                                  feature_767            True        False  ...        False    False
                                  [771 rows x 7 columns])),
                ('logisticregression',
                 LogisticRegression(C=0.1, class_weight='balanced', max_iter=1000))])
```
**Disclaimer:** This model is trained with dabl library as a baseline, for better results, use [AutoTrain](https://huggingface.co/autotrain).
**Logs of training** including the models tried in the process can be found in logs.txt
|
Omerdor/dry_samples_train
|
Omerdor
| 2022-11-10T18:21:32Z | 3 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-11-10T14:50:19Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# dry_samples_train
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# Minimal, illustrative sketch (not from the original card): sample one image
# with the DDPMPipeline this repo is tagged as.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("Omerdor/dry_samples_train")
image = pipeline().images[0]
image.save("sample.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 4
- gradient_accumulation_steps: 3
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/Omerdor/dry_samples_train/tensorboard?#scalars)
|
Vested-Sigil/VanGO
|
Vested-Sigil
| 2022-11-10T17:54:49Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-11-10T17:51:20Z |
```python
#!/usr/bin/env python3
from diffusers import DiffusionPipeline
import PIL
import requests
from io import BytesIO
import torch

def download_image(url):
    response = requests.get(url)
    return PIL.Image.open(BytesIO(response.content)).convert("RGB")

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="stable_diffusion_mega",
    torch_dtype=torch.float16,
    revision="fp16",
)
pipe.to("cuda")
pipe.enable_attention_slicing()

### Text-to-Image
images = pipe.text2img("An astronaut riding a horse").images

### Image-to-Image
init_image = download_image("https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg")
prompt = "A fantasy landscape, trending on artstation"
images = pipe.img2img(prompt=prompt, init_image=init_image, strength=0.75, guidance_scale=7.5).images

### Inpainting
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))
prompt = "a cat sitting on a bench"
images = pipe.inpaint(prompt=prompt, init_image=init_image, mask_image=mask_image, strength=0.75).images
```
|
huggingtweets/googlepoetics
|
huggingtweets
| 2022-11-10T17:53:15Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-10T17:52:16Z |
---
language: en
thumbnail: http://www.huggingtweets.com/googlepoetics/1668102791580/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/378800000152236311/e364d2a13dab35a8b65c9decf71ae134_400x400.jpeg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Google Poetics</div>
<div style="text-align: center; font-size: 14px;">@googlepoetics</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Google Poetics.
| Data | Google Poetics |
| --- | --- |
| Tweets downloaded | 1569 |
| Retweets | 9 |
| Short tweets | 35 |
| Tweets kept | 1525 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2re8zf12/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @googlepoetics's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/cwwobqqi) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/cwwobqqi/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/googlepoetics')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
nielsr/detr-table-structure-recognition
|
nielsr
| 2022-11-10T17:22:16Z | 216 | 1 |
transformers
|
[
"transformers",
"pytorch",
"detr",
"object-detection",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2022-09-06T14:36:57Z |
Hi,
Please don't use this model anymore; it only worked with a specific branch of mine.
From now on, it's recommended to use https://huggingface.co/microsoft/table-transformer-structure-recognition from Transformers.
Thanks, have a great day
|
nielsr/detr-table-detection
|
nielsr
| 2022-11-10T17:21:51Z | 214 | 2 |
transformers
|
[
"transformers",
"pytorch",
"detr",
"object-detection",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2022-09-06T14:28:01Z |
Hi,
Please don't use this model anymore; it only worked with a specific branch of mine.
From now on, it's recommended to use https://huggingface.co/microsoft/table-transformer-detection from Transformers.
Thanks, have a great day
|
yunocchi/swin-tiny-patch4-window7-224-finetuned-eurosat
|
yunocchi
| 2022-11-10T16:57:04Z | 204 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-11-10T16:52:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.48148148148148145
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0629
- Accuracy: 0.4815
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 1.0629 | 0.4815 |
| No log | 2.0 | 4 | 1.0387 | 0.4815 |
| No log | 3.0 | 6 | 1.0107 | 0.4815 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
amitjohn007/bert-finetuned-squad
|
amitjohn007
| 2022-11-10T16:43:37Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-10T05:33:12Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: amitjohn007/bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# amitjohn007/bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5685
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
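As a minimal usage sketch (not part of the original card; the question and context are placeholders):

```python
from transformers import pipeline

# Minimal sketch, not from the original card
qa = pipeline("question-answering", model="amitjohn007/bert-finetuned-squad")
print(qa(question="Where does Sarah live?",
         context="My name is Sarah and I live in London."))
```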
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16638, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.2704 | 0 |
| 0.7816 | 1 |
| 0.5685 | 2 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
ajtamayoh/NER_EHR_Spanish_model_Mulitlingual_BERT
|
ajtamayoh
| 2022-11-10T16:42:14Z | 113 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-05-11T02:54:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: NER_EHR_Spanish_model_Mulitlingual_BERT
results: []
widget:
- text: 'Presentamos el caso de una mujer de 30 años, fumadora de 20 cigarrillos/día y sin otros antecedentes personales de interés. La paciente refiere infecciones urinarias de repetición. Se indica realización de ecografía abdominal, observándose una lesión nodular intravesical, por lo que es derivada a consulta de urología.
En cistoscopia se visualiza tumoración exofítica de 3x3 cms. en cara lateral derecha con mucosa vesical íntegra, no encontrándose alteraciones en el resto de la vejiga. Se realiza exploración bajo anestesia (EBA) y resección transuretral de dicha lesión (RTU).
En el informe de anatomía patológica macroscópicamente se describen fragmentos de pared vesical con urotelio conservado sin displasia, destacando en la capa muscular propia y en continuidad con el tejido muscular de la misma, una tumoración fusocelular con células que muestran unos núcleos de gran tamaño, pleomórficos, de aspecto vesiculoso y unos citoplasmas amplios eosinófilos. Esta celularidad se dispone en formas de fascículos mal definidos y entre la misma se reconoce abundante celularidad constituida fundamentalmente por numerosas células plasmáticas y leucocitos polimorfonucleares eosinófilos. No se observa un índice mitótico elevado, aunque el índice de proliferación medido como positividad nuclear con anticuerpos frente a MIB-1 se encuentra entre el 10 y el 25% de la celularidad tumoral. No se han objetivado áreas de necrosis. En estudio inmunohistoquímico se observa marcada positividad frente a citoqueratinas (AE1/AE3) y CAM5.2 a nivel citoplasmático, así como una marcada positividad citoplasmática con anticuerpos frente a p80 (proteína ALK). La celularidad descrita ha resultado negativa con anticuerpos frente a músculo liso (actina de músculo liso, MyO D1 y Calretinina), así como para CEA y citoqueratinas de alto peso molecular, observándose tan sólo positividad focal y aislada frente a EMA. Tras realización de FISH sobre material parafinado no se evidencia traslocación en el gen de la ALK.
El diagnóstico anatomopatológico definitivo es tumor miofibroblástico inflamatorio vesical.'
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NER_EHR_Spanish_model_Mulitlingual_BERT
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the DisTEMIST shared task 2022 dataset. It is available at: https://temu.bsc.es/distemist/category/data/
It achieves the following results on the evaluation set:
- Loss: 0.2603
- Precision: 0.5637
- Recall: 0.5801
- F1: 0.5718
- Accuracy: 0.9534
## Model description
For a complete description of our system, please go to: https://ceur-ws.org/Vol-3180/paper-26.pdf
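A minimal inference sketch (not part of the original card; the example sentence is a placeholder) using the token-classification pipeline:

```python
from transformers import pipeline

# Minimal sketch, not from the original card
ner = pipeline("token-classification",
               model="ajtamayoh/NER_EHR_Spanish_model_Mulitlingual_BERT",
               aggregation_strategy="simple")
print(ner("La paciente refiere infecciones urinarias de repetición."))
```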
## Training and evaluation data
Dataset provided by DisTEMIST shared task, it is available at: https://temu.bsc.es/distemist/category/data/
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 71 | 0.2060 | 0.5017 | 0.5540 | 0.5266 | 0.9496 |
| No log | 2.0 | 142 | 0.2163 | 0.5363 | 0.5433 | 0.5398 | 0.9495 |
| No log | 3.0 | 213 | 0.2245 | 0.5521 | 0.5356 | 0.5438 | 0.9514 |
| No log | 4.0 | 284 | 0.2453 | 0.5668 | 0.5985 | 0.5822 | 0.9522 |
| No log | 5.0 | 355 | 0.2433 | 0.5657 | 0.5579 | 0.5617 | 0.9530 |
| No log | 6.0 | 426 | 0.2553 | 0.5762 | 0.5762 | 0.5762 | 0.9536 |
| No log | 7.0 | 497 | 0.2603 | 0.5637 | 0.5801 | 0.5718 | 0.9534 |
### How to cite this work:
Tamayo, A., Burgos, D. A., & Gelbukh, A. (2022). mBERT and simple post-processing: A baseline for disease mention detection in Spanish. In Working Notes of Conference and Labs of the Evaluation (CLEF) Forum. CEUR Workshop Proceedings.
```
@inproceedings{tamayo2022mbert,
  title={mbert and simple post-processing: A baseline for disease mention detection in spanish},
  author={Tamayo, Antonio and Burgos, Diego A and Gelbukh, Alexander},
  booktitle={Working Notes of Conference and Labs of the Evaluation (CLEF) Forum. CEUR Workshop Proceedings},
  year={2022}
}
```
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.0
- Tokenizers 0.12.1
|
ajtamayoh/NLP-CIC-WFU_Clinical_Cases_NER_Sents_tokenized_mBERT_cased_fine_tuned
|
ajtamayoh
| 2022-11-10T16:34:20Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-08T21:01:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: NLP-CIC-WFU_Clinical_Cases_NER_Sents_tokenized_mBERT_cased_fine_tuned
results: []
widget:
- text: 'Se hospitalizó un hombre de 42 años, al que se le había diagnosticado recientemente un carcinoma renal sarcomatoide de células claras metastásico, con fiebre, manejo del dolor por metástasis óseas sintomáticas y para decisiones de tratamiento sistémico de primera línea. El paciente no tenía otros antecedentes. Inicialmente presentó fiebre de 39,0 °C el 12 de marzo de 2020, para la cual recibió ceftriaxona fuera de nuestro centro. El día 6, presentó tos leve y fiebre (38,3°C), lo que llevó a realizar una prueba de PCR en tiempo real para SARS-CoV-2; el resultado fue positivo. El paciente fue ingresado en la sala de COVID-19 de nuestro hospital y se monitorizó estrechamente. La TAC torácica mostró opacidades de vidrio esmerilado bilaterales parcheadas, asociadas al COVID-19 (figura 1). El D7 se le empezó a administrar terapia antivírica con lopinavir y ritonavir (400mg/100mg por vía oral), que se mantuvo durante 5 días, según las directrices locales. El día 8, una disnea súbita y una caída de la saturación obligaron a aumentar el oxígeno a 6 l/min, sin necesidad de ventilación mecánica. Se le administraron dos dosis de tocilizumab, con 8 mg/kg i.v. en cada dosis, separadas 8 horas, con buena tolerancia. Después mostró una mejora clínica, pasando a afebril rápidamente y con un consumo de oxígeno decreciente, que fue retirado por completo el día 12. Una TAC torácica del día 12 confirmó la mejora mostrando regresión parcial de los infiltrados pulmonares y de las opacidades de vidrio esmerilado. La proteína C-reactiva, un marcador indirecto de liberación de citocinas, disminuyó de 225 mg/L a 33 mg/L en 4 días (figura 1). Tras la administración de tocilizumab no se observaron cambios relevantes en las subpoblaciones linfocíticas circulantes y el porcentaje de CD4 + CD25 + linfocitos era alto, antes y después del tocilizumab. Finalmente, el paciente se recuperó totalmente de los síntomas de la COVID-19.'
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP-CIC-WFU_Clinical_Cases_NER_Sents_tokenized_mBERT_cased_fine_tuned
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the LivingNER shared task 2022 dataset. It is available at: https://temu.bsc.es/livingner/category/data/
It achieves the following results on the evaluation set:
- Loss: 0.0546
- Precision: 0.8574
- Recall: 0.7366
- F1: 0.7924
- Accuracy: 0.9893
## Model description
For a complete description of our system, please go to: https://ceur-ws.org/Vol-3202/livingner-paper13.pdf
## Training and evaluation data
Dataset provided by LivingNER shared task, it is available at: https://temu.bsc.es/livingner/category/data/
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0505 | 1.0 | 2568 | 0.0434 | 0.9399 | 0.6781 | 0.7878 | 0.9886 |
| 0.0393 | 2.0 | 5136 | 0.0450 | 0.9384 | 0.6947 | 0.7984 | 0.9892 |
| 0.0306 | 3.0 | 7704 | 0.0451 | 0.9497 | 0.6951 | 0.8027 | 0.9897 |
| 0.0266 | 4.0 | 10272 | 0.0422 | 0.9646 | 0.6904 | 0.8048 | 0.9900 |
| 0.0208 | 5.0 | 12840 | 0.0494 | 0.9576 | 0.6969 | 0.8067 | 0.9902 |
| 0.0141 | 6.0 | 15408 | 0.0506 | 0.8407 | 0.7352 | 0.7844 | 0.9890 |
| 0.0093 | 7.0 | 17976 | 0.0546 | 0.8574 | 0.7366 | 0.7924 | 0.9893 |
### How to cite this work:
Tamayo, A., Burgos, D., & Gelbukh, A. (2022). ParTNER: Paragraph Tuning for Named Entity Recognition on Clinical Cases in Spanish using mBERT+ Rules. In CEUR Workshop Proceedings (Vol. 3202). CEUR-WS.
```
@inproceedings{tamayo2022partner,
  title={ParTNER: Paragraph Tuning for Named Entity Recognition on Clinical Cases in Spanish using mBERT+ Rules},
  author={Tamayo, Antonio and Burgos, Diego and Gelbukh, Alexander}
}
```
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
RafaelEiji/jurisbert-base-classify
|
RafaelEiji
| 2022-11-10T16:19:29Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-10T12:49:42Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [juridics/jurisbert-base-portuguese-uncased](https://huggingface.co/juridics/jurisbert-base-portuguese-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4838
- Accuracy: 0.7176
## Model description
More information needed
## Intended uses & limitations
More information needed
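A minimal inference sketch (not part of the original card; the input sentence is a placeholder and the label set is whatever the model's config defines):

```python
from transformers import pipeline

# Minimal sketch, not from the original card
classifier = pipeline("text-classification", model="RafaelEiji/jurisbert-base-classify")
print(classifier("Trata-se de recurso especial interposto contra acórdão do tribunal de origem."))
```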
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Training results
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.12.0+cu116
- Datasets 2.6.1
- Tokenizers 0.13.1
|
sid321axn/minilm-finetuned-emotionclassification
|
sid321axn
| 2022-11-10T16:16:40Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-10T05:46:34Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: minilm-finetuned-emotionclassification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# minilm-finetuned-emotionclassification
This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0554
- F1 Score: 0.6732
## Model description
The base model is Microsoft MiniLM-L12-H384-uncased, fine-tuned on the [GoEmotions dataset](https://huggingface.co/datasets/go_emotions) available on Hugging Face.
With this model, you can classify emotions in English text data. The model predicts 10 basic emotions:
1) anger 🤬
2) love ❤️
3) fear 😨
4) joy 😀
5) excitement 😄
6) sadness 😭
7) surprise 😲
8) gratitude 😊
9) curiosity 🤔
10) caring
## Intended uses & limitations
The model can be used to detect emotions in text/documents, which enables contextual emotional analysis of those documents.
## Training and evaluation data
The dataset used for training and evaluation is the [GoEmotions dataset](https://huggingface.co/datasets/go_emotions), from which we used 10 emotion labels:
{0:'sadness',1:'joy',2:'love',3:'anger',4:'fear',5:'surprise',6:'excitement',7:'gratitude',8:'curiosity',9:'caring'}
## How to use the model
Here is how to use this model to extract the emotions from the given text in PyTorch:
```python
>>> from transformers import pipeline
>>> model_ckpt ="sid321axn/minilm-finetuned-emotionclassification"
>>> pipe = pipeline("text-classification",model=model_ckpt)
>>> pipe("I am really excited about second part of Brahmastra Movie")
[{'label': 'excitement', 'score': 0.7849715352058411}]
```
## Training procedure
Training followed this [video](https://www.youtube.com/watch?v=u--UVvH-LIQ) on YouTube by Hugging Face.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1659 | 1.0 | 539 | 1.1419 | 0.6347 |
| 1.0719 | 2.0 | 1078 | 1.0789 | 0.6589 |
| 0.9893 | 3.0 | 1617 | 1.0537 | 0.6666 |
| 0.9296 | 4.0 | 2156 | 1.0366 | 0.6729 |
| 0.8763 | 5.0 | 2695 | 1.0359 | 0.6774 |
| 0.8385 | 6.0 | 3234 | 1.0484 | 0.6693 |
| 0.8085 | 7.0 | 3773 | 1.0478 | 0.6758 |
| 0.7842 | 8.0 | 4312 | 1.0488 | 0.6741 |
| 0.7608 | 9.0 | 4851 | 1.0538 | 0.6749 |
| 0.7438 | 10.0 | 5390 | 1.0554 | 0.6732 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
pinxi/bloom-1b7-bloom
|
pinxi
| 2022-11-10T16:00:58Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bloom",
"text-generation",
"license:bigscience-openrail-m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-10T15:03:46Z |
---
license: bigscience-openrail-m
---
Bloom-1b7 model fine-tuned on Bloom-175b-generated data for extracting actionable points from emails.
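A minimal generation sketch (not part of the original card; the prompt format is an assumption, since the card does not document one):

```python
from transformers import pipeline

# Minimal sketch, not from the original card; the prompt format is a guess
generator = pipeline("text-generation", model="pinxi/bloom-1b7-bloom")
prompt = ("Email: Hi team, please send the Q3 report by Friday "
          "and book the review meeting.\nActionable points:")
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```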
|
celinely/camembert-base-finetuned-sentence-simplification-fr
|
celinely
| 2022-11-10T15:26:39Z | 56 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-27T09:41:46Z |
---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: camembert-base-finetuned-sentence-simplification-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-base-finetuned-sentence-simplification-fr
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Rouge1: 100.0
- Rouge2: 100.0
- Rougel: 100.0
- Rougelsum: 100.0
## Model description
More information needed
## Intended uses & limitations
More information needed
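A minimal inference sketch (not part of the original card; the French sentence is a placeholder):

```python
from transformers import pipeline

# Minimal sketch, not from the original card
simplifier = pipeline("text2text-generation",
                      model="celinely/camembert-base-finetuned-sentence-simplification-fr")
print(simplifier("Cette phrase, particulièrement alambiquée, mériterait d'être simplifiée."))
```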
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:------:|:---------:|
| 0.0202 | 1.0 | 167 | 0.0006 | 99.978 | 99.9587 | 99.978 | 99.978 |
| 0.0034 | 2.0 | 334 | 0.0001 | 100.0 | 100.0 | 100.0 | 100.0 |
| 0.0019 | 3.0 | 501 | 0.0001 | 100.0 | 100.0 | 100.0 | 100.0 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
|
sanchit-gandhi/whisper-medium-es-5k-1e-5
|
sanchit-gandhi
| 2022-11-10T15:26:01Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"es",
"dataset:facebook/multilingual_librispeech",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-10T09:01:56Z |
---
language:
- es
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- facebook/multilingual_librispeech
metrics:
- wer
model-index:
- name: Whisper Small Es - Sanchit Gandhi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Multilingual LibriSpeech
type: facebook/multilingual_librispeech
args: 'config: es, split: test'
metrics:
- name: Wer
type: wer
value: 4.988756935106611
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Es - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Multilingual LibriSpeech dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1252
- Wer: 4.9888
## Model description
More information needed
## Intended uses & limitations
More information needed
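A minimal transcription sketch (not part of the original card; "audio.wav" is a placeholder path to a local Spanish audio file):

```python
from transformers import pipeline

# Minimal sketch, not from the original card
asr = pipeline("automatic-speech-recognition", model="sanchit-gandhi/whisper-medium-es-5k-1e-5")
print(asr("audio.wav")["text"])
```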
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2346 | 0.2 | 500 | 0.1957 | 8.5131 |
| 0.1252 | 0.4 | 1000 | 0.1448 | 5.7876 |
| 0.2076 | 0.6 | 1500 | 0.1361 | 5.5786 |
| 0.2356 | 0.8 | 2000 | 0.1504 | 6.6611 |
| 0.1893 | 1.0 | 2500 | 0.1252 | 4.9888 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.0
- Datasets 2.6.2.dev0
- Tokenizers 0.12.1
|
pinxi/bloom-560m-igpt3
|
pinxi
| 2022-11-10T15:02:33Z | 173 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bloom",
"text-generation",
"license:bigscience-openrail-m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-10T14:42:28Z |
---
license: bigscience-openrail-m
---
Bloom-560m model fine-tuned on InstructGPT3-generated data for extracting actionable points from emails.
|
Omerdor/dry_samples_test
|
Omerdor
| 2022-11-10T14:39:04Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-11-10T12:36:19Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# dry_samples_test
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# Minimal, illustrative sketch (not from the original card): sample one image
# with the DDPMPipeline this repo is tagged as.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("Omerdor/dry_samples_test")
image = pipeline().images[0]
image.save("sample.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 4
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/Omerdor/dry_samples_test/tensorboard?#scalars)
|
Matthijs/mobilenet_v1_0.75_192
|
Matthijs
| 2022-11-10T14:20:14Z | 237 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mobilenet_v1",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:1704.04861",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-06-22T12:07:44Z |
---
license: other
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# MobileNet V1
MobileNet V1 model pre-trained on ImageNet-1k at resolution 192x192. It was introduced in [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Howard et al., and first released in [this repository](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md).
Disclaimer: The team releasing MobileNet V1 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
From the [original README](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md):
> MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used. MobileNets can be run efficiently on mobile devices [...] MobileNets trade off between latency, size and accuracy while comparing favorably with popular models from the literature.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilenet_v1) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import MobileNetV1FeatureExtractor, MobileNetV1ForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MobileNetV1FeatureExtractor.from_pretrained("Matthijs/mobilenet_v1_0.75_192")
model = MobileNetV1ForImageClassification.from_pretrained("Matthijs/mobilenet_v1_0.75_192")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Note: This model actually predicts 1001 classes, the 1000 classes from ImageNet plus an extra “background” class (index 0).
Currently, both the feature extractor and model support PyTorch.
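If you only want the 1,000 ImageNet labels, a small illustrative sketch (not from the original card; it reuses `logits` and `model` from the snippet above) is:

```python
# Illustrative only: skip index 0 ("background") when picking the top ImageNet class
probs = logits.softmax(-1)[0]
predicted_class_idx = probs[1:].argmax().item() + 1  # +1 restores the original index after slicing
print("Predicted class:", model.config.id2label[predicted_class_idx])
```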
|
Matthijs/mobilenet_v1_1.0_224
|
Matthijs
| 2022-11-10T14:20:00Z | 3,887 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mobilenet_v1",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:1704.04861",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-06-22T12:05:41Z |
---
license: other
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# MobileNet V1
MobileNet V1 model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Howard et al., and first released in [this repository](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md).
Disclaimer: The team releasing MobileNet V1 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
From the [original README](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md):
> MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used. MobileNets can be run efficiently on mobile devices [...] MobileNets trade off between latency, size and accuracy while comparing favorably with popular models from the literature.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilenet_v1) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import MobileNetV1FeatureExtractor, MobileNetV1ForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MobileNetV1FeatureExtractor.from_pretrained("Matthijs/mobilenet_v1_1.0_224")
model = MobileNetV1ForImageClassification.from_pretrained("Matthijs/mobilenet_v1_1.0_224")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Note: This model actually predicts 1001 classes, the 1000 classes from ImageNet plus an extra “background” class (index 0).
Currently, both the feature extractor and model support PyTorch.
|
Narsil/layoutlm-funsd
|
Narsil
| 2022-11-10T13:52:40Z | 54 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlm",
"token-classification",
"generated_from_trainer",
"endpoints-template",
"object-detection",
"dataset:funsd",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2022-11-10T11:35:09Z |
---
tags:
- generated_from_trainer
- endpoints-template
library_name: transformers
pipeline_tag: object-detection
widget:
- src: "https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png"
example_title: invoice
- src: "https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/contract.jpeg"
example_title: contract
datasets:
- funsd
model-index:
- name: layoutlm-funsd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlm-funsd
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the funsd dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0045
- Answer: {'precision': 0.7348314606741573, 'recall': 0.8084054388133498, 'f1': 0.7698646262507357, 'number': 809}
- Header: {'precision': 0.44285714285714284, 'recall': 0.5210084033613446, 'f1': 0.47876447876447875, 'number': 119}
- Question: {'precision': 0.8211009174311926, 'recall': 0.8403755868544601, 'f1': 0.8306264501160092, 'number': 1065}
- Overall Precision: 0.7599
- Overall Recall: 0.8083
- Overall F1: 0.7866
- Overall Accuracy: 0.8106
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
## Deploy Model with Inference Endpoints
Before we can get started, make sure you meet all of the following requirements:
1. An Organization/User with an active plan and *WRITE* access to the model repository.
2. Can access the UI: [https://ui.endpoints.huggingface.co](https://ui.endpoints.huggingface.co/endpoints)
### 1. Deploy LayoutLM and Send requests
In this tutorial, you will learn how to deploy a [LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm) to [Hugging Face Inference Endpoints](https://huggingface.co/inference-endpoints) and how you can integrate it via an API into your products.
This tutorial does not cover how to create the custom handler for inference. If you want to learn how to create a custom handler for Inference Endpoints, you can either check out the [documentation](https://huggingface.co/docs/inference-endpoints/guides/custom_handler) or go through [“Custom Inference with Hugging Face Inference Endpoints”](https://www.philschmid.de/custom-inference-handler).
We are going to deploy [philschmid/layoutlm-funsd](https://huggingface.co/philschmid/layoutlm-funsd), which implements the following `handler.py`:
```python
from typing import Dict, List, Any
from transformers import LayoutLMForTokenClassification, LayoutLMv2Processor
import torch
from subprocess import run
# install tesseract-ocr and pytesseract
run("apt install -y tesseract-ocr", shell=True, check=True)
run("pip install pytesseract", shell=True, check=True)
# helper function to unnormalize bboxes for drawing onto the image
def unnormalize_box(bbox, width, height):
return [
width * (bbox[0] / 1000),
height * (bbox[1] / 1000),
width * (bbox[2] / 1000),
height * (bbox[3] / 1000),
]
# set device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
class EndpointHandler:
def __init__(self, path=""):
# load model and processor from path
self.model = LayoutLMForTokenClassification.from_pretrained(path).to(device)
self.processor = LayoutLMv2Processor.from_pretrained(path)
def __call__(self, data: Dict[str, bytes]) -> Dict[str, List[Any]]:
"""
Args:
data (:obj:):
includes the deserialized image file as PIL.Image
"""
# process input
image = data.pop("inputs", data)
# process image
encoding = self.processor(image, return_tensors="pt")
# run prediction
with torch.inference_mode():
outputs = self.model(
input_ids=encoding.input_ids.to(device),
bbox=encoding.bbox.to(device),
attention_mask=encoding.attention_mask.to(device),
token_type_ids=encoding.token_type_ids.to(device),
)
predictions = outputs.logits.softmax(-1)
# post process output
result = []
for item, inp_ids, bbox in zip(
predictions.squeeze(0).cpu(), encoding.input_ids.squeeze(0).cpu(), encoding.bbox.squeeze(0).cpu()
):
label = self.model.config.id2label[int(item.argmax().cpu())]
if label == "O":
continue
score = item.max().item()
text = self.processor.tokenizer.decode(inp_ids)
bbox = unnormalize_box(bbox.tolist(), image.width, image.height)
result.append({"label": label, "score": score, "text": text, "bbox": bbox})
return {"predictions": result}
```
### 2. Send HTTP request using Python
Hugging Face Inference Endpoints can work directly with binary data, which means we can send the image from our document straight to the endpoint. We are going to use `requests` to send our requests (make sure you have it installed: `pip install requests`).
```python
import json
import requests as r
import mimetypes
ENDPOINT_URL="" # url of your endpoint
HF_TOKEN="" # organization token where you deployed your endpoint
def predict(path_to_image:str=None):
with open(path_to_image, "rb") as i:
b = i.read()
headers= {
"Authorization": f"Bearer {HF_TOKEN}",
"Content-Type": mimetypes.guess_type(path_to_image)[0]
}
response = r.post(ENDPOINT_URL, headers=headers, data=b)
return response.json()
prediction = predict(path_to_image="path_to_your_image.png")
print(prediction)
# {'predictions': [{'label': 'I-ANSWER', 'score': 0.4823932945728302, 'text': '[CLS]', 'bbox': [0.0, 0.0, 0.0, 0.0]}, {'label': 'B-HEADER', 'score': 0.992474377155304, 'text': 'your', 'bbox': [1712.529, 181.203, 1859.949, 228.88799999999998]},
```
### 3. Draw result on image
To get a better understanding of what the model predicted you can also draw the predictions on the provided image.
```python
from PIL import Image, ImageDraw, ImageFont
# draw results on image
def draw_result(path_to_image,result):
image = Image.open(path_to_image)
label2color = {
"B-HEADER": "blue",
"B-QUESTION": "red",
"B-ANSWER": "green",
"I-HEADER": "blue",
"I-QUESTION": "red",
"I-ANSWER": "green",
}
# draw predictions over the image
draw = ImageDraw.Draw(image)
font = ImageFont.load_default()
for res in result:
draw.rectangle(res["bbox"], outline="black")
draw.rectangle(res["bbox"], outline=label2color[res["label"]])
draw.text((res["bbox"][0] + 10, res["bbox"][1] - 10), text=res["label"], fill=label2color[res["label"]], font=font)
return image
draw_result("path_to_your_image.png", prediction["predictions"])
```
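In a notebook the returned image renders inline; in a script you can save it instead, since `draw_result` returns a regular `PIL.Image`:
```python
annotated = draw_result("path_to_your_image.png", prediction["predictions"])
annotated.save("annotated_invoice.png")
```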
|
toanbui1991/distilbert-base-uncased-finetuned-squad
|
toanbui1991
| 2022-11-10T13:39:29Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-09T03:01:51Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: toanbui1991/distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# toanbui1991/distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.5101
- Train End Logits Accuracy: 0.6065
- Train Start Logits Accuracy: 0.5692
- Validation Loss: 1.1679
- Validation End Logits Accuracy: 0.6823
- Validation Start Logits Accuracy: 0.6523
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
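Since this checkpoint was fine-tuned for extractive question answering, a minimal usage sketch with the `question-answering` pipeline might look like this (the question and context strings are illustrative; `framework="tf"` is passed because the repository ships TensorFlow weights):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="toanbui1991/distilbert-base-uncased-finetuned-squad",
    framework="tf",  # this repo provides TensorFlow weights
)
result = qa(
    question="What architecture does the model use?",
    context="The model is a fine-tuned version of distilbert-base-uncased.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```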
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11064, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.5101 | 0.6065 | 0.5692 | 1.1679 | 0.6823 | 0.6523 | 0 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.10.0
- Datasets 2.6.1
- Tokenizers 0.13.2
|
huggingtweets/barkmeta-lb22_sus-nftherder
|
huggingtweets
| 2022-11-10T13:37:11Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-10T13:34:27Z |
---
language: en
thumbnail: http://www.huggingtweets.com/barkmeta-lb22_sus-nftherder/1668087427349/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1579110344420622342/QzePSc2g_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1559936197564268551/WXSx0leh_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1554766012955955200/W_Ma1gx3_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">LB22 & Bark❓ & OKHotshot</div>
<div style="text-align: center; font-size: 14px;">@barkmeta-lb22_sus-nftherder</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from LB22 & Bark❓ & OKHotshot.
| Data | LB22 | Bark❓ | OKHotshot |
| --- | --- | --- | --- |
| Tweets downloaded | 1220 | 3250 | 3249 |
| Retweets | 467 | 287 | 139 |
| Short tweets | 381 | 1866 | 811 |
| Tweets kept | 372 | 1097 | 2299 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/13l6lr2n/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @barkmeta-lb22_sus-nftherder's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1eghqa00) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1eghqa00/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/barkmeta-lb22_sus-nftherder')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/barkmeta-lb22_sus-nft_god
|
huggingtweets
| 2022-11-10T13:18:55Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-10T13:00:51Z |
---
language: en
thumbnail: http://www.huggingtweets.com/barkmeta-lb22_sus-nft_god/1668086330381/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1579110344420622342/QzePSc2g_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1489268127565324291/ZQK5RoFg_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1559936197564268551/WXSx0leh_400x400.png')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">LB22 & NFT God & Bark❓</div>
<div style="text-align: center; font-size: 14px;">@barkmeta-lb22_sus-nft_god</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from LB22 & NFT God & Bark❓.
| Data | LB22 | NFT God | Bark❓ |
| --- | --- | --- | --- |
| Tweets downloaded | 1220 | 3250 | 3250 |
| Retweets | 467 | 20 | 285 |
| Short tweets | 381 | 165 | 1868 |
| Tweets kept | 372 | 3065 | 1097 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3vq9v8ck/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @barkmeta-lb22_sus-nft_god's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ixknti18) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ixknti18/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/barkmeta-lb22_sus-nft_god')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ViktorDo/SciBERT-WIKI_Lifecycle_Finetuned
|
ViktorDo
| 2022-11-10T12:55:56Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-10T11:38:03Z |
---
tags:
- generated_from_trainer
model-index:
- name: SciBERT-WIKI_Lifecycle_Finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SciBERT-WIKI_Lifecycle_Finetuned
This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1142
## Model description
More information needed
## Intended uses & limitations
More information needed
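A minimal usage sketch with the `text-classification` pipeline (the input sentence is illustrative, and the label names depend on the undocumented fine-tuning data):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ViktorDo/SciBERT-WIKI_Lifecycle_Finetuned",
)
print(classifier("The species completes its life cycle within a single growing season."))
```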
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0933 | 1.0 | 2082 | 0.1159 |
| 0.0782 | 2.0 | 4164 | 0.0935 |
| 0.0442 | 3.0 | 6246 | 0.1142 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Matthijs/mobilenet_v2_1.0_224
|
Matthijs
| 2022-11-10T12:48:17Z | 1,045 | 0 |
transformers
|
[
"transformers",
"pytorch",
"coreml",
"mobilenet_v2",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:1801.04381",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-06-27T13:30:29Z |
---
license: other
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# MobileNet V2
MobileNet V2 model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. It was first released in [this repository](https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet).
Disclaimer: The team releasing MobileNet V2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
From the [original README](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md):
> MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used. MobileNets can be run efficiently on mobile devices [...] MobileNets trade off between latency, size and accuracy while comparing favorably with popular models from the literature.
The checkpoints are named **mobilenet\_v2\_*depth*\_*size***, for example **mobilenet\_v2\_1.0\_224**, where **1.0** is the depth multiplier and **224** is the resolution of the input images the model was trained on.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilenet_v2) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import MobileNetV2FeatureExtractor, MobileNetV2ForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MobileNetV2FeatureExtractor.from_pretrained("Matthijs/mobilenet_v2_1.0_224")
model = MobileNetV2ForImageClassification.from_pretrained("Matthijs/mobilenet_v2_1.0_224")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Note: This model actually predicts 1001 classes, the 1000 classes from ImageNet plus an extra “background” class (index 0).
Currently, both the feature extractor and model support PyTorch.
### BibTeX entry and citation info
```bibtex
@inproceedings{mobilenetv22018,
title={MobileNetV2: Inverted Residuals and Linear Bottlenecks},
author={Mark Sandler and Andrew Howard and Menglong Zhu and Andrey Zhmoginov and Liang-Chieh Chen},
booktitle={CVPR},
year={2018}
}
```
|
Vsevolod/company-names-similarity-sentence-transformer
|
Vsevolod
| 2022-11-10T12:44:01Z | 648 | 16 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-24T11:15:41Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# Vsevolod/company-names-similarity-sentence-transformer
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Vsevolod/company-names-similarity-sentence-transformer')
embeddings = model.encode(sentences)
print(embeddings)
```
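Since this checkpoint targets company-name similarity, a natural follow-up is scoring name pairs with cosine similarity (the company names below are illustrative):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('Vsevolod/company-names-similarity-sentence-transformer')

# encode two spellings of the same company and compare them
embeddings = model.encode(["Apple Inc.", "Apple Incorporated"], convert_to_tensor=True)
print(util.cos_sim(embeddings[0], embeddings[1]))
```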
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Vsevolod/company-names-similarity-sentence-transformer)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1222 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.WeightedRandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 122.1875,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
huggingtweets/sbe_sus
|
huggingtweets
| 2022-11-10T12:41:46Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-10T12:20:40Z |
---
language: en
thumbnail: http://www.huggingtweets.com/sbe_sus/1668084101960/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1579111637973336071/MkdCeTeX_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">sberto.eth 📈</div>
<div style="text-align: center; font-size: 14px;">@sbe_sus</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from sberto.eth 📈.
| Data | sberto.eth 📈 |
| --- | --- |
| Tweets downloaded | 1273 |
| Retweets | 648 |
| Short tweets | 221 |
| Tweets kept | 404 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1rwjbirb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sbe_sus's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ejp5m2v) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ejp5m2v/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/sbe_sus')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Galeros/Reinforce-pong0001
|
Galeros
| 2022-11-10T12:41:31Z | 0 | 0 | null |
[
"Pong-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-10T12:41:19Z |
---
tags:
- Pong-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pong0001
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pong-PLE-v0
type: Pong-PLE-v0
metrics:
- type: mean_reward
value: -16.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pong-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pong-PLE-v0** .
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
wakio/dummy-model
|
wakio
| 2022-11-10T12:22:03Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"camembert",
"fill-mask",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-10T11:54:52Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: dummy-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dummy-model
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
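A minimal usage sketch with the `fill-mask` pipeline (the French example sentence is illustrative; `framework="tf"` is passed because the repository ships TensorFlow weights):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="wakio/dummy-model", framework="tf")
print(fill("Le camembert est <mask> !"))
```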
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Tokenizers 0.13.2
|
luanngo/evjvqa_mt5_vit_16
|
luanngo
| 2022-11-10T11:04:55Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-07T09:04:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: evjvqa_mt5_vit_16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# evjvqa_mt5_vit_16
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2997
- F1: 0.4194
- Bleu4: 0.3783
- Mean Pred Len: 14.85
- Mean Label Len: 15.25
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 40
- eval_batch_size: 40
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 80
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Bleu4 | Mean Pred Len | Mean Label Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------------:|:--------------:|
| 15.7375 | 0.07 | 20 | 9.6637 | 0.0771 | 0.0567 | 10.75 | 15.25 |
| 15.7459 | 0.15 | 40 | 10.0761 | 0.0784 | 0.0754 | 11.5 | 15.25 |
| 15.456 | 0.22 | 60 | 9.5077 | 0.0574 | 0.0595 | 11.35 | 15.25 |
| 15.3725 | 0.3 | 80 | 9.5230 | 0.0589 | 0.0436 | 11.45 | 15.25 |
| 14.9377 | 0.37 | 100 | 8.6082 | 0.079 | 0.0725 | 12.2 | 15.25 |
| 14.5629 | 0.45 | 120 | 9.3522 | 0.0851 | 0.0704 | 12.35 | 15.25 |
| 14.2505 | 0.52 | 140 | 8.0656 | 0.0666 | 0.0473 | 11.85 | 15.25 |
| 13.4648 | 0.6 | 160 | 7.5456 | 0.0783 | 0.054 | 10.4 | 15.25 |
| 13.055 | 0.67 | 180 | 7.0022 | 0.0607 | 0.0529 | 10.2 | 15.25 |
| 12.2861 | 0.75 | 200 | 6.6263 | 0.0704 | 0.0677 | 10.4 | 15.25 |
| 11.8459 | 0.82 | 220 | 6.1817 | 0.0849 | 0.0802 | 11.15 | 15.25 |
| 10.9808 | 0.9 | 240 | 5.6607 | 0.0779 | 0.053 | 11.65 | 15.25 |
| 10.0039 | 0.97 | 260 | 5.3278 | 0.0867 | 0.0619 | 10.55 | 15.25 |
| 8.819 | 1.05 | 280 | 4.5316 | 0.1154 | 0.1346 | 9.45 | 15.25 |
| 7.5032 | 1.12 | 300 | 3.7815 | 0.1355 | 0.1159 | 9.75 | 15.25 |
| 6.1347 | 1.2 | 320 | 3.0172 | 0.1807 | 0.1546 | 9.85 | 15.25 |
| 4.8126 | 1.27 | 340 | 2.6729 | 0.2177 | 0.1978 | 9.35 | 15.25 |
| 4.1824 | 1.35 | 360 | 2.3100 | 0.3017 | 0.3567 | 11.3 | 15.25 |
| 3.6456 | 1.42 | 380 | 2.2327 | 0.3029 | 0.3605 | 11.4 | 15.25 |
| 3.3865 | 1.5 | 400 | 2.0704 | 0.316 | 0.3167 | 13.15 | 15.25 |
| 3.2078 | 1.57 | 420 | 2.0376 | 0.3027 | 0.2856 | 13.5 | 15.25 |
| 3.0357 | 1.65 | 440 | 1.9508 | 0.3207 | 0.3404 | 13.1 | 15.25 |
| 2.9388 | 1.72 | 460 | 1.9042 | 0.3872 | 0.3665 | 13.5 | 15.25 |
| 2.7807 | 1.8 | 480 | 1.8595 | 0.3954 | 0.3692 | 13.65 | 15.25 |
| 2.7234 | 1.87 | 500 | 1.8956 | 0.3871 | 0.3484 | 14.2 | 15.25 |
| 2.6417 | 1.95 | 520 | 1.7809 | 0.4406 | 0.3592 | 15.85 | 15.25 |
| 2.5189 | 2.02 | 540 | 1.7255 | 0.4242 | 0.3844 | 14.8 | 15.25 |
| 2.4075 | 2.1 | 560 | 1.7226 | 0.4378 | 0.4022 | 14.55 | 15.25 |
| 2.3158 | 2.17 | 580 | 1.6749 | 0.46 | 0.4313 | 14.7 | 15.25 |
| 2.3145 | 2.25 | 600 | 1.6850 | 0.4229 | 0.3525 | 15.75 | 15.25 |
| 2.2615 | 2.32 | 620 | 1.6651 | 0.4618 | 0.3666 | 16.65 | 15.25 |
| 2.1983 | 2.4 | 640 | 1.6409 | 0.4101 | 0.3297 | 15.1 | 15.25 |
| 2.1365 | 2.47 | 660 | 1.6350 | 0.4317 | 0.3728 | 15.4 | 15.25 |
| 2.1286 | 2.55 | 680 | 1.6045 | 0.389 | 0.3352 | 14.95 | 15.25 |
| 2.1301 | 2.62 | 700 | 1.5884 | 0.4391 | 0.3679 | 15.55 | 15.25 |
| 2.1368 | 2.7 | 720 | 1.5702 | 0.415 | 0.3352 | 15.4 | 15.25 |
| 2.0449 | 2.77 | 740 | 1.5415 | 0.4215 | 0.366 | 14.7 | 15.25 |
| 2.0286 | 2.85 | 760 | 1.5434 | 0.406 | 0.3291 | 15.35 | 15.25 |
| 2.0126 | 2.92 | 780 | 1.5358 | 0.389 | 0.3033 | 15.0 | 15.25 |
| 1.9923 | 3.0 | 800 | 1.4857 | 0.4471 | 0.3605 | 15.85 | 15.25 |
| 1.8807 | 3.07 | 820 | 1.4665 | 0.4743 | 0.3717 | 15.95 | 15.25 |
| 1.8989 | 3.15 | 840 | 1.4760 | 0.3996 | 0.3502 | 14.8 | 15.25 |
| 1.8745 | 3.22 | 860 | 1.4294 | 0.3815 | 0.3258 | 15.2 | 15.25 |
| 1.9292 | 3.3 | 880 | 1.4454 | 0.4366 | 0.3694 | 15.6 | 15.25 |
| 1.8473 | 3.37 | 900 | 1.4205 | 0.4032 | 0.3523 | 15.65 | 15.25 |
| 1.8723 | 3.45 | 920 | 1.4080 | 0.4167 | 0.3609 | 15.5 | 15.25 |
| 1.8272 | 3.52 | 940 | 1.4069 | 0.3944 | 0.3734 | 14.45 | 15.25 |
| 1.8443 | 3.6 | 960 | 1.4088 | 0.409 | 0.3712 | 14.65 | 15.25 |
| 1.7956 | 3.67 | 980 | 1.3970 | 0.3848 | 0.3573 | 14.6 | 15.25 |
| 1.802 | 3.75 | 1000 | 1.3971 | 0.4116 | 0.3856 | 14.75 | 15.25 |
| 1.8154 | 3.82 | 1020 | 1.4013 | 0.4382 | 0.3731 | 14.85 | 15.25 |
| 1.7599 | 3.9 | 1040 | 1.4035 | 0.4106 | 0.3566 | 15.25 | 15.25 |
| 1.8375 | 3.97 | 1060 | 1.3992 | 0.4286 | 0.3594 | 15.6 | 15.25 |
| 1.739 | 4.04 | 1080 | 1.3955 | 0.4218 | 0.3686 | 15.1 | 15.25 |
| 1.7291 | 4.12 | 1100 | 1.3968 | 0.4702 | 0.4011 | 15.65 | 15.25 |
| 1.7279 | 4.19 | 1120 | 1.3743 | 0.4328 | 0.3668 | 15.5 | 15.25 |
| 1.7092 | 4.27 | 1140 | 1.3650 | 0.4321 | 0.3721 | 15.55 | 15.25 |
| 1.7002 | 4.34 | 1160 | 1.3413 | 0.3999 | 0.3669 | 15.25 | 15.25 |
| 1.7333 | 4.42 | 1180 | 1.3715 | 0.4459 | 0.3758 | 16.15 | 15.25 |
| 1.707 | 4.49 | 1200 | 1.3630 | 0.4173 | 0.3686 | 15.0 | 15.25 |
| 1.6815 | 4.57 | 1220 | 1.3326 | 0.4344 | 0.3755 | 15.1 | 15.25 |
| 1.7045 | 4.64 | 1240 | 1.3440 | 0.4083 | 0.3801 | 14.7 | 15.25 |
| 1.6511 | 4.72 | 1260 | 1.3361 | 0.3976 | 0.3722 | 14.7 | 15.25 |
| 1.682 | 4.79 | 1280 | 1.3314 | 0.3964 | 0.3707 | 14.85 | 15.25 |
| 1.6511 | 4.87 | 1300 | 1.3461 | 0.4081 | 0.3704 | 15.0 | 15.25 |
| 1.5936 | 4.94 | 1320 | 1.3362 | 0.4185 | 0.3667 | 15.15 | 15.25 |
| 1.6287 | 5.02 | 1340 | 1.3312 | 0.4296 | 0.374 | 14.85 | 15.25 |
| 1.6401 | 5.09 | 1360 | 1.3152 | 0.403 | 0.366 | 14.95 | 15.25 |
| 1.6093 | 5.17 | 1380 | 1.3316 | 0.3931 | 0.3689 | 14.75 | 15.25 |
| 1.6002 | 5.24 | 1400 | 1.3506 | 0.3948 | 0.3702 | 14.8 | 15.25 |
| 1.6245 | 5.32 | 1420 | 1.3344 | 0.401 | 0.3605 | 15.1 | 15.25 |
| 1.6005 | 5.39 | 1440 | 1.3310 | 0.4174 | 0.3698 | 15.1 | 15.25 |
| 1.5903 | 5.47 | 1460 | 1.3218 | 0.4156 | 0.3716 | 14.85 | 15.25 |
| 1.6016 | 5.54 | 1480 | 1.3219 | 0.4368 | 0.3984 | 14.8 | 15.25 |
| 1.6143 | 5.62 | 1500 | 1.3157 | 0.4094 | 0.3729 | 14.55 | 15.25 |
| 1.6082 | 5.69 | 1520 | 1.3109 | 0.4068 | 0.3778 | 14.9 | 15.25 |
| 1.5451 | 5.77 | 1540 | 1.3057 | 0.4056 | 0.3703 | 14.95 | 15.25 |
| 1.6312 | 5.84 | 1560 | 1.3055 | 0.4032 | 0.3656 | 14.85 | 15.25 |
| 1.5476 | 5.92 | 1580 | 1.3282 | 0.4154 | 0.3662 | 15.2 | 15.25 |
| 1.5758 | 5.99 | 1600 | 1.3205 | 0.4136 | 0.3623 | 15.2 | 15.25 |
| 1.598 | 6.07 | 1620 | 1.3200 | 0.4159 | 0.3675 | 14.9 | 15.25 |
| 1.567 | 6.14 | 1640 | 1.3359 | 0.4153 | 0.3699 | 14.7 | 15.25 |
| 1.5349 | 6.22 | 1660 | 1.3378 | 0.4036 | 0.3649 | 14.8 | 15.25 |
| 1.5536 | 6.29 | 1680 | 1.3374 | 0.4143 | 0.3691 | 14.85 | 15.25 |
| 1.5382 | 6.37 | 1700 | 1.3274 | 0.4052 | 0.38 | 14.65 | 15.25 |
| 1.5238 | 6.44 | 1720 | 1.3217 | 0.406 | 0.3674 | 14.9 | 15.25 |
| 1.5434 | 6.52 | 1740 | 1.3174 | 0.4096 | 0.3759 | 14.85 | 15.25 |
| 1.5326 | 6.59 | 1760 | 1.3134 | 0.4096 | 0.3759 | 14.85 | 15.25 |
| 1.5263 | 6.67 | 1780 | 1.3157 | 0.4104 | 0.3635 | 15.05 | 15.25 |
| 1.4775 | 6.74 | 1800 | 1.3197 | 0.4096 | 0.3759 | 14.85 | 15.25 |
| 1.5173 | 6.82 | 1820 | 1.3121 | 0.4167 | 0.3722 | 14.9 | 15.25 |
| 1.5304 | 6.89 | 1840 | 1.3240 | 0.4198 | 0.3818 | 14.7 | 15.25 |
| 1.5344 | 6.97 | 1860 | 1.3250 | 0.4135 | 0.3793 | 14.7 | 15.25 |
| 1.5392 | 7.04 | 1880 | 1.3187 | 0.4135 | 0.3793 | 14.7 | 15.25 |
| 1.5201 | 7.12 | 1900 | 1.3128 | 0.4143 | 0.3681 | 14.8 | 15.25 |
| 1.5139 | 7.19 | 1920 | 1.3072 | 0.4143 | 0.3654 | 14.95 | 15.25 |
| 1.4878 | 7.27 | 1940 | 1.3021 | 0.4143 | 0.3654 | 14.95 | 15.25 |
| 1.5123 | 7.34 | 1960 | 1.3041 | 0.4143 | 0.3681 | 14.8 | 15.25 |
| 1.4569 | 7.42 | 1980 | 1.3203 | 0.417 | 0.3712 | 14.8 | 15.25 |
| 1.4984 | 7.49 | 2000 | 1.3149 | 0.4198 | 0.3832 | 14.65 | 15.25 |
| 1.5187 | 7.57 | 2020 | 1.3102 | 0.4076 | 0.3818 | 14.7 | 15.25 |
| 1.5394 | 7.64 | 2040 | 1.3223 | 0.4176 | 0.3907 | 14.65 | 15.25 |
| 1.4602 | 7.72 | 2060 | 1.3102 | 0.4101 | 0.3686 | 14.9 | 15.25 |
| 1.4959 | 7.79 | 2080 | 1.3123 | 0.4178 | 0.3688 | 15.05 | 15.25 |
| 1.5462 | 7.87 | 2100 | 1.3083 | 0.4262 | 0.3692 | 15.1 | 15.25 |
| 1.4951 | 7.94 | 2120 | 1.2964 | 0.4301 | 0.3816 | 14.95 | 15.25 |
| 1.5016 | 8.01 | 2140 | 1.3078 | 0.4274 | 0.3784 | 14.9 | 15.25 |
| 1.4464 | 8.09 | 2160 | 1.3154 | 0.4178 | 0.3654 | 15.1 | 15.25 |
| 1.4654 | 8.16 | 2180 | 1.3070 | 0.4243 | 0.3702 | 15.0 | 15.25 |
| 1.4519 | 8.24 | 2200 | 1.2995 | 0.4339 | 0.3708 | 15.05 | 15.25 |
| 1.5098 | 8.31 | 2220 | 1.3051 | 0.4395 | 0.3903 | 14.75 | 15.25 |
| 1.4601 | 8.39 | 2240 | 1.3013 | 0.4376 | 0.3881 | 14.8 | 15.25 |
| 1.4693 | 8.46 | 2260 | 1.2981 | 0.4278 | 0.3871 | 14.8 | 15.25 |
| 1.5386 | 8.54 | 2280 | 1.3002 | 0.4112 | 0.3781 | 14.8 | 15.25 |
| 1.5115 | 8.61 | 2300 | 1.2994 | 0.4153 | 0.3806 | 14.9 | 15.25 |
| 1.5133 | 8.69 | 2320 | 1.2971 | 0.4236 | 0.385 | 14.85 | 15.25 |
| 1.4691 | 8.76 | 2340 | 1.2979 | 0.4321 | 0.3896 | 14.75 | 15.25 |
| 1.4548 | 8.84 | 2360 | 1.3054 | 0.4276 | 0.385 | 14.75 | 15.25 |
| 1.4816 | 8.91 | 2380 | 1.3029 | 0.4259 | 0.3857 | 14.7 | 15.25 |
| 1.4386 | 8.99 | 2400 | 1.2983 | 0.4196 | 0.3826 | 14.75 | 15.25 |
| 1.5242 | 9.06 | 2420 | 1.2958 | 0.421 | 0.3739 | 14.95 | 15.25 |
| 1.4824 | 9.14 | 2440 | 1.2939 | 0.4292 | 0.3827 | 14.9 | 15.25 |
| 1.5137 | 9.21 | 2460 | 1.2896 | 0.4213 | 0.3796 | 14.8 | 15.25 |
| 1.4634 | 9.29 | 2480 | 1.2934 | 0.4191 | 0.3855 | 14.85 | 15.25 |
| 1.4881 | 9.36 | 2500 | 1.2982 | 0.4134 | 0.3838 | 14.65 | 15.25 |
| 1.4185 | 9.44 | 2520 | 1.2995 | 0.4117 | 0.3795 | 14.65 | 15.25 |
| 1.3843 | 9.51 | 2540 | 1.3013 | 0.4217 | 0.3826 | 14.65 | 15.25 |
| 1.4563 | 9.59 | 2560 | 1.3005 | 0.4117 | 0.3795 | 14.65 | 15.25 |
| 1.461 | 9.66 | 2580 | 1.3008 | 0.4194 | 0.3783 | 14.85 | 15.25 |
| 1.47 | 9.74 | 2600 | 1.2999 | 0.4194 | 0.3783 | 14.85 | 15.25 |
| 1.4892 | 9.81 | 2620 | 1.2994 | 0.4196 | 0.3826 | 14.75 | 15.25 |
| 1.4503 | 9.89 | 2640 | 1.2992 | 0.4196 | 0.3826 | 14.75 | 15.25 |
| 1.4216 | 9.96 | 2660 | 1.2997 | 0.4194 | 0.3783 | 14.85 | 15.25 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.12.1
|
karolill/nb-bert-finetuned-on-norec
|
karolill
| 2022-11-10T10:41:04Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-10T09:08:18Z |
---
license: mit
---
# NB-BERT fine-tuned on NoReC
## Description
This model is based on the pre-trained [NB-BERT-large model](https://huggingface.co/NbAiLab/nb-bert-large?text=P%C3%A5+biblioteket+kan+du+l%C3%A5ne+en+%5BMASK%5D.). It is a model for sentiment analysis.
## Data for fine-tuning
This model was fine-tuned on 1,000 examples from the [NoReC train dataset](https://github.com/ltgoslo/norec) that belong to the screen category. The training lasted 3 epochs with a learning rate of 5e-5. The code used to create this model (and some additional models) can be found on [GitHub](https://github.com/Karolill/NB-BERT-fine-tuned-on-english).
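A minimal usage sketch with the `text-classification` pipeline (the Norwegian review sentence is illustrative, and the exact label names depend on the fine-tuning setup):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="karolill/nb-bert-finetuned-on-norec",
)
print(classifier("Denne filmen var utrolig bra!"))  # "This movie was incredibly good!"
```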
|
Norod78/hebrew-gpt_neo-xl
|
Norod78
| 2022-11-10T10:38:56Z | 65 | 9 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"onnx",
"safetensors",
"gpt_neo",
"text-generation",
"he",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
---
language: he
thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg
widget:
- text: "עוד בימי קדם"
- text: "קוראים לי דורון ואני מעוניין ל"
- text: "קוראים לי איציק ואני חושב ש"
- text: "החתול שלך מאוד חמוד ו"
- text: "ובדרך ראינו שהגן"
license: mit
---
# hebrew-gpt_neo-xl
Hebrew text generation model based on [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo). It was trained on a TPUv3-8, which was made available to me via the [TPU Research Cloud](https://sites.research.google/trc/) Program.
## Datasets
1. An assortment of various Hebrew corpora - I have made them available [here](https://mega.nz/folder/CodSSA4R#4INvMes-56m_WUi7jQMbJQ)
2. oscar / unshuffled_deduplicated_he - [Homepage](https://oscar-corpus.com) | [Dataset Permalink](https://huggingface.co/datasets/viewer/?dataset=oscar&config=unshuffled_deduplicated_he)
The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.
3. CC100-Hebrew Dataset [Homepage](https://metatext.io/datasets/cc100-hebrew)
Created by Conneau & Wenzek et al. in 2020, CC100-Hebrew is one of the 100 monolingual corpora processed from the January-December 2018 Common Crawl snapshots in the CC-Net repository. The size of this corpus is 6.1G of Hebrew text.
## Training Config
Available [here](https://github.com/Norod/hebrew-gpt_neo/tree/main/hebrew-gpt_neo-xl/configs) <BR>
## Usage
### Google Colab Notebook
Available [here ](https://colab.research.google.com/github/Norod/hebrew-gpt_neo/blob/main/hebrew-gpt_neo-xl/Norod78_hebrew_gpt_neo_xl_Colab.ipynb) <BR>
#### Simple usage sample code
```python
!pip install tokenizers==0.10.3 transformers==4.8.0
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-gpt_neo-xl")
model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-gpt_neo-xl", pad_token_id=tokenizer.eos_token_id)
prompt_text = "אני אוהב שוקולד ועוגות"
max_len = 512
sample_output_num = 3
seed = 1000
import numpy as np
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
n_gpu = 0 if torch.cuda.is_available()==False else torch.cuda.device_count()
print(f"device: {device}, n_gpu: {n_gpu}")
np.random.seed(seed)
torch.manual_seed(seed)
if n_gpu > 0:
torch.cuda.manual_seed_all(seed)
model.to(device)
encoded_prompt = tokenizer.encode(
prompt_text, add_special_tokens=False, return_tensors="pt")
encoded_prompt = encoded_prompt.to(device)
if encoded_prompt.size()[-1] == 0:
input_ids = None
else:
input_ids = encoded_prompt
print("input_ids = " + str(input_ids))
if input_ids is not None:
max_len += len(encoded_prompt[0])
if max_len > 2048:
max_len = 2048
print("Updated max_len = " + str(max_len))
stop_token = "<|endoftext|>"
new_lines = "\n\n\n"  # three consecutive newlines, used to truncate the output
sample_outputs = model.generate(
input_ids,
do_sample=True,
max_length=max_len,
top_k=50,
top_p=0.95,
num_return_sequences=sample_output_num
)
print(100 * '-' + "\n\t\tOutput\n" + 100 * '-')
for i, sample_output in enumerate(sample_outputs):
text = tokenizer.decode(sample_output, skip_special_tokens=True)
# Remove all text after the stop token
text = text[: text.find(stop_token) if stop_token else None]
# Remove all text after 3 newlines
text = text[: text.find(new_lines) if new_lines else None]
    print("\n{}: {}".format(i, text))
print("\n" + 100 * '-')
```
|
reza-aditya/lunar-reinforcement-learning
|
reza-aditya
| 2022-11-10T10:26:45Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-10T09:09:16Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -711.41 +/- 372.03
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's files for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# NOTE: the filename is an assumption; look up the actual .zip in the repo files
checkpoint = load_from_hub(
    repo_id="reza-aditya/lunar-reinforcement-learning",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
|
ArtbyArbi/picbixex
|
ArtbyArbi
| 2022-11-10T10:00:15Z | 33 | 0 |
diffusers
|
[
"diffusers",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-11-10T09:57:15Z |
---
license: mit
---
### PicBixex on Stable Diffusion via Dreambooth trained on the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
#### model by ArbiCreatesArt
This is the Stable Diffusion model fine-tuned on the PicBixex concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt(s)`: **PicBixex**
You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
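A minimal `diffusers` sketch (the prompt is illustrative and a CUDA GPU is assumed):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "ArtbyArbi/picbixex", torch_dtype=torch.float16
).to("cuda")

# the instance prompt token "PicBixex" triggers the trained concept
image = pipe("a portrait photo of PicBixex").images[0]
image.save("picbixex.png")
```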
Here are the images used for training this concept:
PicBixex
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.JPG)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
|
naverpapago/garnet
|
naverpapago
| 2022-11-10T09:33:03Z | 0 | 2 |
pytorch
|
[
"pytorch",
"Scene Text Removal",
"Image to Image",
"arxiv:2210.07489",
"license:apache-2.0",
"region:us"
] | null | 2022-11-08T02:01:55Z |
---
license: apache-2.0
tags:
- Scene Text Removal
- Image to Image
library_name: pytorch
---
### GaRNet
This is a text-removal model introduced in the paper below and first released at [this page](https://github.com/naver/garnet). \
[The Surprisingly Straightforward Scene Text Removal Method With Gated Attention and Region of Interest Generation: A Comprehensive Prominent Model Analysis](https://arxiv.org/abs/2210.07489). \
Hyeonsu Lee, Chankyu Choi \
Naver Corp. \
In ECCV 2022.
### Model description
GaRNet is a generator that produces a text-free image from a given image and its corresponding text-box mask. It consists of a convolutional encoder and decoder, where the encoder is built from residual blocks with an attention module called Gated Attention.
The Gated Attention module has two spatial attention branches: one attends to text strokes and the other to their surrounding regions. The module adjusts the weight of these two domains with trainable parameters.
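To make the idea concrete, here is a minimal, hypothetical PyTorch sketch of a two-branch gated spatial attention block. This is NOT the official GaRNet module (see https://github.com/naver/garnet for the real implementation); the branch structure and gating scalars are simplifications for illustration:
```python
import torch
import torch.nn as nn

class GatedSpatialAttention(nn.Module):
    """Illustrative sketch only; not the official GaRNet code."""

    def __init__(self, channels: int):
        super().__init__()
        # two spatial attention branches: one for text strokes,
        # one for the regions surrounding them
        self.stroke_attn = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())
        self.surround_attn = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())
        # trainable scalars that balance the two domains
        self.gamma_stroke = nn.Parameter(torch.zeros(1))
        self.gamma_surround = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        stroke = self.stroke_attn(x) * x        # emphasize stroke regions
        surround = self.surround_attn(x) * x    # emphasize surrounding regions
        return x + self.gamma_stroke * stroke + self.gamma_surround * surround

# quick shape check
block = GatedSpatialAttention(channels=64)
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```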
The model was trained in a PatchGAN manner with Region-of-Interest Generation. \
The discriminator consists of a convolutional encoder. Given an image, it determines whether each patch corresponding to a text-box region is real or fake.
All loss functions treat non-textbox regions as 'don't care'.
### Intended uses & limitations
This model can be used wherever text must be erased from an image, such as concealing private information or text editing.\
You can use the raw model or the pre-trained model.\
Note that the pre-trained model was trained on both the Synthetic and SCUT-EnsText datasets, and the SCUT-EnsText dataset can only be used for non-commercial research purposes.
### How to use
You can use inference code in [this page](https://github.com/naver/garnet).
### BibTeX entry and citation info
```
@inproceedings{lee2022surprisingly,
title={The Surprisingly Straightforward Scene Text Removal Method with Gated Attention and Region of Interest Generation: A Comprehensive Prominent Model Analysis},
author={Lee, Hyeonsu and Choi, Chankyu},
booktitle={European Conference on Computer Vision},
pages={457--472},
year={2022},
organization={Springer}
}
```
|
Galeros/Reinforce-cartpole0001
|
Galeros
| 2022-11-10T09:10:39Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-10T08:57:49Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole0001
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 101.90 +/- 9.57
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
ntsema/wav2vec2-xlsr-53-espeak-cv-ft-sah2-ntsema-colab
|
ntsema
| 2022-11-10T08:25:33Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:audiofolder",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-09T07:30:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- wer
model-index:
- name: wav2vec2-xlsr-53-espeak-cv-ft-sah2-ntsema-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: audiofolder
type: audiofolder
config: default
split: train
args: default
metrics:
- name: Wer
type: wer
value: 0.3295938104448743
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-53-espeak-cv-ft-sah2-ntsema-colab
This model is a fine-tuned version of [facebook/wav2vec2-xlsr-53-espeak-cv-ft](https://huggingface.co/facebook/wav2vec2-xlsr-53-espeak-cv-ft) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3586
- Wer: 0.3296
## Model description
More information needed
## Intended uses & limitations
More information needed
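A minimal usage sketch with the `automatic-speech-recognition` pipeline (the audio path is illustrative; 16 kHz mono audio is assumed):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ntsema/wav2vec2-xlsr-53-espeak-cv-ft-sah2-ntsema-colab",
)
print(asr("path/to/audio.wav"))
```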
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4128 | 5.71 | 400 | 0.4462 | 0.5733 |
| 0.2344 | 11.43 | 800 | 0.3489 | 0.3969 |
| 0.1181 | 17.14 | 1200 | 0.3470 | 0.3602 |
| 0.0837 | 22.85 | 1600 | 0.3608 | 0.3451 |
| 0.0645 | 28.57 | 2000 | 0.3586 | 0.3296 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
vent42/test
|
vent42
| 2022-11-10T08:22:44Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2022-11-10T08:22:44Z |
---
license: bigscience-openrail-m
---
|
Kohei1201/distilbert-base-uncased-finetuned-cola
|
Kohei1201
| 2022-11-10T07:48:32Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-10T06:43:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: train
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5567273065308361
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8434
- Matthews Correlation: 0.5567
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5224 | 1.0 | 535 | 0.5360 | 0.4275 |
| 0.3498 | 2.0 | 1070 | 0.5205 | 0.5078 |
| 0.2383 | 3.0 | 1605 | 0.6466 | 0.5318 |
| 0.1739 | 4.0 | 2140 | 0.7723 | 0.5532 |
| 0.1276 | 5.0 | 2675 | 0.8434 | 0.5567 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.10.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
libok/test
|
libok
| 2022-11-10T06:57:12Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-11-10T06:56:42Z |
a robot reading the book and playing the piano
|
NoCrypt/momocha-mix
|
NoCrypt
| 2022-11-10T06:49:03Z | 0 | 19 | null |
[
"stable-diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2022-11-10T06:39:29Z |
---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
# Momocha mix models
Scraped from [chenyfan's sharepoint](https://cyfan-my.sharepoint.com/:f:/g/personal/chenyfan_cyfan_onmicrosoft_com/EilOWB40m3ZJn6ahczIUIs4B6v0XvizO5YorOhG_5eYSUw?e=ZyP7qE)
Example output:

|
Terence3927/q-Taxi-v3
|
Terence3927
| 2022-11-10T06:20:54Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-10T06:20:45Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Terence3927/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Terence3927/q-FrozenLake-v1-4x4-noSlippery
|
Terence3927
| 2022-11-10T06:12:39Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-10T06:08:58Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Terence3927/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
xu1998hz/sescore_german_mt
|
xu1998hz
| 2022-11-10T03:59:25Z | 0 | 1 | null |
[
"region:us"
] | null | 2022-11-05T01:44:41Z |
SEScore German checkpoint for Machine Translation
|
zhangfx7/deberta-base-finetuned-cola
|
zhangfx7
| 2022-11-10T02:43:42Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"deberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-10T02:22:29Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: deberta-base-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: train
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-finetuned-cola
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6187
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
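Expressed as `transformers.TrainingArguments`, these settings map roughly to the sketch below (the output directory is illustrative; the listed Adam betas and epsilon are the defaults):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deberta-base-finetuned-cola",  # illustrative path
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```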
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6187 | 1.0 | 535 | 0.6187 | 0.0 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
42MARU/ko-42maru-wav2vec2-conformer-del-1s
|
42MARU
| 2022-11-10T02:33:57Z | 81 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2-conformer",
"automatic-speech-recognition",
"audio",
"ko",
"dataset:KsponSpeech",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-10-31T07:50:05Z |
---
language:
- ko # Example: fr
license: apache-2.0 # Example: apache-2.0 or any license from https://hf.co/docs/hub/repositories-licenses
library_name: transformers # Optional. Example: keras or any library from https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Libraries.ts
tags:
- audio
- automatic-speech-recognition
datasets:
- KsponSpeech
metrics:
- wer # Example: wer. Use metric id from https://hf.co/metrics
---
# ko-42maru-wav2vec2-conformer-del-1s
## Table of Contents
- [ko-42maru-wav2vec2-conformer-del-1s](#ko-42maru-wav2vec2-conformer-del-1s)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [Evaluation](#evaluation)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
## Model Details
- **Model Description:**
This model was pre-trained from scratch on the wav2vec2-conformer base architecture. <br />
It was then fine-tuned on KsponSpeech using Wav2Vec2ConformerForCTC. <br />
- Dataset used: [AIHub KsponSpeech](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=123) <br />
The training datasets were created in-house by preprocessing that data. <br />
The suffix del-1s means that clips of one second or shorter were filtered out. <br />
This model was trained on data that follows **42maru's own custom transcription conventions** (numbers and English words are written in Hangul notation). <br />
- **Developed by:** TADev (@lIlBrother, @ddobokki, @jp42maru)
- **Language(s):** Korean
- **License:** apache-2.0
- **Parent Model:** See the [wav2vec2-conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer) for more information about the pre-trained base model. (This model was pre-trained from scratch on the wav2vec2-conformer base architecture.)
## Evaluation
Evaluation simply uses `load_metric("wer")` from the Hugging Face `datasets` library. <br />
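A minimal sketch of that metric call (the sentence pair is illustrative):
```python
from datasets import load_metric  # datasets 2.x; newer code uses the `evaluate` library

wer = load_metric("wer")
print(wer.compute(predictions=["안녕하세요 테스트입니다"], references=["안녕하세요 테스트입니다"]))
```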
## How to Get Started With the Model
For an example of Wav2Vec2ProcessorWithLM combined with KenLM, see the [42maru-kenlm example](https://huggingface.co/42MARU/ko-ctc-kenlm-42maru-only-wiki).
```python
import unicodedata

import librosa
from pyctcdecode import build_ctcdecoder
from transformers import (
AutoConfig,
AutoFeatureExtractor,
AutoModelForCTC,
AutoTokenizer,
Wav2Vec2ProcessorWithLM,
)
from transformers.pipelines import AutomaticSpeechRecognitionPipeline
audio_path = ""
# Load the model, tokenizer, and the modules used for prediction.
model = AutoModelForCTC.from_pretrained("42MARU/ko-42maru-wav2vec2-conformer-del-1s")
feature_extractor = AutoFeatureExtractor.from_pretrained("42MARU/ko-42maru-wav2vec2-conformer-del-1s")
tokenizer = AutoTokenizer.from_pretrained("42MARU/ko-42maru-wav2vec2-conformer-del-1s")
beamsearch_decoder = build_ctcdecoder(
labels=list(tokenizer.encoder.keys()),
kenlm_model_path=None,
)
processor = Wav2Vec2ProcessorWithLM(
feature_extractor=feature_extractor, tokenizer=tokenizer, decoder=beamsearch_decoder
)
# Plug the modules into the pipeline that performs the actual prediction.
asr_pipeline = AutomaticSpeechRecognitionPipeline(
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
decoder=processor.decoder,
device=-1,
)
# Load the audio file, set the beam-search parameters, and run prediction.
raw_data, _ = librosa.load(audio_path, sr=16000)
kwargs = {"decoder_kwargs": {"beam_width": 100}}
pred = asr_pipeline(inputs=raw_data, **kwargs)["text"]
# The model emits decomposed-jamo Unicode text, so it must be normalized back to regular strings.
result = unicodedata.normalize("NFC", pred)
print(result)
# 안녕하세요 하나둘셋 테스트입니다. ("Hello, one two three, this is a test.")
```
*Beam-100 Result (WER)*:
| "clean" | "other" |
| ------- | ------- |
| 21.52 | 25.72 |
|
undertheseanlp/vietnamese-ner-v1.4.0a2
|
undertheseanlp
| 2022-11-10T02:29:43Z | 389 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"vi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-10T02:16:40Z |
---
license: apache-2.0
language: vi
---
|
irfan-noordin/segformer-b0-finetuned-segments-sidewalk-oct-22
|
irfan-noordin
| 2022-11-10T02:23:44Z | 157 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2022-11-09T06:58:03Z |
---
license: other
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-sidewalk-oct-22
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-sidewalk-oct-22
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9249
- Mean Iou: 0.1675
- Mean Accuracy: 0.2109
- Overall Accuracy: 0.7776
- Accuracy Unlabeled: nan
- Accuracy Flat-road: 0.8631
- Accuracy Flat-sidewalk: 0.9423
- Accuracy Flat-crosswalk: 0.0
- Accuracy Flat-cyclinglane: 0.4704
- Accuracy Flat-parkingdriveway: 0.1421
- Accuracy Flat-railtrack: 0.0
- Accuracy Flat-curb: 0.0061
- Accuracy Human-person: 0.0
- Accuracy Human-rider: 0.0
- Accuracy Vehicle-car: 0.8937
- Accuracy Vehicle-truck: 0.0
- Accuracy Vehicle-bus: 0.0
- Accuracy Vehicle-tramtrain: 0.0
- Accuracy Vehicle-motorcycle: 0.0
- Accuracy Vehicle-bicycle: 0.0
- Accuracy Vehicle-caravan: 0.0
- Accuracy Vehicle-cartrailer: 0.0
- Accuracy Construction-building: 0.9143
- Accuracy Construction-door: 0.0
- Accuracy Construction-wall: 0.0055
- Accuracy Construction-fenceguardrail: 0.0
- Accuracy Construction-bridge: 0.0
- Accuracy Construction-tunnel: nan
- Accuracy Construction-stairs: 0.0
- Accuracy Object-pole: 0.0
- Accuracy Object-trafficsign: 0.0
- Accuracy Object-trafficlight: 0.0
- Accuracy Nature-vegetation: 0.9291
- Accuracy Nature-terrain: 0.8710
- Accuracy Sky: 0.9207
- Accuracy Void-ground: 0.0
- Accuracy Void-dynamic: 0.0
- Accuracy Void-static: 0.0
- Accuracy Void-unclear: 0.0
- Iou Unlabeled: nan
- Iou Flat-road: 0.6127
- Iou Flat-sidewalk: 0.8192
- Iou Flat-crosswalk: 0.0
- Iou Flat-cyclinglane: 0.4256
- Iou Flat-parkingdriveway: 0.1262
- Iou Flat-railtrack: 0.0
- Iou Flat-curb: 0.0061
- Iou Human-person: 0.0
- Iou Human-rider: 0.0
- Iou Vehicle-car: 0.6655
- Iou Vehicle-truck: 0.0
- Iou Vehicle-bus: 0.0
- Iou Vehicle-tramtrain: 0.0
- Iou Vehicle-motorcycle: 0.0
- Iou Vehicle-bicycle: 0.0
- Iou Vehicle-caravan: 0.0
- Iou Vehicle-cartrailer: 0.0
- Iou Construction-building: 0.5666
- Iou Construction-door: 0.0
- Iou Construction-wall: 0.0054
- Iou Construction-fenceguardrail: 0.0
- Iou Construction-bridge: 0.0
- Iou Construction-tunnel: nan
- Iou Construction-stairs: 0.0
- Iou Object-pole: 0.0
- Iou Object-trafficsign: 0.0
- Iou Object-trafficlight: 0.0
- Iou Nature-vegetation: 0.7875
- Iou Nature-terrain: 0.6912
- Iou Sky: 0.8218
- Iou Void-ground: 0.0
- Iou Void-dynamic: 0.0
- Iou Void-static: 0.0
- Iou Void-unclear: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Flat-road | Accuracy Flat-sidewalk | Accuracy Flat-crosswalk | Accuracy Flat-cyclinglane | Accuracy Flat-parkingdriveway | Accuracy Flat-railtrack | Accuracy Flat-curb | Accuracy Human-person | Accuracy Human-rider | Accuracy Vehicle-car | Accuracy Vehicle-truck | Accuracy Vehicle-bus | Accuracy Vehicle-tramtrain | Accuracy Vehicle-motorcycle | Accuracy Vehicle-bicycle | Accuracy Vehicle-caravan | Accuracy Vehicle-cartrailer | Accuracy Construction-building | Accuracy Construction-door | Accuracy Construction-wall | Accuracy Construction-fenceguardrail | Accuracy Construction-bridge | Accuracy Construction-tunnel | Accuracy Construction-stairs | Accuracy Object-pole | Accuracy Object-trafficsign | Accuracy Object-trafficlight | Accuracy Nature-vegetation | Accuracy Nature-terrain | Accuracy Sky | Accuracy Void-ground | Accuracy Void-dynamic | Accuracy Void-static | Accuracy Void-unclear | Iou Unlabeled | Iou Flat-road | Iou Flat-sidewalk | Iou Flat-crosswalk | Iou Flat-cyclinglane | Iou Flat-parkingdriveway | Iou Flat-railtrack | Iou Flat-curb | Iou Human-person | Iou Human-rider | Iou Vehicle-car | Iou Vehicle-truck | Iou Vehicle-bus | Iou Vehicle-tramtrain | Iou Vehicle-motorcycle | Iou Vehicle-bicycle | Iou Vehicle-caravan | Iou Vehicle-cartrailer | Iou Construction-building | Iou Construction-door | Iou Construction-wall | Iou Construction-fenceguardrail | Iou Construction-bridge | Iou Construction-tunnel | Iou Construction-stairs | Iou Object-pole | Iou Object-trafficsign | Iou Object-trafficlight | Iou Nature-vegetation | Iou Nature-terrain | Iou Sky | Iou Void-ground | Iou Void-dynamic | Iou Void-static | Iou Void-unclear |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:------------------:|:----------------------:|:-----------------------:|:-------------------------:|:-----------------------------:|:-----------------------:|:------------------:|:---------------------:|:--------------------:|:--------------------:|:----------------------:|:--------------------:|:--------------------------:|:---------------------------:|:------------------------:|:------------------------:|:---------------------------:|:------------------------------:|:--------------------------:|:--------------------------:|:------------------------------------:|:----------------------------:|:----------------------------:|:----------------------------:|:--------------------:|:---------------------------:|:----------------------------:|:--------------------------:|:-----------------------:|:------------:|:--------------------:|:---------------------:|:--------------------:|:---------------------:|:-------------:|:-------------:|:-----------------:|:------------------:|:--------------------:|:------------------------:|:------------------:|:-------------:|:----------------:|:---------------:|:---------------:|:-----------------:|:---------------:|:---------------------:|:----------------------:|:-------------------:|:-------------------:|:----------------------:|:-------------------------:|:---------------------:|:---------------------:|:-------------------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:---------------:|:----------------------:|:-----------------------:|:---------------------:|:------------------:|:-------:|:---------------:|:----------------:|:---------------:|:----------------:|
| 2.832 | 0.05 | 20 | 3.1768 | 0.0700 | 0.1095 | 0.5718 | nan | 0.1365 | 0.9472 | 0.0019 | 0.0006 | 0.0004 | 0.0 | 0.0205 | 0.0 | 0.0 | 0.2074 | 0.0 | 0.0 | 0.0 | 0.0017 | 0.0001 | 0.0 | 0.0 | 0.7360 | 0.0 | 0.0235 | 0.0050 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9559 | 0.0429 | 0.5329 | 0.0010 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1260 | 0.5906 | 0.0016 | 0.0006 | 0.0004 | 0.0 | 0.0175 | 0.0 | 0.0 | 0.2006 | 0.0 | 0.0 | 0.0 | 0.0003 | 0.0001 | 0.0 | 0.0 | 0.3729 | 0.0 | 0.0209 | 0.0044 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5778 | 0.0408 | 0.4932 | 0.0009 | 0.0 | 0.0 | 0.0 |
| 2.3224 | 0.1 | 40 | 2.4686 | 0.0885 | 0.1321 | 0.6347 | nan | 0.5225 | 0.9260 | 0.0005 | 0.0001 | 0.0006 | 0.0 | 0.0113 | 0.0 | 0.0 | 0.3738 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8191 | 0.0 | 0.0263 | 0.0012 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9649 | 0.0701 | 0.6434 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4240 | 0.6602 | 0.0005 | 0.0001 | 0.0006 | 0.0 | 0.0109 | 0.0 | 0.0 | 0.3292 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3962 | 0.0 | 0.0260 | 0.0011 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6019 | 0.0617 | 0.5862 | 0.0001 | 0.0 | 0.0 | 0.0 |
| 2.1961 | 0.15 | 60 | 1.9886 | 0.0988 | 0.1431 | 0.6500 | nan | 0.5168 | 0.9319 | 0.0 | 0.0001 | 0.0000 | 0.0 | 0.0032 | 0.0 | 0.0 | 0.5761 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8325 | 0.0 | 0.0132 | 0.0003 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9612 | 0.1260 | 0.7625 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.3929 | 0.6721 | 0.0 | 0.0001 | 0.0000 | 0.0 | 0.0032 | 0.0 | 0.0 | 0.4609 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4375 | 0.0 | 0.0131 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6342 | 0.1108 | 0.6353 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.2964 | 0.2 | 80 | 2.0597 | 0.1066 | 0.1503 | 0.6682 | nan | 0.6577 | 0.9207 | 0.0 | 0.0000 | 0.0002 | 0.0 | 0.0044 | 0.0 | 0.0 | 0.5257 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8466 | 0.0 | 0.0094 | 0.0001 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9526 | 0.2022 | 0.8392 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.4276 | 0.7093 | 0.0 | 0.0000 | 0.0002 | 0.0 | 0.0044 | 0.0 | 0.0 | 0.4438 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4488 | 0.0 | 0.0093 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6560 | 0.1833 | 0.7408 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.9751 | 0.25 | 100 | 1.7493 | 0.1186 | 0.1645 | 0.6944 | nan | 0.7604 | 0.9146 | 0.0 | 0.0004 | 0.0012 | 0.0 | 0.0016 | 0.0 | 0.0 | 0.7381 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8273 | 0.0 | 0.0026 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9636 | 0.3289 | 0.8909 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.4904 | 0.7490 | 0.0 | 0.0004 | 0.0012 | 0.0 | 0.0016 | 0.0 | 0.0 | 0.5465 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4913 | 0.0 | 0.0026 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6542 | 0.2761 | 0.7004 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.7626 | 0.3 | 120 | 1.5608 | 0.1295 | 0.1752 | 0.7118 | nan | 0.8168 | 0.9102 | 0.0 | 0.0002 | 0.0025 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.8094 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8362 | 0.0 | 0.0030 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9492 | 0.5677 | 0.8861 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.4958 | 0.7592 | 0.0 | 0.0002 | 0.0025 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.5680 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5095 | 0.0 | 0.0030 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7082 | 0.4878 | 0.7392 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.32 | 0.35 | 140 | 1.5048 | 0.1323 | 0.1797 | 0.7181 | nan | 0.7883 | 0.9260 | 0.0 | 0.0000 | 0.0037 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.8711 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8590 | 0.0 | 0.0022 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9128 | 0.7088 | 0.8576 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5141 | 0.7598 | 0.0 | 0.0000 | 0.0037 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.5287 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5016 | 0.0 | 0.0022 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7458 | 0.5602 | 0.7499 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.6464 | 0.4 | 160 | 1.3886 | 0.1342 | 0.1783 | 0.7217 | nan | 0.7859 | 0.9390 | 0.0 | 0.0 | 0.0059 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7401 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8508 | 0.0 | 0.0010 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9368 | 0.7223 | 0.9025 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5173 | 0.7561 | 0.0 | 0.0 | 0.0058 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5846 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5059 | 0.0 | 0.0010 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7366 | 0.5802 | 0.7401 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4757 | 0.45 | 180 | 1.3649 | 0.1367 | 0.1840 | 0.7255 | nan | 0.8587 | 0.9185 | 0.0 | 0.0001 | 0.0039 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8588 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8337 | 0.0 | 0.0014 | 0.0001 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9036 | 0.7809 | 0.9138 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5077 | 0.7693 | 0.0 | 0.0001 | 0.0039 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5980 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5264 | 0.0 | 0.0014 | 0.0001 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7521 | 0.6078 | 0.7438 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.0018 | 0.5 | 200 | 1.3118 | 0.1353 | 0.1839 | 0.7242 | nan | 0.7797 | 0.9457 | 0.0 | 0.0029 | 0.0057 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8345 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8509 | 0.0 | 0.0018 | 0.0001 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8704 | 0.8688 | 0.9069 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5321 | 0.7602 | 0.0 | 0.0029 | 0.0057 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6060 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5276 | 0.0 | 0.0018 | 0.0001 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7133 | 0.5551 | 0.7593 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4636 | 0.55 | 220 | 1.2729 | 0.1330 | 0.1797 | 0.7249 | nan | 0.8619 | 0.9203 | 0.0 | 0.0015 | 0.0067 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8903 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8514 | 0.0 | 0.0031 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9447 | 0.5448 | 0.9040 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5249 | 0.7844 | 0.0 | 0.0015 | 0.0066 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5735 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5336 | 0.0 | 0.0031 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7136 | 0.4869 | 0.7613 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.1856 | 0.6 | 240 | 1.2551 | 0.1382 | 0.1828 | 0.7274 | nan | 0.7497 | 0.9518 | 0.0 | 0.0005 | 0.0048 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8893 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8153 | 0.0 | 0.0048 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9475 | 0.7597 | 0.9107 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5097 | 0.7477 | 0.0 | 0.0005 | 0.0047 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6172 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5527 | 0.0 | 0.0048 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7293 | 0.6250 | 0.7703 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4577 | 0.65 | 260 | 1.1862 | 0.1387 | 0.1848 | 0.7304 | nan | 0.8842 | 0.9065 | 0.0 | 0.0001 | 0.0024 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8566 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8632 | 0.0 | 0.0006 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9442 | 0.7313 | 0.9080 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5121 | 0.7833 | 0.0 | 0.0001 | 0.0024 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6297 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5381 | 0.0 | 0.0006 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7437 | 0.6199 | 0.7486 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.0748 | 0.7 | 280 | 1.2000 | 0.1391 | 0.1846 | 0.7301 | nan | 0.7249 | 0.9690 | 0.0 | 0.0005 | 0.0064 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8909 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8656 | 0.0 | 0.0014 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8917 | 0.8362 | 0.9065 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5306 | 0.7403 | 0.0 | 0.0005 | 0.0063 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6223 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5491 | 0.0 | 0.0014 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7566 | 0.6061 | 0.7761 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.642 | 0.75 | 300 | 1.1452 | 0.1432 | 0.1880 | 0.7409 | nan | 0.8682 | 0.9389 | 0.0 | 0.0030 | 0.0062 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8605 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8759 | 0.0 | 0.0020 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9092 | 0.8515 | 0.8892 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5333 | 0.7905 | 0.0 | 0.0030 | 0.0062 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6393 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5418 | 0.0 | 0.0020 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7655 | 0.6551 | 0.7893 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.2166 | 0.8 | 320 | 1.1450 | 0.1388 | 0.1849 | 0.7391 | nan | 0.8516 | 0.9460 | 0.0 | 0.0043 | 0.0060 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.8944 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8803 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9283 | 0.6849 | 0.9071 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5584 | 0.7932 | 0.0 | 0.0043 | 0.0060 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.5844 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5259 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7548 | 0.5985 | 0.7549 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.1346 | 0.85 | 340 | 1.1215 | 0.1428 | 0.1887 | 0.7411 | nan | 0.7956 | 0.9551 | 0.0 | 0.0145 | 0.0098 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.8646 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8884 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9131 | 0.8828 | 0.9024 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5611 | 0.7721 | 0.0 | 0.0145 | 0.0097 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.6313 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5405 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7563 | 0.6337 | 0.7917 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.8351 | 0.9 | 360 | 1.1012 | 0.1433 | 0.1896 | 0.7449 | nan | 0.8723 | 0.9432 | 0.0 | 0.0025 | 0.0114 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8822 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8662 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9213 | 0.8361 | 0.9201 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5472 | 0.7989 | 0.0 | 0.0025 | 0.0113 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6277 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5416 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7666 | 0.6674 | 0.7664 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.152 | 0.95 | 380 | 1.1045 | 0.1452 | 0.1891 | 0.7453 | nan | 0.8827 | 0.9332 | 0.0 | 0.0457 | 0.0124 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8396 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8848 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9399 | 0.7910 | 0.9107 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5462 | 0.7966 | 0.0 | 0.0457 | 0.0123 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6494 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5395 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7636 | 0.6627 | 0.7763 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.2062 | 1.0 | 400 | 1.0607 | 0.1469 | 0.1897 | 0.7482 | nan | 0.8192 | 0.9644 | 0.0 | 0.0944 | 0.0198 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8406 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8821 | 0.0 | 0.0006 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9193 | 0.8054 | 0.9137 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5772 | 0.7742 | 0.0 | 0.0941 | 0.0195 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6414 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5360 | 0.0 | 0.0006 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7740 | 0.6591 | 0.7710 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.0116 | 1.05 | 420 | 1.0503 | 0.1493 | 0.1950 | 0.7554 | nan | 0.8686 | 0.9478 | 0.0 | 0.2033 | 0.0295 | 0.0 | 0.0 | 0.0 | 0.0 | 0.9166 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8409 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9414 | 0.7667 | 0.9196 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5809 | 0.8022 | 0.0 | 0.1995 | 0.0287 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5916 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5517 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7628 | 0.6441 | 0.7652 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.009 | 1.1 | 440 | 1.0723 | 0.1529 | 0.1958 | 0.7553 | nan | 0.7797 | 0.9670 | 0.0 | 0.2214 | 0.0547 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.8978 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8927 | 0.0 | 0.0000 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9274 | 0.8016 | 0.9176 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5898 | 0.7717 | 0.0 | 0.2157 | 0.0526 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.6389 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5499 | 0.0 | 0.0000 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7760 | 0.6697 | 0.7818 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.1496 | 1.15 | 460 | 1.0417 | 0.1571 | 0.2017 | 0.7607 | nan | 0.7736 | 0.9645 | 0.0 | 0.3606 | 0.0669 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.8775 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8801 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9098 | 0.8906 | 0.9326 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6102 | 0.7737 | 0.0 | 0.3374 | 0.0634 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.6549 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5538 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7682 | 0.6437 | 0.7772 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4669 | 1.2 | 480 | 1.0161 | 0.1566 | 0.2024 | 0.7637 | nan | 0.8236 | 0.9531 | 0.0 | 0.3507 | 0.0584 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.9165 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8675 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9263 | 0.8597 | 0.9222 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6005 | 0.7983 | 0.0 | 0.3296 | 0.0556 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.6153 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5498 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7752 | 0.6654 | 0.7770 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.075 | 1.25 | 500 | 1.0124 | 0.1556 | 0.2000 | 0.7634 | nan | 0.8521 | 0.9499 | 0.0 | 0.3154 | 0.0410 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.8944 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8618 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9442 | 0.8133 | 0.9290 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5910 | 0.8068 | 0.0 | 0.2992 | 0.0394 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.6338 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5507 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7689 | 0.6697 | 0.7737 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.888 | 1.3 | 520 | 0.9797 | 0.1597 | 0.2028 | 0.7677 | nan | 0.8590 | 0.9472 | 0.0 | 0.3534 | 0.0469 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.8900 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8807 | 0.0 | 0.0005 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9379 | 0.8578 | 0.9187 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5908 | 0.8056 | 0.0 | 0.3311 | 0.0448 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.6598 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5676 | 0.0 | 0.0005 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7712 | 0.6912 | 0.8088 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.8099 | 1.35 | 540 | 0.9760 | 0.1589 | 0.2026 | 0.7678 | nan | 0.8526 | 0.9534 | 0.0 | 0.3370 | 0.0313 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.9235 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8862 | 0.0 | 0.0005 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9252 | 0.8551 | 0.9206 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5954 | 0.8014 | 0.0 | 0.3188 | 0.0303 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.6382 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5706 | 0.0 | 0.0005 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7830 | 0.6934 | 0.8122 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.1998 | 1.4 | 560 | 0.9815 | 0.1578 | 0.2030 | 0.7631 | nan | 0.8956 | 0.9250 | 0.0 | 0.3267 | 0.0461 | 0.0 | 0.0004 | 0.0 | 0.0 | 0.8929 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8956 | 0.0 | 0.0002 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9206 | 0.8669 | 0.9275 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5656 | 0.8136 | 0.0 | 0.3102 | 0.0440 | 0.0 | 0.0004 | 0.0 | 0.0 | 0.6574 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5524 | 0.0 | 0.0002 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7894 | 0.6940 | 0.7818 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.5591 | 1.45 | 580 | 0.9654 | 0.1618 | 0.2043 | 0.7698 | nan | 0.8198 | 0.9655 | 0.0 | 0.3715 | 0.0848 | 0.0 | 0.0003 | 0.0 | 0.0 | 0.8935 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8965 | 0.0 | 0.0013 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9146 | 0.8730 | 0.9198 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6182 | 0.7898 | 0.0 | 0.3467 | 0.0792 | 0.0 | 0.0003 | 0.0 | 0.0 | 0.6590 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5647 | 0.0 | 0.0013 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7871 | 0.6835 | 0.8101 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.861 | 1.5 | 600 | 0.9622 | 0.1607 | 0.2045 | 0.7689 | nan | 0.8163 | 0.9648 | 0.0 | 0.3780 | 0.0907 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.9187 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8714 | 0.0 | 0.0006 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9229 | 0.8485 | 0.9361 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6180 | 0.7903 | 0.0 | 0.3541 | 0.0844 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.6307 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5609 | 0.0 | 0.0006 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7854 | 0.6904 | 0.7884 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.8335 | 1.55 | 620 | 0.9569 | 0.1598 | 0.2050 | 0.7686 | nan | 0.8421 | 0.9561 | 0.0 | 0.3493 | 0.0928 | 0.0 | 0.0012 | 0.0 | 0.0 | 0.9261 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8753 | 0.0 | 0.0013 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9172 | 0.8688 | 0.9335 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6069 | 0.8031 | 0.0 | 0.3306 | 0.0860 | 0.0 | 0.0012 | 0.0 | 0.0 | 0.6123 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5618 | 0.0 | 0.0013 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7851 | 0.6911 | 0.7950 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.9988 | 1.6 | 640 | 0.9337 | 0.1611 | 0.2050 | 0.7711 | nan | 0.8595 | 0.9538 | 0.0 | 0.3512 | 0.0928 | 0.0 | 0.0006 | 0.0 | 0.0 | 0.8962 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8854 | 0.0 | 0.0004 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9281 | 0.8594 | 0.9367 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6062 | 0.8105 | 0.0 | 0.3310 | 0.0868 | 0.0 | 0.0006 | 0.0 | 0.0 | 0.6565 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5596 | 0.0 | 0.0004 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7819 | 0.6958 | 0.7880 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.966 | 1.65 | 660 | 0.9322 | 0.1612 | 0.2051 | 0.7707 | nan | 0.8706 | 0.9494 | 0.0 | 0.3470 | 0.0997 | 0.0 | 0.0005 | 0.0 | 0.0 | 0.8905 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8722 | 0.0 | 0.0016 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9347 | 0.8652 | 0.9364 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5953 | 0.8136 | 0.0 | 0.3281 | 0.0922 | 0.0 | 0.0005 | 0.0 | 0.0 | 0.6654 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5696 | 0.0 | 0.0016 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7756 | 0.6890 | 0.7885 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.2154 | 1.7 | 680 | 0.9373 | 0.1611 | 0.2048 | 0.7710 | nan | 0.8448 | 0.9577 | 0.0 | 0.3717 | 0.1010 | 0.0 | 0.0007 | 0.0 | 0.0 | 0.9173 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8613 | 0.0 | 0.0026 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9411 | 0.8371 | 0.9246 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6096 | 0.8056 | 0.0 | 0.3487 | 0.0930 | 0.0 | 0.0007 | 0.0 | 0.0 | 0.6272 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5696 | 0.0 | 0.0026 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7762 | 0.6911 | 0.7931 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.7979 | 1.75 | 700 | 0.9429 | 0.1622 | 0.2067 | 0.7717 | nan | 0.8496 | 0.9548 | 0.0 | 0.3821 | 0.1182 | 0.0 | 0.0013 | 0.0 | 0.0 | 0.9071 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8803 | 0.0 | 0.0043 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9202 | 0.8812 | 0.9204 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6104 | 0.8088 | 0.0 | 0.3583 | 0.1074 | 0.0 | 0.0013 | 0.0 | 0.0 | 0.6410 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5675 | 0.0 | 0.0043 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7784 | 0.6767 | 0.7994 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.8366 | 1.8 | 720 | 0.9379 | 0.1645 | 0.2075 | 0.7745 | nan | 0.8359 | 0.9580 | 0.0 | 0.4130 | 0.1275 | 0.0 | 0.0021 | 0.0 | 0.0 | 0.8998 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8704 | 0.0 | 0.0088 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0 | 0.0 | 0.9450 | 0.8617 | 0.9251 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6227 | 0.8035 | 0.0 | 0.3850 | 0.1147 | 0.0 | 0.0021 | 0.0 | 0.0 | 0.6544 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5777 | 0.0 | 0.0088 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0 | 0.0 | 0.7682 | 0.6867 | 0.8055 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.0448 | 1.85 | 740 | 0.9419 | 0.1659 | 0.2087 | 0.7769 | nan | 0.8483 | 0.9532 | 0.0 | 0.4442 | 0.1387 | 0.0 | 0.0028 | 0.0 | 0.0 | 0.8986 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8865 | 0.0 | 0.0042 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9458 | 0.8442 | 0.9215 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6240 | 0.8122 | 0.0 | 0.4077 | 0.1237 | 0.0 | 0.0028 | 0.0 | 0.0 | 0.6529 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5700 | 0.0 | 0.0041 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7767 | 0.6938 | 0.8070 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.9737 | 1.9 | 760 | 0.9193 | 0.1664 | 0.2082 | 0.7772 | nan | 0.8420 | 0.9586 | 0.0 | 0.4353 | 0.1193 | 0.0 | 0.0010 | 0.0 | 0.0 | 0.9082 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8955 | 0.0 | 0.0079 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9385 | 0.8464 | 0.9190 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6232 | 0.8053 | 0.0 | 0.4022 | 0.1088 | 0.0 | 0.0010 | 0.0 | 0.0 | 0.6549 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5766 | 0.0 | 0.0079 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7843 | 0.7077 | 0.8180 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.0716 | 1.95 | 780 | 0.9170 | 0.1672 | 0.2098 | 0.7785 | nan | 0.8434 | 0.9539 | 0.0 | 0.4671 | 0.1283 | 0.0 | 0.0037 | 0.0 | 0.0 | 0.9012 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8984 | 0.0 | 0.0058 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9398 | 0.8661 | 0.9157 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6242 | 0.8106 | 0.0 | 0.4232 | 0.1156 | 0.0 | 0.0037 | 0.0 | 0.0 | 0.6631 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5777 | 0.0 | 0.0057 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7811 | 0.6920 | 0.8223 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4144 | 2.0 | 800 | 0.9249 | 0.1675 | 0.2109 | 0.7776 | nan | 0.8631 | 0.9423 | 0.0 | 0.4704 | 0.1421 | 0.0 | 0.0061 | 0.0 | 0.0 | 0.8937 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.9143 | 0.0 | 0.0055 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9291 | 0.8710 | 0.9207 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6127 | 0.8192 | 0.0 | 0.4256 | 0.1262 | 0.0 | 0.0061 | 0.0 | 0.0 | 0.6655 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5666 | 0.0 | 0.0054 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7875 | 0.6912 | 0.8218 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.6.1
- Tokenizers 0.12.1
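The card gives no usage notes; a minimal semantic-segmentation inference sketch with the Transformers version listed above (the image path is illustrative) might look like:
```python
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

repo = "irfan-noordin/segformer-b0-finetuned-segments-sidewalk-oct-22"
extractor = SegformerFeatureExtractor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("street_scene.jpg")   # illustrative file name
inputs = extractor(images=image, return_tensors="pt")
logits = model(**inputs).logits          # (batch, num_labels, H/4, W/4)
pred_ids = logits.argmax(dim=1)          # per-pixel class indices
```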
|
ntsema/wav2vec2-xlsr-53-espeak-cv-ft-mhr2-ntsema-colab
|
ntsema
| 2022-11-10T01:46:14Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:audiofolder",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-10T00:13:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- wer
model-index:
- name: wav2vec2-xlsr-53-espeak-cv-ft-mhr2-ntsema-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: audiofolder
type: audiofolder
config: default
split: train
args: default
metrics:
- name: Wer
type: wer
value: 0.7993311036789298
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-53-espeak-cv-ft-mhr2-ntsema-colab
This model is a fine-tuned version of [facebook/wav2vec2-xlsr-53-espeak-cv-ft](https://huggingface.co/facebook/wav2vec2-xlsr-53-espeak-cv-ft) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7562
- Wer: 0.7993
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.5636 | 5.79 | 400 | 1.8357 | 1.0 |
| 1.6348 | 11.59 | 800 | 0.6797 | 0.8528 |
| 0.8624 | 17.39 | 1200 | 0.6651 | 0.8194 |
| 0.5248 | 23.19 | 1600 | 0.6892 | 0.7826 |
| 0.3328 | 28.98 | 2000 | 0.7562 | 0.7993 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.14.0.dev20221109+cu116
- Datasets 2.6.1
- Tokenizers 0.13.2
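No usage example is given; a hedged pipeline sketch follows (the base checkpoint is phoneme-level, so outputs may be espeak-style phone sequences; the audio path is illustrative):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ntsema/wav2vec2-xlsr-53-espeak-cv-ft-mhr2-ntsema-colab",
)
print(asr("sample.wav"))  # illustrative audio path
```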
|
burakyldrm/wav2vec2-burak-new-300-v2-6
|
burakyldrm
| 2022-11-10T01:45:10Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-09T19:25:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-burak-new-300-v2-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-burak-new-300-v2-6
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3074
- Wer: 0.2340
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 151
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 6.3136 | 9.61 | 500 | 3.1262 | 1.0 |
| 1.8247 | 19.23 | 1000 | 0.4049 | 0.5065 |
| 0.5387 | 28.83 | 1500 | 0.2828 | 0.3462 |
| 0.3713 | 38.45 | 2000 | 0.2761 | 0.3125 |
| 0.293 | 48.08 | 2500 | 0.2872 | 0.3001 |
| 0.2436 | 57.68 | 3000 | 0.2912 | 0.2904 |
| 0.2116 | 67.3 | 3500 | 0.2910 | 0.2725 |
| 0.1859 | 76.91 | 4000 | 0.2937 | 0.2533 |
| 0.1731 | 86.53 | 4500 | 0.2985 | 0.2485 |
| 0.1569 | 96.15 | 5000 | 0.3022 | 0.2409 |
| 0.1471 | 105.76 | 5500 | 0.3070 | 0.2374 |
| 0.1385 | 115.38 | 6000 | 0.2954 | 0.2429 |
| 0.1289 | 124.99 | 6500 | 0.3016 | 0.2361 |
| 0.1268 | 134.61 | 7000 | 0.3000 | 0.2368 |
| 0.12 | 144.23 | 7500 | 0.3074 | 0.2340 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
sanchit-gandhi/whisper-medium-es-5k
|
sanchit-gandhi
| 2022-11-10T01:33:57Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"es",
"dataset:facebook/multilingual_librispeech",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-09T19:30:55Z |
---
language:
- es
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- facebook/multilingual_librispeech
metrics:
- wer
model-index:
- name: Whisper Small Es - Sanchit Gandhi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Multilingual LibriSpeech
type: facebook/multilingual_librispeech
args: 'config: es, split: test'
metrics:
- name: Wer
type: wer
value: 60.16226172047142
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Es - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Multilingual LibriSpeech dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2668
- Wer: 60.1623
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-08
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 2.2112 | 0.2 | 500 | 1.7394 | 61.1126 |
| 1.4913 | 0.4 | 1000 | 1.3758 | 62.8143 |
| 1.6651 | 0.6 | 1500 | 1.3100 | 61.3261 |
| 1.7031 | 0.8 | 2000 | 1.2752 | 60.5261 |
| 1.4289 | 1.0 | 2500 | 1.2668 | 60.1623 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.0
- Datasets 2.6.2.dev0
- Tokenizers 0.12.1
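Since the card omits inference notes, here is a sketch of loading the checkpoint and pinning decoding to Spanish transcription, assuming the repo ships processor files alongside the weights:
```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor

repo = "sanchit-gandhi/whisper-medium-es-5k"
processor = WhisperProcessor.from_pretrained(repo)
model = WhisperForConditionalGeneration.from_pretrained(repo)

# Force Spanish transcription to mirror the fine-tuning setup.
model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(
    language="spanish", task="transcribe"
)
```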
|
flamesbob/Steampunk_angel
|
flamesbob
| 2022-11-10T01:08:39Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-11-10T01:06:53Z |
---
license: creativeml-openrail-m
---
Invoke the style with `art by Steampunk_angel`; it gives prompts a steampunk look and feel, with gears and sometimes mechanical wings.

**License:** This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:

1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content.
2. The authors claim no rights on the outputs you generate; you are free to use them, and you are accountable for their use, which must not go against the provisions set in the license.
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, be aware that you have to include the same use restrictions as those in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully).

Please read the full license here.
|
hcho22/opus-mt-ko-en-finetuned-kr-to-en
|
hcho22
| 2022-11-10T00:23:13Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-08T18:23:44Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: hcho22/opus-mt-ko-en-finetuned-kr-to-en
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hcho22/opus-mt-ko-en-finetuned-kr-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ko-en](https://huggingface.co/Helsinki-NLP/opus-mt-ko-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2330
- Validation Loss: 1.2844
- Train Bleu: 30.7578
- Train Gen Len: 13.9104
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Bleu | Train Gen Len | Epoch |
|:----------:|:---------------:|:----------:|:-------------:|:-----:|
| 1.2330 | 1.2844 | 30.7578 | 13.9104 | 0 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
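The checkpoint ships TensorFlow weights (note the `tf` tag), so a usage sketch pins the pipeline framework accordingly (the example sentence is illustrative):
```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="hcho22/opus-mt-ko-en-finetuned-kr-to-en",
    framework="tf",
)
print(translator("안녕하세요, 만나서 반갑습니다."))  # "Hello, nice to meet you."
```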
|
bigmorning/whisper_final_havest
|
bigmorning
| 2022-11-10T00:22:37Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-10T00:22:19Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: whisper_final_havest
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_final_havest
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0287
- Train Accuracy: 0.0346
- Validation Loss: 0.6219
- Validation Accuracy: 0.0314
- Epoch: 29
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 5.0949 | 0.0116 | 4.4444 | 0.0124 | 0 |
| 4.3242 | 0.0130 | 4.0648 | 0.0140 | 1 |
| 3.9308 | 0.0145 | 3.6837 | 0.0157 | 2 |
| 3.5552 | 0.0159 | 3.3410 | 0.0171 | 3 |
| 3.1591 | 0.0175 | 2.8089 | 0.0198 | 4 |
| 2.2408 | 0.0221 | 1.7104 | 0.0255 | 5 |
| 1.4220 | 0.0261 | 1.2181 | 0.0279 | 6 |
| 1.0460 | 0.0280 | 0.9912 | 0.0290 | 7 |
| 0.8363 | 0.0291 | 0.8645 | 0.0296 | 8 |
| 0.6967 | 0.0299 | 0.7748 | 0.0301 | 9 |
| 0.5942 | 0.0305 | 0.7201 | 0.0304 | 10 |
| 0.5151 | 0.0309 | 0.6675 | 0.0307 | 11 |
| 0.4496 | 0.0314 | 0.6382 | 0.0308 | 12 |
| 0.3951 | 0.0318 | 0.6060 | 0.0310 | 13 |
| 0.3473 | 0.0321 | 0.5945 | 0.0311 | 14 |
| 0.3053 | 0.0324 | 0.5752 | 0.0312 | 15 |
| 0.2684 | 0.0327 | 0.5700 | 0.0313 | 16 |
| 0.2355 | 0.0330 | 0.5651 | 0.0313 | 17 |
| 0.2065 | 0.0332 | 0.5619 | 0.0313 | 18 |
| 0.1785 | 0.0334 | 0.5522 | 0.0314 | 19 |
| 0.1535 | 0.0337 | 0.5609 | 0.0313 | 20 |
| 0.1310 | 0.0339 | 0.5590 | 0.0314 | 21 |
| 0.1115 | 0.0340 | 0.5695 | 0.0313 | 22 |
| 0.0951 | 0.0342 | 0.5723 | 0.0314 | 23 |
| 0.0787 | 0.0343 | 0.5796 | 0.0314 | 24 |
| 0.0649 | 0.0344 | 0.5967 | 0.0313 | 25 |
| 0.0539 | 0.0345 | 0.6019 | 0.0313 | 26 |
| 0.0441 | 0.0346 | 0.6113 | 0.0313 | 27 |
| 0.0364 | 0.0346 | 0.6110 | 0.0314 | 28 |
| 0.0287 | 0.0346 | 0.6219 | 0.0314 | 29 |
### Framework versions
- Transformers 4.25.0.dev0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
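This is likewise a TensorFlow checkpoint; a loading sketch under the assumption that processor files follow the base model:
```python
from transformers import TFWhisperForConditionalGeneration, WhisperProcessor

model = TFWhisperForConditionalGeneration.from_pretrained("bigmorning/whisper_final_havest")
# Assumption: reuse the base model's processor if none was uploaded with this repo.
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
```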
|