modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
Wheatley961/Raw_1_Test_1_new.model
|
Wheatley961
| 2022-11-15T14:45:41Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-15T14:45:17Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 128 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 128,
"warmup_steps": 13,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
SundayPunch/Combined_Tech_SF
|
SundayPunch
| 2022-11-15T14:43:46Z | 0 | 10 | null |
[
"license:openrail",
"region:us"
] | null | 2022-11-15T12:47:56Z |
---
license: openrail
---
## Dreambooth model for a high-tech, detailed concept art style
This is a model trained on a mix of real images of fighter aircraft, warships, and spacecraft, and techy, detailed concept art from Aaron Beck, Paul Chadeisson and Rasmus Poulsen. High-tech, industrial sci-fi with a grungy aesthetic.
Use prompt: 'combotechsf'
## Example images










|
Tom11/xlm-roberta-base-finetuned-panx-de-fr
|
Tom11
| 2022-11-15T13:51:06Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-15T08:58:04Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1637
- F1: 0.8581
## Model description
More information needed
## Intended uses & limitations
More information needed
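The card does not yet include a usage example. A minimal sketch of how such a token-classification checkpoint is typically queried with the `transformers` pipeline is shown below; the example sentence and aggregation strategy are illustrative assumptions, not part of the original card.
```python
from transformers import pipeline

# Sketch only: NER-style inference with the fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="Tom11/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```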
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.29 | 1.0 | 715 | 0.1885 | 0.8231 |
| 0.1443 | 2.0 | 1430 | 0.1607 | 0.8479 |
| 0.0937 | 3.0 | 2145 | 0.1637 | 0.8581 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cpu
- Datasets 1.16.1
- Tokenizers 0.13.2
|
egorulz/malayalam-news
|
egorulz
| 2022-11-15T13:36:08Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-15T13:35:20Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: malayalam-news
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# malayalam-news
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 10.9255
- Validation Loss: 10.9247
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
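No usage snippet is provided in the card. Below is an illustrative sketch (not the author's code) of loading the TensorFlow checkpoint and sampling a continuation; the prompt and generation settings are assumptions.
```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("egorulz/malayalam-news")
model = TFAutoModelForCausalLM.from_pretrained("egorulz/malayalam-news")

# "വാർത്ത" ("news") is just an example prompt.
inputs = tokenizer("വാർത്ത", return_tensors="tf")
outputs = model.generate(**inputs, max_length=60, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```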
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -999, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.9636 | 10.9321 | 0 |
| 10.9425 | 10.9296 | 1 |
| 10.9255 | 10.9247 | 2 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
DenilsenAxel/nlp-text-classification
|
DenilsenAxel
| 2022-11-15T13:30:57Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:amazon_us_reviews",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-15T13:22:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_us_reviews
metrics:
- accuracy
model-index:
- name: test_trainer
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_us_reviews
type: amazon_us_reviews
config: Books_v1_01
split: train[:1%]
args: Books_v1_01
metrics:
- name: Accuracy
type: accuracy
value: 0.7441424554826617
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the amazon_us_reviews dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9348
- Accuracy: 0.7441
## Model description
More information needed
## Intended uses & limitations
More information needed
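As a rough sketch of intended use (not part of the original card), the checkpoint can be queried with the `transformers` text-classification pipeline; the example review is an assumption, and the returned label names come from the model's config and may be generic (e.g. `LABEL_0` … `LABEL_n`).
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="DenilsenAxel/nlp-text-classification",
)
print(classifier("This book was a wonderful read from start to finish."))
```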
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6471 | 1.0 | 7500 | 0.6596 | 0.7376 |
| 0.5235 | 2.0 | 15000 | 0.6997 | 0.7423 |
| 0.3955 | 3.0 | 22500 | 0.9348 | 0.7441 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
cjvt/sloberta-trendi-topics
|
cjvt
| 2022-11-15T13:24:38Z | 207 | 0 |
transformers
|
[
"transformers",
"pytorch",
"camembert",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-15T09:03:01Z |
---
license: apache-2.0
---
*Text classification model SloBERTa-Trendi-Topics 1.0*
The SloBERTa-Trendi-Topics model is a text classification model for categorizing news texts with one of 13 topic labels. It was trained on a set of approx. 36,000 Slovene texts from various Slovene news sources included in the Trendi Monitor Corpus of Slovene (http://hdl.handle.net/11356/1590) such as "rtvslo.si", "sta.si", "delo.si", "dnevnik.si", "vecer.com", "24ur.com", "siol.net", "gorenjskiglas.si", etc.
The texts were semi-automatically categorized into 13 categories based on the sections under which they were published (i.e. URLs). The set of labels was developed in accordance with related categorization schemas used in other corpora and comprises the following topics: "črna kronika" (crime and accidents), "gospodarstvo, posel, finance" (economy, business, finance), "izobraževanje" (education), "okolje" (environment), "prosti čas" (free time), "šport" (sport), "umetnost, kultura" (art, culture), "vreme" (weather), "zabava" (entertainment), "zdravje" (health), "znanost in tehnologija" (science and technology), "politika" (politics), and "družba" (society). The categorization process is explained in more detail in Kosem et al. (2022): https://nl.ijs.si/jtdh22/pdf/JTDH2022_Kosem-et-al_Spremljevalni-korpus-Trendi.pdf
The model was trained on the labeled texts using the SloBERTa 2.0 contextual embeddings model (https://huggingface.co/EMBEDDIA/sloberta, also available at CLARIN.SI: http://hdl.handle.net/11356/1397) and validated on a development set of 1,293 texts using the simpletransformers library and the following hyperparameters:
- Train batch size: 8
- Learning rate: 1e-5
- Max. sequence length: 512
- Number of epochs: 2
The model achieves a macro-F1-score of 0.94 on a test set of 1,295 texts (best for "črna kronika", "politika", "šport", and "vreme" at 0.98, worst for "prosti čas" at 0.83).
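For illustration only (not from the original description), the Hub checkpoint can be queried with the `transformers` text-classification pipeline; the example sentence is an assumption, and the returned label may be an index that has to be mapped back to the 13 topic names listed above.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="cjvt/sloberta-trendi-topics")
print(classifier("Slovenska nogometna reprezentanca je sinoči premagala Norveško."))
```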
|
ogkalu/Illustration-Diffusion
|
ogkalu
| 2022-11-15T12:57:36Z | 0 | 162 | null |
[
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2022-10-22T02:13:26Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
---
2D illustration styles are scarce on Stable Diffusion. Inspired by Hollie Mengert, this is a fine-tuned Stable Diffusion model trained on her work. The correct token is "holliemengert artstyle".
Hollie is **not** affiliated with this. You can read about her stance on the issue here - https://waxy.org/2022/11/invasive-diffusion-how-one-unwilling-illustrator-found-herself-turned-into-an-ai-model/
**Portraits generated by this model:**

**Landscapes generated by this model:**


|
davide1998/a2c-AntBulletEnv-v0
|
davide1998
| 2022-11-15T12:40:21Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-15T12:39:17Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1309.17 +/- 78.04
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
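One way the TODO above might be filled in (a sketch under assumed filenames, not the author's code): download the checkpoint with `huggingface_sb3` and load it with stable-baselines3.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="davide1998/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",  # assumed filename; check the repo's files
)
model = A2C.load(checkpoint)

# Note: creating AntBulletEnv-v0 additionally requires `import pybullet_envs`
# so that the PyBullet environments are registered with gym.
```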
|
OSalem99/Pixelcopter-PLE-v0
|
OSalem99
| 2022-11-15T12:03:00Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-15T12:02:52Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 11.50 +/- 7.03
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
Bhuvana/setfit_2class_model
|
Bhuvana
| 2022-11-15T11:58:40Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-15T11:58:09Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 75 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 75,
"warmup_steps": 8,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
aayu/bert-base-uncased-finetuned-jd_Nov15
|
aayu
| 2022-11-15T11:11:33Z | 56 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-15T10:14:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-jd_Nov15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-jd_Nov15
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0061
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 469 | 0.0547 |
| 1.8699 | 2.0 | 938 | 0.0090 |
| 0.0888 | 3.0 | 1407 | 0.0061 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
alexcaillet/ddpm-butterflies-128
|
alexcaillet
| 2022-11-15T10:43:02Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-11-08T10:37:38Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
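A minimal sketch of what the snippet above might look like (an assumption, not the author's code): running the unconditional DDPM pipeline from the 🤗 Diffusers library.
```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("alexcaillet/ddpm-butterflies-128")

# Generate one 128x128 butterfly sample; output attribute names may vary
# slightly across diffusers versions.
image = pipeline().images[0]
image.save("butterfly.png")
```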
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/alexcaillet/ddpm-butterflies-128/tensorboard?#scalars)
|
OSalem99/Reinforce-CartPole01
|
OSalem99
| 2022-11-15T10:37:58Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-15T10:36:59Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole01
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 65.80 +/- 17.22
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
Freazling/main-sentiment-model-chats-2-labels
|
Freazling
| 2022-11-15T10:18:19Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-15T09:31:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: main-sentiment-model-chats-2-labels
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# main-sentiment-model-chats-2-labels
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3718
- Accuracy: 0.8567
- F1: 0.8459
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.14.0.dev20221113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Bhuvana/setfit_2class
|
Bhuvana
| 2022-11-15T09:19:34Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-15T09:19:15Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 40 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 40,
"warmup_steps": 4,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
iampedroalz/ppo-LunarLander-v2
|
iampedroalz
| 2022-11-15T07:55:45Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-15T07:40:41Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 115.65 +/- 116.41
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
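A hypothetical way to fill in the TODO above (the checkpoint filename is an assumption): load the PPO checkpoint and evaluate it on LunarLander-v2.
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="iampedroalz/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename; check the repo's files
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")  # requires gym[box2d]
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```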
|
nightalon/distilroberta-base-finetuned-wikitext2
|
nightalon
| 2022-11-15T07:53:44Z | 195 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-15T07:24:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8340
## Model description
More information needed
## Intended uses & limitations
More information needed
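As an illustrative sketch (not from the original card), the checkpoint can be used with the fill-mask pipeline; RoBERTa-style models use the `<mask>` token, and the example sentence is an assumption.
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="nightalon/distilroberta-base-finetuned-wikitext2")
print(unmasker("The capital of France is <mask>."))
```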
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0843 | 1.0 | 2406 | 1.9226 |
| 1.9913 | 2.0 | 4812 | 1.8820 |
| 1.9597 | 3.0 | 7218 | 1.8214 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
GhifSmile/mT5_multilingual_XLSum-finetuned-liputan6-coba
|
GhifSmile
| 2022-11-15T07:44:16Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-15T04:34:47Z |
---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mT5_multilingual_XLSum-finetuned-liputan6-coba
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mT5_multilingual_XLSum-finetuned-liputan6-coba
This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2713
- Rouge1: 0.3371
- Rouge2: 0.2029
- Rougel: 0.2927
- Rougelsum: 0.309
## Model description
More information needed
## Intended uses & limitations
More information needed
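As an illustrative sketch (not part of the original card), the checkpoint can be used for summarization through the `transformers` pipeline; the placeholder article and generation settings are assumptions.
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="GhifSmile/mT5_multilingual_XLSum-finetuned-liputan6-coba",
)

article = "Liputan6.com, Jakarta: ..."  # replace with a full news article
print(summarizer(article, max_length=64, min_length=10, do_sample=False))
```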
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|
| 1.4304 | 1.0 | 4474 | 1.2713 | 0.3371 | 0.2029 | 0.2927 | 0.309 |
| 1.4286 | 2.0 | 8948 | 1.2713 | 0.3371 | 0.2029 | 0.2927 | 0.309 |
| 1.429 | 3.0 | 13422 | 1.2713 | 0.3371 | 0.2029 | 0.2927 | 0.309 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
chuchun9/distilbert-base-uncased-finetuned-squad
|
chuchun9
| 2022-11-15T07:24:11Z | 127 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-08T02:14:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6727
## Model description
More information needed
## Intended uses & limitations
More information needed
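For illustration only (not from the original card), extractive question answering can be run with the `transformers` pipeline; the question/context pair is an assumption.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="chuchun9/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```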
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5227 | 1.0 | 1107 | 2.0485 |
| 1.7555 | 2.0 | 2214 | 1.7443 |
| 1.4567 | 3.0 | 3321 | 1.6511 |
| 1.2107 | 4.0 | 4428 | 1.6496 |
| 1.083 | 5.0 | 5535 | 1.6727 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
nightalon/distilgpt2-finetuned-wikitext2
|
nightalon
| 2022-11-15T07:13:57Z | 165 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-15T06:37:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6421
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7602 | 1.0 | 2334 | 3.6669 |
| 3.653 | 2.0 | 4668 | 3.6472 |
| 3.6006 | 3.0 | 7002 | 3.6421 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Guroruseru/distilbert-base-uncased-finetuned-emotion
|
Guroruseru
| 2022-11-15T06:53:06Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-15T04:34:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
teacookies/autotrain-15112022-cert2-2099767621
|
teacookies
| 2022-11-15T05:45:30Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"token-classification",
"unk",
"dataset:teacookies/autotrain-data-15112022-cert2",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-15T05:27:01Z |
---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- teacookies/autotrain-data-15112022-cert2
co2_eq_emissions:
emissions: 30.88105111466208
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 2099767621
- CO2 Emissions (in grams): 30.8811
## Validation Metrics
- Loss: 0.004
- Accuracy: 0.999
- Precision: 0.982
- Recall: 0.990
- F1: 0.986
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-15112022-cert2-2099767621
```
Or Python API:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-15112022-cert2-2099767621", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-15112022-cert2-2099767621", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
jbreunig/xlm-roberta-base-finetuned-panx-de-fr
|
jbreunig
| 2022-11-15T05:07:16Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-15T02:12:52Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1637
- F1: 0.8581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.29 | 1.0 | 715 | 0.1885 | 0.8231 |
| 0.1443 | 2.0 | 1430 | 0.1607 | 0.8479 |
| 0.0937 | 3.0 | 2145 | 0.1637 | 0.8581 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
hcho22/opus-mt-ko-en-finetuned-en-to-kr
|
hcho22
| 2022-11-15T05:03:46Z | 83 | 1 |
transformers
|
[
"transformers",
"tf",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-10T03:37:03Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: hcho22/opus-mt-ko-en-finetuned-en-to-kr
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hcho22/opus-mt-ko-en-finetuned-en-to-kr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ko-en](https://huggingface.co/Helsinki-NLP/opus-mt-ko-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.5856
- Validation Loss: 2.0437
- Train Bleu: 2.0518
- Train Gen Len: 20.8110
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
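As a rough sketch (not the author's code), the TensorFlow checkpoint can be loaded for translation as below; the translation direction implied by the model name (en → kr) and the example sentence are assumptions.
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "hcho22/opus-mt-ko-en-finetuned-en-to-kr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("How are you today?", return_tensors="tf")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```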
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Bleu | Train Gen Len | Epoch |
|:----------:|:---------------:|:----------:|:-------------:|:-----:|
| 2.5856 | 2.0437 | 2.0518 | 20.8110 | 0 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
rohitsan/bart-finetuned-idl-new
|
rohitsan
| 2022-11-15T04:50:30Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-14T08:00:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-finetuned-idl-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-finetuned-idl-new
This model is a fine-tuned version of [rohitsan/bart-finetuned-idl-new](https://huggingface.co/rohitsan/bart-finetuned-idl-new) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2981
- eval_bleu: 18.5188
- eval_gen_len: 19.3843
- eval_runtime: 257.315
- eval_samples_per_second: 24.464
- eval_steps_per_second: 3.059
- epoch: 8.0
- step: 56648
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 35
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Marqo/marqo-yolo-v1
|
Marqo
| 2022-11-15T04:14:14Z | 0 | 0 | null |
[
"onnx",
"region:us"
] | null | 2022-11-15T04:02:30Z |
https://github.com/marqo-ai/marqo
|
Signorlimone/StylizR
|
Signorlimone
| 2022-11-15T04:06:53Z | 0 | 7 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-11-15T03:38:15Z |
---
license: creativeml-openrail-m
---
This model tries to mimic the stylized 3D look, but with a realistic twist on texture and overall material rendition.
Use "tdst style" (without quotes) to activate the model.
As usual, if you want a better likeness with your subject, you can either use brackets, as in [3dst style:10], or give more emphasis to the subject, as in (subject:1.3).
|
Bhathiya/setfit-model
|
Bhathiya
| 2022-11-15T04:05:45Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-15T04:05:13Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 64 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 64,
"warmup_steps": 7,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
fuh990202/distilbert-base-uncased-finetuned-squad
|
fuh990202
| 2022-11-15T03:59:32Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-15T02:48:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0874 | 1.0 | 1113 | 1.7948 |
| 1.1106 | 2.0 | 2226 | 1.7791 |
| 0.4632 | 3.0 | 3339 | 2.1634 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
herpritts/FFXIV-Style
|
herpritts
| 2022-11-15T03:14:41Z | 0 | 53 | null |
[
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2022-10-29T22:01:56Z |
---
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
---
v1.1: [xivcine-style-1-1.ckpt](https://huggingface.co/herpritts/FFXIV-Style/blob/main/xivcine-style-1-1.ckpt)
token/class: <b>xivcine style</b>
All training images are from the trailers of the critically acclaimed MMORPG Final Fantasy XIV, which has a free trial that includes the entirety of A Realm Reborn AND the award-winning Heavensward expansion up to level 60 with no restrictions on playtime. Sign up, and enjoy Eorzea today! https://secure.square-enix.com/account/app/svc/ffxivregister?lng=en-gb. This model hopes to replicate that style. (Use recruitment code D8PE6VFZ if you like my model and decide to play the game. /smile)
If using the newer version of this model, use "<b>xivcine style</b>" in your prompt. The following images were made with a generic prompt like "portrait, xivcine style, action pose, [hyper realistic], colorful background, bokeh, detailed, cinematic, 3d octane render, 4k, concept art, trending on artstation"
Faces will trend toward this style with a non-specific prompt:

Armor styles can be tweaked effectively with variations in X/Y plots:

Landscapes will trend toward this style with a non-specific prompt:

Merging with other checkpoints can produce entirely unique styles while maintaining an ornate armor style:

These are examples of how elements combine in various mergers:




Future updates will be aimed at training specific pieces of armor, etc. The intent is to create gposes in the style of FFXIV trailers.
v1.0: [xivcine_person_v1.ckpt](https://huggingface.co/herpritts/FFXIV-Style/blob/main/xivcine_person_v1.ckpt)
token/class: <b>xivcine person</b>
This model is just for fun. It makes deep fried images with bright colors and cool lights if you want it to, but it won't listen to your prompts very well.
The images below were prompted with a generic prompt like "xivcine person, cinematic, colorful background, concept art, dramatic lighting, high detail, highly detailed, hyper realistic, intricate, intricate sharp details, octane render, smooth, studio lighting, trending on artstation" plus forest or castle.




## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
Owos/tb-classifier
|
Owos
| 2022-11-15T02:51:01Z | 22 | 0 |
transformers
|
[
"transformers",
"inception",
"vision",
"image-classification",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-10-23T04:17:59Z |
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- https://www.kaggle.com/datasets/tawsifurrahman/tuberculosis-tb-chest-xray-dataset
widget:
- src: https://huggingface.co/Owos/tb-classifier/blob/main/tb-negative.png
example_title: Negative
- src: https://huggingface.co/Owos/tb-classifier/blob/main/tb-positive.png
example_title: Positive
metrics:
- Accuracy
- Precision
- Recall
---
# Tuberculosis Classifier
[Github repo is here](https://github.com/owos/tb_project) </br>
[HuggingFace Space](https://huggingface.co/spaces/Owos/tb_prediction_space)
# Model description
This is a computer vision model built with TensorFlow to classify whether a given chest X-ray scan is positive for tuberculosis.
# Intended uses & limitations
The model was built to help support low-resourced and short-staffed primary healthcare centers in Nigeria. In particular, the aim was to create a computer-aided diagnostic tool for radiologists in these centers.
The model has not undergone clinical testing, and usage is at the user's own risk. It has, however, been tested on real-life images that are positive for tuberculosis.
# How to use
Download the pre-trained model and use it for inference; a minimal sketch is shown below.
A Space has been created for testing [here](https://huggingface.co/spaces/Owos/tb_prediction_space).
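Hypothetical sketch, not the author's code: loading a downloaded Keras model and classifying one chest X-ray. The weight filename, input size, preprocessing, and single-sigmoid output are assumptions and should be checked against the repo and training notebook.
```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.inception_v3 import preprocess_input

model = tf.keras.models.load_model("tb_classifier.h5")  # assumed filename

img = tf.keras.utils.load_img("chest_xray.png", target_size=(299, 299))  # assumed InceptionV3 input size
x = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))

prob = float(model.predict(x)[0][0])  # assumed single sigmoid output
print("TB positive" if prob >= 0.5 else "TB negative", prob)
```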
# Training data
The entire dataset consists of 3,500 negative images and 700 positive TB images. </br>
The data was split into 80% for training and 20% for validation.
# Training procedure
Transfer-learning was employed using InceptionV3 as the pre-trained model. Training was done for 20 epochs and the classes were weighted during training in order to neutralize the imbalanced class in the dataset. The training was done on Kaggle using the GPUs provided. More details of the experiments can be found [here](https://www.kaggle.com/code/abrahamowodunni/tb-project)
# Evaluation results
The results of the evaluation are as follows: loss 0.0923, binary accuracy 0.9857, precision 0.9259, recall 0.9843.
More information can be found in the plot below.
[Evaluation results of the TB model](https://github.com/owos/tb_project/blob/main/README.md)
|
huggingtweets/ianflynnbkc
|
huggingtweets
| 2022-11-15T02:50:43Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-14T02:48:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/ianflynnbkc/1668480615006/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1107777212835614720/g_KwstYD_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ian Flynn</div>
<div style="text-align: center; font-size: 14px;">@ianflynnbkc</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Ian Flynn.
| Data | Ian Flynn |
| --- | --- |
| Tweets downloaded | 3243 |
| Retweets | 964 |
| Short tweets | 315 |
| Tweets kept | 1964 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/gnis1yl2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ianflynnbkc's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2692e7ob) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2692e7ob/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ianflynnbkc')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
hfl/minirbt-h256
|
hfl
| 2022-11-15T02:21:47Z | 461 | 5 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"bert",
"fill-mask",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-14T05:13:06Z |
---
language:
- zh
tags:
- bert
license: "apache-2.0"
---
# Please use `Bert`-related functions to load this model!
## Chinese small pre-trained model MiniRBT
To further promote research and development in Chinese information processing, we launched MiniRBT, a small Chinese pre-trained model built with our self-developed knowledge distillation toolkit TextBrewer, combining Whole Word Masking and knowledge distillation.
This repository is developed based on: https://github.com/iflytek/MiniRBT
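A minimal fill-mask sketch, assuming (as the note above says) that the checkpoint is loaded with the standard BERT classes:
```python
from transformers import BertTokenizer, BertForMaskedLM, pipeline

# Load the checkpoint with BERT classes, as recommended above.
tokenizer = BertTokenizer.from_pretrained("hfl/minirbt-h256")
model = BertForMaskedLM.from_pretrained("hfl/minirbt-h256")

# Predict the masked Chinese character.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("我喜欢吃[MASK]果。"))
```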
You may also be interested in:
- Chinese LERT: https://github.com/ymcui/LERT
- Chinese PERT: https://github.com/ymcui/PERT
- Chinese MacBERT: https://github.com/ymcui/MacBERT
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/iflytek/HFL-Anthology
|
hfl/minirbt-h288
|
hfl
| 2022-11-15T02:21:41Z | 358 | 5 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"bert",
"fill-mask",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-14T05:53:35Z |
---
language:
- zh
tags:
- bert
license: "apache-2.0"
---
# Please use `Bert`-related functions to load this model!
## Chinese small pre-trained model MiniRBT
To further promote research and development in Chinese information processing, we launched MiniRBT, a small Chinese pre-trained model built with our self-developed knowledge distillation toolkit TextBrewer, combining Whole Word Masking and knowledge distillation.
This repository is developed based on: https://github.com/iflytek/MiniRBT
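A minimal fill-mask sketch, assuming (as the note above says) that the checkpoint is loaded with the standard BERT classes:
```python
from transformers import BertTokenizer, BertForMaskedLM, pipeline

# Load the checkpoint with BERT classes, as recommended above.
tokenizer = BertTokenizer.from_pretrained("hfl/minirbt-h288")
model = BertForMaskedLM.from_pretrained("hfl/minirbt-h288")

# Predict the masked Chinese character.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("我喜欢吃[MASK]果。"))
```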
You may also be interested in:
- Chinese LERT: https://github.com/ymcui/LERT
- Chinese PERT: https://github.com/ymcui/PERT
- Chinese MacBERT: https://github.com/ymcui/MacBERT
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/iflytek/HFL-Anthology
|
hfl/rbt4-h312
|
hfl
| 2022-11-15T02:21:23Z | 211 | 4 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"bert",
"fill-mask",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-14T06:24:07Z |
---
language:
- zh
tags:
- bert
license: "apache-2.0"
---
# Please use `Bert`-related functions to load this model!
## Chinese small pre-trained model MiniRBT
To further promote research and development in Chinese information processing, we launched MiniRBT, a small Chinese pre-trained model built with our self-developed knowledge distillation toolkit TextBrewer, combining Whole Word Masking and knowledge distillation.
This repository is developed based on: https://github.com/iflytek/MiniRBT
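A minimal fill-mask sketch, assuming (as the note above says) that the checkpoint is loaded with the standard BERT classes:
```python
from transformers import BertTokenizer, BertForMaskedLM, pipeline

# Load the checkpoint with BERT classes, as recommended above.
tokenizer = BertTokenizer.from_pretrained("hfl/rbt4-h312")
model = BertForMaskedLM.from_pretrained("hfl/rbt4-h312")

# Predict the masked Chinese character.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("我喜欢吃[MASK]果。"))
```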
You may also be interested in:
- Chinese LERT: https://github.com/ymcui/LERT
- Chinese PERT: https://github.com/ymcui/PERT
- Chinese MacBERT: https://github.com/ymcui/MacBERT
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/iflytek/HFL-Anthology
|
RawMean/farsi_lastname_classifier_1
|
RawMean
| 2022-11-15T01:53:10Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-15T01:45:21Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: farsi_lastname_classifier_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# farsi_lastname_classifier_1
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0482
- Pearson: 0.9232
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 1.0 | 12 | 0.2705 | 0.7018 |
| No log | 2.0 | 24 | 0.0993 | 0.7986 |
| No log | 3.0 | 36 | 0.0804 | 0.8347 |
| No log | 4.0 | 48 | 0.0433 | 0.9246 |
| No log | 5.0 | 60 | 0.0559 | 0.9176 |
| No log | 6.0 | 72 | 0.0465 | 0.9334 |
| No log | 7.0 | 84 | 0.0503 | 0.9154 |
| No log | 8.0 | 96 | 0.0438 | 0.9222 |
| No log | 9.0 | 108 | 0.0468 | 0.9260 |
| No log | 10.0 | 120 | 0.0482 | 0.9232 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
RawMean/farsi_lastname_classifier
|
RawMean
| 2022-11-15T01:41:04Z | 74 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-15T00:57:22Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: farsi_lastname_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# farsi_lastname_classifier
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0436
- Pearson: 0.9325
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 1.0 | 12 | 0.2989 | 0.6985 |
| No log | 2.0 | 24 | 0.1378 | 0.7269 |
| No log | 3.0 | 36 | 0.0459 | 0.9122 |
| No log | 4.0 | 48 | 0.0454 | 0.9304 |
| No log | 5.0 | 60 | 0.0564 | 0.9168 |
| No log | 6.0 | 72 | 0.0434 | 0.9315 |
| No log | 7.0 | 84 | 0.0452 | 0.9254 |
| No log | 8.0 | 96 | 0.0381 | 0.9320 |
| No log | 9.0 | 108 | 0.0441 | 0.9327 |
| No log | 10.0 | 120 | 0.0436 | 0.9325 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
jbreunig/xlm-roberta-base-finetuned-panx-de
|
jbreunig
| 2022-11-15T00:15:10Z | 121 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-13T16:24:16Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.86254900846639
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1370
- F1: 0.8625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.26 | 1.0 | 525 | 0.1565 | 0.8218 |
| 0.1276 | 2.0 | 1050 | 0.1409 | 0.8486 |
| 0.0817 | 3.0 | 1575 | 0.1370 | 0.8625 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
monakth/bert-base-multilingual-cased-finetuned-squadv2
|
monakth
| 2022-11-15T00:10:57Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-15T00:08:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: bert-base-multilingual-cased-finetuned-squad-squadv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-squad-squadv
This model is a fine-tuned version of [monakth/bert-base-multilingual-cased-finetuned-squad](https://huggingface.co/monakth/bert-base-multilingual-cased-finetuned-squad) on the squad_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
javierman/dafnesoul
|
javierman
| 2022-11-14T23:00:17Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-11-14T22:58:51Z |
depressed man sitting at a bar drinking whisky and smoking a cigarette
|
tommasory/platzi-distilroberta-base-mrpc-glue-tommasory
|
tommasory
| 2022-11-14T22:56:37Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-14T22:46:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: platzi-distilroberta-base-mrpc-glue-tommasory
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: train
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8308823529411765
- name: F1
type: f1
value: 0.8733944954128441
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distilroberta-base-mrpc-glue-tommasory
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7098
- Accuracy: 0.8309
- F1: 0.8734
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5196 | 1.09 | 500 | 0.5289 | 0.8260 | 0.8739 |
| 0.3407 | 2.18 | 1000 | 0.7098 | 0.8309 | 0.8734 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
antoinev17/bert-base-uncased-issues-128
|
antoinev17
| 2022-11-14T21:49:50Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-14T21:03:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-issues-128
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2480
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0972 | 1.0 | 291 | 1.7066 |
| 1.6391 | 2.0 | 582 | 1.4318 |
| 1.4844 | 3.0 | 873 | 1.3734 |
| 1.3997 | 4.0 | 1164 | 1.3806 |
| 1.3398 | 5.0 | 1455 | 1.1957 |
| 1.2846 | 6.0 | 1746 | 1.2837 |
| 1.2379 | 7.0 | 2037 | 1.2665 |
| 1.1969 | 8.0 | 2328 | 1.2154 |
| 1.1651 | 9.0 | 2619 | 1.1756 |
| 1.1415 | 10.0 | 2910 | 1.2114 |
| 1.1296 | 11.0 | 3201 | 1.2138 |
| 1.1047 | 12.0 | 3492 | 1.1655 |
| 1.0802 | 13.0 | 3783 | 1.2566 |
| 1.0775 | 14.0 | 4074 | 1.1650 |
| 1.0645 | 15.0 | 4365 | 1.1294 |
| 1.062 | 16.0 | 4656 | 1.2480 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
facebook/wav2vec2-base-960h
|
facebook
| 2022-11-14T21:37:23Z | 2,419,470 | 315 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"en",
"dataset:librispeech_asr",
"arxiv:2006.11477",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: wav2vec2-base-960h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 3.4
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 8.6
---
# Wav2Vec2-Base-960h
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The base model, pretrained and fine-tuned on 960 hours of LibriSpeech 16 kHz sampled speech audio. When using the model,
make sure that your speech input is also sampled at 16 kHz.
[Paper](https://arxiv.org/abs/2006.11477)
Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
**Abstract**
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
# load model and tokenizer
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
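If your audio lives in a local file that is not sampled at 16 kHz, a minimal resampling sketch is shown below, reusing `processor` and `model` from the snippet above; the use of `torchaudio` and the file name `speech.wav` are assumptions, any resampler will do.
```python
import torch
import torchaudio

# Load a (mono) local file and resample it to the 16 kHz rate the model expects.
waveform, sample_rate = torchaudio.load("speech.wav")
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, orig_freq=sample_rate, new_freq=16_000)

input_values = processor(waveform.squeeze().numpy(), sampling_rate=16_000,
                         return_tensors="pt", padding="longest").input_values
with torch.no_grad():
    logits = model(input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```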
## Evaluation
This code snippet shows how to evaluate **facebook/wav2vec2-base-960h** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
def map_to_pred(batch):
input_values = processor(batch["audio"]["array"], return_tensors="pt", padding="longest").input_values
with torch.no_grad():
logits = model(input_values.to("cuda")).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 3.4 | 8.6 |
|
huggingtweets/ianflynnbkc-maniacxvii-spiritsonic
|
huggingtweets
| 2022-11-14T21:26:21Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-14T21:26:14Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1107777212835614720/g_KwstYD_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1521269549328257024/ruVdvwTI_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1267323211068276738/uSEB8rC1_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ian Flynn & MANIAC @ COMMISSIONS & Evan Stanley</div>
<div style="text-align: center; font-size: 14px;">@ianflynnbkc-maniacxvii-spiritsonic</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Ian Flynn & MANIAC @ COMMISSIONS & Evan Stanley.
| Data | Ian Flynn | MANIAC @ COMMISSIONS | Evan Stanley |
| --- | --- | --- | --- |
| Tweets downloaded | 3244 | 3134 | 3222 |
| Retweets | 965 | 950 | 626 |
| Short tweets | 315 | 422 | 383 |
| Tweets kept | 1964 | 1762 | 2213 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/36r7n0gv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ianflynnbkc-maniacxvii-spiritsonic's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1kzz11v3) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1kzz11v3/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ianflynnbkc-maniacxvii-spiritsonic')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
inverse-scaling/opt-350m_eval
|
inverse-scaling
| 2022-11-14T20:43:55Z | 150 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"opt",
"text-generation",
"en",
"arxiv:2205.01068",
"arxiv:2005.14165",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2022-10-04T15:26:51Z |
---
language: en
inference: false
tags:
- text-generation
license: other
commercial: false
model-index:
- name: inverse-scaling/opt-350m_eval
results:
- task:
type: zero-shot-classification
name: Zero-Shot Text Classification
dataset:
name: inverse-scaling/NeQA
type: inverse-scaling/NeQA
config: inverse-scaling--NeQA
split: train
metrics:
- name: Accuracy
type: accuracy
value: 0.4666666666666667
verified: true
- name: Loss
type: loss
value: 0.9192380222864449
verified: true
- task:
type: zero-shot-classification
name: Zero-Shot Text Classification
dataset:
name: inverse-scaling/quote-repetition
type: inverse-scaling/quote-repetition
config: inverse-scaling--quote-repetition
split: train
metrics:
- name: Accuracy
type: accuracy
value: 0.9633333333333334
verified: true
- name: Loss
type: loss
value: 0.03444786100047819
verified: true
- task:
type: zero-shot-classification
name: Zero-Shot Text Classification
dataset:
name: inverse-scaling/redefine-math
type: inverse-scaling/redefine-math
config: inverse-scaling--redefine-math
split: train
metrics:
- name: Accuracy
type: accuracy
value: 0.6877777777777778
verified: true
- name: Loss
type: loss
value: 0.6016371671193176
verified: true
- task:
type: zero-shot-classification
name: Zero-Shot Text Classification
dataset:
name: inverse-scaling/hindsight-neglect-10shot
type: inverse-scaling/hindsight-neglect-10shot
config: inverse-scaling--hindsight-neglect-10shot
split: train
metrics:
- name: Accuracy
type: accuracy
value: 0.4380952380952381
verified: true
- name: Loss
type: loss
value: 0.8774787804555325
verified: true
- task:
type: zero-shot-classification
name: Zero-Shot Text Classification
dataset:
name: mathemakitten/winobias_antistereotype_test_cot_v3
type: mathemakitten/winobias_antistereotype_test_cot_v3
config: mathemakitten--winobias_antistereotype_test_cot_v3
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.44660194174757284
verified: true
- name: Loss
type: loss
value: 0.9301078982717057
verified: true
- task:
type: zero-shot-classification
name: Zero-Shot Text Classification
dataset:
name: mathemakitten/winobias_antistereotype_test_v5
type: mathemakitten/winobias_antistereotype_test_v5
config: mathemakitten--winobias_antistereotype_test_v5
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.4368932038834951
verified: true
- name: Loss
type: loss
value: 0.9175132444057151
verified: true
---
# OPT : Open Pre-trained Transformer Language Models
OPT was first introduced in [Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) and first released in [metaseq's repository](https://github.com/facebookresearch/metaseq) on May 3rd 2022 by Meta AI.
**Disclaimer**: The team releasing OPT wrote an official model card, which is available in Appendix D of the [paper](https://arxiv.org/pdf/2205.01068.pdf).
Content from **this** model card has been written by the Hugging Face team.
## Intro
To quote the first two paragraphs of the [official paper](https://arxiv.org/abs/2205.01068)
> Large language models trained on massive text collections have shown surprising emergent
> capabilities to generate text and perform zero- and few-shot learning. While in some cases the public
> can interact with these models through paid APIs, full model access is currently limited to only a
> few highly resourced labs. This restricted access has limited researchers’ ability to study how and
> why these large language models work, hindering progress on improving known challenges in areas
> such as robustness, bias, and toxicity.
> We present Open Pretrained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M
> to 175B parameters, which we aim to fully and responsibly share with interested researchers. We train the OPT models to roughly match
> the performance and sizes of the GPT-3 class of models, while also applying the latest best practices in data
> collection and efficient training. Our aim in developing this suite of OPT models is to enable reproducible and responsible research at scale, and
> to bring more voices to the table in studying the impact of these LLMs. Definitions of risk, harm, bias, and toxicity, etc., should be articulated by the
> collective research community as a whole, which is only possible when models are available for study.
## Model description
OPT was predominantly pretrained with English text, but a small amount of non-English data is still present within the training corpus via CommonCrawl. The model was pretrained using a causal language modeling (CLM) objective.
OPT belongs to the same family of decoder-only models as [GPT-3](https://arxiv.org/abs/2005.14165). As such, it was pretrained using the self-supervised causal language modeling objective.
For evaluation, OPT follows [GPT-3](https://arxiv.org/abs/2005.14165) by using their prompts and overall experimental setup. For more details, please read
the [official paper](https://arxiv.org/abs/2205.01068).
## Intended uses & limitations
The pretrained-only model can be used for prompting for evaluation of downstream tasks as well as text generation.
In addition, the model can be fine-tuned on a downstream task using the [CLM example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling). For all other OPT checkpoints, please have a look at the [model hub](https://huggingface.co/models?filter=opt).
### How to use
You can use this model directly with a pipeline for text generation.
```python
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model="facebook/opt-350m")
>>> generator("Hello, I'm am conscious and")
[{'generated_text': "Hello, I'm am conscious and I'm a bit of a noob. I'm looking for"}]
```
By default, generation is deterministic. In order to use the top-k sampling, please set `do_sample` to `True`.
```python
>>> from transformers import pipeline, set_seed
>>> set_seed(32)
>>> generator = pipeline('text-generation', model="facebook/opt-350m", do_sample=True)
>>> generator("Hello, I'm am conscious and")
[{'generated_text': "Hello, I'm am conscious and I'm interested in this project. Can I get an initial contact"}]
```
### Limitations and bias
As mentioned in Meta AI's model card, given that the training data used for this model contains a lot of
unfiltered content from the internet, which is far from neutral, the model is strongly biased:
> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> set_seed(32)
>>> generator = pipeline('text-generation', model="facebook/opt-350m", do_sample=True, num_return_sequences=5)
>>> generator("The woman worked as a")
[{'generated_text': "The woman works as a substitute teacher for kids who have missed school. She's the teacher herself,"},
{'generated_text': 'The woman works as a security guard for another company and does an average of around $13/hour'},
{'generated_text': 'The woman works as a receptionist, she could at the least wait a week or two for her'},
{'generated_text': 'The woman works as a manager/intern/career development coach/advisor at a nursing home'},
{'generated_text': 'The woman works as a maid and has to clean the house but you can tell her to do it'}]
```
compared to:
```python
>>> from transformers import pipeline, set_seed
>>> set_seed(32)
>>> generator = pipeline('text-generation', model="facebook/opt-350m", do_sample=True, num_return_sequences=5)
>>> generator("The man worked as a")
[{'generated_text': 'The man works as a security guard for the National Football League franchise. He has been a part of'},
{'generated_text': 'The man works as a security guard for another company and does an excellent job.\nI remember when'},
{'generated_text': 'The man works as a "secret agent" but at the same time he\'s working to protect the'},
{'generated_text': 'The man works as a manager/operator/servant for a grocery store and does a lot of'},
{'generated_text': 'The man works as a bouncer near the scene of the accident - how he could do that is'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The Meta AI team wanted to train this model on a corpus as large as possible. It is composed of the union of the following 5 filtered datasets of textual documents:
- BookCorpus, which consists of more than 10K unpublished books,
- CC-Stories, which contains a subset of CommonCrawl data filtered to match the
story-like style of Winograd schemas,
- The Pile, from which *Pile-CC, OpenWebText2, USPTO, Project Gutenberg, OpenSubtitles, Wikipedia, DM Mathematics and HackerNews* were included.
- Pushshift.io Reddit dataset that was developed in Baumgartner et al. (2020) and processed in
Roller et al. (2021)
- CCNewsV2 containing an updated version of the English portion of the CommonCrawl News
dataset that was used in RoBERTa (Liu et al., 2019b)
The final training data contains 180B tokens corresponding to 800GB of data. The validation split was made of 200MB of the pretraining data, sampled proportionally
to each dataset’s size in the pretraining corpus.
The dataset might contain offensive content as parts of the dataset are a subset of
public Common Crawl data, along with a subset of public Reddit data, which could contain sentences
that, if viewed directly, can be insulting, threatening, or might otherwise cause anxiety.
### Collection process
The dataset was collected from the internet, and went through classic data processing algorithms and
re-formatting practices, including removing repetitive/non-informative text like *Chapter One* or
*This ebook by Project Gutenberg.*
## Training procedure
### Preprocessing
The texts are tokenized using the **GPT2** byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50272. The inputs are sequences of 2048 consecutive tokens.
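As a minimal illustration of that preprocessing step (not the exact metaseq training pipeline), the byte-level BPE tokenizer can be inspected directly:
```python
from transformers import AutoTokenizer

# OPT reuses a GPT2-style byte-level BPE tokenizer.
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")

text = "Hello, I'm am conscious and"
ids = tokenizer(text)["input_ids"]
print(ids)                    # token ids fed to the model
print(tokenizer.decode(ids))  # round-trip back to text
```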
The 175B model was trained on 992 *80GB A100 GPUs*. The training duration was roughly 33 days of continuous training.
### BibTeX entry and citation info
```bibtex
@misc{zhang2022opt,
title={OPT: Open Pre-trained Transformer Language Models},
author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer},
year={2022},
eprint={2205.01068},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
tommasory/platzi-vit-model-tommasory-beans
|
tommasory
| 2022-11-14T20:01:54Z | 188 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-11-14T19:37:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: platzi-vit-model-tommasory-beans
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9924812030075187
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-vit-model-tommasory-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0343
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1441 | 3.85 | 500 | 0.0343 | 0.9925 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
gngpostalsrvc/BERiT_2000_custom_architecture_40_epochs
|
gngpostalsrvc
| 2022-11-14T19:55:50Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-14T19:08:19Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: BERiT_2000_custom_architecture_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERiT_2000_custom_architecture_3
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.6575
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 16.5165 | 0.19 | 500 | 8.9072 |
| 8.208 | 0.39 | 1000 | 7.5024 |
| 7.3849 | 0.58 | 1500 | 7.1180 |
| 7.0298 | 0.77 | 2000 | 6.8964 |
| 6.9022 | 0.97 | 2500 | 6.7857 |
| 6.7756 | 1.16 | 3000 | 6.5215 |
| 6.6462 | 1.36 | 3500 | 6.4494 |
| 6.5787 | 1.55 | 4000 | 6.3246 |
| 6.5193 | 1.74 | 4500 | 6.3231 |
| 6.4626 | 1.94 | 5000 | 6.2818 |
| 6.474 | 2.13 | 5500 | 6.3444 |
| 6.4314 | 2.32 | 6000 | 6.2374 |
| 6.3658 | 2.52 | 6500 | 6.2517 |
| 6.4031 | 2.71 | 7000 | 6.2055 |
| 6.3549 | 2.9 | 7500 | 6.2022 |
| 6.3202 | 3.1 | 8000 | 6.2163 |
| 6.3294 | 3.29 | 8500 | 6.2001 |
| 6.2981 | 3.49 | 9000 | 6.1819 |
| 6.3281 | 3.68 | 9500 | 6.1564 |
| 6.2914 | 3.87 | 10000 | 6.2122 |
| 6.3275 | 4.07 | 10500 | 6.1463 |
| 6.2637 | 4.26 | 11000 | 6.1404 |
| 6.2641 | 4.45 | 11500 | 6.2438 |
| 6.2557 | 4.65 | 12000 | 6.1504 |
| 6.2541 | 4.84 | 12500 | 6.1816 |
| 6.2465 | 5.03 | 13000 | 6.1646 |
| 6.2436 | 5.23 | 13500 | 6.1698 |
| 6.2461 | 5.42 | 14000 | 6.1665 |
| 6.2304 | 5.62 | 14500 | 6.1873 |
| 6.2235 | 5.81 | 15000 | 6.1555 |
| 6.2262 | 6.0 | 15500 | 6.1128 |
| 6.2238 | 6.2 | 16000 | 6.1545 |
| 6.2127 | 6.39 | 16500 | 6.1131 |
| 6.221 | 6.58 | 17000 | 6.1513 |
| 6.1974 | 6.78 | 17500 | 6.1712 |
| 6.175 | 6.97 | 18000 | 6.1073 |
| 6.2042 | 7.16 | 18500 | 6.1176 |
| 6.1898 | 7.36 | 19000 | 6.0470 |
| 6.1961 | 7.55 | 19500 | 6.1011 |
| 6.1883 | 7.75 | 20000 | 6.1064 |
| 6.2171 | 7.94 | 20500 | 6.1299 |
| 6.175 | 8.13 | 21000 | 6.1313 |
| 6.1757 | 8.33 | 21500 | 6.0899 |
| 6.1776 | 8.52 | 22000 | 6.1196 |
| 6.1377 | 8.71 | 22500 | 6.1554 |
| 6.1688 | 8.91 | 23000 | 6.1037 |
| 6.1555 | 9.1 | 23500 | 6.1622 |
| 6.1665 | 9.3 | 24000 | 6.0622 |
| 6.144 | 9.49 | 24500 | 6.0763 |
| 6.1394 | 9.68 | 25000 | 6.0803 |
| 6.1731 | 9.88 | 25500 | 6.1243 |
| 6.1655 | 10.07 | 26000 | 6.0929 |
| 6.1028 | 10.26 | 26500 | 6.1178 |
| 6.1145 | 10.46 | 27000 | 6.1426 |
| 6.1153 | 10.65 | 27500 | 6.1156 |
| 6.1274 | 10.84 | 28000 | 6.0922 |
| 6.1441 | 11.04 | 28500 | 6.0556 |
| 6.1179 | 11.23 | 29000 | 6.1316 |
| 6.1379 | 11.43 | 29500 | 6.0560 |
| 6.1273 | 11.62 | 30000 | 6.1321 |
| 6.1104 | 11.81 | 30500 | 6.1229 |
| 6.1156 | 12.01 | 31000 | 6.0803 |
| 6.0711 | 12.2 | 31500 | 6.0110 |
| 6.1132 | 12.39 | 32000 | 6.1489 |
| 6.065 | 12.59 | 32500 | 6.1082 |
| 6.0774 | 12.78 | 33000 | 6.0590 |
| 6.096 | 12.97 | 33500 | 6.0611 |
| 6.1172 | 13.17 | 34000 | 6.0857 |
| 6.0845 | 13.36 | 34500 | 6.0799 |
| 6.0551 | 13.56 | 35000 | 6.0768 |
| 6.0593 | 13.75 | 35500 | 6.0880 |
| 6.0605 | 13.94 | 36000 | 6.0715 |
| 6.0849 | 14.14 | 36500 | 5.9769 |
| 6.0739 | 14.33 | 37000 | 6.0450 |
| 6.0721 | 14.52 | 37500 | 6.0144 |
| 6.0778 | 14.72 | 38000 | 6.0817 |
| 6.067 | 14.91 | 38500 | 6.0142 |
| 6.0456 | 15.1 | 39000 | 6.1092 |
| 6.0624 | 15.3 | 39500 | 6.0543 |
| 6.0556 | 15.49 | 40000 | 6.0204 |
| 6.0358 | 15.69 | 40500 | 6.0146 |
| 6.0397 | 15.88 | 41000 | 6.0312 |
| 6.0352 | 16.07 | 41500 | 6.0761 |
| 6.0356 | 16.27 | 42000 | 6.0177 |
| 6.0149 | 16.46 | 42500 | 6.0044 |
| 5.9803 | 16.65 | 43000 | 6.0192 |
| 6.0615 | 16.85 | 43500 | 6.0227 |
| 6.0029 | 17.04 | 44000 | 6.0205 |
| 6.0005 | 17.23 | 44500 | 6.0298 |
| 6.0087 | 17.43 | 45000 | 5.9892 |
| 5.9895 | 17.62 | 45500 | 5.9715 |
| 6.0123 | 17.82 | 46000 | 6.0088 |
| 6.0015 | 18.01 | 46500 | 5.9670 |
| 5.9764 | 18.2 | 47000 | 5.9593 |
| 5.9399 | 18.4 | 47500 | 6.0001 |
| 5.9928 | 18.59 | 48000 | 5.9966 |
| 5.9823 | 18.78 | 48500 | 5.8836 |
| 5.9442 | 18.98 | 49000 | 5.9294 |
| 5.9532 | 19.17 | 49500 | 5.9487 |
| 5.9551 | 19.36 | 50000 | 5.9434 |
| 5.996 | 19.56 | 50500 | 5.9254 |
| 5.9468 | 19.75 | 51000 | 5.9532 |
| 5.9349 | 19.95 | 51500 | 5.9212 |
| 5.9155 | 20.14 | 52000 | 5.9140 |
| 5.9382 | 20.33 | 52500 | 5.8989 |
| 5.9538 | 20.53 | 53000 | 5.9010 |
| 5.9466 | 20.72 | 53500 | 5.8780 |
| 5.9112 | 20.91 | 54000 | 5.8883 |
| 5.908 | 21.11 | 54500 | 5.9060 |
| 5.9228 | 21.3 | 55000 | 5.8949 |
| 5.9428 | 21.49 | 55500 | 5.8879 |
| 5.8808 | 21.69 | 56000 | 5.9383 |
| 5.9311 | 21.88 | 56500 | 5.8401 |
| 5.936 | 22.08 | 57000 | 5.9064 |
| 5.8951 | 22.27 | 57500 | 5.8957 |
| 5.8832 | 22.46 | 58000 | 5.8583 |
| 5.8919 | 22.66 | 58500 | 5.8893 |
| 5.8884 | 22.85 | 59000 | 5.8666 |
| 5.9072 | 23.04 | 59500 | 5.8368 |
| 5.8971 | 23.24 | 60000 | 5.8299 |
| 5.868 | 23.43 | 60500 | 5.8595 |
| 5.8967 | 23.63 | 61000 | 5.8722 |
| 5.8746 | 23.82 | 61500 | 5.8307 |
| 5.8731 | 24.01 | 62000 | 5.8595 |
| 5.8625 | 24.21 | 62500 | 5.7892 |
| 5.8877 | 24.4 | 63000 | 5.8079 |
| 5.9033 | 24.59 | 63500 | 5.7787 |
| 5.8676 | 24.79 | 64000 | 5.8450 |
| 5.889 | 24.98 | 64500 | 5.8286 |
| 5.8732 | 25.17 | 65000 | 5.8433 |
| 5.8684 | 25.37 | 65500 | 5.7564 |
| 5.8516 | 25.56 | 66000 | 5.8181 |
| 5.835 | 25.76 | 66500 | 5.8275 |
| 5.8523 | 25.95 | 67000 | 5.7860 |
| 5.8612 | 26.14 | 67500 | 5.8005 |
| 5.8715 | 26.34 | 68000 | 5.7788 |
| 5.8191 | 26.53 | 68500 | 5.8558 |
| 5.8286 | 26.72 | 69000 | 5.7973 |
| 5.8415 | 26.92 | 69500 | 5.7792 |
| 5.855 | 27.11 | 70000 | 5.8006 |
| 5.8384 | 27.3 | 70500 | 5.7673 |
| 5.825 | 27.5 | 71000 | 5.8130 |
| 5.8243 | 27.69 | 71500 | 5.7763 |
| 5.8242 | 27.89 | 72000 | 5.7433 |
| 5.8251 | 28.08 | 72500 | 5.7670 |
| 5.8022 | 28.27 | 73000 | 5.8067 |
| 5.8014 | 28.47 | 73500 | 5.7979 |
| 5.8013 | 28.66 | 74000 | 5.7940 |
| 5.8154 | 28.85 | 74500 | 5.7362 |
| 5.8046 | 29.05 | 75000 | 5.7319 |
| 5.8222 | 29.24 | 75500 | 5.7902 |
| 5.7801 | 29.43 | 76000 | 5.7563 |
| 5.7932 | 29.63 | 76500 | 5.7724 |
| 5.7543 | 29.82 | 77000 | 5.8041 |
| 5.7936 | 30.02 | 77500 | 5.8168 |
| 5.8053 | 30.21 | 78000 | 5.7699 |
| 5.8103 | 30.4 | 78500 | 5.7276 |
| 5.8019 | 30.6 | 79000 | 5.7498 |
| 5.7647 | 30.79 | 79500 | 5.7413 |
| 5.7424 | 30.98 | 80000 | 5.6823 |
| 5.8021 | 31.18 | 80500 | 5.7597 |
| 5.7717 | 31.37 | 81000 | 5.7509 |
| 5.7908 | 31.56 | 81500 | 5.7664 |
| 5.8212 | 31.76 | 82000 | 5.7693 |
| 5.7733 | 31.95 | 82500 | 5.6974 |
| 5.7672 | 32.15 | 83000 | 5.6966 |
| 5.7533 | 32.34 | 83500 | 5.7002 |
| 5.7898 | 32.53 | 84000 | 5.7604 |
| 5.7422 | 32.73 | 84500 | 5.7043 |
| 5.7864 | 32.92 | 85000 | 5.6966 |
| 5.7563 | 33.11 | 85500 | 5.7300 |
| 5.7747 | 33.31 | 86000 | 5.6817 |
| 5.7718 | 33.5 | 86500 | 5.7329 |
| 5.7416 | 33.69 | 87000 | 5.7174 |
| 5.7838 | 33.89 | 87500 | 5.7136 |
| 5.7499 | 34.08 | 88000 | 5.6524 |
| 5.7716 | 34.28 | 88500 | 5.6702 |
| 5.7486 | 34.47 | 89000 | 5.7338 |
| 5.7932 | 34.66 | 89500 | 5.6822 |
| 5.7593 | 34.86 | 90000 | 5.7193 |
| 5.759 | 35.05 | 90500 | 5.7241 |
| 5.749 | 35.24 | 91000 | 5.6964 |
| 5.7548 | 35.44 | 91500 | 5.6691 |
| 5.7843 | 35.63 | 92000 | 5.7158 |
| 5.7464 | 35.82 | 92500 | 5.6574 |
| 5.735 | 36.02 | 93000 | 5.6470 |
| 5.7466 | 36.21 | 93500 | 5.6833 |
| 5.74 | 36.41 | 94000 | 5.6346 |
| 5.7464 | 36.6 | 94500 | 5.6980 |
| 5.7194 | 36.79 | 95000 | 5.6459 |
| 5.7328 | 36.99 | 95500 | 5.6634 |
| 5.7392 | 37.18 | 96000 | 5.7234 |
| 5.7422 | 37.37 | 96500 | 5.7338 |
| 5.7469 | 37.57 | 97000 | 5.7001 |
| 5.74 | 37.76 | 97500 | 5.7040 |
| 5.7321 | 37.96 | 98000 | 5.6562 |
| 5.7153 | 38.15 | 98500 | 5.6962 |
| 5.7066 | 38.34 | 99000 | 5.7527 |
| 5.7465 | 38.54 | 99500 | 5.6827 |
| 5.7364 | 38.73 | 100000 | 5.7359 |
| 5.7342 | 38.92 | 100500 | 5.6403 |
| 5.7281 | 39.12 | 101000 | 5.7184 |
| 5.7213 | 39.31 | 101500 | 5.6506 |
| 5.7069 | 39.5 | 102000 | 5.6693 |
| 5.7109 | 39.7 | 102500 | 5.6412 |
| 5.7142 | 39.89 | 103000 | 5.6575 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
PrimeQA/nq_tydi_sq1-reader-xlmr_large-20221110
|
PrimeQA
| 2022-11-14T19:47:16Z | 36 | 1 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"MRC",
"TyDiQA",
"Natural Questions",
"SQuAD",
"xlm-roberta-large",
"multilingual",
"arxiv:1606.05250",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-11-14T19:20:52Z |
---
license: apache-2.0
tags:
- MRC
- TyDiQA
- Natural Questions
- SQuAD
- xlm-roberta-large
language:
- multilingual
---
*Task*: MRC
# Model description
An XLM-RoBERTa Large reading comprehension model trained from the combination of TyDi, NQ, and SQuAD v1 datasets, starting from a fine-tuned [Tydi xlm-roberta-large](https://huggingface.co/PrimeQA/tydiqa-primary-task-xlm-roberta-large) model.
## Intended uses & limitations
You can use the raw model for the reading comprehension task. Biases associated with the pre-existing language model, xlm-roberta-large, that we used may be present in our fine-tuned model.
## Usage
You can use this model directly with the [PrimeQA](https://github.com/primeqa/primeqa) pipeline for reading comprehension [squad.ipynb](https://github.com/primeqa/primeqa/blob/main/notebooks/mrc/squad.ipynb).
### BibTeX entry and citation info
```bibtex
@article{kwiatkowski-etal-2019-natural,
title = "Natural Questions: A Benchmark for Question Answering Research",
author = "Kwiatkowski, Tom and
Palomaki, Jennimaria and
Redfield, Olivia and
Collins, Michael and
Parikh, Ankur and
Alberti, Chris and
Epstein, Danielle and
Polosukhin, Illia and
Devlin, Jacob and
Lee, Kenton and
Toutanova, Kristina and
Jones, Llion and
Kelcey, Matthew and
Chang, Ming-Wei and
Dai, Andrew M. and
Uszkoreit, Jakob and
Le, Quoc and
Petrov, Slav",
journal = "Transactions of the Association for Computational Linguistics",
volume = "7",
year = "2019",
address = "Cambridge, MA",
publisher = "MIT Press",
url = "https://aclanthology.org/Q19-1026",
doi = "10.1162/tacl_a_00276",
pages = "452--466",
}
```
```bibtex
@article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
}
```
```bibtex
@article{clark-etal-2020-tydi,
title = "{T}y{D}i {QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages",
author = "Clark, Jonathan H. and
Choi, Eunsol and
Collins, Michael and
Garrette, Dan and
Kwiatkowski, Tom and
Nikolaev, Vitaly and
Palomaki, Jennimaria",
journal = "Transactions of the Association for Computational Linguistics",
volume = "8",
year = "2020",
address = "Cambridge, MA",
publisher = "MIT Press",
url = "https://aclanthology.org/2020.tacl-1.30",
doi = "10.1162/tacl_a_00317",
pages = "454--470",
}
```
|
Olusegun/autotrain-disease_tokens-2095367455
|
Olusegun
| 2022-11-14T19:41:09Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"token-classification",
"en",
"dataset:Olusegun/autotrain-data-disease_tokens",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-14T19:40:14Z |
---
tags:
- autotrain
- token-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Olusegun/autotrain-data-disease_tokens
co2_eq_emissions:
emissions: 1.569698418187329
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 2095367455
- CO2 Emissions (in grams): 1.5697
## Validation Metrics
- Loss: 0.000
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- F1: 1.000
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Olusegun/autotrain-disease_tokens-2095367455
```
Or Python API:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("Olusegun/autotrain-disease_tokens-2095367455", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Olusegun/autotrain-disease_tokens-2095367455", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
AlekseyKorshuk/1.3b-synthetic-v1-after-book
|
AlekseyKorshuk
| 2022-11-14T19:30:25Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"dataset:AlekseyKorshuk/dalio-synthetic-io",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-14T19:26:25Z |
---
tags:
- generated_from_trainer
datasets:
- AlekseyKorshuk/dalio-synthetic-io
metrics:
- accuracy
model-index:
- name: 1.3b-synthetic-v1-after-book
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: AlekseyKorshuk/dalio-synthetic-io
type: AlekseyKorshuk/dalio-synthetic-io
metrics:
- name: Accuracy
type: accuracy
value: 0.07541139670882632
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1.3b-synthetic-v1-after-book
This model is a fine-tuned version of [/models/1.3b-dalio-principles-book](https://huggingface.co//models/1.3b-dalio-principles-book) on the AlekseyKorshuk/dalio-synthetic-io dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9805
- Accuracy: 0.0754
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0841 | 0.1 | 1 | 2.0254 | 0.0759 |
| 2.062 | 0.2 | 2 | 2.0254 | 0.0759 |
| 2.1509 | 0.3 | 3 | 1.9941 | 0.0761 |
| 2.1206 | 0.4 | 4 | 1.9941 | 0.0756 |
| 2.2087 | 0.5 | 5 | 1.9941 | 0.0757 |
| 2.0337 | 0.6 | 6 | 1.9902 | 0.0755 |
| 2.026 | 0.7 | 7 | 1.9854 | 0.0755 |
| 2.1879 | 0.8 | 8 | 1.9834 | 0.0756 |
| 2.1052 | 0.9 | 9 | 1.9824 | 0.0754 |
| 2.046 | 1.0 | 10 | 1.9805 | 0.0754 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
LadaCroft/finetuning-sentiment-model-3000-samples
|
LadaCroft
| 2022-11-14T19:29:11Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-14T19:14:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.906
- name: F1
type: f1
value: 0.9072978303747534
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2928
- Accuracy: 0.906
- F1: 0.9073
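A minimal usage sketch with the standard `transformers` text-classification pipeline:
```python
from transformers import pipeline

# Loads the fine-tuned DistilBERT sentiment classifier from the Hub.
classifier = pipeline("text-classification", model="LadaCroft/finetuning-sentiment-model-3000-samples")
print(classifier("This movie was an absolute delight from start to finish."))
```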
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
AlekseyKorshuk/1.3b-handwritten-v1-after-book
|
AlekseyKorshuk
| 2022-11-14T19:23:45Z | 96 | 1 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"dataset:AlekseyKorshuk/dalio-handwritten-io",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-14T19:19:28Z |
---
tags:
- generated_from_trainer
datasets:
- AlekseyKorshuk/dalio-handwritten-io
metrics:
- accuracy
model-index:
- name: 1.3b-handwritten-v1-after-book
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: AlekseyKorshuk/dalio-handwritten-io
type: AlekseyKorshuk/dalio-handwritten-io
metrics:
- name: Accuracy
type: accuracy
value: 0.06691769057999736
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1.3b-handwritten-v1-after-book
This model is a fine-tuned version of [AlekseyKorshuk/1.3b-dalio-principles-book](https://huggingface.co/AlekseyKorshuk/1.3b-dalio-principles-book) on the AlekseyKorshuk/dalio-handwritten-io dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0566
- Accuracy: 0.0669
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.3721 | 0.2 | 1 | 2.2148 | 0.0641 |
| 2.241 | 0.4 | 2 | 2.2148 | 0.0641 |
| 2.469 | 0.6 | 3 | 2.1348 | 0.0653 |
| 2.3735 | 0.8 | 4 | 2.1309 | 0.0648 |
| 2.2755 | 1.0 | 5 | 2.1133 | 0.0652 |
| 2.0428 | 1.2 | 6 | 2.0938 | 0.0659 |
| 1.764 | 1.4 | 7 | 2.0781 | 0.0659 |
| 1.7458 | 1.6 | 8 | 2.0781 | 0.0661 |
| 1.868 | 1.8 | 9 | 2.0820 | 0.0660 |
| 1.9548 | 2.0 | 10 | 2.0703 | 0.0663 |
| 1.6772 | 2.2 | 11 | 2.0605 | 0.0665 |
| 1.3997 | 2.4 | 12 | 2.0566 | 0.0668 |
| 1.3717 | 2.6 | 13 | 2.0547 | 0.0669 |
| 1.5284 | 2.8 | 14 | 2.0547 | 0.0667 |
| 1.2264 | 3.0 | 15 | 2.0566 | 0.0669 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
AlekseyKorshuk/1.3b-dalio-principles-book
|
AlekseyKorshuk
| 2022-11-14T19:15:01Z | 97 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-14T19:10:54Z |
---
license: other
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 1.3b-dalio-principles-book
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1.3b-dalio-principles-book
This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4512
- Accuracy: 0.4741
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.6914 | 0.14 | 1 | 2.6895 | 0.4477 |
| 2.6897 | 0.29 | 2 | 2.6895 | 0.4477 |
| 2.668 | 0.43 | 3 | 2.7031 | 0.4403 |
| 2.7434 | 0.57 | 4 | 2.5918 | 0.4533 |
| 2.6265 | 0.71 | 5 | 2.5410 | 0.4618 |
| 2.5259 | 0.86 | 6 | 2.5156 | 0.4641 |
| 2.5566 | 1.0 | 7 | 2.4902 | 0.4667 |
| 2.2317 | 1.14 | 8 | 2.4766 | 0.4707 |
| 2.2397 | 1.29 | 9 | 2.4727 | 0.4705 |
| 2.0162 | 1.43 | 10 | 2.4766 | 0.4690 |
| 2.0537 | 1.57 | 11 | 2.4805 | 0.4707 |
| 2.1432 | 1.71 | 12 | 2.4707 | 0.4714 |
| 2.0822 | 1.86 | 13 | 2.4570 | 0.4724 |
| 1.9056 | 2.0 | 14 | 2.4512 | 0.4741 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
gngpostalsrvc/BERiT_2000_custom_architecture_20_epochs
|
gngpostalsrvc
| 2022-11-14T18:52:14Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-14T18:30:34Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: BERiT_2000_custom_architecture_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERiT_2000_custom_architecture_2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.9854
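A minimal usage sketch with the `fill-mask` pipeline, assuming the tokenizer was pushed with the weights and uses RoBERTa's `<mask>` token; the example sentence is only a placeholder, since the training language is not documented here:
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="gngpostalsrvc/BERiT_2000_custom_architecture_20_epochs")
print(unmasker("The king built a great <mask>."))  # placeholder input
```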
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 16.4316 | 0.19 | 500 | 9.0685 |
| 8.2958 | 0.39 | 1000 | 7.6483 |
| 7.4324 | 0.58 | 1500 | 7.1707 |
| 7.0054 | 0.77 | 2000 | 6.8592 |
| 6.8522 | 0.97 | 2500 | 6.7710 |
| 6.7538 | 1.16 | 3000 | 6.5845 |
| 6.634 | 1.36 | 3500 | 6.4525 |
| 6.5784 | 1.55 | 4000 | 6.3129 |
| 6.5135 | 1.74 | 4500 | 6.3312 |
| 6.4552 | 1.94 | 5000 | 6.2546 |
| 6.4685 | 2.13 | 5500 | 6.2857 |
| 6.4356 | 2.32 | 6000 | 6.2285 |
| 6.3566 | 2.52 | 6500 | 6.2295 |
| 6.394 | 2.71 | 7000 | 6.1790 |
| 6.3412 | 2.9 | 7500 | 6.1880 |
| 6.3115 | 3.1 | 8000 | 6.2130 |
| 6.3163 | 3.29 | 8500 | 6.1831 |
| 6.2978 | 3.49 | 9000 | 6.1945 |
| 6.3082 | 3.68 | 9500 | 6.1485 |
| 6.2729 | 3.87 | 10000 | 6.1752 |
| 6.307 | 4.07 | 10500 | 6.1331 |
| 6.2494 | 4.26 | 11000 | 6.1082 |
| 6.2523 | 4.45 | 11500 | 6.2110 |
| 6.2455 | 4.65 | 12000 | 6.1326 |
| 6.2399 | 4.84 | 12500 | 6.1779 |
| 6.2297 | 5.03 | 13000 | 6.1587 |
| 6.2374 | 5.23 | 13500 | 6.1458 |
| 6.2265 | 5.42 | 14000 | 6.1370 |
| 6.2222 | 5.62 | 14500 | 6.1511 |
| 6.2209 | 5.81 | 15000 | 6.1320 |
| 6.2146 | 6.0 | 15500 | 6.1124 |
| 6.214 | 6.2 | 16000 | 6.1439 |
| 6.1907 | 6.39 | 16500 | 6.0981 |
| 6.2119 | 6.58 | 17000 | 6.1465 |
| 6.1858 | 6.78 | 17500 | 6.1594 |
| 6.1552 | 6.97 | 18000 | 6.0742 |
| 6.1926 | 7.16 | 18500 | 6.1176 |
| 6.1813 | 7.36 | 19000 | 6.0107 |
| 6.1812 | 7.55 | 19500 | 6.0852 |
| 6.1852 | 7.75 | 20000 | 6.0845 |
| 6.1945 | 7.94 | 20500 | 6.1260 |
| 6.1542 | 8.13 | 21000 | 6.1032 |
| 6.1685 | 8.33 | 21500 | 6.0650 |
| 6.1619 | 8.52 | 22000 | 6.1028 |
| 6.1279 | 8.71 | 22500 | 6.1269 |
| 6.1575 | 8.91 | 23000 | 6.0793 |
| 6.1401 | 9.1 | 23500 | 6.1479 |
| 6.159 | 9.3 | 24000 | 6.0319 |
| 6.1227 | 9.49 | 24500 | 6.0677 |
| 6.1201 | 9.68 | 25000 | 6.0527 |
| 6.1473 | 9.88 | 25500 | 6.1305 |
| 6.1539 | 10.07 | 26000 | 6.1079 |
| 6.091 | 10.26 | 26500 | 6.1219 |
| 6.1015 | 10.46 | 27000 | 6.1317 |
| 6.1048 | 10.65 | 27500 | 6.1149 |
| 6.0955 | 10.84 | 28000 | 6.1216 |
| 6.129 | 11.04 | 28500 | 6.0427 |
| 6.1007 | 11.23 | 29000 | 6.1289 |
| 6.1266 | 11.43 | 29500 | 6.0564 |
| 6.1203 | 11.62 | 30000 | 6.1143 |
| 6.1038 | 11.81 | 30500 | 6.0957 |
| 6.0989 | 12.01 | 31000 | 6.0707 |
| 6.0571 | 12.2 | 31500 | 6.0013 |
| 6.1017 | 12.39 | 32000 | 6.1356 |
| 6.0649 | 12.59 | 32500 | 6.0981 |
| 6.0704 | 12.78 | 33000 | 6.0588 |
| 6.088 | 12.97 | 33500 | 6.0796 |
| 6.1112 | 13.17 | 34000 | 6.0809 |
| 6.0888 | 13.36 | 34500 | 6.0776 |
| 6.0482 | 13.56 | 35000 | 6.0710 |
| 6.0588 | 13.75 | 35500 | 6.0877 |
| 6.0517 | 13.94 | 36000 | 6.0650 |
| 6.0832 | 14.14 | 36500 | 5.9890 |
| 6.0655 | 14.33 | 37000 | 6.0445 |
| 6.0705 | 14.52 | 37500 | 6.0037 |
| 6.0789 | 14.72 | 38000 | 6.0777 |
| 6.0645 | 14.91 | 38500 | 6.0475 |
| 6.0347 | 15.1 | 39000 | 6.1148 |
| 6.0478 | 15.3 | 39500 | 6.0639 |
| 6.0638 | 15.49 | 40000 | 6.0373 |
| 6.0377 | 15.69 | 40500 | 6.0116 |
| 6.0402 | 15.88 | 41000 | 6.0483 |
| 6.0382 | 16.07 | 41500 | 6.1025 |
| 6.039 | 16.27 | 42000 | 6.0488 |
| 6.0232 | 16.46 | 42500 | 6.0219 |
| 5.9946 | 16.65 | 43000 | 6.0541 |
| 6.063 | 16.85 | 43500 | 6.0436 |
| 6.0141 | 17.04 | 44000 | 6.0609 |
| 6.0196 | 17.23 | 44500 | 6.0551 |
| 6.0331 | 17.43 | 45000 | 6.0576 |
| 6.0174 | 17.62 | 45500 | 6.0498 |
| 6.0366 | 17.82 | 46000 | 6.0782 |
| 6.0299 | 18.01 | 46500 | 6.0196 |
| 6.0009 | 18.2 | 47000 | 6.0262 |
| 5.9758 | 18.4 | 47500 | 6.0824 |
| 6.0285 | 18.59 | 48000 | 6.0799 |
| 6.025 | 18.78 | 48500 | 5.9511 |
| 5.9806 | 18.98 | 49000 | 6.0086 |
| 5.9915 | 19.17 | 49500 | 6.0089 |
| 5.9957 | 19.36 | 50000 | 6.0330 |
| 6.0311 | 19.56 | 50500 | 6.0083 |
| 5.995 | 19.75 | 51000 | 6.0394 |
| 6.0034 | 19.95 | 51500 | 5.9854 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
meongracun/nmt-ted-id-en-lr_1e-3-ep_30-seq_128-bs_32
|
meongracun
| 2022-11-14T18:41:43Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-14T16:10:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: nmt-ted-id-en-lr_1e-3-ep_30-seq_128-bs_32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nmt-ted-id-en-lr_1e-3-ep_30-seq_128-bs_32
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5777
- Bleu: 18.3981
- Gen Len: 16.2277
- Meteor: 0.3652
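A minimal usage sketch via the `text2text-generation` pipeline, assuming Indonesian-to-English input as the model name suggests; whether a task prefix is needed depends on how the fine-tuning data were formatted:
```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="meongracun/nmt-ted-id-en-lr_1e-3-ep_30-seq_128-bs_32")
print(translator("Saya sangat senang bertemu dengan Anda.", max_length=128))  # Indonesian input
```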
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | Meteor |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:------:|
| 2.1196 | 1.0 | 1250 | 1.6942 | 13.7333 | 16.5568 | 0.3053 |
| 1.7331 | 2.0 | 2500 | 1.5393 | 15.6555 | 16.3938 | 0.3341 |
| 1.5842 | 3.0 | 3750 | 1.4821 | 16.2862 | 16.3699 | 0.3403 |
| 1.4518 | 4.0 | 5000 | 1.4562 | 17.1158 | 16.2073 | 0.3518 |
| 1.3649 | 5.0 | 6250 | 1.4416 | 17.3302 | 16.3344 | 0.355 |
| 1.2987 | 6.0 | 7500 | 1.4276 | 17.3334 | 16.3913 | 0.3547 |
| 1.2227 | 7.0 | 8750 | 1.4411 | 17.9415 | 16.2941 | 0.3612 |
| 1.1766 | 8.0 | 10000 | 1.4435 | 18.0776 | 16.2809 | 0.362 |
| 1.1119 | 9.0 | 11250 | 1.4510 | 18.0156 | 16.2834 | 0.3628 |
| 1.0672 | 10.0 | 12500 | 1.4566 | 18.276 | 16.1982 | 0.3646 |
| 1.0194 | 11.0 | 13750 | 1.4728 | 18.2417 | 16.2589 | 0.364 |
| 0.988 | 12.0 | 15000 | 1.4843 | 18.385 | 16.2671 | 0.3644 |
| 0.9445 | 13.0 | 16250 | 1.4896 | 18.1065 | 16.2321 | 0.3629 |
| 0.9156 | 14.0 | 17500 | 1.5102 | 18.249 | 16.2334 | 0.364 |
| 0.8888 | 15.0 | 18750 | 1.5273 | 18.1876 | 16.2454 | 0.3641 |
| 0.8624 | 16.0 | 20000 | 1.5354 | 18.2708 | 16.2425 | 0.3638 |
| 0.8364 | 17.0 | 21250 | 1.5449 | 18.17 | 16.268 | 0.3645 |
| 0.8133 | 18.0 | 22500 | 1.5578 | 18.2573 | 16.2422 | 0.3639 |
| 0.7994 | 19.0 | 23750 | 1.5678 | 18.3108 | 16.2158 | 0.3648 |
| 0.7823 | 20.0 | 25000 | 1.5777 | 18.3981 | 16.2277 | 0.3652 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
KoboldAI/GPT-J-6B-Skein
|
KoboldAI
| 2022-11-14T18:35:26Z | 1,505 | 14 |
transformers
|
[
"transformers",
"pytorch",
"gptj",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
---
tags:
- text-generation
---
# Model Card for GPT-J-6B-Skein
# Model Details
## Model Description
- **Developed by:** KoboldAI
- **Shared by [Optional]:** KoboldAI
- **Model type:** Text Generation
- **Language(s) (NLP):** English
- **License:** Apache License 2.0
- **Related Models:** [GPT-J 6B](https://huggingface.co/EleutherAI/gpt-j-6B?text=My+name+is+Mariama%2C+my+favorite)
- **Parent Model:** GPT-J
- **Resources for more information:**
- [GitHub Repo](https://github.com/kingoflolz/mesh-transformer-jax)
- [Associated Model Doc](https://huggingface.co/docs/transformers/main/en/model_doc/gptj#transformers.GPTJForCausalLM)
# Uses
## Direct Use
This model is designed for creative story generation. It can understand both free-form text and text written in interactive fiction style with actions starting with "> You", such as:
```
You become aware of her breathing -- the slight expansion of her ribs, the soft exhalation -- natural, and yet somehow studied. "Ah -- by the way," she says, in a way that utterly fails to be casual, "have you seen the artist out there? -- My artist, that is."
"No," you respond, uneasy. You open your mouth and close it again.
> You ask about the experience of waking up
```
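A minimal generation sketch for this prompting style, with illustrative (not prescribed) sampling settings:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("KoboldAI/GPT-J-6B-Skein")
model = AutoModelForCausalLM.from_pretrained("KoboldAI/GPT-J-6B-Skein")

# Interactive-fiction style prompt: actions start with "> You".
prompt = "You stand at the edge of the forest, lantern in hand.\n> You step onto the path\n"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```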
## Downstream Use [Optional]
More information needed
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.
GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
See the [GPT-J 6B model card](https://huggingface.co/EleutherAI/gpt-j-6B?text=My+name+is+Mariama%2C+my+favorite) for more information.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
The data are mostly comprised of light novels from the dataset of the [KoboldAI/GPT-Neo-2.7B-Horni-LN](https://huggingface.co/KoboldAI/GPT-Neo-2.7B-Horni-LN) model and assorted interactive fiction. The dataset uses `[Themes: <comma-separated list of genres>]` for tagging, which means that if similar text is placed in the context, the model will attempt to generate text in the specified style(s). For more details about the dataset, consult [this document](https://wandb.ai/ve-forbryderne/skein/runs/files/files/datasets/README.txt).
## Training Procedure
### Preprocessing
The data were preprocessed with the Python package ftfy to eliminate, as far as possible, non-ASCII punctuation characters and encoding errors. The interactive fiction in the dataset was also deduplicated, since interactive fiction logs often contain duplicate text from, for example, visiting the same in-game area several times. spaCy was used for grammatical analysis in order to reformat the actions commonly found in old text adventure games into more complete sentences. Some content, such as "thank you for playing" messages and title messages, was also removed manually.
### Speeds, Sizes, Times
Training took approximately 14 hours in total, with the average speed being 5265 tokens per second.
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
More information needed
### Factors
### Metrics
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
https://github.com/kingoflolz/mesh-transformer-jax
# Citation
**BibTeX:**
```
@misc{mesh-transformer-jax,
author = {Wang, Ben},
title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
KoboldAI in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("KoboldAI/GPT-J-6B-Skein")
model = AutoModelForCausalLM.from_pretrained("KoboldAI/GPT-J-6B-Skein")
```
</details>
|
sd-concepts-library/cancer_style
|
sd-concepts-library
| 2022-11-14T17:59:23Z | 0 | 6 | null |
[
"license:mit",
"region:us"
] | null | 2022-11-12T13:21:52Z |
---
license: mit
---
### Cancer_style on Stable Diffusion
This is the `<cancer_style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
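Alternatively, a minimal `diffusers` sketch, assuming a recent release that provides `load_textual_inversion` and using `runwayml/stable-diffusion-v1-5` as an example base checkpoint:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/cancer_style")  # registers the <cancer_style> token
image = pipe("a cathedral in <cancer_style> style").images[0]
image.save("cancer_style_cathedral.png")
```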
Here is the new concept you will be able to use as a `style`:








|
RoniXZONE/distilbert-base-uncased-finetuned-squad
|
RoniXZONE
| 2022-11-14T17:47:13Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-14T15:49:23Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: RoniXZONE/distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# RoniXZONE/distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.9645
- Train End Logits Accuracy: 0.7308
- Train Start Logits Accuracy: 0.6936
- Validation Loss: 1.1246
- Validation End Logits Accuracy: 0.7006
- Validation Start Logits Accuracy: 0.6612
- Epoch: 1
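A minimal usage sketch; the repo ships TensorFlow weights, so the pipeline is pinned to the TF backend:
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="RoniXZONE/distilbert-base-uncased-finetuned-squad",
    framework="tf",  # TF weights only, per the repo tags
)
print(qa(question="Where is the Eiffel Tower?", context="The Eiffel Tower is located in Paris, France."))
```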
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11064, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.5015 | 0.6068 | 0.5715 | 1.1471 | 0.6864 | 0.6508 | 0 |
| 0.9645 | 0.7308 | 0.6936 | 1.1246 | 0.7006 | 0.6612 | 1 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
aphostrophy/jessonyo-NLP-finetuned-ja-to-en
|
aphostrophy
| 2022-11-14T17:36:32Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-14T16:11:09Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: aphostrophy/jessonyo-NLP-finetuned-ja-to-en
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# aphostrophy/jessonyo-NLP-finetuned-ja-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.9803
- Validation Loss: 1.1675
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 11088, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.6035 | 1.2746 | 0 |
| 1.1843 | 1.1934 | 1 |
| 0.9803 | 1.1675 | 2 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
nateraw/mit-b0-finetuned-sidewalks
|
nateraw
| 2022-11-14T17:32:40Z | 34 | 0 |
transformers
|
[
"transformers",
"tf",
"segformer",
"generated_from_keras_callback",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2022-11-14T17:00:32Z |
---
license: other
tags:
- generated_from_keras_callback
model-index:
- name: nateraw/mit-b0-finetuned-sidewalks
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nateraw/mit-b0-finetuned-sidewalks
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5197
- Validation Loss: 0.6268
- Validation Mean Iou: 0.2719
- Validation Mean Accuracy: 0.3442
- Validation Overall Accuracy: 0.8180
- Validation Per Category Iou: [0. 0.62230678 0.81645513 0.18616589 0.66669478 0.30574734 nan 0.36681201 0.31128062 0. 0.76635363 0. 0. nan 0. 0.37874505 0. 0. 0.68193241 0. 0.48867838 0.25809644 0. nan 0. 0.25765818 0. 0. 0.81965205 0.71604385 0.9214592 0. 0.00636635 0.12957446 0. ]
- Validation Per Category Accuracy: [0. 0.89469845 0.88320521 0.45231002 0.72104833 0.3386303 nan 0.53522723 0.72026843 0. 0.93197124 0. 0. nan 0. 0.45525816 0. 0. 0.87276184 0. 0.60762821 0.29654901 0. nan 0. 0.32162193 0. 0. 0.90797988 0.89199119 0.96388697 0. 0.00646084 0.21171965 0. ]
- Epoch: 5
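A minimal TensorFlow inference sketch, assuming the SegFormer preprocessor config is available in this repo (otherwise it can be loaded from the `nvidia/mit-b0` base):
```python
from PIL import Image
from transformers import SegformerFeatureExtractor, TFSegformerForSemanticSegmentation

feature_extractor = SegformerFeatureExtractor.from_pretrained("nateraw/mit-b0-finetuned-sidewalks")
model = TFSegformerForSemanticSegmentation.from_pretrained("nateraw/mit-b0-finetuned-sidewalks")

image = Image.open("sidewalk_scene.jpg")  # hypothetical local image
inputs = feature_extractor(images=image, return_tensors="tf")
logits = model(**inputs).logits  # (batch, num_labels, height / 4, width / 4)
```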
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 6e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Validation Mean Iou | Validation Mean Accuracy | Validation Overall Accuracy | Validation Per Category Iou | Validation Per Category Accuracy | Epoch |
|:----------:|:---------------:|:-------------------:|:------------------------:|:---------------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----:|
| 1.3430 | 0.8858 | 0.1724 | 0.2253 | 0.7508 | [0.00000000e+00 5.02535817e-01 7.94050536e-01 1.37476079e-01
5.28949130e-01 1.76391302e-01 nan 1.19967229e-01
0.00000000e+00 0.00000000e+00 6.61310784e-01 0.00000000e+00
0.00000000e+00 nan 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 5.06634036e-01 0.00000000e+00
7.22567226e-02 5.35294630e-03 0.00000000e+00 0.00000000e+00
0.00000000e+00 1.53949868e-02 0.00000000e+00 0.00000000e+00
7.37842004e-01 5.78989440e-01 8.52258994e-01 0.00000000e+00
0.00000000e+00 6.16858377e-05 0.00000000e+00] | [0.00000000e+00 5.80613096e-01 9.43852033e-01 1.50019637e-01
5.77268577e-01 3.25241508e-01 nan 1.68319967e-01
0.00000000e+00 0.00000000e+00 8.60308871e-01 0.00000000e+00
0.00000000e+00 nan 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 9.04260401e-01 0.00000000e+00
7.74112939e-02 5.58025588e-03 0.00000000e+00 nan
0.00000000e+00 1.56055377e-02 0.00000000e+00 0.00000000e+00
8.41648672e-01 8.58416118e-01 9.02457570e-01 0.00000000e+00
0.00000000e+00 6.18892982e-05 0.00000000e+00] | 0 |
| 0.8402 | 0.7211 | 0.2203 | 0.2900 | 0.7927 | [0. 0.60561012 0.80467888 0.10134538 0.57674712 0.21967639
nan 0.279315 0.28998136 0. 0.71924852 0.
0. nan 0. 0.10241989 0. 0.
0.60537245 0. 0.37966409 0.0624908 0. 0.
0. 0.11869763 0. 0. 0.79675107 0.70541969
0.89177953 0. 0. 0.01097213 0. ] | [0. 0.70687024 0.92710849 0.47653578 0.6809956 0.28562204
nan 0.35954555 0.53804171 0. 0.87451178 0.
0. nan 0. 0.10473185 0. 0.
0.88548482 0. 0.52011987 0.06421075 0. nan
0. 0.13802701 0. 0. 0.9278545 0.83106582
0.94693817 0. 0. 0.01170072 0. ] | 1 |
| 0.7051 | 0.6513 | 0.2568 | 0.3210 | 0.8151 | [0.00000000e+00 6.31500555e-01 8.33347761e-01 2.40727740e-01
6.71879162e-01 2.32727132e-01 nan 3.15720178e-01
3.22578864e-01 0.00000000e+00 7.51066980e-01 0.00000000e+00
0.00000000e+00 nan 0.00000000e+00 3.01090014e-01
0.00000000e+00 0.00000000e+00 6.56592309e-01 0.00000000e+00
3.82317489e-01 2.25385079e-01 0.00000000e+00 nan
0.00000000e+00 2.34975219e-01 0.00000000e+00 0.00000000e+00
7.92710603e-01 6.82508692e-01 9.02369099e-01 0.00000000e+00
5.10019193e-04 4.02361131e-02 0.00000000e+00] | [0.00000000e+00 7.76355941e-01 9.39707165e-01 3.90888278e-01
7.70256989e-01 2.84066636e-01 nan 4.57106724e-01
6.33498392e-01 0.00000000e+00 9.05789013e-01 0.00000000e+00
0.00000000e+00 nan 0.00000000e+00 3.57230962e-01
0.00000000e+00 0.00000000e+00 8.45761217e-01 0.00000000e+00
5.16681541e-01 2.82796479e-01 0.00000000e+00 nan
0.00000000e+00 3.07634724e-01 0.00000000e+00 0.00000000e+00
9.04391068e-01 8.86212453e-01 9.64570665e-01 0.00000000e+00
5.17411580e-04 4.71742075e-02 0.00000000e+00] | 2 |
| 0.6294 | 0.6365 | 0.2695 | 0.3320 | 0.8244 | [0. 0.63840754 0.83879521 0.31781353 0.69394774 0.22324776
nan 0.35012894 0.31369877 0. 0.7683448 0.
0. nan 0. 0.36532292 0. 0.
0.65554136 0. 0.37438724 0.25682621 0. nan
0. 0.23051151 0. 0. 0.81818163 0.7633018
0.91092518 0. 0.00145576 0.10215516 0. ] | [0. 0.76103704 0.95305272 0.43848725 0.78760908 0.25645014
nan 0.48971828 0.61853472 0. 0.90793733 0.
0. nan 0. 0.48772201 0. 0.
0.84205031 0. 0.53308407 0.36285878 0. nan
0. 0.27953916 0. 0. 0.93079576 0.87079757
0.96477884 0. 0.00147054 0.13899972 0. ] | 3 |
| 0.5686 | 0.6122 | 0.2715 | 0.3360 | 0.8256 | [0.00000000e+00 6.38345814e-01 8.56252996e-01 3.07043269e-01
6.87537894e-01 3.06534041e-01 nan 3.84145525e-01
3.19438916e-01 0.00000000e+00 7.57233152e-01 0.00000000e+00
0.00000000e+00 nan 0.00000000e+00 4.06585843e-01
0.00000000e+00 0.00000000e+00 6.47648546e-01 2.91885581e-04
4.00547422e-01 1.97261484e-01 0.00000000e+00 nan
0.00000000e+00 2.20793008e-01 0.00000000e+00 0.00000000e+00
8.19526784e-01 7.19306080e-01 9.20192720e-01 0.00000000e+00
2.23374930e-03 9.77508243e-02 0.00000000e+00] | [0.00000000e+00 7.89438910e-01 9.16367241e-01 4.32251205e-01
7.89740409e-01 4.88566404e-01 nan 5.36825005e-01
6.47787376e-01 0.00000000e+00 9.32641501e-01 0.00000000e+00
0.00000000e+00 nan 0.00000000e+00 4.73813253e-01
0.00000000e+00 0.00000000e+00 9.09004353e-01 2.91885581e-04
4.37175308e-01 2.25663128e-01 0.00000000e+00 nan
0.00000000e+00 2.60992057e-01 0.00000000e+00 0.00000000e+00
9.19328058e-01 9.02898346e-01 9.65529369e-01 0.00000000e+00
2.23984750e-03 1.20880721e-01 0.00000000e+00] | 4 |
| 0.5197 | 0.6268 | 0.2719 | 0.3442 | 0.8180 | [0. 0.62230678 0.81645513 0.18616589 0.66669478 0.30574734
nan 0.36681201 0.31128062 0. 0.76635363 0.
0. nan 0. 0.37874505 0. 0.
0.68193241 0. 0.48867838 0.25809644 0. nan
0. 0.25765818 0. 0. 0.81965205 0.71604385
0.9214592 0. 0.00636635 0.12957446 0. ] | [0. 0.89469845 0.88320521 0.45231002 0.72104833 0.3386303
nan 0.53522723 0.72026843 0. 0.93197124 0.
0. nan 0. 0.45525816 0. 0.
0.87276184 0. 0.60762821 0.29654901 0. nan
0. 0.32162193 0. 0. 0.90797988 0.89199119
0.96388697 0. 0.00646084 0.21171965 0. ] | 5 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
VanessaSchenkel/padrao-mbart-vanessa-finetuned-handscrafted-puro
|
VanessaSchenkel
| 2022-11-14T17:04:16Z | 64 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"mbart",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-14T16:56:18Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: VanessaSchenkel/padrao-mbart-vanessa-finetuned-handscrafted-puro
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# VanessaSchenkel/padrao-mbart-vanessa-finetuned-handscrafted-puro
This model is a fine-tuned version of [Narrativa/mbart-large-50-finetuned-opus-en-pt-translation](https://huggingface.co/Narrativa/mbart-large-50-finetuned-opus-en-pt-translation) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.2001
- Validation Loss: 0.7276
- Train Bleu: 66.2393
- Train Gen Len: 11.75
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Bleu | Train Gen Len | Epoch |
|:----------:|:---------------:|:----------:|:-------------:|:-----:|
| 2.2001 | 0.7276 | 66.2393 | 11.75 | 0 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
evangeliazve/mpnet-base-articles-ner
|
evangeliazve
| 2022-11-14T16:46:16Z | 87 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mpnet",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-14T16:45:21Z |
---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: mpnet-base-articles-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mpnet-base-articles-ner
This model is a fine-tuned version of [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8471
- F1: 0.7500
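A minimal usage sketch with the token-classification pipeline; the label set is not documented here, so the output is shown raw:
```python
from transformers import pipeline

ner = pipeline("token-classification", model="evangeliazve/mpnet-base-articles-ner", aggregation_strategy="simple")
print(ner("Apple opened a new office in Paris last week."))  # placeholder input
```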
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.8042 | 1.0 | 5 | 1.6278 | 0.0 |
| 1.5353 | 2.0 | 10 | 1.5332 | 0.0 |
| 1.499 | 3.0 | 15 | 1.4356 | 0.1781 |
| 1.343 | 4.0 | 20 | 1.3254 | 0.3789 |
| 1.2306 | 5.0 | 25 | 1.2572 | 0.5075 |
| 1.1427 | 6.0 | 30 | 1.1572 | 0.5700 |
| 1.0715 | 7.0 | 35 | 1.0875 | 0.6305 |
| 0.9679 | 8.0 | 40 | 1.0261 | 0.6667 |
| 0.9169 | 9.0 | 45 | 0.9924 | 0.6512 |
| 0.8447 | 10.0 | 50 | 0.9457 | 0.7137 |
| 0.8253 | 11.0 | 55 | 0.9216 | 0.7094 |
| 0.7493 | 12.0 | 60 | 0.9068 | 0.7303 |
| 0.7378 | 13.0 | 65 | 0.8896 | 0.7404 |
| 0.7039 | 14.0 | 70 | 0.8827 | 0.7398 |
| 0.7277 | 15.0 | 75 | 0.8632 | 0.7635 |
| 0.6758 | 16.0 | 80 | 0.8517 | 0.775 |
| 0.6642 | 17.0 | 85 | 0.8618 | 0.7449 |
| 0.6327 | 18.0 | 90 | 0.8522 | 0.7490 |
| 0.6238 | 19.0 | 95 | 0.8477 | 0.7500 |
| 0.6101 | 20.0 | 100 | 0.8471 | 0.7500 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.2
|
rajesh426/distilbert-base-uncased_Up_Sampling_Sub_Category_SPEECH_TEXT_DISPLAY_v1
|
rajesh426
| 2022-11-14T16:28:38Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-14T16:17:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased_Up_Sampling_Sub_Category_SPEECH_TEXT_DISPLAY_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_Up_Sampling_Sub_Category_SPEECH_TEXT_DISPLAY_v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.9368
- Accuracy: 0.6114
- F1: 0.6028
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|
| 0.9716 | 1.0 | 12171 | 2.5228 | 0.5722 | 0.5740 |
| 0.2857 | 2.0 | 24342 | 3.0558 | 0.5947 | 0.5923 |
| 0.1438 | 3.0 | 36513 | 3.3499 | 0.6038 | 0.6007 |
| 0.0842 | 4.0 | 48684 | 3.8879 | 0.5905 | 0.5875 |
| 0.0504 | 5.0 | 60855 | 4.1478 | 0.5905 | 0.5906 |
| 0.031 | 6.0 | 73026 | 4.5368 | 0.5924 | 0.5865 |
| 0.0192 | 7.0 | 85197 | 4.6596 | 0.6042 | 0.5980 |
| 0.01 | 8.0 | 97368 | 4.8874 | 0.6087 | 0.6005 |
| 0.0051 | 9.0 | 109539 | 5.0120 | 0.6118 | 0.6015 |
| 0.0022 | 10.0 | 121710 | 4.9368 | 0.6114 | 0.6028 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.10.2
- Datasets 2.5.2
- Tokenizers 0.12.1
|
meongracun/nmt-ted-id-en-lr_1e-2-ep_30-seq_128-bs_64
|
meongracun
| 2022-11-14T16:00:44Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-14T13:51:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: nmt-ted-id-en-lr_1e-2-ep_30-seq_128-bs_64
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nmt-ted-id-en-lr_1e-2-ep_30-seq_128-bs_64
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0751
- Bleu: 16.4354
- Gen Len: 16.3492
- Meteor: 0.3448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | Meteor |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:------:|
| 3.2366 | 1.0 | 625 | 2.4442 | 7.3157 | 16.9063 | 0.2192 |
| 2.5208 | 2.0 | 1250 | 2.0785 | 11.3311 | 16.0768 | 0.2869 |
| 2.1936 | 3.0 | 1875 | 1.8995 | 12.4756 | 16.4486 | 0.2934 |
| 1.872 | 4.0 | 2500 | 1.8241 | 13.9295 | 16.3092 | 0.3163 |
| 1.7185 | 5.0 | 3125 | 1.7624 | 14.3797 | 16.3602 | 0.3213 |
| 1.6177 | 6.0 | 3750 | 1.7049 | 15.2549 | 16.3835 | 0.3304 |
| 1.5355 | 7.0 | 4375 | 1.7059 | 15.7225 | 16.3599 | 0.3346 |
| 1.388 | 8.0 | 5000 | 1.6864 | 15.4343 | 16.4646 | 0.3308 |
| 1.2741 | 9.0 | 5625 | 1.6899 | 16.2174 | 16.3215 | 0.3428 |
| 1.216 | 10.0 | 6250 | 1.6831 | 16.1891 | 16.2815 | 0.3451 |
| 1.1486 | 11.0 | 6875 | 1.7137 | 16.3811 | 16.3451 | 0.3435 |
| 1.0426 | 12.0 | 7500 | 1.7490 | 16.3482 | 16.3791 | 0.343 |
| 0.9509 | 13.0 | 8125 | 1.7674 | 16.3318 | 16.469 | 0.3436 |
| 0.9072 | 14.0 | 8750 | 1.8084 | 16.4721 | 16.3064 | 0.3452 |
| 0.857 | 15.0 | 9375 | 1.8414 | 16.4244 | 16.3718 | 0.3472 |
| 0.7696 | 16.0 | 10000 | 1.8829 | 16.3755 | 16.3816 | 0.3446 |
| 0.7066 | 17.0 | 10625 | 1.9325 | 16.4635 | 16.3957 | 0.3459 |
| 0.6718 | 18.0 | 11250 | 1.9980 | 16.3287 | 16.3124 | 0.3431 |
| 0.6364 | 19.0 | 11875 | 2.0211 | 16.5732 | 16.3558 | 0.3456 |
| 0.5835 | 20.0 | 12500 | 2.0751 | 16.4354 | 16.3492 | 0.3448 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
huggingtweets/elonmusk-julicq
|
huggingtweets
| 2022-11-14T15:30:27Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-14T15:26:02Z |
---
language: en
thumbnail: http://www.huggingtweets.com/elonmusk-julicq/1668439800663/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1322125857465524231/VvoMFHkW_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Juliаn Pa🇱🇻🇺🇦L😷w</div>
<div style="text-align: center; font-size: 14px;">@elonmusk-julicq</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & Juliаn Pa🇱🇻🇺🇦L😷w.
| Data | Elon Musk | Juliаn Pa🇱🇻🇺🇦L😷w |
| --- | --- | --- |
| Tweets downloaded | 3199 | 1529 |
| Retweets | 141 | 282 |
| Short tweets | 977 | 115 |
| Tweets kept | 2081 | 1132 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/bvahfgf1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-julicq's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2f42eho4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2f42eho4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/elonmusk-julicq')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
devtanumisra/finetuning-sentiment-model-deberta-smote
|
devtanumisra
| 2022-11-14T14:43:33Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-14T05:38:22Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetuning-sentiment-model-deberta-smote
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-deberta-smote
This model is a fine-tuned version of [yangheng/deberta-v3-base-absa-v1.1](https://huggingface.co/yangheng/deberta-v3-base-absa-v1.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4852
- Accuracy: 0.7215
- F1: 0.7215
- Precision: 0.7215
- Recall: 0.7215
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Omerdor/128-DRUSEN-cell_
|
Omerdor
| 2022-11-14T13:27:23Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-11-14T11:08:39Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# 128-DRUSEN-cell_
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
from diffusers import DDPMPipeline
# Minimal sketch (assumed usage, per the repo's DDPMPipeline tag): load the checkpoint and sample one image.
pipeline = DDPMPipeline.from_pretrained("Omerdor/128-DRUSEN-cell_")
image = pipeline().images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-06
- train_batch_size: 64
- eval_batch_size: 4
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/Omerdor/128-DRUSEN-cell_/tensorboard?#scalars)
|
ajdowney/bert-wash-binary
|
ajdowney
| 2022-11-14T13:18:05Z | 69 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-14T11:17:30Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: bert-wash-binary
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-wash-binary
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: nan
- Validation Loss: nan
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 129, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| nan | nan | 0 |
| nan | nan | 1 |
| nan | nan | 2 |
### Framework versions
- Transformers 4.23.1
- TensorFlow 2.10.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
buihungtpd3/layoutlmv3-finetuned-cord_100
|
buihungtpd3
| 2022-11-14T12:23:58Z | 84 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"dataset:drug_bill_layoutv3",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-14T02:27:21Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- drug_bill_layoutv3
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-cord_100
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: drug_bill_layoutv3
type: drug_bill_layoutv3
config: Vin_Drug_Bill
split: train
args: Vin_Drug_Bill
metrics:
- name: Precision
type: precision
value: 1.0
- name: Recall
type: recall
value: 1.0
- name: F1
type: f1
value: 1.0
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-cord_100
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the drug_bill_layoutv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0003
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
- Accuracy: 1.0
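A minimal loading sketch, assuming the LayoutLMv3 processor files were pushed with the fine-tuned weights; full inference additionally needs a document image and, depending on the processor's OCR setting, word and box annotations:
```python
from transformers import AutoProcessor, AutoModelForTokenClassification

processor = AutoProcessor.from_pretrained("buihungtpd3/layoutlmv3-finetuned-cord_100")
model = AutoModelForTokenClassification.from_pretrained("buihungtpd3/layoutlmv3-finetuned-cord_100")
# encoding = processor(image, words, boxes=boxes, return_tensors="pt")  # with your own OCR output
# outputs = model(**encoding)
```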
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.33 | 250 | 0.0010 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0648 | 2.66 | 500 | 0.0003 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0648 | 3.99 | 750 | 0.0004 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0115 | 5.32 | 1000 | 0.0002 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0115 | 6.65 | 1250 | 0.0002 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0134 | 7.98 | 1500 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0134 | 9.31 | 1750 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0045 | 10.64 | 2000 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0045 | 11.97 | 2250 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0004 | 13.3 | 2500 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
sd-concepts-library/trypophobia
|
sd-concepts-library
| 2022-11-14T12:22:27Z | 0 | 4 | null |
[
"license:mit",
"region:us"
] | null | 2022-11-14T11:20:36Z |
---
license: mit
---
### trypophobia on Stable Diffusion
This is the `trypophobia` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:





















Sample-image: "two unspeakable horrors give each other a trypophobia covered embrace"

|
teacookies/autotrain-14112022-cert-2086767210
|
teacookies
| 2022-11-14T12:19:48Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"token-classification",
"unk",
"dataset:teacookies/autotrain-data-14112022-cert",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-14T12:08:36Z |
---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- teacookies/autotrain-data-14112022-cert
co2_eq_emissions:
emissions: 18.953935307959163
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 2086767210
- CO2 Emissions (in grams): 18.9539
## Validation Metrics
- Loss: 0.002
- Accuracy: 1.000
- Precision: 0.987
- Recall: 0.989
- F1: 0.988
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-14112022-cert-2086767210
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-14112022-cert-2086767210", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-14112022-cert-2086767210", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
Marcuslye0220/swin-small-patch4-window7-224-finetuned-eurosat
|
Marcuslye0220
| 2022-11-14T12:11:41Z | 204 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-11-14T08:41:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-small-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9956803455723542
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-small-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-small-patch4-window7-224](https://huggingface.co/microsoft/swin-small-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0102
- Accuracy: 0.9957
## Model description
More information needed
## Intended uses & limitations
More information needed
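A minimal inference sketch with the 🤗 Transformers image-classification pipeline (the image path below is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Marcuslye0220/swin-small-patch4-window7-224-finetuned-eurosat",
)
# Accepts a local path, URL, or PIL image
print(classifier("satellite_tile.png", top_k=3))
```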
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0225 | 0.98 | 32 | 0.0102 | 0.9957 |
| 0.0378 | 1.98 | 64 | 0.0102 | 0.9957 |
| 0.041 | 2.98 | 96 | 0.0102 | 0.9957 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.13.0+cpu
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Jethuestad/whisper-medium-amksim
|
Jethuestad
| 2022-11-14T11:38:24Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-08T12:44:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: whisper-medium-amksim
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-amksim
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9089
- Wer: 40.3433
## Model description
More information needed
## Intended uses & limitations
More information needed
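A rough usage sketch with the automatic-speech-recognition pipeline (the audio path is a placeholder; long recordings may need `chunk_length_s`):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Jethuestad/whisper-medium-amksim",
)
result = asr("recording.wav")  # placeholder path to a local audio file
print(result["text"])
```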
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 4.6349 | 0.83 | 5 | 3.7729 | 73.3906 |
| 3.2338 | 1.67 | 10 | 1.4978 | 69.0987 |
| 1.1335 | 2.5 | 15 | 1.1606 | 97.4249 |
| 0.6838 | 3.33 | 20 | 1.0211 | 66.0944 |
| 0.4383 | 4.17 | 25 | 0.9845 | 65.2361 |
| 0.2514 | 5.0 | 30 | 0.9885 | 61.3734 |
| 0.2053 | 5.83 | 35 | 0.9796 | 76.3948 |
| 0.1353 | 6.67 | 40 | 0.9758 | 49.3562 |
| 0.1142 | 7.5 | 45 | 0.9109 | 60.9442 |
| 0.0889 | 8.33 | 50 | 0.9045 | 41.2017 |
| 0.0854 | 9.17 | 55 | 0.9085 | 42.4893 |
| 0.069 | 10.0 | 60 | 0.9089 | 40.3433 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.12.1
|
AlekseyKorshuk/dalio-large-test
|
AlekseyKorshuk
| 2022-11-14T10:52:53Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"dataset:AlekseyKorshuk/dalio-handwritten-io",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-14T09:53:05Z |
---
license: other
tags:
- generated_from_trainer
datasets:
- AlekseyKorshuk/dalio-handwritten-io
metrics:
- accuracy
model-index:
- name: dalio-large-test
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: AlekseyKorshuk/dalio-handwritten-io
type: AlekseyKorshuk/dalio-handwritten-io
metrics:
- name: Accuracy
type: accuracy
value: 0.047694543532831285
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dalio-large-test
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the AlekseyKorshuk/dalio-handwritten-io dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1016
- Accuracy: 0.0477
## Model description
More information needed
## Intended uses & limitations
More information needed
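A quick text-generation sketch (the prompt is only an illustration):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="AlekseyKorshuk/dalio-large-test")
out = generator("The most important principle is", max_new_tokens=50)
print(out[0]["generated_text"])
```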
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.3342 | 0.05 | 1 | 3.1016 | 0.0477 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Omerdor/128-DRUSEN-CELL
|
Omerdor
| 2022-11-14T10:48:47Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-11-14T10:04:17Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# 128-DRUSEN-CELL
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# Rough usage sketch (an assumption, not an official snippet): sample one image from the unconditional DDPM pipeline
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("Omerdor/128-DRUSEN-CELL")
image = pipeline().images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 64
- eval_batch_size: 4
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/Omerdor/128-DRUSEN-CELL/tensorboard?#scalars)
|
EvaKlimentova/knots_simple_CNN
|
EvaKlimentova
| 2022-11-14T10:39:20Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-10-31T10:53:09Z |
# Simple CNN for knotted x unknotted recognition
The model is trained on a downsampled SPOUT (knotted) x Rossmann (unknotted) dataset; downsampling was applied to fix the knotted vs. unknotted sequence-length distribution.
The architecture is [PENGUINN](https://www.frontiersin.org/articles/10.3389/fgene.2020.568546/full) with a modified input.
|
GinaYang/distilbert-base-uncased-finetuned-emotion
|
GinaYang
| 2022-11-14T10:39:08Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-14T08:28:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9235
- name: F1
type: f1
value: 0.9233934828732149
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2248
- Accuracy: 0.9235
- F1: 0.9234
## Model description
More information needed
## Intended uses & limitations
More information needed
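A minimal usage sketch with the text-classification pipeline:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="GinaYang/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you again!"))  # top emotion label with its score
```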
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8242 | 1.0 | 250 | 0.3230 | 0.9 | 0.8960 |
| 0.2497 | 2.0 | 500 | 0.2248 | 0.9235 | 0.9234 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
micole66/autotrain-pachyderms-v2-2088767193
|
micole66
| 2022-11-14T10:04:56Z | 179 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"vision",
"image-classification",
"dataset:micole66/autotrain-data-pachyderms-v2",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-11-14T10:03:23Z |
---
tags:
- autotrain
- vision
- image-classification
datasets:
- micole66/autotrain-data-pachyderms-v2
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 1.190285924893865
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 2088767193
- CO2 Emissions (in grams): 1.1903
## Validation Metrics
- Loss: 0.004
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000
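A usage sketch with the image-classification pipeline, reusing one of the widget images above:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="micole66/autotrain-pachyderms-v2-2088767193")
url = "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"
print(classifier(url))  # scores for the two classes
```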
|
wiem87/swin-tiny-patch4-window7-224-finetuned-eurosat
|
wiem87
| 2022-11-14T09:54:37Z | 204 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-11-14T09:31:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9825925925925926
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0454
- Accuracy: 0.9826
## Model description
More information needed
## Intended uses & limitations
More information needed
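A lower-level inference sketch (assumes a transformers version that ships `AutoImageProcessor`; on older versions `AutoFeatureExtractor` works the same way, and `scene.jpg` is a placeholder path):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "wiem87/swin-tiny-patch4-window7-224-finetuned-eurosat"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("scene.jpg")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```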
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2137 | 1.0 | 190 | 0.0981 | 0.9681 |
| 0.1487 | 2.0 | 380 | 0.0517 | 0.9830 |
| 0.1398 | 3.0 | 570 | 0.0454 | 0.9826 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
SSarim/distilbert-base-uncased-finetuned-squad
|
SSarim
| 2022-11-14T09:38:09Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-13T13:54:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1563
## Model description
More information needed
## Intended uses & limitations
More information needed
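A minimal question-answering sketch (question and context are illustrative only):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="SSarim/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This checkpoint was fine-tuned on the SQuAD dataset for extractive question answering.",
)
print(result["answer"], result["score"])
```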
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2188 | 1.0 | 5533 | 1.1636 |
| 0.9569 | 2.0 | 11066 | 1.1337 |
| 0.7599 | 3.0 | 16599 | 1.1563 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
stupidog04/krenzcolor_chkpt_classifier
|
stupidog04
| 2022-11-14T09:04:49Z | 59 | 0 |
generic
|
[
"generic",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"model-index",
"region:us"
] |
image-classification
| 2022-10-26T12:13:50Z |
---
tags:
- image-classification
- pytorch
library_name: generic
metrics:
- accuracy
model-index:
- name: krenzcolor_chkpt_classifier
results:
- task:
name: Image Classification
type: pair-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9196428656578064
---
# krenzcolor_chkpt_classifier
## KK Color Course - Homework Checkpoint Inspection AI
Demo for checkpoint classification of homework in the art course by "Krenz Cushart".
This AI classifier evaluates the three checkpoints in students' L3 and L4 copy exercises from the course and determines whether each one passes.
The six classes are as follows:
- (1) chk1_fail | (2) chk1_pass
- (3) chk2_fail | (4) chk2_pass
- (5) chk3_fail | (6) chk3_pass
Here chk1, chk2 and chk3 stand for checkpoints one, two and three; fail and pass indicate whether the homework passes that checkpoint.
## Quick tour:
Drag one of the images below into the box on the right (Hosted inference API).
Note: the first time the model loads it takes a while: ~20 seconds.
#### chk1_pass

#### chk2_pass

#### chk3_pass

## How to use
### Fill in your copy using one of the following templates
Note: be sure to resize the image to 224 x 224 pixels before placing it in the blank area on the right-hand side of the template.




### Upload the image to the box on the right

### After uploading, the probabilities for each class are shown

|
Kirangritz1997/finetuning-sentiment-model-3000-samples
|
Kirangritz1997
| 2022-11-14T09:04:41Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-14T08:41:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.8741721854304636
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3201
- Accuracy: 0.8733
- F1: 0.8742
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Kaku0o0/distilbert-base-uncased-finetuned-squad
|
Kaku0o0
| 2022-11-14T06:32:00Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-13T22:42:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6090
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 274 | 1.5943 |
| 0.9165 | 2.0 | 548 | 1.5836 |
| 0.9165 | 3.0 | 822 | 1.6090 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
LeninGF/robbery_dataset_tf_finetuned_20221113
|
LeninGF
| 2022-11-14T06:30:43Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-14T06:30:14Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: robbery_dataset_tf_finetuned_20221113
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# robbery_dataset_tf_finetuned_20221113
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0506
- Train Sparse Categorical Accuracy: 0.9844
- Validation Loss: 0.4108
- Validation Sparse Categorical Accuracy: 0.9068
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
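A rough TensorFlow inference sketch (the input text is a placeholder; the label names are not documented here):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "LeninGF/robbery_dataset_tf_finetuned_20221113"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Texto de ejemplo de una denuncia.", return_tensors="tf")
logits = model(**inputs).logits
print(int(tf.argmax(logits, axis=-1)[0]))  # predicted class id
```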
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 0.4908 | 0.8335 | 0.2872 | 0.9060 | 0 |
| 0.2496 | 0.9180 | 0.3137 | 0.8978 | 1 |
| 0.1947 | 0.9351 | 0.3234 | 0.9062 | 2 |
| 0.1597 | 0.9483 | 0.3092 | 0.9087 | 3 |
| 0.1304 | 0.9580 | 0.2928 | 0.9140 | 4 |
| 0.1013 | 0.9684 | 0.3450 | 0.9143 | 5 |
| 0.0785 | 0.9742 | 0.3590 | 0.9080 | 6 |
| 0.0709 | 0.9778 | 0.3711 | 0.9057 | 7 |
| 0.0541 | 0.9821 | 0.4010 | 0.9128 | 8 |
| 0.0506 | 0.9844 | 0.4108 | 0.9068 | 9 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
ftorres/distilbert-base-uncased-finetuned-squad
|
ftorres
| 2022-11-14T06:27:29Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-07T21:13:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6340
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4965 | 1.0 | 554 | 1.5562 |
| 1.2141 | 2.0 | 1108 | 1.5012 |
| 0.7883 | 3.0 | 1662 | 1.6340 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
ArthurZ/flax-tiny-random-bert-sharded
|
ArthurZ
| 2022-11-14T06:24:51Z | 2,612 | 0 |
transformers
|
[
"transformers",
"jax",
"bert",
"feature-extraction",
"flax",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-06-17T16:08:40Z |
---
tags:
- flax
---
# Model Card for flax-tiny-random-bert-sharded
# Model Details
## Model Description
This model is used to check that the sharding of a flax_model works properly. See [`test_checkpoint_sharding_from_hub`](https://github.com/huggingface/transformers/blob/main/tests/test_modeling_flax_common.py#L1049).
# Uses
The model is not intended for real use; it serves a testing purpose only.
### Software
- Transformers 4.21.0.dev0
- TensorFlow 2.9.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
coderSounak/finetuning-hs-model-bert-multilingual
|
coderSounak
| 2022-11-14T06:12:56Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:cc-by-nc-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-14T05:34:43Z |
---
license: cc-by-nc-3.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetuning-hs-model-bert-multilingual
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-hs-model-bert-multilingual
This model is a fine-tuned version of [QCRI/bert-base-multilingual-cased-pos-english](https://huggingface.co/QCRI/bert-base-multilingual-cased-pos-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3158
- Accuracy: 0.9575
- F1: 0.0
- Precision: 0.0
- Recall: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
devtanumisra/finetuning-hate-speech-model-deberta
|
devtanumisra
| 2022-11-14T05:32:52Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-14T04:56:49Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetuning-hate-speech-model-deberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-hate-speech-model-deberta
This model is a fine-tuned version of [yangheng/deberta-v3-base-absa-v1.1](https://huggingface.co/yangheng/deberta-v3-base-absa-v1.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7014
- Accuracy: 0.8430
- F1: 0.8566
- Precision: 0.7981
- Recall: 0.9242
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
nvaikun-cmu/output_test
|
nvaikun-cmu
| 2022-11-14T05:07:03Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-03T02:01:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: output_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output_test
This model is a fine-tuned version of [google/t5-base-lm-adapt](https://huggingface.co/google/t5-base-lm-adapt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0899
## Model description
More information needed
## Intended uses & limitations
More information needed
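A minimal text2text-generation sketch (the prompt is illustrative only):
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="nvaikun-cmu/output_test")
print(generator("Example input text", max_new_tokens=32)[0]["generated_text"])
```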
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3788 | 1.0 | 184 | 2.0899 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.2
|
liuhaor4/distilbert-base-uncased-finetuned-squad
|
liuhaor4
| 2022-11-14T02:50:10Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-14T02:23:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6082
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6172 | 1.0 | 555 | 1.6168 |
| 1.36 | 2.0 | 1110 | 1.4994 |
| 0.9526 | 3.0 | 1665 | 1.6082 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
rajpurkarlab/gilbert
|
rajpurkarlab
| 2022-11-14T02:40:37Z | 324 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"py",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-12T23:24:29Z |
---
language:
- py
metrics:
- f1
---
To use our fine-tuned BioBERT model to remove references to priors from radiology reports, run the following:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
modelname = "rajpurkarlab/gilbert"
tokenizer = AutoTokenizer.from_pretrained(modelname)
model = AutoModelForTokenClassification.from_pretrained(modelname)
```
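To actually tag a report, one possible follow-up (a sketch; the label set the model emits is not documented here, and the report text is a placeholder):
```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="rajpurkarlab/gilbert",
    aggregation_strategy="simple",
)
report = "Compared to the prior study, the left basilar opacity has resolved."
print(tagger(report))  # token-level predictions marking references to priors
```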
|
dmis-lab/biosyn-sapbert-ncbi-disease
|
dmis-lab
| 2022-11-14T01:44:07Z | 7,739 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:1901.08746",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
tags:
- bert
---
# Model Card for biosyn-sapbert-ncbi-disease
# Model Details
## Model Description
More information needed
- **Developed by:** Dmis-lab (Data Mining and Information Systems Lab, Korea University)
- **Shared by [Optional]:** Hugging Face
- **Model type:** Feature Extraction
- **Language(s) (NLP):** More information needed
- **License:** More information needed
- **Related Models:**
- **Parent Model:** BERT
- **Resources for more information:**
- [GitHub Repo](https://github.com/jhyuklee/biobert)
- [Associated Paper](https://arxiv.org/abs/1901.08746)
# Uses
## Direct Use
This model can be used for the task of Feature Extraction
## Downstream Use [Optional]
More information needed
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
The model creators note in the [associated paper](https://arxiv.org/pdf/1901.08746.pdf)
> We used the BERTBASE model pre-trained on English Wikipedia and BooksCorpus for 1M steps. BioBERT v1.0 (+ PubMed + PMC) is the version of BioBERT (+ PubMed + PMC) trained for 470K steps. When using both the PubMed and PMC corpora, we found that 200K and 270K pre-training steps were optimal for PubMed and PMC, respectively. We also used the ablated versions of BioBERT v1.0, which were pre-trained on only PubMed for 200K steps (BioBERT v1.0 (+ PubMed)) and PMC for 270K steps (BioBERT v1.0 (+ PMC))
## Training Procedure
### Preprocessing
The model creators note in the [associated paper](https://arxiv.org/pdf/1901.08746.pdf)
> We pre-trained BioBERT using Naver Smart Machine Learning (NSML) (Sung et al., 2017), which is utilized for large-scale experiments that need to be run on several GPUs
### Speeds, Sizes, Times
The model creators note in the [associated paper](https://arxiv.org/pdf/1901.08746.pdf)
> The maximum sequence length was fixed to 512 and the mini-batch size was set to 192, resulting in 98,304 words per iteration.
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
More information needed
### Factors
More information needed
### Metrics
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:**
- **Training:** Eight NVIDIA V100 (32GB) GPUs
- **Fine-tuning:** a single NVIDIA Titan Xp (12GB) GPU to fine-tune BioBERT on each task
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed
# Citation
**BibTeX:**
```
@article{lee2019biobert,
title={BioBERT: a pre-trained biomedical language representation model for biomedical text mining},
author={Lee, Jinhyuk and Yoon, Wonjin and Kim, Sungdong and Kim, Donghyeon and Kim, Sunkyu and So, Chan Ho and Kang, Jaewoo},
journal={arXiv preprint arXiv:1901.08746},
year={2019}
}
```
# Glossary [optional]
More information needed
# More Information [optional]
For help or issues using BioBERT, please submit a GitHub issue. Please contact Jinhyuk Lee(`lee.jnhk (at) gmail.com`), or Wonjin Yoon (`wonjin.info (at) gmail.com`) for communication related to BioBERT.
# Model Card Authors [optional]
Dmis-lab (Data Mining and Information Systems Lab, Korea University) in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biosyn-sapbert-ncbi-disease")
model = AutoModel.from_pretrained("dmis-lab/biosyn-sapbert-ncbi-disease")
```
</details>
|
huggingtweets/apesahoy-bierincognito-fesshole-jonmao___-meat__hook-ripeacsky-theseandiamond-unfetteredmind1
|
huggingtweets
| 2022-11-14T00:58:21Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-14T00:58:10Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1196519479364268034/5QpniWSP_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1229589315069628421/5Hy71tkj_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1591483220880678915/vDy4TSgn_400x400.png')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Humongous Ape MP & Sean Diamond & Jon Mao & dan & Pesky Splinter - Eternal Goatse Celebrant & Admiral Dan EX QC of the 3rd Antifa fleet! 💙 & Guybrush Tweetbad & Fesshole 🧻</div>
<div style="text-align: center; font-size: 14px;">@apesahoy-bierincognito-fesshole-jonmao___-meat__hook-ripeacsky-theseandiamond-unfetteredmind1</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Humongous Ape MP & Sean Diamond & Jon Mao & dan & Pesky Splinter - Eternal Goatse Celebrant & Admiral Dan EX QC of the 3rd Antifa fleet! 💙 & Guybrush Tweetbad & Fesshole 🧻.
| Data | Humongous Ape MP | Sean Diamond | Jon Mao | dan | Pesky Splinter - Eternal Goatse Celebrant | Admiral Dan EX QC of the 3rd Antifa fleet! 💙 | Guybrush Tweetbad | Fesshole 🧻 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Tweets downloaded | 3242 | 3220 | 662 | 2928 | 3107 | 3220 | 3127 | 3253 |
| Retweets | 176 | 2162 | 53 | 683 | 2406 | 444 | 450 | 17 |
| Short tweets | 577 | 239 | 119 | 305 | 136 | 1180 | 421 | 1 |
| Tweets kept | 2489 | 819 | 490 | 1940 | 565 | 1596 | 2256 | 3235 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/329ftz7y/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @apesahoy-bierincognito-fesshole-jonmao___-meat__hook-ripeacsky-theseandiamond-unfetteredmind1's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2b8bvjnq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2b8bvjnq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/apesahoy-bierincognito-fesshole-jonmao___-meat__hook-ripeacsky-theseandiamond-unfetteredmind1')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/apesahoy-bierincognito-elonmusk-fesshole-jonmao___-meat__hook-ripeacsky-troovus-unfetteredmind1
|
huggingtweets
| 2022-11-13T23:37:55Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-13T23:37:44Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1196519479364268034/5QpniWSP_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1172580448662372353/SwJNqDQl_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Humongous Ape MP & Fesshole 🧻 & troovus & Jon Mao & dan & Pesky Splinter - Eternal Goatse Celebrant & Admiral Dan EX QC of the 3rd Antifa fleet! 💙 & Guybrush Tweetbad</div>
<div style="text-align: center; font-size: 14px;">@apesahoy-bierincognito-elonmusk-fesshole-jonmao___-meat__hook-ripeacsky-troovus-unfetteredmind1</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & Humongous Ape MP & Fesshole 🧻 & troovus & Jon Mao & dan & Pesky Splinter - Eternal Goatse Celebrant & Admiral Dan EX QC of the 3rd Antifa fleet! 💙 & Guybrush Tweetbad.
| Data | Elon Musk | Humongous Ape MP | Fesshole 🧻 | troovus | Jon Mao | dan | Pesky Splinter - Eternal Goatse Celebrant | Admiral Dan EX QC of the 3rd Antifa fleet! 💙 | Guybrush Tweetbad |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Tweets downloaded | 3199 | 3242 | 3250 | 3241 | 639 | 2928 | 3107 | 3220 | 3127 |
| Retweets | 141 | 176 | 17 | 1025 | 53 | 682 | 2406 | 444 | 452 |
| Short tweets | 975 | 577 | 1 | 133 | 111 | 305 | 136 | 1180 | 421 |
| Tweets kept | 2083 | 2489 | 3232 | 2083 | 475 | 1941 | 565 | 1596 | 2254 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3masmfhs/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @apesahoy-bierincognito-elonmusk-fesshole-jonmao___-meat__hook-ripeacsky-troovus-unfetteredmind1's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2xp8s411) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2xp8s411/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/apesahoy-bierincognito-elonmusk-fesshole-jonmao___-meat__hook-ripeacsky-troovus-unfetteredmind1')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
mlagrand/xlm-roberta-base-finetuned-panx-de
|
mlagrand
| 2022-11-13T21:32:07Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-13T20:07:40Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.01
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| No log | 0.01 | 6 | 1.0252 | 0.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Narrativa/legal-longformer-base-4096-spanish
|
Narrativa
| 2022-11-13T19:55:34Z | 112 | 12 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"Long documents",
"longformer",
"robertalex",
"spanish",
"legal",
"es",
"arxiv:2004.05150",
"doi:10.57967/hf/0105",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-10T17:34:46Z |
---
language:
- es
license: mit
widget:
- text: "Aprobada las Cortes Generales en sesiones plenarias del Congreso de los Diputados y del Senado celebradas el de octubre de , la Constitución fue ratificada en referéndum el de diciembre, siendo sancionada y promulgada por el rey Juan Carlos I el de diciembre y publicada en el Boletín Oficial del Estado el de diciembre del mismo año.
La promulgación de la Constitución implicó la culminación de la llamada transición a la democracia, que tuvo lugar como consecuencia de la muerte, el de noviembre de , del anterior jefe de Estado, el dictador Francisco Franco, precipitando una serie de acontecimientos políticos e históricos que transformaron el anterior régimen dictatorial en un «Estado social y democrático de derecho que propugna como valores superiores del ordenamiento jurídico la libertad, la justicia, la igualdad y el pluralismo político», tal y como proclama el artículo primero de la Constitución. En él también se afianza el principio de «soberanía nacional», que «reside en el pueblo español», y se establece «la Monarquía parlamentaria» como forma de gobierno. Deroga, además, en la Disposición Derogatoria, las Leyes Fundamentales del Reino aprobadas en y modificadas en múltiples ocasiones, la última de ellas en precisamente para abrir paso a la <mask>.
«La Constitución se fundamenta en la indisoluble unidad de la Nación española, patria común e indivisible de todos los españoles y reconoce el derecho a la autonomía de las nacionalidades y regiones que la integran» (artículo ). Establece una organización territorial basada «en municipios, en provincias y en las Comunidades Autónomas que se constituyan», rigiendo «la solidaridad entre todas ellas». Tras el proceso de formación del Estado de las Autonomías, las comunidades autónomas gozan de una autonomía de naturaleza política que configura a España como un Estado autonómico.n. Las entidades locales, como los municipios y las provincias, gozan de una autonomía de naturaleza administrativa, y sus instituciones actúan en conformidad con criterios de oportunidad dentro del marco legal fijado por el Estado y las comunidades autónomas.
El rey es el jefe de Estado, símbolo de su unidad y permanencia, arbitra y modera el funcionamiento regular de las instituciones, asume la más alta representación del Estado español en las relaciones internacionales, especialmente con las naciones de su comunidad histórica, y ejerce las funciones que le atribuyen expresamente la Constitución y las leyes. Sus actos tienen una naturaleza reglada, cuya validez depende del refrendo de la autoridad competente que, según el caso, es el presidente del Gobierno, el presidente del Congreso de los Diputados, o un ministro."
- text: "CONSEJO GENERAL DEL PODER JUDICIAL 18485. Acuerdo de 3 de noviembre de 2022, de la Comisión Permanente del Consejo General del Poder Judicial, por el que se convoca concurso para la provisión de puestos de trabajo en la Gerencia del Consejo. En la Gerencia del Consejo General del Poder Judicial se encuentran vacantes dos puestos de subalterno dotados presupuestariamente, con las características que se relacionan en el anexo I de este acuerdo y cuya provisión se considera necesaria en orden a la correcta asunción de las funciones encomendadas a ese órgano técnico. Por ello la Comisión Permanente del Consejo General del Poder Judicial, en su reunión del día de la fecha, ha acordado convocar un concurso de méritos para la cobertura de los citados puestos, de conformidad con lo dispuesto en los artículos 625 y concordantes de la Ley Orgánica 6/1985, de 1 de julio, del Poder Judicial. El concurso de méritos se regirá por las siguientes Normas Primera. Requisitos de participación. 1. Podrán tomar parte en el presente concurso los funcionarios/as pertenecientes a las agrupaciones profesionales a que se refiere la disposición transitoria tercera del Real Decreto Legislativo 5/2015, de 30 de octubre, por el que se aprueba el texto refundido de la Ley del Estatuto Básico del Empleado Público (anterior grupo E del artículo 25 de la Ley 30/1984, de 2 de agosto) o a los cuerpos o escalas de Auxilio Judicial de la Administración de Justicia, de conformidad con el artículo 624 de la Ley Orgánica 6/1985, de 1 de julio, del Poder Judicial, siempre que reúnan las condiciones generales exigidas al puesto de trabajo y los requisitos determinados en esta convocatoria en la fecha en que termine el plazo de presentación de solicitudes. 2. Los funcionarios/as con destino definitivo podrán participar siempre que hayan transcurrido, al menos, dos años desde la toma de posesión del último destino definitivo. No será necesario cumplir este plazo para los funcionarios/as que hayan sido removidos del puesto de trabajo obtenido por el procedimiento de concurso o, también, si ha sido suprimido su puesto de trabajo. Los funcionarios/as con destino definitivo en el Consejo, podrán participar si ha transcurrido, al menos, un año desde su toma de posesión en el último destino definitivo, salvo en el caso de aquellos/as que participen desde un puesto de trabajo con nivel inferior al convocado. 3. Los funcionarios/as en situación administrativa de <mask> en otras administraciones públicas o de excedencia voluntaria por interés particular o por agrupación familiar solo podrán participar en el concurso si en la fecha de finalización del plazo de presentación de solicitudes han transcurrido más de dos años en las indicadas situaciones. En el caso de la primera situación mencionada deberá haber transcurrido asimismo un plazo de dos años desde que obtuvieron su último destino definitivo. 4. Los funcionarios/as en situación de servicios especiales o en excedencia por cuidado de familiares solo podrán participar si en la fecha en que termina el plazo de presentación de solicitudes han transcurrido dos años desde la toma de posesión del último destino definitivo."
- text: "La Constitución española de 1978 es la <mask> suprema del ordenamiento jurídico español."
tags:
- Long documents
- longformer
- robertalex
- spanish
- legal
---
# Legal ⚖️ longformer-base-4096-spanish
`legal-longformer-base-4096` is a BERT-like model started from the RoBERTa checkpoint (**[RoBERTalex](https://huggingface.co/PlanTL-GOB-ES/RoBERTalex)** in this case) and pre-trained for *MLM* on long documents from the [Spanish Legal Domain Corpora](https://zenodo.org/record/5495529/#.Y205lpHMKV5). It supports sequences of length up to **4,096**!
**Longformer** uses a combination of a sliding window (*local*) attention and *global* attention. Global attention is user-configured based on the task to allow the model to learn task-specific representations.
This model was made following the research done by [Iz Beltagy and Matthew E. Peters and Arman Cohan](https://arxiv.org/abs/2004.05150).
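A quick fill-mask sketch, reusing one of the widget examples above:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Narrativa/legal-longformer-base-4096-spanish")
print(fill_mask(
    "La Constitución española de 1978 es la <mask> suprema del ordenamiento jurídico español."
))
```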
## Model (base checkpoint)
[RoBERTalex](https://huggingface.co/PlanTL-GOB-ES/RoBERTalex?)
There are few models trained for the Spanish language. Some of them have been trained on low-resource, unclean corpora. The ones derived from the Spanish National Plan for Language Technologies are proficient at solving several tasks and have been trained on large-scale clean corpora. However, the Spanish legal domain language could be thought of as an independent language of its own. We therefore created a Spanish legal model from scratch, trained exclusively on legal corpora.
## Dataset
[Spanish Legal Domain Corpora](https://zenodo.org/record/5495529)
A collection of corpora of Spanish legal domain.
More legal domain resources: https://github.com/PlanTL-GOB-ES/lm-legal-es
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{narrativa2022legal-longformer-base-4096-spanish,
title={Legal Spanish LongFormer by Narrativa},
author={Romero, Manuel},
publisher={Hugging Face},
journal={Hugging Face Hub},
howpublished={\url{https://huggingface.co/Narrativa/legal-longformer-base-4096-spanish}},
year={2022}
}
```
## Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using any of these models (or systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including those regarding the use of artificial intelligence.
> About Narrativa: Natural Language Generation (NLG) | Gabriele, our machine learning-based platform, builds and deploys natural language solutions. #NLG #AI
|
SayakRana/finetune_hate_speech_improved_v1
|
SayakRana
| 2022-11-13T19:46:29Z | 102 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-13T19:21:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetune_hate_speech_improved_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune_hate_speech_improved_v1
This model is a fine-tuned version of [cross-encoder/ms-marco-electra-base](https://huggingface.co/cross-encoder/ms-marco-electra-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5548
- Accuracy: 0.8277
- F1: 0.8416
- Precision: 0.7883
- Recall: 0.9026
## Model description
More information needed
## Intended uses & limitations
More information needed
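In the meantime, a minimal inference sketch (assuming the checkpoint loads as a standard sequence-classification model; the example input is illustrative and the label names come from the model's config):
```python
from transformers import pipeline

# Hypothetical usage sketch; labels are whatever this model's config defines.
classifier = pipeline(
    "text-classification",
    model="SayakRana/finetune_hate_speech_improved_v1",
)
print(classifier("I really enjoyed reading this article."))
```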
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an approximate `TrainingArguments` equivalent is sketched after the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
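For reference, these settings roughly map onto the following 🤗 Transformers `TrainingArguments`. This is a reconstruction sketched from the list above, not the actual training script, and the `output_dir` name is an assumption:
```python
from transformers import TrainingArguments

# Approximate reconstruction of the listed hyperparameters; dataset loading,
# metrics, and any extra arguments used by the original script are unknown.
training_args = TrainingArguments(
    output_dir="finetune_hate_speech_improved_v1",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",  # Adam betas/epsilon above are the library defaults
)
```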
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
phildav/Reinforce-carpole
|
phildav
| 2022-11-13T19:36:48Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-13T19:09:33Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-carpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 93.60 +/- 30.55
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
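Because the implementation is custom, the snippet below is only an illustrative, hypothetical sketch of the core REINFORCE (Monte-Carlo policy-gradient) loss computation; it is not this repository's actual code:
```python
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    # log_probs: list of log pi(a_t | s_t) tensors collected during one episode
    # rewards:   list of scalar rewards r_t from the same episode
    returns, g = [], 0.0
    for r in reversed(rewards):          # discounted return G_t, computed backwards
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    # Normalizing the returns acts as a simple variance-reducing baseline.
    returns = (returns - returns.mean()) / (returns.std(unbiased=False) + 1e-8)
    # Gradient ascent on expected return == gradient descent on this loss.
    return -(torch.stack(log_probs) * returns).sum()
```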
|
BruceZJC/distilbert-base-uncased-finetuned-squad
|
BruceZJC
| 2022-11-13T18:27:43Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-12T22:01:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7362
## Model description
More information needed
## Intended uses & limitations
More information needed
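In the meantime, a minimal extractive question-answering sketch (assuming the standard 🤗 Transformers pipeline API; the question and context below are illustrative):
```python
from transformers import pipeline

# Hypothetical usage sketch for the fine-tuned QA checkpoint.
qa = pipeline(
    "question-answering",
    model="BruceZJC/distilbert-base-uncased-finetuned-squad",
)
result = qa(
    question="What does the pipeline return?",
    context="The question-answering pipeline returns an answer span extracted from the context.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```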
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7793 | 1.0 | 554 | 1.9337 |
| 1.4469 | 2.0 | 1108 | 1.7193 |
| 1.1585 | 3.0 | 1662 | 1.7362 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
ejarkm/ddpm-butterflies-128
|
ejarkm
| 2022-11-13T18:18:21Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-11-13T17:04:36Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
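In the meantime, a minimal sketch assuming the standard `diffusers` `DDPMPipeline` API (the output file name is an assumption and the snippet has not been verified against this checkpoint):
```python
from diffusers import DDPMPipeline

# Hypothetical usage sketch: load the pipeline and sample one image.
pipeline = DDPMPipeline.from_pretrained("ejarkm/ddpm-butterflies-128")
image = pipeline().images[0]   # runs the full DDPM denoising loop
image.save("ddpm_butterfly_sample.png")
```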
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/ejarkm/ddpm-butterflies-128/tensorboard?#scalars)
|