| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
| rajistics/setfit-model | rajistics | 2022-10-27T23:47:04Z | 2 | 1 | sentence-transformers | [sentence-transformers, pytorch, mpnet, feature-extraction, sentence-similarity, transformers, autotrain_compatible, endpoints_compatible, region:us] | sentence-similarity | 2022-10-27T23:46:48Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
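Because the card mentions clustering and semantic search, here is a minimal sketch of comparing the embeddings with cosine similarity via `sentence_transformers.util`; the corpus and query sentences are placeholders, not taken from the card:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')

# Placeholder corpus and query for illustration
corpus = ["A man is eating food.", "A monkey is playing drums.", "The sky is blue."]
query = "Someone is having a meal."

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and every corpus sentence
scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
best = int(scores.argmax())
print(corpus[best], float(scores[best]))
```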
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 40 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit() method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 40,
"warmup_steps": 4,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
| huggingtweets/sadieyay | huggingtweets | 2022-10-27T23:42:06Z | 105 | 0 | transformers | [transformers, pytorch, gpt2, text-generation, huggingtweets, en, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | text-generation | 2022-10-27T23:21:37Z |
---
language: en
thumbnail: http://www.huggingtweets.com/sadieyay/1666914122057/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1509399260441292800/yttWeCzW_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">sadie</div>
<div style="text-align: center; font-size: 14px;">@sadieyay</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from sadie.
| Data | sadie |
| --- | --- |
| Tweets downloaded | 636 |
| Retweets | 38 |
| Short tweets | 97 |
| Tweets kept | 501 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2reqej16/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sadieyay's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/usyd3rqz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/usyd3rqz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/sadieyay')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| ViktorDo/SciBERT-POWO_Climber_Finetuned | ViktorDo | 2022-10-27T22:39:38Z | 103 | 0 | transformers | [transformers, pytorch, tensorboard, bert, text-classification, generated_from_trainer, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2022-10-27T21:19:57Z |
---
tags:
- generated_from_trainer
model-index:
- name: SciBERT-POWO_Climber_Finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SciBERT-POWO_Climber_Finetuned
This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1086
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
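These settings map onto Hugging Face `TrainingArguments` roughly as follows. This is a hedged sketch, not the original training script: the number of labels, the tokenized datasets, and the commented `Trainer` call are placeholders.
```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Placeholder label count; the card does not state the label set.
model = AutoModelForSequenceClassification.from_pretrained(
    "allenai/scibert_scivocab_uncased", num_labels=2)
tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")

args = TrainingArguments(
    output_dir="SciBERT-POWO_Climber_Finetuned",
    learning_rate=2e-5,              # learning_rate
    per_device_train_batch_size=16,  # train_batch_size
    per_device_eval_batch_size=16,   # eval_batch_size
    seed=42,                         # seed
    lr_scheduler_type="linear",      # lr_scheduler_type
    num_train_epochs=3,              # num_epochs
    fp16=True,                       # mixed_precision_training: Native AMP
)

# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)  # placeholder datasets
# trainer.train()
```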
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1033 | 1.0 | 2133 | 0.1151 |
| 0.0853 | 2.0 | 4266 | 0.1058 |
| 0.0792 | 3.0 | 6399 | 0.1086 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
| andrewzhang505/lunar_lander_example | andrewzhang505 | 2022-10-27T22:35:12Z | 5 | 0 | sample-factory | [sample-factory, deep-reinforcement-learning, reinforcement-learning, model-index, region:us] | reinforcement-learning | 2022-10-27T22:29:42Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- metrics:
- type: mean_reward
value: 93.18 +/- 76.95
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLanderContinuous-v2
type: LunarLanderContinuous-v2
---
An **APPO** model trained on the **LunarLanderContinuous-v2** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
| JamesH/Translation_en_to_fr_project | JamesH | 2022-10-27T21:52:09Z | 4 | 1 | transformers | [transformers, pytorch, autotrain, translation, en, fr, dataset:JamesH/autotrain-data-second-project-en2fr, co2_eq_emissions, endpoints_compatible, region:us] | translation | 2022-10-27T19:57:24Z |
---
tags:
- autotrain
- translation
language:
- en
- fr
datasets:
- JamesH/autotrain-data-second-project-en2fr
co2_eq_emissions:
emissions: 0.6863820434350988
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 1907464829
- CO2 Emissions (in grams): 0.6864
## Validation Metrics
- Loss: 1.117
- SacreBLEU: 16.546
- Gen len: 14.511
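A minimal usage sketch, assuming the checkpoint loads through the standard `transformers` translation pipeline; the example sentence is a placeholder and not part of the original card:
```python
from transformers import pipeline

translator = pipeline("translation", model="JamesH/Translation_en_to_fr_project")
print(translator("How are you today?", max_length=64))
```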
| Aadarsh/bert-finetuned-ner | Aadarsh | 2022-10-27T21:31:02Z | 12 | 0 | transformers | [transformers, pytorch, tensorboard, bert, token-classification, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | token-classification | 2022-10-26T22:08:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1429
- Precision: 0.4954
- Recall: 0.6136
- F1: 0.5482
- Accuracy: 0.9642
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 141 | 0.2894 | 0.4649 | 0.3258 | 0.3831 | 0.9219 |
| No log | 2.0 | 282 | 0.1767 | 0.4706 | 0.4545 | 0.4624 | 0.9487 |
| No log | 3.0 | 423 | 0.1429 | 0.4954 | 0.6136 | 0.5482 | 0.9642 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
| ViktorDo/SciBERT-POWO_Epiphyte_Finetuned | ViktorDo | 2022-10-27T21:10:45Z | 105 | 0 | transformers | [transformers, pytorch, tensorboard, bert, text-classification, generated_from_trainer, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2022-10-27T19:53:27Z |
---
tags:
- generated_from_trainer
model-index:
- name: SciBERT-POWO_Epiphyte_Finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SciBERT-POWO_Epiphyte_Finetuned
This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0898
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0909 | 1.0 | 2063 | 0.0860 |
| 0.0763 | 2.0 | 4126 | 0.1000 |
| 0.0627 | 3.0 | 6189 | 0.0898 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
| andrewzhang505/sf2-lunar-lander | andrewzhang505 | 2022-10-27T19:51:07Z | 2 | 0 | sample-factory | [sample-factory, tensorboard, deep-reinforcement-learning, reinforcement-learning, model-index, region:us] | reinforcement-learning | 2022-10-27T19:50:47Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- metrics:
- type: mean_reward
value: 126.58 +/- 137.36
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLanderContinuous-v2
type: LunarLanderContinuous-v2
---
An **APPO** model trained on the **LunarLanderContinuous-v2** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
| sam34738/roberta-nisha | sam34738 | 2022-10-27T19:29:33Z | 4 | 0 | transformers | [transformers, pytorch, roberta, text-classification, generated_from_trainer, license:mit, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2022-10-27T19:03:16Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-nisha
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-nisha
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3254 | 1.0 | 460 | 0.7247 |
| 0.5791 | 2.0 | 920 | 0.5375 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Tokenizers 0.13.1
| Eleusinian/haladas | Eleusinian | 2022-10-27T18:07:35Z | 0 | 0 | null | [license:unknown, region:us] | null | 2022-10-27T18:00:03Z |
---
license: unknown
---
<div style='display: flex; flex-wrap: wrap; column-gap: 0.75rem;'>
<img src='https://s3.amazonaws.com/moonup/production/uploads/1666893412370-noauth.jpeg' width='400' height='400'>
<img src='https://s3.amazonaws.com/moonup/production/uploads/1666893411703-noauth.jpeg' width='400' height='400'>
<img src='https://s3.amazonaws.com/moonup/production/uploads/1666893411826-noauth.jpeg' width='400' height='400'>
<img src='https://s3.amazonaws.com/moonup/production/uploads/1666893411866-noauth.jpeg' width='400' height='400'>
</div>
| sagteam/rubert-base-cased-mcn | sagteam | 2022-10-27T17:43:35Z | 5 | 0 | transformers | [transformers, pytorch, bert, feature-extraction, ru, endpoints_compatible, region:us] | feature-extraction | 2022-10-19T14:31:23Z |
---
language:
- ru
---
# rubert-base-cased-mcn
A normalization model, based on RuBERT, for linking phrases to their MedDRA concepts in Russian. The micro-averaged F1 of this model is 71.34 on the 4th fold of the RDRS corpus of Russian internet drug reviews.
Code for applying the model weights and reproducing the accuracy figures on the published RDRS corpus is available in [our repository](https://github.com/sag111/MedNorm).
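A minimal sketch of extracting a phrase embedding with this checkpoint through the standard `transformers` API; the choice of the [CLS] vector as the phrase representation and the example phrase are assumptions for illustration, not details taken from this card:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("sagteam/rubert-base-cased-mcn")
model = AutoModel.from_pretrained("sagteam/rubert-base-cased-mcn")

# Placeholder phrase from a drug review ("severe headache")
inputs = tokenizer("сильная головная боль", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Use the [CLS] token representation as the phrase embedding (assumed pooling)
phrase_embedding = outputs.last_hidden_state[:, 0]
print(phrase_embedding.shape)  # torch.Size([1, 768])
```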
| huggingtweets/nft_god-notthreadguy-theehustlehouse | huggingtweets | 2022-10-27T17:34:42Z | 4 | 0 | transformers | [transformers, pytorch, gpt2, text-generation, huggingtweets, en, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | text-generation | 2022-10-27T17:32:37Z |
---
language: en
thumbnail: http://www.huggingtweets.com/nft_god-notthreadguy-theehustlehouse/1666892053641/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1489268127565324291/ZQK5RoFg_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1547080829788196865/Natr1sGX_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1585674726432641025/QGSMO68J_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">NFT God & ThreadGuy.eth 👑 & BaronVonHustle.eth</div>
<div style="text-align: center; font-size: 14px;">@nft_god-notthreadguy-theehustlehouse</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from NFT God & ThreadGuy.eth 👑 & BaronVonHustle.eth.
| Data | NFT God | ThreadGuy.eth 👑 | BaronVonHustle.eth |
| --- | --- | --- | --- |
| Tweets downloaded | 3250 | 3243 | 3245 |
| Retweets | 21 | 392 | 1570 |
| Short tweets | 177 | 1524 | 434 |
| Tweets kept | 3052 | 1327 | 1241 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1vneptk2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nft_god-notthreadguy-theehustlehouse's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/cw93h2tk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/cw93h2tk/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/nft_god-notthreadguy-theehustlehouse')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| Houryy/Houry | Houryy | 2022-10-27T16:27:08Z | 0 | 0 | null | [license:bigscience-openrail-m, region:us] | null | 2022-10-27T16:27:08Z |
---
license: bigscience-openrail-m
---
| OpenMatch/cocodr-base | OpenMatch | 2022-10-27T16:20:16Z | 11 | 0 | transformers | [transformers, pytorch, bert, fill-mask, autotrain_compatible, endpoints_compatible, region:us] | fill-mask | 2022-10-26T05:51:29Z |
---
license: mit
---
This model has been pretrained on the BEIR corpus without relevance-level supervision, following the approach described in the paper **COCO-DR: Combating Distribution Shifts in Zero-Shot Dense Retrieval with Contrastive and Distributionally Robust Learning**. The associated GitHub repository is available at https://github.com/OpenMatch/COCO-DR.
The model uses BERT-base as its backbone, with 110M parameters.
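A minimal sketch of using the checkpoint as a text encoder for dense retrieval with the standard `transformers` API; treating the [CLS] vector as the sequence embedding and scoring by dot product are assumptions for illustration, not details stated in this card:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("OpenMatch/cocodr-base")
model = AutoModel.from_pretrained("OpenMatch/cocodr-base")

query = "what is dense retrieval?"  # placeholder query
passage = "Dense retrieval encodes queries and documents as vectors."  # placeholder passage

with torch.no_grad():
    q = model(**tokenizer(query, return_tensors="pt")).last_hidden_state[:, 0]
    p = model(**tokenizer(passage, return_tensors="pt")).last_hidden_state[:, 0]

# Relevance score as the dot product of the two embeddings
print(torch.matmul(q, p.T).item())
```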
| hagerty7/recyclable-materials-classification | hagerty7 | 2022-10-27T15:54:32Z | 42 | 0 | transformers | [transformers, pytorch, vit, image-classification, autotrain_compatible, endpoints_compatible, region:us] | image-classification | 2022-10-24T15:10:05Z |
ViT for Recyclable Material Classification
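A minimal usage sketch, assuming the checkpoint works with the standard `transformers` image-classification pipeline; the image path is a placeholder:
```python
from transformers import pipeline

classifier = pipeline("image-classification",
                      model="hagerty7/recyclable-materials-classification")
# Placeholder path to a local photo of a recyclable item
print(classifier("path/to/image.jpg"))
```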
| JoAmps/trialzz | JoAmps | 2022-10-27T15:52:17Z | 7 | 0 | transformers | [transformers, pytorch, tensorboard, roberta, fill-mask, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | fill-mask | 2022-10-27T15:35:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: trialzz
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trialzz
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1097
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 113 | 2.2090 |
| No log | 2.0 | 226 | 2.1168 |
| No log | 3.0 | 339 | 2.1097 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.1
- Datasets 2.5.1
- Tokenizers 0.12.1
| mgb-dx-meetup/distilbert-multilingual-finetuned-sentiment | mgb-dx-meetup | 2022-10-27T15:43:10Z | 100 | 0 | transformers | [transformers, pytorch, autotrain, text-classification, unk, dataset:lewtun/autotrain-data-mgb-product-reviews-mbert, co2_eq_emissions, endpoints_compatible, region:us] | text-classification | 2022-10-27T15:34:22Z |
---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- lewtun/autotrain-data-mgb-product-reviews-mbert
co2_eq_emissions:
emissions: 5.523107849339405
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1904564767
- CO2 Emissions (in grams): 5.5231
## Validation Metrics
- Loss: 1.135
- Accuracy: 0.514
- Macro F1: 0.504
- Micro F1: 0.514
- Weighted F1: 0.505
- Macro Precision: 0.506
- Micro Precision: 0.514
- Weighted Precision: 0.507
- Macro Recall: 0.513
- Micro Recall: 0.514
- Weighted Recall: 0.514
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/lewtun/autotrain-mgb-product-reviews-mbert-1904564767
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("lewtun/autotrain-mgb-product-reviews-mbert-1904564767", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("lewtun/autotrain-mgb-product-reviews-mbert-1904564767", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
| Plaban81/vegetable-classifier | Plaban81 | 2022-10-27T15:35:01Z | 110 | 0 | transformers | [transformers, pytorch, tensorboard, vit, image-classification, huggingpics, model-index, autotrain_compatible, endpoints_compatible, region:us] | image-classification | 2022-10-27T15:34:48Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: vegetable-classifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8571428656578064
---
# vegetable-classifier
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Brinjal

#### Cabbage

#### Cauliflower

#### Raddish

#### Tomato

| Sennodipoi/LayoutLMv1-FUNSD-ft | Sennodipoi | 2022-10-27T15:27:32Z | 5 | 0 | transformers | [transformers, pytorch, layoutlm, token-classification, autotrain_compatible, endpoints_compatible, region:us] | token-classification | 2022-08-23T08:10:54Z |
LayoutLMv1 fine-tuned on the FUNSD dataset. Code and results are available at the official GitHub repository of my [Master's degree thesis](https://github.com/AleRosae/thesis-layoutlm).
Results obtained using seqeval in strict mode:
| | Precision | Recall | F1-score | Variance (F1) |
|--------------|-----------|--------|----------|---------------|
| ANSWER | 0.80 | 0.78 | 0.80 | 1e-4 |
| HEADER | 0.62 | 0.47 | 0.53 | 2e-4 |
| QUESTION | 0.85 | 0.71 | 0.83 | 3e-5 |
| Micro avg | 0.83 | 0.77 | 0.81 | 1e-4 |
| Macro avg | 0.77 | 0.56 | 0.72 | 3e-5 |
| Weighted avg | 0.83 | 0.78 | 0.80 | 1e-4 |
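A minimal sketch of loading the checkpoint for token classification; note that LayoutLM also needs token bounding boxes from the document layout, which are crude placeholders here rather than real FUNSD annotations:
```python
import torch
from transformers import AutoTokenizer, LayoutLMForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("Sennodipoi/LayoutLMv1-FUNSD-ft")
model = LayoutLMForTokenClassification.from_pretrained("Sennodipoi/LayoutLMv1-FUNSD-ft")

words = "Name: John"  # placeholder text from a form
encoding = tokenizer(words, return_tensors="pt")

# One (x0, y0, x1, y1) box per token on a 0-1000 scale; a single dummy box is
# repeated here purely for illustration.
seq_len = encoding.input_ids.shape[1]
bbox = torch.tensor([[[60, 50, 180, 70]] * seq_len])

with torch.no_grad():
    logits = model(input_ids=encoding.input_ids, bbox=bbox,
                   attention_mask=encoding.attention_mask).logits
print(logits.argmax(-1))
```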
| alanakbik/test-push-public | alanakbik | 2022-10-27T15:10:07Z | 3 | 0 | flair | [flair, pytorch, token-classification, sequence-tagger-model, region:us] | token-classification | 2022-10-27T15:07:07Z |
---
tags:
- flair
- token-classification
- sequence-tagger-model
---
### Demo: How to use in Flair
Requires:
- **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("alanakbik/test-push-public")
# make example sentence
sentence = Sentence("On September 1st George won 1 dollar while watching Game of Thrones.")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
| leo93/bert-finetuned-ner-30 | leo93 | 2022-10-27T15:03:09Z | 12 | 0 | transformers | [transformers, pytorch, tensorboard, bert, token-classification, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | token-classification | 2022-10-27T13:19:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0453
- Precision: 0.9275
- Recall: 0.9492
- F1: 0.9382
- Accuracy: 0.9934
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 407 | 0.0539 | 0.8283 | 0.8758 | 0.8514 | 0.9866 |
| 0.1524 | 2.0 | 814 | 0.0333 | 0.8931 | 0.9123 | 0.9026 | 0.9915 |
| 0.0381 | 3.0 | 1221 | 0.0345 | 0.8835 | 0.9280 | 0.9052 | 0.9906 |
| 0.0179 | 4.0 | 1628 | 0.0351 | 0.8890 | 0.9361 | 0.9119 | 0.9909 |
| 0.0089 | 5.0 | 2035 | 0.0310 | 0.9102 | 0.9372 | 0.9235 | 0.9924 |
| 0.0089 | 6.0 | 2442 | 0.0344 | 0.9198 | 0.9383 | 0.9289 | 0.9922 |
| 0.0057 | 7.0 | 2849 | 0.0331 | 0.9144 | 0.9448 | 0.9294 | 0.9931 |
| 0.0039 | 8.0 | 3256 | 0.0340 | 0.9144 | 0.9481 | 0.9309 | 0.9928 |
| 0.0027 | 9.0 | 3663 | 0.0423 | 0.9032 | 0.9481 | 0.9251 | 0.9921 |
| 0.0018 | 10.0 | 4070 | 0.0373 | 0.9047 | 0.9507 | 0.9271 | 0.9923 |
| 0.0018 | 11.0 | 4477 | 0.0448 | 0.8932 | 0.9474 | 0.9195 | 0.9910 |
| 0.0014 | 12.0 | 4884 | 0.0380 | 0.9079 | 0.9474 | 0.9272 | 0.9928 |
| 0.0015 | 13.0 | 5291 | 0.0360 | 0.9231 | 0.9474 | 0.9351 | 0.9936 |
| 0.0013 | 14.0 | 5698 | 0.0378 | 0.9243 | 0.9456 | 0.9348 | 0.9935 |
| 0.0013 | 15.0 | 6105 | 0.0414 | 0.9197 | 0.9496 | 0.9344 | 0.9930 |
| 0.0009 | 16.0 | 6512 | 0.0405 | 0.9202 | 0.9478 | 0.9338 | 0.9929 |
| 0.0009 | 17.0 | 6919 | 0.0385 | 0.9305 | 0.9441 | 0.9373 | 0.9933 |
| 0.0006 | 18.0 | 7326 | 0.0407 | 0.9285 | 0.9437 | 0.9360 | 0.9934 |
| 0.0009 | 19.0 | 7733 | 0.0428 | 0.9203 | 0.9488 | 0.9343 | 0.9929 |
| 0.0006 | 20.0 | 8140 | 0.0455 | 0.9232 | 0.9536 | 0.9382 | 0.9928 |
| 0.0004 | 21.0 | 8547 | 0.0462 | 0.9261 | 0.9529 | 0.9393 | 0.9930 |
| 0.0004 | 22.0 | 8954 | 0.0423 | 0.9359 | 0.9492 | 0.9425 | 0.9940 |
| 0.0005 | 23.0 | 9361 | 0.0446 | 0.9180 | 0.9529 | 0.9351 | 0.9931 |
| 0.0005 | 24.0 | 9768 | 0.0430 | 0.9361 | 0.9467 | 0.9413 | 0.9938 |
| 0.0002 | 25.0 | 10175 | 0.0436 | 0.9322 | 0.9496 | 0.9408 | 0.9935 |
| 0.0002 | 26.0 | 10582 | 0.0440 | 0.9275 | 0.9492 | 0.9382 | 0.9935 |
| 0.0002 | 27.0 | 10989 | 0.0450 | 0.9272 | 0.9488 | 0.9379 | 0.9932 |
| 0.0002 | 28.0 | 11396 | 0.0445 | 0.9304 | 0.9470 | 0.9386 | 0.9935 |
| 0.0003 | 29.0 | 11803 | 0.0449 | 0.9278 | 0.9481 | 0.9378 | 0.9934 |
| 0.0001 | 30.0 | 12210 | 0.0453 | 0.9275 | 0.9492 | 0.9382 | 0.9934 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
| pig4431/sst2_bert_3epoch | pig4431 | 2022-10-27T15:01:53Z | 105 | 0 | transformers | [transformers, pytorch, bert, text-classification, generated_from_trainer, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2022-10-27T14:55:30Z |
---
tags:
- generated_from_trainer
model-index:
- name: sst2_bert_3epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sst2_bert_3epoch
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
| Shri3/q-Taxi-v3 | Shri3 | 2022-10-27T14:36:11Z | 0 | 0 | null | [Taxi-v3, q-learning, reinforcement-learning, custom-implementation, model-index, region:us] | reinforcement-learning | 2022-10-27T14:36:05Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# Note: `load_from_hub` and `evaluate_agent` are helper functions from the
# Hugging Face Deep RL course notebook; they are not part of a published package.
model = load_from_hub(repo_id="Shri3/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
| huggingtweets/tykesinties | huggingtweets | 2022-10-27T14:31:37Z | 105 | 0 | transformers | [transformers, pytorch, gpt2, text-generation, huggingtweets, en, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | text-generation | 2022-10-25T19:33:52Z |
---
language: en
thumbnail: http://www.huggingtweets.com/tykesinties/1666881093237/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/917201427583438848/X-zHDjYL_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">RegressCo H.R.</div>
<div style="text-align: center; font-size: 14px;">@tykesinties</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from RegressCo H.R..
| Data | RegressCo H.R. |
| --- | --- |
| Tweets downloaded | 1844 |
| Retweets | 215 |
| Short tweets | 27 |
| Tweets kept | 1602 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2pqqtat7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tykesinties's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1vqh1gov) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1vqh1gov/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/tykesinties')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| yeahrmek/arxiv-math-lean | yeahrmek | 2022-10-27T14:05:48Z | 0 | 0 | null | [region:us] | null | 2022-10-27T12:23:41Z |
This is a BPE tokenizer based on "Salesforce/codegen-350M-mono".
The tokenizer has been trained to treat spaces as part of the tokens (a bit like SentencePiece), so a word is encoded differently depending on whether it appears at the beginning of a sentence (without a preceding space) or not.
We used the ArXiv subset of The Pile dataset and proof steps from the [lean-step-public](https://github.com/jesse-michael-han/lean-step-public) dataset to train the tokenizer.
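A minimal sketch of the space-sensitive behaviour described above, assuming the tokenizer is loaded from this repository with the standard `transformers` API:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("yeahrmek/arxiv-math-lean")

# The same word tokenizes differently with and without a leading space,
# because spaces are treated as part of the tokens.
print(tokenizer.tokenize("theorem"))
print(tokenizer.tokenize(" theorem"))
```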
| OWG/imagegpt-small | OWG | 2022-10-27T13:10:17Z | 0 | 0 | null | [onnx, vision, dataset:imagenet-21k, license:apache-2.0, region:us] | null | 2022-10-27T11:52:39Z |
---
license: apache-2.0
tags:
- vision
datasets:
- imagenet-21k
---
# ImageGPT (small-sized model)
ImageGPT (iGPT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 32x32. It was introduced in the paper [Generative Pretraining from Pixels](https://cdn.openai.com/papers/Generative_Pretraining_from_Pixels_V2.pdf) by Chen et al. and first released in [this repository](https://github.com/openai/image-gpt). See also the official [blog post](https://openai.com/blog/image-gpt/).
## Model description
The ImageGPT (iGPT) is a transformer decoder model (GPT-like) pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 32x32 pixels.
The goal for the model is simply to predict the next pixel value, given the previous ones.
By pre-training the model, it learns an inner representation of images that can then be used to:
- extract features useful for downstream tasks: one can either use ImageGPT to produce fixed image features, in order to train a linear model (like a sklearn logistic regression model or SVM). This is also referred to as "linear probing".
- perform (un)conditional image generation.
## Intended uses & limitations
You can use the raw model either as a feature extractor or for (un)conditional image generation.
### How to use
Here is how to use this model as feature extractor:
```python
from transformers import AutoFeatureExtractor
from onnxruntime import InferenceSession
from datasets import load_dataset
# load image
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
# load model
feature_extractor = AutoFeatureExtractor.from_pretrained("openai/imagegpt-small")
session = InferenceSession("model/model.onnx")
# ONNX Runtime expects NumPy arrays as input
inputs = feature_extractor(image, return_tensors="np")
outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs))
```
Alternatively, you can use the model with the classification head, which returns logits:
```python
from transformers import AutoFeatureExtractor
from onnxruntime import InferenceSession
from datasets import load_dataset
# load image
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
# load model
feature_extractor = AutoFeatureExtractor.from_pretrained("openai/imagegpt-small")
session = InferenceSession("model/model_classification.onnx")
# ONNX Runtime expects NumPy arrays as input
inputs = feature_extractor(image, return_tensors="np")
outputs = session.run(output_names=["logits"], input_feed=dict(inputs))
```
## Original implementation
Follow [this link](https://huggingface.co/openai/imagegpt-small) to see the original implementation.
## Training data
The ImageGPT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes.
## Training procedure
### Preprocessing
Images are first resized/rescaled to the same resolution (32x32) and normalized across the RGB channels. Next, color-clustering is performed. This means that every pixel is turned into one of 512 possible cluster values. This way, one ends up with a sequence of 32x32 = 1024 pixel values, rather than 32x32x3 = 3072, which is prohibitively large for Transformer-based models.
### Pretraining
Training details can be found in section 3.4 of v2 of the paper.
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to the original paper.
### BibTeX entry and citation info
```bibtex
@InProceedings{pmlr-v119-chen20s,
title = {Generative Pretraining From Pixels},
author = {Chen, Mark and Radford, Alec and Child, Rewon and Wu, Jeffrey and Jun, Heewoo and Luan, David and Sutskever, Ilya},
booktitle = {Proceedings of the 37th International Conference on Machine Learning},
pages = {1691--1703},
year = {2020},
editor = {III, Hal Daumé and Singh, Aarti},
volume = {119},
series = {Proceedings of Machine Learning Research},
month = {13--18 Jul},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v119/chen20s/chen20s.pdf},
url = {https://proceedings.mlr.press/v119/chen20s.html}
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
```
| kevinbror/bertbaseuncasedny | kevinbror | 2022-10-27T12:13:45Z | 61 | 0 | transformers | [transformers, tf, bert, question-answering, generated_from_keras_callback, license:apache-2.0, endpoints_compatible, region:us] | question-answering | 2022-10-27T12:13:00Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: bertbaseuncasedny
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bertbaseuncasedny
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3901
- Train End Logits Accuracy: 0.8823
- Train Start Logits Accuracy: 0.8513
- Validation Loss: 1.2123
- Validation End Logits Accuracy: 0.7291
- Validation Start Logits Accuracy: 0.6977
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 29508, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.2597 | 0.6683 | 0.6277 | 1.0151 | 0.7214 | 0.6860 | 0 |
| 0.7699 | 0.7820 | 0.7427 | 1.0062 | 0.7342 | 0.6996 | 1 |
| 0.5343 | 0.8425 | 0.8064 | 1.1162 | 0.7321 | 0.7010 | 2 |
| 0.3901 | 0.8823 | 0.8513 | 1.2123 | 0.7291 | 0.6977 | 3 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
| kosec39/distilbert-base-uncased-finetuned-imdb | kosec39 | 2022-10-27T12:00:24Z | 105 | 0 | transformers | [transformers, pytorch, tensorboard, distilbert, fill-mask, generated_from_trainer, dataset:imdb, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | fill-mask | 2022-10-27T11:31:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4898 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
| Rijgersberg/whisper-small-fy-NL | Rijgersberg | 2022-10-27T08:50:21Z | 9 | 0 | transformers | [transformers, pytorch, tensorboard, whisper, automatic-speech-recognition, generated_from_trainer, license:apache-2.0, endpoints_compatible, region:us] | automatic-speech-recognition | 2022-10-25T22:17:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: whisper-small-fy-NL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-fy-NL
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the [CommonVoice 11 `fy-NL` (West-Frisian)](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/viewer/fy-NL/train) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5276
- Wer: 0.2919
The WER before fine-tuning was 1.0622.
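A minimal usage sketch via the standard `transformers` speech-recognition pipeline; the audio path is a placeholder and the input should be 16 kHz speech:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Rijgersberg/whisper-small-fy-NL")
# Placeholder path to a West-Frisian audio file
print(asr("sample.wav")["text"])
```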
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| | 0 | 0 | | 1.0622|
| 0.9177 | 1.0 | 211 | 0.8145 | 0.3450 |
| 0.5807 | 2.0 | 422 | 0.7113 | 0.3640 |
| 0.2884 | 3.0 | 633 | 0.5276 | 0.2919 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
| arshiya20/epochs-finetuned-squad | arshiya20 | 2022-10-27T07:44:45Z | 4 | 0 | transformers | [transformers, pytorch, tensorboard, distilbert, question-answering, generated_from_trainer, dataset:squad, endpoints_compatible, region:us] | question-answering | 2022-10-27T05:38:23Z |
---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: epochs-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# epochs-finetuned-squad
This model was trained from scratch on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2609
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.7553 | 1.0 | 5533 | 1.2460 |
| 0.739 | 2.0 | 11066 | 1.2609 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
| teacookies/autotrain-27102022-cert-1899564594 | teacookies | 2022-10-27T07:34:21Z | 10 | 0 | transformers | [transformers, pytorch, autotrain, token-classification, unk, dataset:teacookies/autotrain-data-27102022-cert, co2_eq_emissions, endpoints_compatible, region:us] | token-classification | 2022-10-27T07:21:17Z |
---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- teacookies/autotrain-data-27102022-cert
co2_eq_emissions:
emissions: 22.03607609264655
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1899564594
- CO2 Emissions (in grams): 22.0361
## Validation Metrics
- Loss: 0.003
- Accuracy: 0.999
- Precision: 0.981
- Recall: 0.982
- F1: 0.981
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-27102022-cert-1899564594
```
Or Python API:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-27102022-cert-1899564594", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-27102022-cert-1899564594", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
| shawn-nyk/wav2vec-large-xlsr-malayalam-with-lm | shawn-nyk | 2022-10-27T07:27:55Z | 4 | 0 | transformers | [transformers, pytorch, jax, wav2vec2, automatic-speech-recognition, audio, speech, xlsr-fine-tuning-week, ml, license:apache-2.0, model-index, endpoints_compatible, region:us] | automatic-speech-recognition | 2022-10-16T09:44:27Z |
---
language: ml
datasets:
- Indic TTS Malayalam Speech Corpus
- Openslr Malayalam Speech Corpus
- SMC Malayalam Speech Corpus
- IIIT-H Indic Speech Databases
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Malayalam XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Test split of combined dataset using all datasets mentioned above
type: custom
args: ml
metrics:
- name: Test WER
type: wer
value: 28.43
---
# Wav2Vec2-Large-XLSR-53-ml
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on ml (Malayalam) using the [Indic TTS Malayalam Speech Corpus (via Kaggle)](https://www.kaggle.com/kavyamanohar/indic-tts-malayalam-speech-corpus), [Openslr Malayalam Speech Corpus](http://openslr.org/63/), [SMC Malayalam Speech Corpus](https://blog.smc.org.in/malayalam-speech-corpus/) and [IIIT-H Indic Speech Databases](http://speech.iiit.ac.in/index.php/research-svl/69.html). The notebooks used to train the model are available [here](https://github.com/gauthamsuresh09/wav2vec2-large-xlsr-53-malayalam/). When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = <load-test-split-of-combined-dataset> # Details on loading this dataset in the evaluation section
processor = Wav2Vec2Processor.from_pretrained("gvs/wav2vec2-large-xlsr-malayalam")
model = Wav2Vec2ForCTC.from_pretrained("gvs/wav2vec2-large-xlsr-malayalam")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"])
```
## Evaluation
The model can be evaluated as follows on the test data of combined custom dataset. For more details on dataset preparation, check the notebooks mentioned at the end of this file.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
import datasets  # provides datasets.concatenate_datasets used below
from pathlib import Path
# The custom dataset needs to be created using notebook mentioned at the end of this file
data_dir = Path('<path-to-custom-dataset>')
dataset_folders = {
'iiit': 'iiit_mal_abi',
'openslr': 'openslr',
'indic-tts': 'indic-tts-ml',
'msc-reviewed': 'msc-reviewed-speech-v1.0+20200825',
}
# Set directories for datasets
openslr_male_dir = data_dir / dataset_folders['openslr'] / 'male'
openslr_female_dir = data_dir / dataset_folders['openslr'] / 'female'
iiit_dir = data_dir / dataset_folders['iiit']
indic_tts_male_dir = data_dir / dataset_folders['indic-tts'] / 'male'
indic_tts_female_dir = data_dir / dataset_folders['indic-tts'] / 'female'
msc_reviewed_dir = data_dir / dataset_folders['msc-reviewed']
# Load the datasets
openslr_male = load_dataset("json", data_files=[f"{str(openslr_male_dir.absolute())}/sample_{i}.json" for i in range(2023)], split="train")
openslr_female = load_dataset("json", data_files=[f"{str(openslr_female_dir.absolute())}/sample_{i}.json" for i in range(2103)], split="train")
iiit = load_dataset("json", data_files=[f"{str(iiit_dir.absolute())}/sample_{i}.json" for i in range(1000)], split="train")
indic_tts_male = load_dataset("json", data_files=[f"{str(indic_tts_male_dir.absolute())}/sample_{i}.json" for i in range(5649)], split="train")
indic_tts_female = load_dataset("json", data_files=[f"{str(indic_tts_female_dir.absolute())}/sample_{i}.json" for i in range(2950)], split="train")
msc_reviewed = load_dataset("json", data_files=[f"{str(msc_reviewed_dir.absolute())}/sample_{i}.json" for i in range(1541)], split="train")
# Create test split as 20%, set random seed as well.
test_size = 0.2
random_seed=1
openslr_male_splits = openslr_male.train_test_split(test_size=test_size, seed=random_seed)
openslr_female_splits = openslr_female.train_test_split(test_size=test_size, seed=random_seed)
iiit_splits = iiit.train_test_split(test_size=test_size, seed=random_seed)
indic_tts_male_splits = indic_tts_male.train_test_split(test_size=test_size, seed=random_seed)
indic_tts_female_splits = indic_tts_female.train_test_split(test_size=test_size, seed=random_seed)
msc_reviewed_splits = msc_reviewed.train_test_split(test_size=test_size, seed=random_seed)
# Get combined test dataset
split_list = [openslr_male_splits, openslr_female_splits, indic_tts_male_splits, indic_tts_female_splits, msc_reviewed_splits, iiit_splits]
test_dataset = datasets.concatenate_datasets([split['test'] for split in split_list])
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("gvs/wav2vec2-large-xlsr-malayalam")
model = Wav2Vec2ForCTC.from_pretrained("gvs/wav2vec2-large-xlsr-malayalam")
model.to("cuda")
resamplers = {
48000: torchaudio.transforms.Resample(48_000, 16_000),
}
# Characters to strip from the reference transcriptions (punctuation and stray Latin characters)
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�Utrnle\\_]'
# Strip the left-to-right mark (U+200E) that appears in some transcriptions
unicode_ignore_regex = r'[\u200e]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"])
    batch["sentence"] = re.sub(unicode_ignore_regex, '', batch["sentence"])
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    # Resample if the audio is not already 16kHz
    if sampling_rate != 16000:
        batch["speech"] = resamplers[sampling_rate](speech_array).squeeze().numpy()
    else:
        batch["speech"] = speech_array.squeeze().numpy()
    # If more than one channel is present, pick the first one
    if batch["speech"].ndim > 1:
        batch["speech"] = batch["speech"][0]
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the test set and collect the predicted transcriptions
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result (WER)**: 28.43 %
## Training
A combined dataset was created using [Indic TTS Malayalam Speech Corpus (via Kaggle)](https://www.kaggle.com/kavyamanohar/indic-tts-malayalam-speech-corpus), [Openslr Malayalam Speech Corpus](http://openslr.org/63/), [SMC Malayalam Speech Corpus](https://blog.smc.org.in/malayalam-speech-corpus/) and [IIIT-H Indic Speech Databases](http://speech.iiit.ac.in/index.php/research-svl/69.html). The datasets were downloaded and converted to the HF Dataset format using [this notebook](https://github.com/gauthamsuresh09/wav2vec2-large-xlsr-53-malayalam/blob/main/make_hf_dataset.ipynb)
The notebook used for training and evaluation can be found [here](https://github.com/gauthamsuresh09/wav2vec2-large-xlsr-53-malayalam/blob/main/fine-tune-xlsr-wav2vec2-on-malayalam-asr-with-transformers_v2.ipynb)
|
teacookies/autotrain-27102022-cert1-1899464570
|
teacookies
| 2022-10-27T06:29:42Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"token-classification",
"unk",
"dataset:teacookies/autotrain-data-27102022-cert1",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-27T06:19:22Z |
---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- teacookies/autotrain-data-27102022-cert1
co2_eq_emissions:
emissions: 16.254745105263574
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1899464570
- CO2 Emissions (in grams): 16.2547
## Validation Metrics
- Loss: 0.004
- Accuracy: 0.999
- Precision: 0.972
- Recall: 0.979
- F1: 0.975
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-27102022-cert1-1899464570
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-27102022-cert1-1899464570", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-27102022-cert1-1899464570", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
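The snippet above stops at the raw model outputs. As a rough, hypothetical continuation (not part of the original card), the token-classification logits can be mapped back to entity labels via the model's `id2label` config:
```
predicted_label_ids = outputs.logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
labels = [model.config.id2label[int(i)] for i in predicted_label_ids]
print(list(zip(tokens, labels)))  # per-token entity predictions
```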
|
huggingtweets/ferret_gf
|
huggingtweets
| 2022-10-27T06:27:00Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-27T06:26:17Z |
---
language: en
thumbnail: http://www.huggingtweets.com/ferret_gf/1666852015981/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1583569492789153799/vJ1FEmHw_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">alex</div>
<div style="text-align: center; font-size: 14px;">@ferret_gf</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from alex.
| Data | alex |
| --- | --- |
| Tweets downloaded | 703 |
| Retweets | 163 |
| Short tweets | 183 |
| Tweets kept | 357 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/95pl7wzb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ferret_gf's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2k6rhew5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2k6rhew5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ferret_gf')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/daymoded-menthalovely-scolopendridaes
|
huggingtweets
| 2022-10-27T05:43:26Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-27T05:26:45Z |
---
language: en
thumbnail: http://www.huggingtweets.com/daymoded-menthalovely-scolopendridaes/1666849354903/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1541285406531956736/T36HqJWY_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1576010406446907395/cXmkdxpb_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1576595483157749760/GgLl95Ug_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">meri & Mentha & 𓆣</div>
<div style="text-align: center; font-size: 14px;">@daymoded-menthalovely-scolopendridaes</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from meri & Mentha & 𓆣.
| Data | meri | Mentha | 𓆣 |
| --- | --- | --- | --- |
| Tweets downloaded | 3208 | 3203 | 646 |
| Retweets | 595 | 1723 | 407 |
| Short tweets | 560 | 449 | 131 |
| Tweets kept | 2053 | 1031 | 108 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2ervd3sj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @daymoded-menthalovely-scolopendridaes's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/28d01du3) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/28d01du3/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/daymoded-menthalovely-scolopendridaes')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Negs/ddpm-butterflies-128
|
Negs
| 2022-10-27T04:07:05Z | 4 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-10-27T02:51:00Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
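# The TODO above has not been filled in by the card author. Below is a minimal,
# hypothetical sketch (an assumption, not from the original card) using the
# standard diffusers DDPMPipeline API:
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("Negs/ddpm-butterflies-128")
image = pipeline().images[0]  # sample one 128x128 butterfly image
image.save("ddpm_butterfly_sample.png")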
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/Negs/ddpm-butterflies-128/tensorboard?#scalars)
|
huggingtweets/schizo_freq
|
huggingtweets
| 2022-10-27T03:52:41Z | 105 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-09T17:50:33Z |
---
language: en
thumbnail: http://www.huggingtweets.com/schizo_freq/1666842754202/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1582126821025382400/PZjx83du_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Lukas (computer)</div>
<div style="text-align: center; font-size: 14px;">@schizo_freq</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Lukas (computer).
| Data | Lukas (computer) |
| --- | --- |
| Tweets downloaded | 3234 |
| Retweets | 481 |
| Short tweets | 324 |
| Tweets kept | 2429 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/11autkzl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @schizo_freq's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2km4y95n) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2km4y95n/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/schizo_freq')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Alex-VisTas/swin-tiny-patch4-window7-224-finetuned-woody_130epochs
|
Alex-VisTas
| 2022-10-27T03:11:10Z | 49 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-10-26T14:13:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-woody_130epochs
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8921212121212121
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-woody_130epochs
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4550
- Accuracy: 0.8921
## Model description
More information needed
## Intended uses & limitations
More information needed
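No usage example is provided in the original card; as a minimal sketch, assuming the standard 🤗 Transformers image-classification pipeline works with this checkpoint (the image path below is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Alex-VisTas/swin-tiny-patch4-window7-224-finetuned-woody_130epochs",
)
print(classifier("example_image.jpg"))  # top predicted labels with confidence scores
```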
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 130
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6694 | 1.0 | 58 | 0.6370 | 0.6594 |
| 0.6072 | 2.0 | 116 | 0.5813 | 0.7030 |
| 0.6048 | 3.0 | 174 | 0.5646 | 0.7030 |
| 0.5849 | 4.0 | 232 | 0.5778 | 0.6970 |
| 0.5671 | 5.0 | 290 | 0.5394 | 0.7236 |
| 0.5575 | 6.0 | 348 | 0.5212 | 0.7382 |
| 0.568 | 7.0 | 406 | 0.5218 | 0.7358 |
| 0.5607 | 8.0 | 464 | 0.5183 | 0.7527 |
| 0.5351 | 9.0 | 522 | 0.5138 | 0.7467 |
| 0.5459 | 10.0 | 580 | 0.5290 | 0.7394 |
| 0.5454 | 11.0 | 638 | 0.5212 | 0.7345 |
| 0.5291 | 12.0 | 696 | 0.5130 | 0.7576 |
| 0.5378 | 13.0 | 754 | 0.5372 | 0.7503 |
| 0.5264 | 14.0 | 812 | 0.6089 | 0.6861 |
| 0.4909 | 15.0 | 870 | 0.4852 | 0.7636 |
| 0.5591 | 16.0 | 928 | 0.4817 | 0.76 |
| 0.4966 | 17.0 | 986 | 0.5673 | 0.6933 |
| 0.4988 | 18.0 | 1044 | 0.5131 | 0.7418 |
| 0.5339 | 19.0 | 1102 | 0.4998 | 0.7394 |
| 0.4804 | 20.0 | 1160 | 0.4655 | 0.7733 |
| 0.503 | 21.0 | 1218 | 0.4554 | 0.7685 |
| 0.4859 | 22.0 | 1276 | 0.4713 | 0.7770 |
| 0.504 | 23.0 | 1334 | 0.4545 | 0.7721 |
| 0.478 | 24.0 | 1392 | 0.4658 | 0.7830 |
| 0.4759 | 25.0 | 1450 | 0.4365 | 0.8012 |
| 0.4686 | 26.0 | 1508 | 0.4452 | 0.7855 |
| 0.4668 | 27.0 | 1566 | 0.4427 | 0.7879 |
| 0.4615 | 28.0 | 1624 | 0.4439 | 0.7685 |
| 0.4588 | 29.0 | 1682 | 0.4378 | 0.7830 |
| 0.4588 | 30.0 | 1740 | 0.4229 | 0.7988 |
| 0.4296 | 31.0 | 1798 | 0.4188 | 0.7976 |
| 0.4208 | 32.0 | 1856 | 0.4316 | 0.7891 |
| 0.4481 | 33.0 | 1914 | 0.4331 | 0.7891 |
| 0.4253 | 34.0 | 1972 | 0.4524 | 0.7879 |
| 0.4117 | 35.0 | 2030 | 0.4570 | 0.7952 |
| 0.4405 | 36.0 | 2088 | 0.4307 | 0.7927 |
| 0.4154 | 37.0 | 2146 | 0.4257 | 0.8024 |
| 0.3962 | 38.0 | 2204 | 0.5077 | 0.7818 |
| 0.414 | 39.0 | 2262 | 0.4602 | 0.8012 |
| 0.3937 | 40.0 | 2320 | 0.4741 | 0.7770 |
| 0.4186 | 41.0 | 2378 | 0.4250 | 0.8 |
| 0.4076 | 42.0 | 2436 | 0.4353 | 0.7988 |
| 0.3777 | 43.0 | 2494 | 0.4442 | 0.7879 |
| 0.3968 | 44.0 | 2552 | 0.4525 | 0.7879 |
| 0.377 | 45.0 | 2610 | 0.4198 | 0.7988 |
| 0.378 | 46.0 | 2668 | 0.4297 | 0.8097 |
| 0.3675 | 47.0 | 2726 | 0.4435 | 0.8085 |
| 0.3562 | 48.0 | 2784 | 0.4477 | 0.7952 |
| 0.381 | 49.0 | 2842 | 0.4206 | 0.8255 |
| 0.3603 | 50.0 | 2900 | 0.4136 | 0.8109 |
| 0.3331 | 51.0 | 2958 | 0.4141 | 0.8230 |
| 0.3471 | 52.0 | 3016 | 0.4253 | 0.8109 |
| 0.346 | 53.0 | 3074 | 0.5203 | 0.8048 |
| 0.3481 | 54.0 | 3132 | 0.4288 | 0.8242 |
| 0.3411 | 55.0 | 3190 | 0.4416 | 0.8194 |
| 0.3275 | 56.0 | 3248 | 0.4149 | 0.8291 |
| 0.3067 | 57.0 | 3306 | 0.4623 | 0.8218 |
| 0.3166 | 58.0 | 3364 | 0.4432 | 0.8255 |
| 0.3294 | 59.0 | 3422 | 0.4599 | 0.8267 |
| 0.3146 | 60.0 | 3480 | 0.4266 | 0.8291 |
| 0.3091 | 61.0 | 3538 | 0.4318 | 0.8315 |
| 0.3277 | 62.0 | 3596 | 0.4252 | 0.8242 |
| 0.296 | 63.0 | 3654 | 0.4332 | 0.8436 |
| 0.3241 | 64.0 | 3712 | 0.4729 | 0.8194 |
| 0.3104 | 65.0 | 3770 | 0.4228 | 0.8448 |
| 0.2878 | 66.0 | 3828 | 0.4173 | 0.8388 |
| 0.265 | 67.0 | 3886 | 0.4210 | 0.8497 |
| 0.3011 | 68.0 | 3944 | 0.4276 | 0.8436 |
| 0.2861 | 69.0 | 4002 | 0.4923 | 0.8315 |
| 0.2994 | 70.0 | 4060 | 0.4472 | 0.8182 |
| 0.276 | 71.0 | 4118 | 0.4541 | 0.8315 |
| 0.2796 | 72.0 | 4176 | 0.4218 | 0.8521 |
| 0.2727 | 73.0 | 4234 | 0.4053 | 0.8448 |
| 0.255 | 74.0 | 4292 | 0.4356 | 0.8376 |
| 0.276 | 75.0 | 4350 | 0.4193 | 0.8436 |
| 0.261 | 76.0 | 4408 | 0.4484 | 0.8533 |
| 0.2416 | 77.0 | 4466 | 0.4722 | 0.8194 |
| 0.2602 | 78.0 | 4524 | 0.4431 | 0.8533 |
| 0.2591 | 79.0 | 4582 | 0.4269 | 0.8606 |
| 0.2613 | 80.0 | 4640 | 0.4335 | 0.8485 |
| 0.2555 | 81.0 | 4698 | 0.4269 | 0.8594 |
| 0.2832 | 82.0 | 4756 | 0.3968 | 0.8715 |
| 0.264 | 83.0 | 4814 | 0.4173 | 0.8703 |
| 0.2462 | 84.0 | 4872 | 0.4150 | 0.8606 |
| 0.2424 | 85.0 | 4930 | 0.4377 | 0.8630 |
| 0.2574 | 86.0 | 4988 | 0.4120 | 0.8679 |
| 0.2273 | 87.0 | 5046 | 0.4393 | 0.8533 |
| 0.2334 | 88.0 | 5104 | 0.4366 | 0.8630 |
| 0.2258 | 89.0 | 5162 | 0.4189 | 0.8630 |
| 0.2153 | 90.0 | 5220 | 0.4474 | 0.8630 |
| 0.2462 | 91.0 | 5278 | 0.4362 | 0.8642 |
| 0.2356 | 92.0 | 5336 | 0.4454 | 0.8715 |
| 0.2019 | 93.0 | 5394 | 0.4413 | 0.88 |
| 0.209 | 94.0 | 5452 | 0.4410 | 0.8703 |
| 0.2201 | 95.0 | 5510 | 0.4323 | 0.8691 |
| 0.2245 | 96.0 | 5568 | 0.4999 | 0.8618 |
| 0.2178 | 97.0 | 5626 | 0.4612 | 0.8655 |
| 0.2163 | 98.0 | 5684 | 0.4340 | 0.8703 |
| 0.2228 | 99.0 | 5742 | 0.4504 | 0.8788 |
| 0.2151 | 100.0 | 5800 | 0.4602 | 0.8703 |
| 0.1988 | 101.0 | 5858 | 0.4414 | 0.8812 |
| 0.2227 | 102.0 | 5916 | 0.4392 | 0.8824 |
| 0.1772 | 103.0 | 5974 | 0.5069 | 0.8630 |
| 0.2199 | 104.0 | 6032 | 0.4648 | 0.8667 |
| 0.1936 | 105.0 | 6090 | 0.4806 | 0.8691 |
| 0.199 | 106.0 | 6148 | 0.4569 | 0.8764 |
| 0.2149 | 107.0 | 6206 | 0.4445 | 0.8739 |
| 0.1917 | 108.0 | 6264 | 0.4444 | 0.8727 |
| 0.201 | 109.0 | 6322 | 0.4594 | 0.8727 |
| 0.1938 | 110.0 | 6380 | 0.4564 | 0.8764 |
| 0.1977 | 111.0 | 6438 | 0.4398 | 0.8739 |
| 0.1776 | 112.0 | 6496 | 0.4356 | 0.88 |
| 0.1939 | 113.0 | 6554 | 0.4412 | 0.8848 |
| 0.178 | 114.0 | 6612 | 0.4373 | 0.88 |
| 0.1926 | 115.0 | 6670 | 0.4508 | 0.8812 |
| 0.1979 | 116.0 | 6728 | 0.4477 | 0.8848 |
| 0.1958 | 117.0 | 6786 | 0.4488 | 0.8897 |
| 0.189 | 118.0 | 6844 | 0.4553 | 0.8836 |
| 0.1838 | 119.0 | 6902 | 0.4605 | 0.8848 |
| 0.1755 | 120.0 | 6960 | 0.4463 | 0.8836 |
| 0.1958 | 121.0 | 7018 | 0.4474 | 0.8861 |
| 0.1857 | 122.0 | 7076 | 0.4550 | 0.8921 |
| 0.1466 | 123.0 | 7134 | 0.4494 | 0.8885 |
| 0.1751 | 124.0 | 7192 | 0.4560 | 0.8873 |
| 0.175 | 125.0 | 7250 | 0.4383 | 0.8897 |
| 0.207 | 126.0 | 7308 | 0.4601 | 0.8873 |
| 0.1756 | 127.0 | 7366 | 0.4425 | 0.8897 |
| 0.1695 | 128.0 | 7424 | 0.4533 | 0.8909 |
| 0.1873 | 129.0 | 7482 | 0.4510 | 0.8897 |
| 0.1726 | 130.0 | 7540 | 0.4463 | 0.8909 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
bishalbaaniya/opus-mt-en-ro-finetuned-en-to-ro
|
bishalbaaniya
| 2022-10-27T01:37:52Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-27T00:03:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: opus-mt-en-ro-finetuned-en-to-ro
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
config: ro-en
split: train
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 28.1505
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ro-finetuned-en-to-ro
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2886
- Bleu: 28.1505
- Gen Len: 34.1036
## Model description
More information needed
## Intended uses & limitations
More information needed
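The original card does not include an inference example; a minimal sketch, assuming the standard 🤗 Transformers translation pipeline:
```python
from transformers import pipeline

translator = pipeline(
    "translation_en_to_ro",
    model="bishalbaaniya/opus-mt-en-ro-finetuned-en-to-ro",
)
# Translate an English sentence into Romanian
print(translator("The weather is nice today.")[0]["translation_text"])
```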
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.7437 | 1.0 | 38145 | 1.2886 | 28.1505 | 34.1036 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
CharlieP/t5-small-nlpfinalproject-xsum
|
CharlieP
| 2022-10-27T00:12:48Z | 9 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-26T15:42:09Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: CharlieP/t5-small-nlpfinalproject-xsum
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# CharlieP/t5-small-nlpfinalproject-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.2391
- Validation Loss: 3.0511
- Train Rouge1: 21.2434
- Train Rouge2: 4.0808
- Train Rougel: 16.6836
- Train Rougelsum: 16.6460
- Train Gen Len: 18.42
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
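The card does not show how to run the model; a minimal sketch, assuming the TensorFlow weights and tokenizer in this repository load with the standard `transformers` auto classes (if no tokenizer was pushed, the base `t5-small` tokenizer can be substituted):
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("CharlieP/t5-small-nlpfinalproject-xsum")
model = TFAutoModelForSeq2SeqLM.from_pretrained("CharlieP/t5-small-nlpfinalproject-xsum")

# T5 expects a task prefix; the article text below is a placeholder
inputs = tokenizer("summarize: " + "The article text to be summarized goes here.", return_tensors="tf")
summary_ids = model.generate(inputs["input_ids"], max_length=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```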
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 3.8204 | 3.2757 | 18.2829 | 2.7616 | 14.7101 | 14.7047 | 18.59 | 0 |
| 3.4646 | 3.1560 | 20.4371 | 3.6903 | 16.0587 | 16.0790 | 18.35 | 1 |
| 3.3630 | 3.1028 | 20.7907 | 3.9282 | 15.9696 | 15.8916 | 18.42 | 2 |
| 3.2904 | 3.0713 | 21.6980 | 4.3218 | 16.7261 | 16.6776 | 18.42 | 3 |
| 3.2391 | 3.0511 | 21.2434 | 4.0808 | 16.6836 | 16.6460 | 18.42 | 4 |
### Framework versions
- Transformers 4.23.1
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.1
|
sd-concepts-library/anime-background-style
|
sd-concepts-library
| 2022-10-26T23:48:27Z | 0 | 7 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-26T23:39:03Z |
---
license: mit
---
### Anime Background Style on Stable Diffusion
This is the `<anime-background-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
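Alternatively, as a rough sketch not taken from the original card, newer versions of 🤗 Diffusers can load the learned embedding directly; the base checkpoint below is an assumption, and any compatible Stable Diffusion v1 model should work:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/anime-background-style")
image = pipe("a quiet street at dusk in the <anime-background-style> style").images[0]
image.save("anime_background.png")
```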
Here is the new concept you will be able to use as a `style`:










Here are images generated with this style:



This style does not produce good results as most of the training images were too small. I'll likely train it again with bigger ones.
|
huggingtweets/gretathotburg
|
huggingtweets
| 2022-10-26T23:34:18Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-26T16:17:41Z |
---
language: en
thumbnail: http://www.huggingtweets.com/gretathotburg/1666827253516/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1551255816992350210/yjE--1UN_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">cathy</div>
<div style="text-align: center; font-size: 14px;">@gretathotburg</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from cathy.
| Data | cathy |
| --- | --- |
| Tweets downloaded | 1108 |
| Retweets | 257 |
| Short tweets | 362 |
| Tweets kept | 489 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2j0c2wea/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gretathotburg's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2d4e53sz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2d4e53sz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gretathotburg')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/unormal
|
huggingtweets
| 2022-10-26T23:12:43Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-26T23:12:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/unormal/1666825958784/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1214437142149160960/LjmOMDT3_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">🐔 Brian Bucklew 🐔 ₑͤ>∿<ₑͤ ∞🌮</div>
<div style="text-align: center; font-size: 14px;">@unormal</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 🐔 Brian Bucklew 🐔 ₑͤ>∿<ₑͤ ∞🌮.
| Data | 🐔 Brian Bucklew 🐔 ₑͤ>∿<ₑͤ ∞🌮 |
| --- | --- |
| Tweets downloaded | 3238 |
| Retweets | 580 |
| Short tweets | 870 |
| Tweets kept | 1788 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3lbbzcxv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @unormal's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2uyfuin5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2uyfuin5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/unormal')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/_a_bat
|
huggingtweets
| 2022-10-26T23:12:22Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-26T23:08:36Z |
---
language: en
thumbnail: http://www.huggingtweets.com/_a_bat/1666825888934/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/2415729722/9rhiyt5scbbzagfdxrx2_400x400.gif')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Taw - version2.bat</div>
<div style="text-align: center; font-size: 14px;">@_a_bat</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Taw - version2.bat.
| Data | Taw - version2.bat |
| --- | --- |
| Tweets downloaded | 3247 |
| Retweets | 336 |
| Short tweets | 258 |
| Tweets kept | 2653 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2fdjcy6g/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @_a_bat's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/n2exl5h2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/n2exl5h2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/_a_bat')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
muzafferenes/workshop
|
muzafferenes
| 2022-10-26T22:54:26Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-10-15T09:13:48Z |
architectural section drawing 3d parametric design for elderly people living with courtyards in between two forms with small bridges
|
musika/musika_misc
|
musika
| 2022-10-26T22:48:07Z | 0 | 1 | null |
[
"audio",
"music",
"generation",
"tensorflow",
"arxiv:2208.08706",
"license:mit",
"region:us"
] | null | 2022-10-26T22:46:21Z |
---
license: mit
tags:
- audio
- music
- generation
- tensorflow
---
# Musika Model: musika_misc
## Model provided by: marcop
Pretrained musika_misc model for the [Musika system](https://github.com/marcoppasini/musika) for fast infinite waveform music generation.
Introduced in [this paper](https://arxiv.org/abs/2208.08706).
## How to use
You can generate music from this pretrained musika_misc model using the notebook available [here](https://colab.research.google.com/drive/1HJWliBXPi-Xlx3gY8cjFI5-xaZgrTD7r).
### Model description
This pretrained GAN system consists of a ResNet-style generator and discriminator. During training, stability is controlled by adapting the strength of gradient penalty regularization on-the-fly. The gradient penalty weighting term is contained in *switch.npy*. The generator is conditioned on a latent coordinate system to produce samples of arbitrary length. The latent representations produced by the generator are then passed to a decoder which converts them into waveform audio.
The generator has a context window of about 12 seconds of audio.
|
musika/musika_techno
|
musika
| 2022-10-26T22:45:22Z | 0 | 1 | null |
[
"audio",
"music",
"generation",
"tensorflow",
"arxiv:2208.08706",
"license:mit",
"region:us"
] | null | 2022-10-26T22:40:50Z |
---
license: mit
tags:
- audio
- music
- generation
- tensorflow
---
# Musika Model: musika_techno
## Model provided by: marcop
Pretrained musika_techno model for the [Musika system](https://github.com/marcoppasini/musika) for fast infinite waveform music generation.
Introduced in [this paper](https://arxiv.org/abs/2208.08706).
## How to use
You can generate music from this pretrained musika_techno model using the notebook available [here](https://colab.research.google.com/drive/1HJWliBXPi-Xlx3gY8cjFI5-xaZgrTD7r?usp=sharing).
### Model description
This pretrained GAN system consists of a ResNet-style generator and discriminator. During training, stability is controlled by adapting the strength of gradient penalty regularization on-the-fly. The gradient penalty weighting term is contained in *switch.npy*. The generator is conditioned on a latent coordinate system to produce samples of arbitrary length. The latent representations produced by the generator are then passed to a decoder which converts them into waveform audio.
The generator has a context window of about 12 seconds of audio.
|
sd-concepts-library/kentaro-miura
|
sd-concepts-library
| 2022-10-26T22:24:04Z | 0 | 2 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-26T22:23:57Z |
---
license: mit
---
### Kentaro Miura on Stable Diffusion
This is the `<kentaro-miura>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:





|
JoAmps/littledatasets
|
JoAmps
| 2022-10-26T22:20:47Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-10-26T22:05:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: littledatasets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# littledatasets
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
## Model description
More information needed
## Intended uses & limitations
More information needed
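As a minimal usage sketch (not part of the original card), assuming the standard fill-mask pipeline works with this distilroberta-based checkpoint:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="JoAmps/littledatasets")
# RoBERTa-style models use "<mask>" as the mask token
print(fill_mask("The goal of this project is to <mask> the data."))
```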
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 85 | 0.0053 |
| No log | 2.0 | 170 | 0.0002 |
| No log | 3.0 | 255 | 0.0001 |
| No log | 4.0 | 340 | 0.0001 |
| No log | 5.0 | 425 | 0.0001 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.1
- Datasets 2.5.1
- Tokenizers 0.12.1
|
huggingtweets/the_boolaidman
|
huggingtweets
| 2022-10-26T21:55:47Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-26T17:15:53Z |
---
language: en
thumbnail: http://www.huggingtweets.com/the_boolaidman/1666821342474/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1528444052034789378/E1BRWZyE_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">theboghog</div>
<div style="text-align: center; font-size: 14px;">@the_boolaidman</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from theboghog.
| Data | theboghog |
| --- | --- |
| Tweets downloaded | 184 |
| Retweets | 44 |
| Short tweets | 32 |
| Tweets kept | 108 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/lez3uo4l/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @the_boolaidman's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/34ufbard) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/34ufbard/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/the_boolaidman')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/big___oven-schizo_freq
|
huggingtweets
| 2022-10-26T21:50:36Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-26T17:42:08Z |
---
language: en
thumbnail: http://www.huggingtweets.com/big___oven-schizo_freq/1666821031327/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1571653458972794884/eaxhUsib_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1582126821025382400/PZjx83du_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">oskcar & Lukas (computer)</div>
<div style="text-align: center; font-size: 14px;">@big___oven-schizo_freq</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from oskcar & Lukas (computer).
| Data | oskcar | Lukas (computer) |
| --- | --- | --- |
| Tweets downloaded | 2642 | 3234 |
| Retweets | 605 | 480 |
| Short tweets | 325 | 326 |
| Tweets kept | 1712 | 2428 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/t7nn481m/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @big___oven-schizo_freq's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3ljhfklh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3ljhfklh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/big___oven-schizo_freq')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
andrewzhang505/doom_test
|
andrewzhang505
| 2022-10-26T20:56:17Z | 1 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"region:us"
] |
reinforcement-learning
| 2022-10-26T20:54:41Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
---
An **APPO** model trained on the **doom_deathmatch_bots** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
Kristijan/gpt2_wt103-40m_12-layer
|
Kristijan
| 2022-10-26T20:55:16Z | 3 | 0 |
pytorch
|
[
"pytorch",
"gpt2",
"language-model",
"transformer",
"wikitext-103",
"en",
"arxiv:2210.13569",
"model-index",
"region:us"
] | null | 2022-10-26T17:46:18Z |
---
language:
- en
library_name: pytorch
tags:
- language-model
- gpt2
- transformer
- wikitext-103
model-index:
- name: gpt2_wt103-40m_12-layer
results:
- task:
type: language-modeling
dataset:
type: wikitext
name: Wikitext-103
metrics:
- type: perplexity
value: 40.3
---
# Model description
paper: [Characterizing Verbatim Short-Term Memory in Neural Language Models](https://arxiv.org/abs/2210.13569)
This is a gpt2-small-like decoder-only transformer model trained on a 40M token subset of the [wikitext-103 dataset](https://paperswithcode.com/dataset/wikitext-103).
# Usage
You can download and load the model as follows:
```python
from transformers import GPT2LMHeadModel
model = GPT2LMHeadModel.from_pretrained("Kristijan/gpt2_wt103-40m_12-layer")
```
Alternatively, if you've downloaded the checkpoint files in this repository, you could also do:
```python
from transformers import GPT2LMHeadModel
model = GPT2LMHeadModel.from_pretrained(path_to_folder_with_checkpoint_files)
```
To tokenize your text for this model, you should use the [tokenizer trained on Wikitext-103](https://huggingface.co/Kristijan/wikitext-103-tokenizer)
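As a short sketch (not from the original card) combining the two, the model and tokenizer can be used together to score text, for example to compute perplexity on a snippet; the example sentence is a placeholder:
```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("Kristijan/wikitext-103-tokenizer")
model = GPT2LMHeadModel.from_pretrained("Kristijan/gpt2_wt103-40m_12-layer")

inputs = tokenizer("The history of natural language processing began in the 1950s .", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, labels=inputs["input_ids"])
print(f"Perplexity on this snippet: {torch.exp(outputs.loss).item():.2f}")
```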
# Intended uses
This checkpoint is intended for research purposes, for example for researchers interested in studying the behavior of transformer language models trained on smaller datasets.
|
GhifSmile/mT5_multilingual_XLSum-finetuned-indosum
|
GhifSmile
| 2022-10-26T20:49:59Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-26T15:43:40Z |
---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mT5_multilingual_XLSum-finetuned-indosum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mT5_multilingual_XLSum-finetuned-indosum
This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5512
- Rouge1: 0.3819
- Rouge2: 0.3102
- Rougel: 0.3529
- Rougelsum: 0.3687
## Model description
More information needed
## Intended uses & limitations
More information needed
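No usage example is given in the original card; a minimal sketch, assuming the standard summarization pipeline (the model is fine-tuned for Indonesian, so the placeholder text should be replaced with an Indonesian article):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="GhifSmile/mT5_multilingual_XLSum-finetuned-indosum",
)
print(summarizer("An Indonesian news article goes here.", max_length=64)[0]["summary_text"])
```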
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|
| 1.8183 | 1.0 | 7131 | 1.5512 | 0.3819 | 0.3102 | 0.3529 | 0.3687 |
| 1.8191 | 2.0 | 14262 | 1.5512 | 0.3819 | 0.3102 | 0.3529 | 0.3687 |
| 1.8197 | 3.0 | 21393 | 1.5512 | 0.3819 | 0.3102 | 0.3529 | 0.3687 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Kristijan/wikitext-103-tokenizer
|
Kristijan
| 2022-10-26T20:32:07Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-10-26T20:16:48Z |
## Model info
This is a BPE tokenizer retrained from scratch on the concatenated [Wikitext-103](https://paperswithcode.com/dataset/wikitext-103) train, evaluation, and test sets. The vocabulary has 28,439 entries.
This tokenizer was used to tokenize text for [the GPT-2 model trained on Wikitext-103](https://huggingface.co/Kristijan/gpt2_wt103-40m_12-layer).
## Usage
You can download the tokenizer directly from the Hub as follows:
```python
from transformers import GPT2TokenizerFast
tokenizer = GPT2TokenizerFast.from_pretrained("Kristijan/wikitext-103-tokenizer")
```
After cloning or downloading the files, you can load the tokenizer with the `from_pretrained()` method as follows:
```python
from transformers import GPT2TokenizerFast
tokenizer = GPT2TokenizerFast.from_pretrained(path_to_folder_with_merges_and_vocab_files)
```
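Once loaded, it behaves like any other fast GPT-2-style tokenizer; a quick sanity check might look like this:
```python
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("Kristijan/wikitext-103-tokenizer")

encoded = tokenizer("The quick brown fox jumps over the lazy dog.")
print(encoded.input_ids)                                    # token ids under the Wikitext-103 vocabulary
print(tokenizer.convert_ids_to_tokens(encoded.input_ids))   # corresponding BPE tokens
print(tokenizer.decode(encoded.input_ids))                  # round-trip back to text
```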
|
Karelito00/beit-base-patch16-224-pt22k-ft22k-finetuned-mnist
|
Karelito00
| 2022-10-26T19:25:37Z | 49 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:mnist",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-10-26T15:25:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mnist
metrics:
- accuracy
model-index:
- name: beit-base-patch16-224-pt22k-ft22k-finetuned-mnist
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: mnist
type: mnist
config: mnist
split: train
args: mnist
metrics:
- name: Accuracy
type: accuracy
value: 0.9935
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-base-patch16-224-pt22k-ft22k-finetuned-mnist
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the mnist dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0202
- Accuracy: 0.9935
## Model description
More information needed
## Intended uses & limitations
More information needed
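In the absence of further documentation, a minimal inference sketch with the generic image-classification pipeline might look like this (the input image path is a placeholder):
```python
from transformers import pipeline
from PIL import Image

classifier = pipeline(
    "image-classification",
    model="Karelito00/beit-base-patch16-224-pt22k-ft22k-finetuned-mnist",
)

image = Image.open("digit.png").convert("RGB")  # placeholder path to a handwritten-digit image
print(classifier(image))                        # list of {"label": ..., "score": ...} dicts
```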
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3376 | 1.0 | 937 | 0.0446 | 0.9855 |
| 0.318 | 2.0 | 1874 | 0.0262 | 0.9916 |
| 0.2374 | 3.0 | 2811 | 0.0202 | 0.9935 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
huggingtweets/simerino1
|
huggingtweets
| 2022-10-26T19:03:41Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-26T19:02:08Z |
---
language: en
thumbnail: http://www.huggingtweets.com/simerino1/1666811016675/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1174133652399300608/3UF7GOrK_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">computer</div>
<div style="text-align: center; font-size: 14px;">@simerino1</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from computer.
| Data | computer |
| --- | --- |
| Tweets downloaded | 980 |
| Retweets | 366 |
| Short tweets | 96 |
| Tweets kept | 518 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/356xy36h/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @simerino1's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1eld4xfg) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1eld4xfg/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/simerino1')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
kevinbror/whynotwork
|
kevinbror
| 2022-10-26T19:02:37Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-10-26T19:02:10Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: whynotwork
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whynotwork
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2892
- Train End Logits Accuracy: 0.6617
- Train Start Logits Accuracy: 0.6190
- Validation Loss: 1.0393
- Validation End Logits Accuracy: 0.7213
- Validation Start Logits Accuracy: 0.6877
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
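Since the card was generated from a Keras callback, usage details are not documented; a minimal sketch with the question-answering pipeline (assuming the checkpoint is a standard extractive QA head stored as TensorFlow weights) would be:
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="kevinbror/whynotwork",
    framework="tf",  # the repository contains TensorFlow weights
)

result = qa(
    question="What does the model predict?",  # illustrative inputs
    context="This model predicts answer spans: a start position and an end position in the context.",
)
print(result)  # {"score": ..., "start": ..., "end": ..., "answer": ...}
```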
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7377, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.2892 | 0.6617 | 0.6190 | 1.0393 | 0.7213 | 0.6877 | 0 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
kevinbror/yespublic
|
kevinbror
| 2022-10-26T18:51:03Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-10-26T12:14:13Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: yespublic
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# yespublic
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2892
- Train End Logits Accuracy: 0.6617
- Train Start Logits Accuracy: 0.6190
- Validation Loss: 1.0393
- Validation End Logits Accuracy: 0.7213
- Validation Start Logits Accuracy: 0.6877
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7377, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.2892 | 0.6617 | 0.6190 | 1.0393 | 0.7213 | 0.6877 | 0 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
huggingtweets/snobrights
|
huggingtweets
| 2022-10-26T18:18:39Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-26T18:17:24Z |
---
language: en
thumbnail: http://www.huggingtweets.com/snobrights/1666808315124/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1562231899925397504/PZnUZWaV_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">vote4ana</div>
<div style="text-align: center; font-size: 14px;">@snobrights</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from vote4ana.
| Data | vote4ana |
| --- | --- |
| Tweets downloaded | 1947 |
| Retweets | 510 |
| Short tweets | 353 |
| Tweets kept | 1084 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/163lcflh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @snobrights's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/6bnd5aob) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/6bnd5aob/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/snobrights')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
PraveenKishore/dqn-SpaceInvadersNoFrameskip-v4
|
PraveenKishore
| 2022-10-26T18:07:45Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-26T18:07:09Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 626.50 +/- 127.69
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga PraveenKishore -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga PraveenKishore -f logs/
rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga PraveenKishore
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
huggingtweets/gretathotburg-snobrights
|
huggingtweets
| 2022-10-26T17:59:14Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-26T16:41:44Z |
---
language: en
thumbnail: http://www.huggingtweets.com/gretathotburg-snobrights/1666807149420/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1551255816992350210/yjE--1UN_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1562231899925397504/PZnUZWaV_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">cathy & vote4ana</div>
<div style="text-align: center; font-size: 14px;">@gretathotburg-snobrights</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from cathy & vote4ana.
| Data | cathy | vote4ana |
| --- | --- | --- |
| Tweets downloaded | 1107 | 1948 |
| Retweets | 254 | 511 |
| Short tweets | 362 | 353 |
| Tweets kept | 491 | 1084 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2129jbxh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gretathotburg-snobrights's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3dq4zw12) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3dq4zw12/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gretathotburg-snobrights')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
mjawadazad2321/donut-base-Medical_Handwritten_Blocks_Data_Extraction
|
mjawadazad2321
| 2022-10-26T16:39:16Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2022-10-26T16:27:29Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-Medical_Handwritten_Blocks_Data_Extraction
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-Medical_Handwritten_Blocks_Data_Extraction
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
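No usage details are provided; as a rough sketch, a Donut-style checkpoint is normally driven through a processor plus a vision-encoder-decoder model. The task prompt below is a placeholder, since the prompt actually used during fine-tuning is not documented here:
```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "mjawadazad2321/donut-base-Medical_Handwritten_Blocks_Data_Extraction"
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("block.png").convert("RGB")  # placeholder scan of a handwritten block
pixel_values = processor(image, return_tensors="pt").pixel_values

task_prompt = "<s>"  # placeholder; the real task token for this fine-tune is not documented
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```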
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
pig4431/rtm_ALBERT_5E
|
pig4431
| 2022-10-26T15:04:14Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"dataset:rotten_tomatoes",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-26T15:03:22Z |
---
tags:
- generated_from_trainer
datasets:
- rotten_tomatoes
model-index:
- name: model_output_dir
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_output_dir
This model was trained from scratch on the rotten_tomatoes dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
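Given the rotten_tomatoes training data, the checkpoint is presumably a binary sentiment classifier; a minimal sketch (label names depend on the saved config and may simply read LABEL_0/LABEL_1):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="pig4431/rtm_ALBERT_5E")

reviews = [
    "A moving, beautifully shot film with a career-best performance.",
    "Two hours of my life I will never get back.",
]
print(classifier(reviews))  # one {"label": ..., "score": ...} per review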
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
YumaSaito/distilbert-base-uncased-finetuned-emotion
|
YumaSaito
| 2022-10-26T15:03:55Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-23T14:15:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9261092845869646
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2181
- Accuracy: 0.926
- F1: 0.9261
## Model description
More information needed
## Intended uses & limitations
More information needed
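A minimal sketch for trying the classifier (the label set follows the emotion dataset's six classes; the example sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="YumaSaito/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I can't believe I finally got the job, this is amazing!"))
# e.g. [{"label": "joy", "score": ...}]
```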
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8618 | 1.0 | 250 | 0.3206 | 0.903 | 0.8990 |
| 0.2549 | 2.0 | 500 | 0.2181 | 0.926 | 0.9261 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
dominiks/dqn-SpaceInvadersNoFrameskip-v4
|
dominiks
| 2022-10-26T14:31:01Z | 6 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-26T14:30:14Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 689.50 +/- 181.29
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga dominiks -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga dominiks -f logs/
rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga dominiks
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
judaschrist/ddpm-butterflies-128
|
judaschrist
| 2022-10-26T14:30:42Z | 4 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:json",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-10-25T15:52:48Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: json
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `json` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
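In the meantime, a minimal sketch using the generic `DDPMPipeline` API from 🤗 Diffusers (assuming an unconditional DDPM checkpoint; the output filename is illustrative):
```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("judaschrist/ddpm-butterflies-128")
image = pipeline().images[0]        # one unconditional 128x128 sample
image.save("ddpm_butterfly.png")
```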
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/judaschrist/ddpm-butterflies-128/tensorboard?#scalars)
|
gstqtfr/ddpm-butterflies-128
|
gstqtfr
| 2022-10-26T13:57:03Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-10-25T17:02:11Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/gstqtfr/ddpm-butterflies-128/tensorboard?#scalars)
|
mrm8488/codebert-base-finetuned-code-ner-15e
|
mrm8488
| 2022-10-26T13:42:00Z | 24 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-26T11:57:15Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: codebert-base-finetuned-code-ner-15e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codebert-base-finetuned-code-ner-15e
This model is a fine-tuned version of [microsoft/codebert-base](https://huggingface.co/microsoft/codebert-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3831
- Precision: 0.6363
- Recall: 0.6494
- F1: 0.6428
- Accuracy: 0.9197
## Model description
More information needed
## Intended uses & limitations
More information needed
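A minimal sketch using the token-classification pipeline (the entity types come from the evaluation table further down; the example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="mrm8488/codebert-base-finetuned-code-ner-15e",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

text = "Call pandas.read_csv to load data.csv into a DataFrame inside main.py."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```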
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 191 | 0.4566 | 0.5021 | 0.4220 | 0.4585 | 0.8827 |
| No log | 2.0 | 382 | 0.3756 | 0.5699 | 0.5764 | 0.5731 | 0.9043 |
| 0.5133 | 3.0 | 573 | 0.3605 | 0.6001 | 0.5767 | 0.5882 | 0.9093 |
| 0.5133 | 4.0 | 764 | 0.3500 | 0.6130 | 0.6130 | 0.6130 | 0.9153 |
| 0.5133 | 5.0 | 955 | 0.3501 | 0.6337 | 0.6172 | 0.6254 | 0.9178 |
| 0.2203 | 6.0 | 1146 | 0.3645 | 0.6250 | 0.6352 | 0.6300 | 0.9163 |
| 0.2203 | 7.0 | 1337 | 0.3488 | 0.6263 | 0.6422 | 0.6341 | 0.9189 |
| 0.1457 | 8.0 | 1528 | 0.3575 | 0.6372 | 0.6397 | 0.6384 | 0.9194 |
| 0.1457 | 9.0 | 1719 | 0.3662 | 0.6406 | 0.6343 | 0.6375 | 0.9189 |
| 0.1457 | 10.0 | 1910 | 0.3613 | 0.6374 | 0.6473 | 0.6423 | 0.9201 |
| 0.107 | 11.0 | 2101 | 0.3716 | 0.6329 | 0.6544 | 0.6435 | 0.9197 |
| 0.107 | 12.0 | 2292 | 0.3754 | 0.6328 | 0.6487 | 0.6406 | 0.9193 |
| 0.107 | 13.0 | 2483 | 0.3826 | 0.6395 | 0.6490 | 0.6443 | 0.9204 |
| 0.0863 | 14.0 | 2674 | 0.3821 | 0.6368 | 0.6535 | 0.6451 | 0.9200 |
| 0.0863 | 15.0 | 2865 | 0.3831 | 0.6363 | 0.6494 | 0.6428 | 0.9197 |
### Evaluation results
| | Algorithm | Application | Class | Code_Block | Data_Structure | Data_Type | Device | Error_Name | File_Name | File_Type | Function | HTML_XML_Tag | Keyboard_IP | Language | Library | Operating_System | Output_Block | User_Interface_Element | User_Name | Value | Variable | Version | Website | overall_precision | overall_recall | overall_f1 | overall_accuracy |
|:----------|------------:|--------------:|------------:|-------------:|-----------------:|------------:|----------:|-------------:|------------:|------------:|-----------:|---------------:|--------------:|-----------:|-----------:|-------------------:|---------------:|-------------------------:|------------:|-----------:|-----------:|-----------:|----------:|--------------------:|-----------------:|-------------:|-------------------:|
| precision | 0 | 0.619835 | 0.680851 | 0.455629 | 0.813187 | 0.592593 | 0.395062 | 0.181818 | 0.800505 | 0.775956 | 0.757664 | 0.585366 | 0.333333 | 0.689769 | 0.61807 | 0.769231 | 0.0212766 | 0.542214 | 0.4375 | 0.370236 | 0.560479 | 0.883721 | 0.382353 | 0.626308 | 0.642171 | 0.63414 | 0.918927 |
| recall | 0 | 0.677711 | 0.696864 | 0.494253 | 0.840909 | 0.8 | 0.533333 | 0.333333 | 0.794486 | 0.628319 | 0.631387 | 0.470588 | 0.0169492 | 0.81323 | 0.546279 | 0.843373 | 0.04 | 0.653846 | 0.518519 | 0.52987 | 0.54482 | 0.914089 | 0.270833 | 0.626308 | 0.642171 | 0.63414 | 0.918927 |
| f1 | 0 | 0.647482 | 0.688765 | 0.474156 | 0.826816 | 0.680851 | 0.453901 | 0.235294 | 0.797484 | 0.694377 | 0.688786 | 0.521739 | 0.0322581 | 0.746429 | 0.579961 | 0.804598 | 0.0277778 | 0.592821 | 0.474576 | 0.435897 | 0.552538 | 0.898649 | 0.317073 | 0.626308 | 0.642171 | 0.63414 | 0.918927 |
| number | 31 | 664 | 1148 | 696 | 264 | 120 | 60 | 30 | 798 | 226 | 822 | 102 | 59 | 257 | 551 | 83 | 25 | 442 | 54 | 385 | 859 | 291 | 48 | 0.626308 | 0.642171 | 0.63414 | 0.918927 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Karelito00/swin-tiny-patch4-window7-224-finetuned-eurosat
|
Karelito00
| 2022-10-26T13:40:05Z | 59 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-10-26T13:15:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9822222222222222
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0501
- Accuracy: 0.9822
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3259 | 1.0 | 379 | 0.0760 | 0.9763 |
| 0.1882 | 2.0 | 758 | 0.0694 | 0.9778 |
| 0.1563 | 3.0 | 1137 | 0.0501 | 0.9822 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
lafayettecreditrepair/Credit-Repair-Services-Lafayette
|
lafayettecreditrepair
| 2022-10-26T13:08:33Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-10-26T13:07:58Z |
We are a family-owned and operated Credit Repair company, founded in 2013. Our goal is to help you achieve financial success and reach your credit goals.
We’re not your average credit repair firm, we truly care, so we only charge for the items we pursue on your report. Not only does this make us one of the FASTEST credit restoration companies, but we’re also one of the most affordable.
Follow this [link](https://lafayette.asapcreditrepairusa.com/)
|
KGsteven/distilbert-base-uncased-finetuned-cola
|
KGsteven
| 2022-10-26T12:36:42Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-19T11:25:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3038
- Matthews Correlation: 0.9198
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 1.2169 | 1.0 | 626 | 0.6782 | 0.8605 |
| 0.5513 | 2.0 | 1252 | 0.4085 | 0.8998 |
| 0.343 | 3.0 | 1878 | 0.3346 | 0.9122 |
| 0.1642 | 4.0 | 2504 | 0.3106 | 0.9165 |
| 0.1216 | 5.0 | 3130 | 0.3038 | 0.9198 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Tokenizers 0.13.1
|
huggingtweets/doaenel
|
huggingtweets
| 2022-10-26T12:29:27Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-30T20:24:02Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1469646540612509701/x4eJRlkK_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Dantes</div>
<div style="text-align: center; font-size: 14px;">@doaenel</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Dantes.
| Data | Dantes |
| --- | --- |
| Tweets downloaded | 2609 |
| Retweets | 29 |
| Short tweets | 464 |
| Tweets kept | 2116 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1sbwdgoz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @doaenel's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/8u23yy7u) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/8u23yy7u/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/doaenel')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
studiolike/caps
|
studiolike
| 2022-10-26T12:04:28Z | 13 | 0 |
tf-keras
|
[
"tf-keras",
"ocr",
"computer vision",
"object detection",
"image-to-text",
"license:cc0-1.0",
"region:us"
] |
image-to-text
| 2022-10-22T05:21:34Z |
---
tags:
- ocr
- computer vision
- object detection
- image-to-text
license:
- cc0-1.0
---
## Keras Implementation of OCR model for reading captcha 🤖🦹🏻
|
huggingtweets/femoidfurry
|
huggingtweets
| 2022-10-26T11:56:36Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/femoidfurry/1666785376927/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1569453578493763590/MerXNdrF_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">shitbrain dyke upside down era</div>
<div style="text-align: center; font-size: 14px;">@femoidfurry</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from shitbrain dyke upside down era.
| Data | shitbrain dyke upside down era |
| --- | --- |
| Tweets downloaded | 3211 |
| Retweets | 1977 |
| Short tweets | 106 |
| Tweets kept | 1128 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/34ui7fp9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @femoidfurry's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/177yzikv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/177yzikv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/femoidfurry')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
chintagunta85/biobert-base-cased-v1.2-bc2gm-ner
|
chintagunta85
| 2022-10-26T11:38:53Z | 30 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:bc2gm_corpus",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-26T10:46:44Z |
---
tags:
- generated_from_trainer
datasets:
- bc2gm_corpus
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: biobert-base-cased-v1.2-bc2gm-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: bc2gm_corpus
type: bc2gm_corpus
config: bc2gm_corpus
split: train
args: bc2gm_corpus
metrics:
- name: Precision
type: precision
value: 0.7988356059445381
- name: Recall
type: recall
value: 0.8243478260869566
- name: F1
type: f1
value: 0.8113912231559292
- name: Accuracy
type: accuracy
value: 0.9772069842818806
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-base-cased-v1.2-bc2gm-ner
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the bc2gm_corpus dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1528
- Precision: 0.7988
- Recall: 0.8243
- F1: 0.8114
- Accuracy: 0.9772
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.057 | 1.0 | 782 | 0.0670 | 0.7446 | 0.8051 | 0.7736 | 0.9738 |
| 0.0586 | 2.0 | 1564 | 0.0689 | 0.7689 | 0.8106 | 0.7892 | 0.9755 |
| 0.0123 | 3.0 | 2346 | 0.0715 | 0.7846 | 0.8076 | 0.7959 | 0.9750 |
| 0.0002 | 4.0 | 3128 | 0.0896 | 0.7942 | 0.8199 | 0.8068 | 0.9767 |
| 0.0004 | 5.0 | 3910 | 0.1119 | 0.7971 | 0.8201 | 0.8084 | 0.9765 |
| 0.0004 | 6.0 | 4692 | 0.1192 | 0.7966 | 0.8337 | 0.8147 | 0.9768 |
| 0.013 | 7.0 | 5474 | 0.1274 | 0.7932 | 0.8266 | 0.8095 | 0.9773 |
| 0.0236 | 8.0 | 6256 | 0.1419 | 0.7976 | 0.8213 | 0.8093 | 0.9771 |
| 0.0004 | 9.0 | 7038 | 0.1519 | 0.8004 | 0.8261 | 0.8130 | 0.9772 |
| 0.0 | 10.0 | 7820 | 0.1528 | 0.7988 | 0.8243 | 0.8114 | 0.9772 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
mcallencreditrepair/Credit-Repair-Services-McAllen
|
mcallencreditrepair
| 2022-10-26T11:06:26Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-10-26T11:06:02Z |
Are you looking for [credit repair McAllen](https://mcallen.asapcreditrepairusa.com/)? You are in the right place.
ASAP Credit Repair McAllen will help you repair your credit scores by removing derogatory items from your accounts. Call or text us today!
|
elpasoasapcreditrepair/Credit-Repair-in-ElPaso
|
elpasoasapcreditrepair
| 2022-10-26T10:59:25Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-10-26T10:58:56Z |
We want to get to know you, but first you should get to know us!
We are a family-owned and operated Credit Repair company, founded in 2013. Our goal is to help you achieve financial success and reach your credit goals.
Follow this [link](https://elpaso.asapcreditrepairusa.com/)
|
huggingtweets/alberteinstein-physicstoday-physicstweet
|
huggingtweets
| 2022-10-26T10:33:34Z | 4 | 2 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-26T10:30:36Z |
---
language: en
thumbnail: http://www.huggingtweets.com/alberteinstein-physicstoday-physicstweet/1666780409313/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/879355674957926400/VSGZHGib_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1576931585408073728/9Y0JqcIu_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/2821633714/ea74608b616cb0dc06a2562c01dcbe2e_400x400.jpeg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Albert Einstein & Physics Today & Physics Tweet</div>
<div style="text-align: center; font-size: 14px;">@alberteinstein-physicstoday-physicstweet</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Albert Einstein & Physics Today & Physics Tweet.
| Data | Albert Einstein | Physics Today | Physics Tweet |
| --- | --- | --- | --- |
| Tweets downloaded | 3251 | 3249 | 3250 |
| Retweets | 126 | 754 | 0 |
| Short tweets | 101 | 14 | 0 |
| Tweets kept | 3024 | 2481 | 3250 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/uingbn5k/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @alberteinstein-physicstoday-physicstweet's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/vwa3h6sy) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/vwa3h6sy/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/alberteinstein-physicstoday-physicstweet')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
israel/byt5_en_am
|
israel
| 2022-10-26T10:10:40Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"am",
"dataset:sample",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-25T09:31:06Z |
---
language:
- am
datasets:
- sample
license: cc-by-4.0
---
|
NchuNLP/Legal-Document-Question-Answering
|
NchuNLP
| 2022-10-26T09:45:48Z | 178 | 5 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"zh",
"dataset:LegalDocumentDataset",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-10-17T08:21:34Z |
---
language: zh
datasets:
- LegalDocumentDataset
---
# bert-base-chinese for QA
This is the [bert-base-chinese](https://huggingface.co/bert-base-chinese) model, fine-tuned using the Legal Document Dataset. It's been trained on question-answer pairs for the task of Question Answering.
## Usage
### In Transformers
```python
from transformers import BertTokenizerFast, BertForQuestionAnswering, pipeline
model_name = "NchuNLP/Legal-Document-Question-Answering"
tokenizer = BertTokenizerFast.from_pretrained(model_name)
model = BertForQuestionAnswering.from_pretrained(model_name)
# a) Get predictions
nlp = pipeline('question-answering', model=model, tokenizer=tokenizer)
QA_input = {
'question': '被告人偽造了什麼文書?',
'context': '犯罪事實一、韓金虎在采豐開發有限公司(址設臺北市○○區○○路0段000巷00○0號,下稱采豐公司)擔任臨時派遣員工,詎其竟意圖為自己不法之所有,基於行使偽造私文書、詐欺取財等犯意,於民國110年9月2日下午5時20分前某時許,在不詳地點,在采豐公司所使用之空白工作確認單中主任簽名欄上偽簽謝宏奇之簽名,佯裝其有於110年9月1日到班工作,並經工地主任確認之意,提出與采豐公司主任曾子昕而行使之,曾子昕因見該份工作確認單上有謝奇宏之簽名,因陷於錯誤而信韓金虎確實有於110年9月1日到班工作,准發薪資新臺幣(下同)2,000元給韓金虎,足生損害於采豐公司。嗣曾子昕於110年9月3日上午11時20分許,發現工作確認單點交數量有異,遂報警處理,始悉上情。二、案經曾子昕訴由臺北市政府警察局萬華分局報告偵辦。'
}
res = nlp(QA_input)
```
## Authors
**Kei Yu Heish:** [email protected]
**Yao-Chung Fan:** [email protected]
## About us
The research of the [NCHU Natural Language Processing Lab](https://nlpnchu.org/) (National Chung Hsing University) centers on deep learning techniques for text mining and natural language processing. Current work by lab members focuses on two areas: machine reading comprehension and natural language generation.
## More Information
<p>For more info about Nchu NLP Lab, visit our <strong><a href="https://demo.nlpnchu.org/">Lab Online Demo</a></strong> repo and <strong><a href="https://github.com/NCHU-NLP-Lab">GitHub</a></strong>.
|
biu-nlp/lingmess-coref
|
biu-nlp
| 2022-10-26T08:55:32Z | 3,558 | 10 |
transformers
|
[
"transformers",
"pytorch",
"longformer",
"coreference-resolution",
"en",
"dataset:ontonotes",
"arxiv:2205.12644",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-06-09T19:05:32Z |
---
language:
- en
tags:
- coreference-resolution
license: mit
datasets:
- ontonotes
metrics:
- CoNLL
task_categories:
- coreference-resolution
model-index:
- name: biu-nlp/lingmess-coref
results:
- task:
type: coreference-resolution
name: coreference-resolution
dataset:
name: ontonotes
type: coreference
metrics:
- name: Avg. F1
type: CoNLL
value: 81.4
---
## LingMess: Linguistically Informed Multi Expert Scorers for Coreference Resolution
[LingMess](https://arxiv.org/abs/2205.12644) introduces a linguistically motivated categorization of mention pairs into 6 types of coreference decisions and learns a dedicated trainable scoring function for each category. This significantly improves the accuracy of the pairwise scorer as well as the overall coreference performance on the English OntoNotes coreference corpus.
Please check the [official repository](https://github.com/shon-otmazgin/lingmess-coref) for more details and updates.
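One possible way to run the model is through the `fastcoref` package; the interface sketched below is an assumption based on that package and is not described in this card:
```python
# Hypothetical sketch assuming the fastcoref package wraps this checkpoint.
from fastcoref import LingMessCoref

model = LingMessCoref(device="cpu")  # use "cuda:0" if a GPU is available
preds = model.predict(texts=["Alice told Bob that she would meet him at the station."])
print(preds[0].get_clusters())  # coreference clusters as lists of mention strings
```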
#### Results on OntoNotes
We report test results on the OntoNotes 5.0 dataset.
| Model | Avg. F1 |
|---------------------------------|---------|
| SpanBERT-large + e2e | 79.6 |
| Longformer-large + s2e | 80.3 |
| **Longformer-large + LingMess** | 81.4 |
### Citation
If you find LingMess useful for your work, please cite the following paper:
```bibtex
@misc{https://doi.org/10.48550/arxiv.2205.12644,
doi = {10.48550/ARXIV.2205.12644},
url = {https://arxiv.org/abs/2205.12644},
author = {Otmazgin, Shon and Cattan, Arie and Goldberg, Yoav},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {LingMess: Linguistically Informed Multi Expert Scorers for Coreference Resolution},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
philadelphiacredit/Credit-Repair-Philadelphia
|
philadelphiacredit
| 2022-10-26T08:34:03Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-10-26T08:32:38Z |
We’re not your average credit repair firm. We truly care, so we only charge for the items we pursue on your report. Not only does this make us one of the FASTEST credit restoration companies, but we’re also one of the most affordable.
We offer FREE consultations, evaluations, and credit education. Our process only takes 30-60 days and we offer a 100% MONEY-BACK GUARANTEE on almost all our services.
Follow this [link](https://philadelphia.asapcreditrepairusa.com/)
|
GV05/distilbert-base-uncased-finetuned-emotion
|
GV05
| 2022-10-26T07:56:46Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-26T07:18:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9244695413548749
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2144
- Accuracy: 0.9245
- F1: 0.9245
## Model description
More information needed
## Intended uses & limitations
More information needed
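A minimal usage sketch, not part of the original card: the standard `transformers` text-classification pipeline should work for this checkpoint (the example sentence is illustrative, and label names depend on whether `id2label` was configured):
```python
from transformers import pipeline

# Hypothetical usage of this emotion classifier.
classifier = pipeline("text-classification", model="GV05/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see my friends this weekend!"))
# Labels follow the emotion dataset (e.g. joy, sadness), or LABEL_0..LABEL_5 if id2label was not set.
```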
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8227 | 1.0 | 250 | 0.3150 | 0.902 | 0.8992 |
| 0.246 | 2.0 | 500 | 0.2144 | 0.9245 | 0.9245 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
sania-nawaz/finetuning-sentiment-model-3000-samples
|
sania-nawaz
| 2022-10-26T06:15:45Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-26T06:04:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8666666666666667
- name: F1
type: f1
value: 0.8666666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3286
- Accuracy: 0.8667
- F1: 0.8667
## Model description
More information needed
## Intended uses & limitations
More information needed
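As a hedged usage sketch (not from the original card), the checkpoint can be queried with the standard text-classification pipeline; the review text below is invented:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="sania-nawaz/finetuning-sentiment-model-3000-samples")
print(classifier("A surprisingly touching film with terrific performances."))
# Binary IMDB sentiment; labels may appear as LABEL_0/LABEL_1 unless id2label was configured.
```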
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
debbiesoon/bart_large_summarise_v2
|
debbiesoon
| 2022-10-26T05:22:32Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:multi_news",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-22T16:30:50Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- multi_news
metrics:
- rouge
model-index:
- name: bart_large_summarise_v2
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: multi_news
type: multi_news
config: default
split: train
args: default
metrics:
- name: Rouge1
type: rouge
value: 39.305
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart_large_summarise_v2
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the multi_news dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2988
- Rouge1: 39.305
- Rouge2: 13.4171
- Rougel: 20.4214
- Rougelsum: 34.971
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
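A minimal usage sketch, assuming the standard `transformers` summarization pipeline works for this BART checkpoint (the article text and generation lengths below are illustrative, not from the card):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="debbiesoon/bart_large_summarise_v2")
article = (
    "Officials confirmed on Tuesday that the new bridge will open to traffic next month, "
    "two years after construction began and six months ahead of schedule."
)
print(summarizer(article, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```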
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.2.dev0
- Tokenizers 0.13.1
|
huggingtweets/kubiekit
|
huggingtweets
| 2022-10-26T05:03:03Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-26T04:57:38Z |
---
language: en
thumbnail: http://www.huggingtweets.com/kubiekit/1666760547210/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1581568862616662016/XxeL1VBT_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">kubie</div>
<div style="text-align: center; font-size: 14px;">@kubiekit</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from kubie.
| Data | kubie |
| --- | --- |
| Tweets downloaded | 3136 |
| Retweets | 180 |
| Short tweets | 611 |
| Tweets kept | 2345 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2mv38hcu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @kubiekit's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1uk7te5z) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1uk7te5z/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/kubiekit')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ArafatBHossain/bert-distilled-multi_teacher_avg_logit_twitter_sentiment_07_alpha0.8
|
ArafatBHossain
| 2022-10-26T04:50:59Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-26T04:22:55Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-distilled-multi_teacher_avg_logit_twitter_sentiment_07_alpha0.8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-distilled-multi_teacher_avg_logit_twitter_sentiment_07_alpha0.8
This model is a fine-tuned version of [ArafatBHossain/distilbert-base-uncased-twitter_eval_sentiment_data](https://huggingface.co/ArafatBHossain/distilbert-base-uncased-twitter_eval_sentiment_data); the fine-tuning dataset is not specified in this card.
It achieves the following results on the evaluation set:
- Loss: 0.3250
- Accuracy: 0.671
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4044 | 1.0 | 1875 | 0.3820 | 0.6655 |
| 0.2636 | 2.0 | 3750 | 0.3914 | 0.668 |
| 0.206 | 3.0 | 5625 | 0.3595 | 0.6655 |
| 0.1694 | 4.0 | 7500 | 0.3548 | 0.6725 |
| 0.1437 | 5.0 | 9375 | 0.3360 | 0.6725 |
| 0.1272 | 6.0 | 11250 | 0.3259 | 0.6755 |
| 0.1167 | 7.0 | 13125 | 0.3250 | 0.671 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
TaoH/dj
|
TaoH
| 2022-10-26T03:43:46Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"semantic-search",
"chinese",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-26T03:43:33Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- semantic-search
- chinese
---
# DMetaSoul/sbert-chinese-general-v2
This model is based on the [bert-base-chinese](https://huggingface.co/bert-base-chinese) BERT model and was trained on [SimCLUE](https://github.com/CLUEbenchmark/SimCLUE), a semantic-similarity dataset with millions of pairs. It targets **general-purpose semantic matching** scenarios and, empirically, **generalizes better** across a wide range of tasks.
Note: a [lightweight distilled version](https://huggingface.co/DMetaSoul/sbert-chinese-general-v2-distill) of this model has also been open-sourced!
# Usage
## 1. Sentence-Transformers
To use this model with the [sentence-transformers](https://www.SBERT.net) framework, first install it:
```
pip install -U sentence-transformers
```
Then load the model and extract sentence embeddings with the following code:
```python
from sentence_transformers import SentenceTransformer
sentences = ["我的儿子!他猛然间喊道,我的儿子在哪儿?", "我的儿子呢!他突然喊道,我的儿子在哪里?"]
model = SentenceTransformer('DMetaSoul/sbert-chinese-general-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## 2. HuggingFace Transformers
If you prefer not to use [sentence-transformers](https://www.SBERT.net), you can also load the model with HuggingFace Transformers and extract sentence embeddings as follows:
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["我的儿子!他猛然间喊道,我的儿子在哪儿?", "我的儿子呢!他突然喊道,我的儿子在哪里?"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('DMetaSoul/sbert-chinese-general-v2')
model = AutoModel.from_pretrained('DMetaSoul/sbert-chinese-general-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation
The model was evaluated on several public semantic-matching datasets by computing the correlation coefficient between embedding similarities and the ground-truth labels:
| | **csts_dev** | **csts_test** | **afqmc** | **lcqmc** | **bqcorpus** | **pawsx** | **xiaobu** |
| ---------------------------- | ------------ | ------------- | ---------- | ---------- | ------------ | ---------- | ---------- |
| **sbert-chinese-general-v1** | **84.54%** | **82.17%** | 23.80% | 65.94% | 45.52% | 11.52% | 48.51% |
| **sbert-chinese-general-v2** | 77.20% | 72.60% | **36.80%** | **76.92%** | **49.63%** | **16.24%** | **63.16%** |
This compares the model with our previously released [sbert-chinese-general-v1](https://huggingface.co/DMetaSoul/sbert-chinese-general-v1); the new model generalizes better on most tasks.
## Citing & Authors
E-mail: [email protected]
|
studiolike/cap_01
|
studiolike
| 2022-10-26T02:48:54Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2022-10-24T02:48:57Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
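A minimal loading sketch, assuming this is a standard Keras model pushed with the `huggingface_hub` Keras integration (the card does not document inputs or outputs):
```python
from huggingface_hub import from_pretrained_keras

# Hypothetical: downloads and rebuilds the Keras model from the Hub.
model = from_pretrained_keras("studiolike/cap_01")
model.summary()  # inspect the architecture; expected inputs/outputs are not documented here
```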
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
|
JamesH/Movie_review_sentiment_analysis_model
|
JamesH
| 2022-10-26T01:02:13Z | 9 | 1 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"text-classification",
"en",
"dataset:JamesH/autotrain-data-third-project",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-26T00:58:53Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- JamesH/autotrain-data-third-project
co2_eq_emissions:
emissions: 6.9919208994196795
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1883864250
- CO2 Emissions (in grams): 6.9919
## Validation Metrics
- Loss: 0.175
- Accuracy: 0.950
- Precision: 0.950
- Recall: 0.950
- AUC: 0.986
- F1: 0.950
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/JamesH/autotrain-third-project-1883864250
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("JamesH/autotrain-third-project-1883864250", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("JamesH/autotrain-third-project-1883864250", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
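Continuing the snippet above, a small sketch (not from the original card) to turn the raw logits into a predicted class; label names depend on the model's `id2label` config:
```python
# Map the highest logit to its label (continues the snippet above).
predicted_id = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```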
|
agiron123/hello_hugging_face
|
agiron123
| 2022-10-25T23:28:12Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-10-25T23:27:43Z |
Creating a simple hugging face model.
|
francos/distilbert-base-uncased-finetuned-clinc
|
francos
| 2022-10-25T23:21:48Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-25T22:47:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9183870967741935
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7720
- Accuracy: 0.9184
## Model description
More information needed
## Intended uses & limitations
More information needed
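A minimal usage sketch (not part of the original card), assuming the standard text-classification pipeline; the query is an invented banking-style intent:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="francos/distilbert-base-uncased-finetuned-clinc")
print(classifier("How do I transfer money from my checking to my savings account?"))
# Predicts one of the clinc_oos ("plus" config) intent labels, including out-of-scope.
```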
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2891 | 0.7429 |
| 3.7868 | 2.0 | 636 | 1.8755 | 0.8374 |
| 3.7868 | 3.0 | 954 | 1.1570 | 0.8961 |
| 1.6928 | 4.0 | 1272 | 0.8573 | 0.9132 |
| 0.9056 | 5.0 | 1590 | 0.7720 | 0.9184 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
redevaaa/test4
|
redevaaa
| 2022-10-25T23:20:58Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:ner",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-25T22:53:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- ner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: test4
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: ner
type: ner
config: default
split: train
args: default
metrics:
- name: Precision
type: precision
value: 0.594855305466238
- name: Recall
type: recall
value: 0.6423611111111112
- name: F1
type: f1
value: 0.6176961602671119
- name: Accuracy
type: accuracy
value: 0.9579571605593911
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test4
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3100
- Precision: 0.5949
- Recall: 0.6424
- F1: 0.6177
- Accuracy: 0.9580
## Model description
More information needed
## Intended uses & limitations
More information needed
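A minimal usage sketch (not from the original card), assuming the standard token-classification pipeline; the entity label set of the underlying "ner" dataset is not documented here:
```python
from transformers import pipeline

ner = pipeline("token-classification", model="redevaaa/test4", aggregation_strategy="simple")
print(ner("Barack Obama visited Paris last week."))
# Returns grouped entity spans; the label inventory depends on the fine-tuning "ner" dataset.
```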
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 418 | 0.2052 | 0.2415 | 0.2465 | 0.2440 | 0.9423 |
| 0.3341 | 2.0 | 836 | 0.1816 | 0.4286 | 0.4792 | 0.4525 | 0.9513 |
| 0.1296 | 3.0 | 1254 | 0.2039 | 0.4589 | 0.5035 | 0.4801 | 0.9526 |
| 0.0727 | 4.0 | 1672 | 0.2130 | 0.5237 | 0.5764 | 0.5488 | 0.9566 |
| 0.0553 | 5.0 | 2090 | 0.2290 | 0.5171 | 0.5764 | 0.5452 | 0.9551 |
| 0.0412 | 6.0 | 2508 | 0.2351 | 0.5390 | 0.5521 | 0.5455 | 0.9555 |
| 0.0412 | 7.0 | 2926 | 0.2431 | 0.5280 | 0.5903 | 0.5574 | 0.9542 |
| 0.0321 | 8.0 | 3344 | 0.2490 | 0.5825 | 0.625 | 0.6030 | 0.9570 |
| 0.0249 | 9.0 | 3762 | 0.2679 | 0.5764 | 0.5764 | 0.5764 | 0.9573 |
| 0.0192 | 10.0 | 4180 | 0.2574 | 0.5506 | 0.6042 | 0.5762 | 0.9558 |
| 0.0206 | 11.0 | 4598 | 0.2857 | 0.5498 | 0.5938 | 0.5710 | 0.9559 |
| 0.0147 | 12.0 | 5016 | 0.2638 | 0.5548 | 0.5972 | 0.5753 | 0.9550 |
| 0.0147 | 13.0 | 5434 | 0.2771 | 0.5677 | 0.5972 | 0.5821 | 0.9577 |
| 0.0129 | 14.0 | 5852 | 0.3016 | 0.5761 | 0.6181 | 0.5963 | 0.9549 |
| 0.0118 | 15.0 | 6270 | 0.3055 | 0.5587 | 0.6111 | 0.5837 | 0.9570 |
| 0.0099 | 16.0 | 6688 | 0.2937 | 0.5682 | 0.6076 | 0.5872 | 0.9564 |
| 0.0099 | 17.0 | 7106 | 0.3075 | 0.5313 | 0.6181 | 0.5714 | 0.9531 |
| 0.0085 | 18.0 | 7524 | 0.3079 | 0.6026 | 0.6424 | 0.6218 | 0.9580 |
| 0.0085 | 19.0 | 7942 | 0.3082 | 0.5833 | 0.6319 | 0.6067 | 0.9572 |
| 0.0074 | 20.0 | 8360 | 0.3100 | 0.5949 | 0.6424 | 0.6177 | 0.9580 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
dominiks/q-Taxi-v3
|
dominiks
| 2022-10-25T21:32:21Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-25T21:32:13Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.76
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym  # not shown in the original snippet, but required for gym.make below

# `load_from_hub` and `evaluate_agent` are helper functions defined in the
# Hugging Face Deep RL course notebook; they are not part of a published package.
model = load_from_hub(repo_id="dominiks/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
huggingtweets/ok_0s
|
huggingtweets
| 2022-10-25T20:20:47Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-25T20:18:48Z |
---
language: en
thumbnail: http://www.huggingtweets.com/ok_0s/1666729242111/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1575869051850612737/Hz2LIceC_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">⓪𝕊 is minting Youts</div>
<div style="text-align: center; font-size: 14px;">@ok_0s</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ⓪𝕊 is minting Youts.
| Data | ⓪𝕊 is minting Youts |
| --- | --- |
| Tweets downloaded | 1390 |
| Retweets | 132 |
| Short tweets | 287 |
| Tweets kept | 971 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/11ejsejg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ok_0s's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1z3prl6a) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1z3prl6a/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ok_0s')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
yucao16/ddpm-butterflies-128
|
yucao16
| 2022-10-25T20:10:43Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-10-25T18:55:09Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# A minimal sketch (not in the original card): load the pipeline and sample one image.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("yucao16/ddpm-butterflies-128")
image = pipeline().images[0]  # a generated butterfly as a PIL.Image
image.save("butterfly.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/yucao16/ddpm-butterflies-128/tensorboard?#scalars)
|
pig4431/rtm_ELECTRA_5E
|
pig4431
| 2022-10-25T19:44:23Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"text-classification",
"generated_from_trainer",
"dataset:rotten_tomatoes",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-25T19:36:47Z |
---
tags:
- generated_from_trainer
datasets:
- rotten_tomatoes
model-index:
- name: rtm-electra-511E
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rtm-electra-511E
This model was trained from scratch on the rotten_tomatoes dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
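A hedged usage sketch (not in the original card), assuming the standard text-classification pipeline applies to this ELECTRA checkpoint:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="pig4431/rtm_ELECTRA_5E")
print(classifier("A clever, beautifully shot thriller that never loses momentum."))
# rotten_tomatoes is a binary sentiment dataset; label names depend on the model config.
```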
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
huggingtweets/prathgodbole
|
huggingtweets
| 2022-10-25T18:52:19Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-25T18:47:21Z |
---
language: en
thumbnail: http://www.huggingtweets.com/prathgodbole/1666723893377/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1041700878858674178/q1uKuS6o_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Prathamesh Godbole</div>
<div style="text-align: center; font-size: 14px;">@prathgodbole</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Prathamesh Godbole.
| Data | Prathamesh Godbole |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 52 |
| Short tweets | 241 |
| Tweets kept | 2952 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/yqz5qdl4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @prathgodbole's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1mum0rf3) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1mum0rf3/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/prathgodbole')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
hyesunyun/edit5
|
hyesunyun
| 2022-10-25T18:34:47Z | 7 | 1 |
transformers
|
[
"transformers",
"jax",
"t5",
"t5x",
"edit",
"en",
"dataset:fruit-wiki",
"arxiv:2112.08634",
"license:unknown",
"endpoints_compatible",
"region:us"
] | null | 2022-10-19T18:51:01Z |
---
language:
- en
tags:
- t5
- t5x
- edit
license: unknown
datasets:
- fruit-wiki
metrics:
- rouge
---
# EdiT5
Reproduction of the model in [FRUIT: Faithfully Reflecting Updated Information in Text](https://arxiv.org/abs/2112.08634).
## Training data
The model was trained on the [FRUIT Wikipedia dataset](https://github.com/google-research/language/tree/master/language/fruit) for the article-update task.
|