| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-03 00:36:49) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 535 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-03 00:36:49) | card (string, 11–1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
NethamTSG/ppo-LunarLander-v5
|
NethamTSG
| 2023-08-22T15:09:22Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-22T15:08:58Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 287.98 +/- 17.19
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's file listing):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename below is assumed, not confirmed by this card.
checkpoint = load_from_hub(repo_id="NethamTSG/ppo-LunarLander-v5", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
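Once loaded, the agent can be rolled out for evaluation. The sketch below assumes `gymnasium` with the Box2D extra installed and uses the `LunarLander-v2` environment named in this card's tags:
```python
import gymnasium as gym

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```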
|
pbeyens/mydataset-repo
|
pbeyens
| 2023-08-22T15:08:45Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"lilt",
"token-classification",
"generated_from_trainer",
"dataset:mydataset",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-21T14:03:22Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- mydataset
model-index:
- name: mydataset-repo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mydataset-repo
This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the mydataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Total-str: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10}
- Total-val: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10}
- Overall Precision: 1.0
- Overall Recall: 1.0
- Overall F1: 1.0
- Overall Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 5000
- mixed_precision_training: Native AMP
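For reference, these settings map roughly onto a `transformers` `TrainingArguments` object as sketched below; the original training script is not part of this card and may differ.
```python
from transformers import TrainingArguments

# Sketch only: values copied from the list above, argument names follow the transformers API.
args = TrainingArguments(
    output_dir="mydataset-repo",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=5000,
    fp16=True,  # "Native AMP" mixed-precision training
)
```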
### Training results
| Training Loss | Epoch | Step | Validation Loss | Total-str | Total-val | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:------:|:----:|:---------------:|:----------------------------------------------------------:|:----------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.0674 | 28.57 | 200 | 0.0000 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0001 | 57.14 | 400 | 0.0000 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 85.71 | 600 | 0.0000 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 114.29 | 800 | 0.0000 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 142.86 | 1000 | 0.0000 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 171.43 | 1200 | 0.0000 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 200.0 | 1400 | 0.0000 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 228.57 | 1600 | 0.0000 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 257.14 | 1800 | 0.0000 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 285.71 | 2000 | 0.0000 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 314.29 | 2200 | 0.0000 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 342.86 | 2400 | 0.0000 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 371.43 | 2600 | 0.0000 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 400.0 | 2800 | 0.0000 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 428.57 | 3000 | 0.0000 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 457.14 | 3200 | 0.0000 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 485.71 | 3400 | 0.0000 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 514.29 | 3600 | 0.0000 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 542.86 | 3800 | 0.0000 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 571.43 | 4000 | 0.0000 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 600.0 | 4200 | 0.0000 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 628.57 | 4400 | 0.0000 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 657.14 | 4600 | 0.0000 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 685.71 | 4800 | 0.0000 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 714.29 | 5000 | 0.0000 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.13.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|
Tarti/xlm-roberta-base-finetuned-panx-de
|
Tarti
| 2023-08-22T14:58:26Z | 134 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-22T10:41:31Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8602627537962806
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1355
- F1: 0.8603
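The card does not yet include usage instructions; a minimal inference sketch with the `transformers` pipeline (the example sentence is illustrative) could look like this:
```python
from transformers import pipeline

# German NER pipeline built from this checkpoint; aggregation merges word pieces into entities.
ner = pipeline(
    "token-classification",
    model="Tarti/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```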
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2574 | 1.0 | 525 | 0.1627 | 0.8221 |
| 0.1295 | 2.0 | 1050 | 0.1435 | 0.8467 |
| 0.0815 | 3.0 | 1575 | 0.1355 | 0.8603 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu117
- Datasets 1.16.1
- Tokenizers 0.13.3
|
agarc15/gpt2-finetuned-PRC
|
agarc15
| 2023-08-22T14:52:00Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-classification",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-21T07:48:25Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: gpt2-finetuned-PRC
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-finetuned-PRC
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4564
- Accuracy: 0.8711
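Since usage is not documented here, a minimal classification sketch follows; the label names depend on the (undocumented) fine-tuning data:
```python
from transformers import pipeline

# Text-classification pipeline on the fine-tuned GPT-2 checkpoint.
clf = pipeline("text-classification", model="agarc15/gpt2-finetuned-PRC")
print(clf("Example text to classify."))
```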
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 391 | 0.5038 | 0.8420 |
| 0.7758 | 2.0 | 782 | 0.4486 | 0.8629 |
| 0.4031 | 3.0 | 1173 | 0.4664 | 0.8678 |
| 0.3225 | 4.0 | 1564 | 0.4564 | 0.8711 |
| 0.3225 | 5.0 | 1955 | 0.4637 | 0.8693 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
camenduru/seamless-m4t-medium
|
camenduru
| 2023-08-22T14:50:58Z | 0 | 1 | null |
[
"SeamlessM4T",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2023-08-22T14:45:53Z |
---
inference: false
tags:
- SeamlessM4T
license: cc-by-nc-4.0
---
# SeamlessM4T Medium
SeamlessM4T is a collection of models designed to provide high-quality translation, allowing people from different
linguistic communities to communicate effortlessly through speech and text.
SeamlessM4T covers:
- 📥 101 languages for speech input
- ⌨️ 96 languages for text input/output
- 🗣️ 35 languages for speech output.
This is the "medium" variant of the unified model, which enables multiple tasks without relying on multiple separate models:
- Speech-to-speech translation (S2ST)
- Speech-to-text translation (S2TT)
- Text-to-speech translation (T2ST)
- Text-to-text translation (T2TT)
- Automatic speech recognition (ASR)
## SeamlessM4T models
The SeamlessM4T models come in two checkpoints of different size:
| Model Name | #params | checkpoint | metrics |
| - | - | - | - |
| [SeamlessM4T-Medium](https://huggingface.co/facebook/seamless-m4t-medium) | 1.2B | [checkpoint](https://huggingface.co/facebook/seamless-m4t-medium/resolve/main/multitask_unity_medium.pt) | [metrics]() |
| [SeamlessM4T-Large](https://huggingface.co/facebook/seamless-m4t-large) | 2.3B | [checkpoint](https://huggingface.co/facebook/seamless-m4t-large/resolve/main/multitask_unity_large.pt) | [metrics]() |
We provide extensive evaluation results of SeamlessM4T-Medium and SeamlessM4T-Large in the SeamlessM4T paper (as averages) in the `metrics` files above.
## Instructions to run inference with SeamlessM4T models
The SeamlessM4T models are currently available through the `seamless_communication` package. The `seamless_communication`
package can be installed by following the instructions outlined here: [Installation](https://github.com/fairinternal/seamless_communication/tree/main#installation).
Once installed, a [`Translator`](https://github.com/fairinternal/seamless_communication/blob/590547965b343b590d15847a0aa25a6779fc3753/src/seamless_communication/models/inference/translator.py#L47)
object can be instantiated to perform all five of the spoken language tasks. The `Translator` is instantiated with three arguments:
1. **model_name_or_card**: SeamlessM4T checkpoint. Can be either `seamlessM4T_medium` for the medium model, or `seamlessM4T_large` for the large model
2. **vocoder_name_or_card**: vocoder checkpoint (`vocoder_36langs`)
3. **device**: Torch device
```python
import torch
from seamless_communication.models.inference import Translator
# Initialize a Translator object with a multitask model, vocoder on the GPU.
translator = Translator("seamlessM4T_medium", vocoder_name_or_card="vocoder_36langs", device=torch.device("cuda:0"))
```
Once instantiated, the `predict()` method can be used to run inference as many times as needed on any of the supported tasks.
Given an input audio with `<path_to_input_audio>` or an input text `<input_text>` in `<src_lang>`, we can translate
into `<tgt_lang>` as follows.
### S2ST and T2ST:
```python
# S2ST
translated_text, wav, sr = translator.predict(<path_to_input_audio>, "s2st", <tgt_lang>)
# T2ST
translated_text, wav, sr = translator.predict(<input_text>, "t2st", <tgt_lang>, src_lang=<src_lang>)
```
Note that `<src_lang>` must be specified for T2ST.
The generated units are synthesized and the output audio file is saved with:
```python
import torchaudio

wav, sr = translator.synthesize_speech(<speech_units>, <tgt_lang>)
# Save the translated audio generation.
torchaudio.save(
    <path_to_save_audio>,
    wav[0].cpu(),
    sample_rate=sr,
)
```
### S2TT, T2TT and ASR:
```python
# S2TT
translated_text, _, _ = translator.predict(<path_to_input_audio>, "s2tt", <tgt_lang>)
# ASR
# This is equivalent to S2TT with `<tgt_lang>=<src_lang>`.
transcribed_text, _, _ = translator.predict(<path_to_input_audio>, "asr", <src_lang>)
# T2TT
translated_text, _, _ = translator.predict(<input_text>, "t2tt", <tgt_lang>, src_lang=<src_lang>)
```
Note that `<src_lang>` must be specified for T2TT.
## Citation
If you plan to use SeamlessM4T in your work or any models/datasets/artifacts published in SeamlessM4T, please cite:
```bibtex
@article{seamlessm4t2023,
title={"SeamlessM4T—Massively Multilingual \& Multimodal Machine Translation"},
author={{Seamless Communication}, Lo\"{i}c Barrault, Yu-An Chung, Mariano Cora Meglioli, David Dale, Ning Dong, Paul-Ambroise Duquenne, Hady Elsahar, Hongyu Gong, Kevin Heffernan, John Hoffman, Christopher Klaiber, Pengwei Li, Daniel Licht, Jean Maillard, Alice Rakotoarison, Kaushik Ram Sadagopan, Guillaume Wenzek, Ethan Ye, Bapi Akula, Peng-Jen Chen, Naji El Hachem, Brian Ellis, Gabriel Mejia Gonzalez, Justin Haaheim, Prangthip Hansanti, Russ Howes, Bernie Huang, Min-Jae Hwang, Hirofumi Inaguma, Somya Jain, Elahe Kalbassi, Amanda Kallet, Ilia Kulikov, Janice Lam, Daniel Li, Xutai Ma, Ruslan Mavlyutov, Benjamin Peloquin, Mohamed Ramadan, Abinesh Ramakrishnan, Anna Sun, Kevin Tran, Tuan Tran, Igor Tufanov, Vish Vogeti, Carleigh Wood, Yilin Yang, Bokai Yu, Pierre Andrews, Can Balioglu, Marta R. Costa-juss\`{a} \footnotemark[3], Onur \,{C}elebi,Maha Elbayad,Cynthia Gao, Francisco Guzm\'an, Justine Kao, Ann Lee, Alexandre Mourachko, Juan Pino, Sravya Popuri, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, Paden Tomasello, Changhan Wang, Jeff Wang, Skyler Wang},
journal={ArXiv},
year={2023}
}
```
## License
The Seamless Communication code and weights are CC-BY-NC 4.0 licensed.
|
sarankup-newgen/llama2-13b-email-trained
|
sarankup-newgen
| 2023-08-22T14:49:42Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-22T14:49:16Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
pknayak/speecht5_finetuned_voxpopuli_es
|
pknayak
| 2023-08-22T14:46:12Z | 82 | 0 |
transformers
|
[
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"dataset:voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-08-22T14:06:01Z |
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
- text-to-speech
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_es
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4620
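Usage is not documented in this card; the sketch below shows one way to synthesize Spanish speech with the SpeechT5 classes from `transformers`. If the processor files are not stored in this repository, load them from `microsoft/speecht5_tts` instead; the zero speaker embedding is only a crude placeholder for a real x-vector.
```python
import torch
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("pknayak/speecht5_finetuned_voxpopuli_es")
model = SpeechT5ForTextToSpeech.from_pretrained("pknayak/speecht5_finetuned_voxpopuli_es")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hola, ¿cómo estás?", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder; use a real speaker x-vector for better audio
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```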
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4902 | 153.85 | 500 | 0.4620 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Amirhnrn/Reinforce-Cartpole
|
Amirhnrn
| 2023-08-22T14:40:23Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-22T07:30:33Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Luciferio/MiniLLM-finetuned
|
Luciferio
| 2023-08-22T14:39:58Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:microsoft/MiniLM-L12-H384-uncased",
"base_model:finetune:microsoft/MiniLM-L12-H384-uncased",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-22T12:57:56Z |
---
license: mit
base_model: microsoft/MiniLM-L12-H384-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- f1
model-index:
- name: MiniLLM-finetuned
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: F1
type: f1
value: 0.922353805579638
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLLM-finetuned
This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2932
- F1: 0.9224
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 2000 | 0.4408 | 0.8888 |
| No log | 2.0 | 4000 | 0.2932 | 0.9224 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ALM-AHME/beit-large-patch16-224-finetuned-Lesion-Classification-HAM10000-AH-60-20-20-Shuffled-3rd
|
ALM-AHME
| 2023-08-22T14:31:20Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"beit",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/beit-large-patch16-224",
"base_model:finetune:microsoft/beit-large-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-22T11:54:04Z |
---
license: apache-2.0
base_model: microsoft/beit-large-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: beit-large-patch16-224-finetuned-Lesion-Classification-HAM10000-AH-60-20-20-Shuffled-3rd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-large-patch16-224-finetuned-Lesion-Classification-HAM10000-AH-60-20-20-Shuffled-3rd
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0488
- Accuracy: 0.9901
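The card gives no usage example; a minimal image-classification sketch with the `transformers` pipeline (the image path is illustrative) could look like this:
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="ALM-AHME/beit-large-patch16-224-finetuned-Lesion-Classification-HAM10000-AH-60-20-20-Shuffled-3rd",
)
print(classifier("lesion.jpg"))  # path to a dermoscopic image
```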
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.9
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9835 | 1.0 | 114 | 1.9296 | 0.2315 |
| 1.6045 | 2.0 | 229 | 1.4334 | 0.5172 |
| 1.0525 | 3.0 | 343 | 0.9298 | 0.6962 |
| 0.795 | 4.0 | 458 | 0.6580 | 0.7709 |
| 0.5739 | 5.0 | 572 | 0.4717 | 0.8366 |
| 0.5821 | 6.0 | 687 | 0.3511 | 0.8851 |
| 0.4566 | 7.0 | 801 | 0.2705 | 0.9204 |
| 0.2751 | 8.0 | 916 | 0.2114 | 0.9384 |
| 0.2352 | 9.0 | 1030 | 0.1303 | 0.9688 |
| 0.1831 | 10.0 | 1145 | 0.1194 | 0.9688 |
| 0.1515 | 11.0 | 1259 | 0.0673 | 0.9869 |
| 0.204 | 11.95 | 1368 | 0.0488 | 0.9901 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Toobese/lilt-invoices
|
Toobese
| 2023-08-22T14:24:48Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"lilt",
"token-classification",
"generated_from_trainer",
"base_model:SCUT-DLVCLab/lilt-roberta-en-base",
"base_model:finetune:SCUT-DLVCLab/lilt-roberta-en-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-21T13:32:34Z |
---
license: mit
base_model: SCUT-DLVCLab/lilt-roberta-en-base
tags:
- generated_from_trainer
model-index:
- name: lilt-invoices
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lilt-invoices
This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0031
- Endorname: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 177}
- Escription: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 183}
- Illingaddress: {'precision': 1.0, 'recall': 0.9937888198757764, 'f1': 0.9968847352024921, 'number': 161}
- Mount: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 175}
- Nitprice: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 156}
- Nvoicedate: {'precision': 0.9941520467836257, 'recall': 1.0, 'f1': 0.9970674486803519, 'number': 170}
- Nvoicetotal: {'precision': 0.9946808510638298, 'recall': 0.9946808510638298, 'f1': 0.9946808510638298, 'number': 188}
- Otaltax: {'precision': 1.0, 'recall': 0.9927007299270073, 'f1': 0.9963369963369962, 'number': 137}
- Uantity: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 167}
- Ubtotal: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 151}
- Overall Precision: 0.9988
- Overall Recall: 0.9982
- Overall F1: 0.9985
- Overall Accuracy: 0.9994
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Endorname | Escription | Illingaddress | Mount | Nitprice | Nvoicedate | Nvoicetotal | Otaltax | Uantity | Ubtotal | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------------------------------------------------:|:-----------------------------------------------------------:|:-----------------------------------------------------------------------------------------:|:-----------------------------------------------------------:|:-----------------------------------------------------------:|:-----------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------:|:-----------------------------------------------------------:|:-----------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.1736 | 21.74 | 500 | 0.0031 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 177} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 183} | {'precision': 1.0, 'recall': 0.9937888198757764, 'f1': 0.9968847352024921, 'number': 161} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 175} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 156} | {'precision': 0.9941520467836257, 'recall': 1.0, 'f1': 0.9970674486803519, 'number': 170} | {'precision': 0.9946808510638298, 'recall': 0.9946808510638298, 'f1': 0.9946808510638298, 'number': 188} | {'precision': 1.0, 'recall': 0.9927007299270073, 'f1': 0.9963369963369962, 'number': 137} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 167} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 151} | 0.9988 | 0.9982 | 0.9985 | 0.9994 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
wjbmattingly/distilbert-base-uncased-finetuned-imdb
|
wjbmattingly
| 2023-08-22T14:18:48Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-08-22T14:00:15Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9420
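As usage is not documented here, a minimal fill-mask sketch with the `transformers` pipeline (the example sentence is illustrative) follows:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="wjbmattingly/distilbert-base-uncased-finetuned-imdb")
print(fill_mask("This movie was absolutely [MASK]."))
```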
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1689 | 1.0 | 8 | 3.0928 |
| 2.9706 | 2.0 | 16 | 2.7093 |
| 2.9273 | 3.0 | 24 | 2.7136 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cpu
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Tarti/xlm-roberta-base-finetuned-panx-de-fr
|
Tarti
| 2023-08-22T14:14:38Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-22T14:02:58Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1600
- F1: 0.8591
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.288 | 1.0 | 715 | 0.1823 | 0.8218 |
| 0.1458 | 2.0 | 1430 | 0.1533 | 0.8503 |
| 0.0934 | 3.0 | 2145 | 0.1600 | 0.8591 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu117
- Datasets 1.16.1
- Tokenizers 0.13.3
|
loicspigeleer/q-Taxi-v3
|
loicspigeleer
| 2023-08-22T14:11:38Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-22T14:11:35Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.78
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the helper defined in the Deep RL course notebooks
# (it downloads the pickle with hf_hub_download and loads it into a dict).
model = load_from_hub(repo_id="loicspigeleer/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
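A greedy rollout with the loaded table might then look like the sketch below; the `"qtable"` key follows the course notebook convention and is an assumption here:
```python
import numpy as np

state, _ = env.reset()
terminated = truncated = False
while not (terminated or truncated):
    action = int(np.argmax(model["qtable"][state]))  # "qtable" key is assumed
    state, reward, terminated, truncated, _ = env.step(action)
env.close()
```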
|
facebook/blaser-2.0-ref
|
facebook
| 2023-08-22T14:07:37Z | 0 | 5 | null |
[
"license:cc-by-nc-4.0",
"region:us"
] | null | 2023-08-21T13:25:31Z |
---
license: cc-by-nc-4.0
---
# BLASER 2.0
[[Blog]](https://ai.meta.com/resources/models-and-libraries/seamless-communication/)
[[Code](https://github.com/facebookresearch/SONAR)]
BLASER 2.0 is the new version of BLASER ([Chen et al., 2023](https://aclanthology.org/2023.acl-long.504/)),
a family of models for automatic evaluation of machine translation quality.
BLASER 2.0 is based on [SONAR](https://huggingface.co/facebook/SONAR) sentence embeddings
and works with both speech and text modalities.
The model predicts a similarity score for the translated sentence based on the translation,
the source sentence, and the reference translation.
Its sibling model, [BLASER 2.0 QE](https://huggingface.co/facebook/blaser-2.0-qe), does not use references.
Supervised BLASER models are trained to predict cross-lingual semantic similarity scores,
XSTS ([Licht et al., 2022](https://aclanthology.org/2022.amta-research.24/)),
on a scale where 1 corresponds to completely unrelated sentences and
5 corresponds to fully semantically equivalent sentences.
The models' predictions, though, are unbounded and can occasionally fall outside these limits.
## Installation
See the SONAR github [repo](https://github.com/facebookresearch/SONAR) for the installation instructions.
## Usage
BLASER 2.0 models accept 1024-dimensional SONAR sentence embeddings as inputs,
and produce a single score as an output.
The code below illustrates their usage with text embeddings:
```Python
from sonar.inference_pipelines.text import TextToEmbeddingModelPipeline
from sonar.models.blaser.loader import load_blaser_model
blaser = load_blaser_model("blaser_2_0_ref").eval()
text_embedder = TextToEmbeddingModelPipeline(encoder="text_sonar_basic_encoder", tokenizer="text_sonar_basic_encoder")
src_embs = text_embedder.predict(["Le chat s'assit sur le tapis."], source_lang="fra_Latn")
ref_embs = text_embedder.predict(["The cat sat on the mat."], source_lang="eng_Latn")
mt_embs = text_embedder.predict(["The cat sat down on the carpet."], source_lang="eng_Latn")
print(blaser(src=src_embs, ref=ref_embs, mt=mt_embs).item()) # 4.688
```
With BLASER 2.0 models, SONAR text and speech embeddings can be used interchangeably.
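For example, speech-side embeddings can be produced with SONAR's speech pipeline and passed to the same scorer. This is only a sketch: the pipeline class and encoder name are taken from the SONAR README and should be verified against the installed version.
```python
from sonar.inference_pipelines.speech import SpeechToEmbeddingModelPipeline

# Encoder name is an assumption; pick the speech encoder matching your source language.
speech_embedder = SpeechToEmbeddingModelPipeline(encoder="sonar_speech_encoder_eng")
src_embs = speech_embedder.predict(["<path_to_source_audio>.wav"])
print(blaser(src=src_embs, ref=ref_embs, mt=mt_embs).item())
```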
## Model details
- **Developed by:** Seamless Communication et al.
- **License:** CC-BY-NC 4.0 license
- **Citation:** If you use BLASER 2.0 in your work, please cite
[the paper](https://ai.meta.com/resources/models-and-libraries/seamless-communication/):
```bibtex
@article{seamlessm4t2023,
title={SeamlessM4T—Massively Multilingual \& Multimodal Machine Translation},
author={{Seamless Communication}, Lo\"{i}c Barrault, Yu-An Chung, Mariano Cora Meglioli, David Dale, Ning Dong, Paul-Ambroise Duquenne, Hady Elsahar, Hongyu Gong, Kevin Heffernan, John Hoffman, Christopher Klaiber, Pengwei Li, Daniel Licht, Jean Maillard, Alice Rakotoarison, Kaushik Ram Sadagopan, Guillaume Wenzek, Ethan Ye, Bapi Akula, Peng-Jen Chen, Naji El Hachem, Brian Ellis, Gabriel Mejia Gonzalez, Justin Haaheim, Prangthip Hansanti, Russ Howes, Bernie Huang, Min-Jae Hwang, Hirofumi Inaguma, Somya Jain, Elahe Kalbassi, Amanda Kallet, Ilia Kulikov, Janice Lam, Daniel Li, Xutai Ma, Ruslan Mavlyutov, Benjamin Peloquin, Mohamed Ramadan, Abinesh Ramakrishnan, Anna Sun, Kevin Tran, Tuan Tran, Igor Tufanov, Vish Vogeti, Carleigh Wood, Yilin Yang, Bokai Yu, Pierre Andrews, Can Balioglu, Marta R. Costa-juss\`{a} \footnotemark[3], Onur \,{C}elebi,Maha Elbayad,Cynthia Gao, Francisco Guzm\'an, Justine Kao, Ann Lee, Alexandre Mourachko, Juan Pino, Sravya Popuri, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, Paden Tomasello, Changhan Wang, Jeff Wang, Skyler Wang},
journal={ArXiv},
year={2023}
}
```
|
facebook/blaser-2.0-qe
|
facebook
| 2023-08-22T14:07:31Z | 0 | 7 | null |
[
"license:cc-by-nc-4.0",
"region:us"
] | null | 2023-08-21T15:01:40Z |
---
license: cc-by-nc-4.0
---
# BLASER 2.0
[[Blog]](https://ai.meta.com/resources/models-and-libraries/seamless-communication/)
[[Code](https://github.com/facebookresearch/SONAR)]
BLASER 2.0 is the new version of BLASER ([Chen et al., 2023](https://aclanthology.org/2023.acl-long.504/)),
a family of models for automatic evaluation of machine translation quality.
BLASER 2.0 is based on [SONAR](https://huggingface.co/facebook/SONAR) sentence embeddings
and works with both speech and text modalities.
The model predicts a similarity score for the translated sentence based on the translation and the source sentence alone.
Thus, it can be applied in settings where reference translations are missing or their quality is questionable.
In contrast, its sibling model, [BLASER 2.0-referenced](https://huggingface.co/facebook/blaser-2.0-ref), also requires a reference translation.
Supervised BLASER models are trained to predict cross-lingual semantic similarity scores,
XSTS ([Licht et al., 2022](https://aclanthology.org/2022.amta-research.24/)),
on a scale where 1 corresponds to completely unrelated sentences and
5 corresponds to fully semantically equivalent sentences.
The models' predictions, though, are unbounded and can occasionally fall outside these limits.
## Installation
See the SONAR github [repo](https://github.com/facebookresearch/SONAR) for the installation instructions.
## Usage
BLASER 2.0 models accept 1024-dimensional SONAR sentence embeddings as inputs,
and produce a single score as an output.
The code below illustrates their usage with text embeddings:
```Python
from sonar.inference_pipelines.text import TextToEmbeddingModelPipeline
from sonar.models.blaser.loader import load_blaser_model
blaser = load_blaser_model("blaser_2_0_qe").eval()
text_embedder = TextToEmbeddingModelPipeline(encoder="text_sonar_basic_encoder", tokenizer="text_sonar_basic_encoder")
src_embs = text_embedder.predict(["Le chat s'assit sur le tapis."], source_lang="fra_Latn")
mt_embs = text_embedder.predict(["The cat sat down on the carpet."], source_lang="eng_Latn")
print(blaser(src=src_embs, mt=mt_embs).item()) # 4.708
```
With BLASER 2.0 models, SONAR text and speech embeddings can be used interchangeably.
## Model details
- **Developed by:** Seamless Communication et al.
- **License:** CC-BY-NC 4.0 license
- **Citation:** If you use BLASER 2.0 in your work, please cite
[the paper](https://ai.meta.com/resources/models-and-libraries/seamless-communication/):
```bibtex
@article{seamlessm4t2023,
title={SeamlessM4T—Massively Multilingual \& Multimodal Machine Translation},
author={{Seamless Communication}, Lo\"{i}c Barrault, Yu-An Chung, Mariano Cora Meglioli, David Dale, Ning Dong, Paul-Ambroise Duquenne, Hady Elsahar, Hongyu Gong, Kevin Heffernan, John Hoffman, Christopher Klaiber, Pengwei Li, Daniel Licht, Jean Maillard, Alice Rakotoarison, Kaushik Ram Sadagopan, Guillaume Wenzek, Ethan Ye, Bapi Akula, Peng-Jen Chen, Naji El Hachem, Brian Ellis, Gabriel Mejia Gonzalez, Justin Haaheim, Prangthip Hansanti, Russ Howes, Bernie Huang, Min-Jae Hwang, Hirofumi Inaguma, Somya Jain, Elahe Kalbassi, Amanda Kallet, Ilia Kulikov, Janice Lam, Daniel Li, Xutai Ma, Ruslan Mavlyutov, Benjamin Peloquin, Mohamed Ramadan, Abinesh Ramakrishnan, Anna Sun, Kevin Tran, Tuan Tran, Igor Tufanov, Vish Vogeti, Carleigh Wood, Yilin Yang, Bokai Yu, Pierre Andrews, Can Balioglu, Marta R. Costa-juss\`{a} \footnotemark[3], Onur \,{C}elebi,Maha Elbayad,Cynthia Gao, Francisco Guzm\'an, Justine Kao, Ann Lee, Alexandre Mourachko, Juan Pino, Sravya Popuri, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, Paden Tomasello, Changhan Wang, Jeff Wang, Skyler Wang},
journal={ArXiv},
year={2023}
}
```
|
ColdChair/hw_1
|
ColdChair
| 2023-08-22T13:53:18Z | 0 | 0 | null |
[
"dataset:roneneldan/TinyStories",
"license:openrail",
"region:us"
] | null | 2023-08-22T13:51:46Z |
---
license: openrail
datasets:
- roneneldan/TinyStories
---
|
natfil/pmserver_bloom-3b
|
natfil
| 2023-08-22T13:45:44Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-22T13:45:33Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Jbdddsai/lora-trained-xl-colab_gieskanne_500it_lr_1e-4
|
Jbdddsai
| 2023-08-22T13:20:10Z | 4 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-08-22T10:04:27Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of datadrivers watering can
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Jbddai/lora-trained-xl-colab_gieskanne_500it_lr_1e-4
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of datadrivers watering can using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
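The card does not include an inference snippet; a minimal sketch with `diffusers`, loading the base model and VAE named above, might look like this:
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Jbdddsai/lora-trained-xl-colab_gieskanne_500it_lr_1e-4")

image = pipe("a photo of datadrivers watering can").images[0]
image.save("watering_can.png")
```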















|
dkqjrm/20230822202040
|
dkqjrm
| 2023-08-22T13:20:05Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-22T11:20:58Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230822202040'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230822202040
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5208
- Accuracy: 0.7365
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 312 | 0.7722 | 0.5271 |
| 0.7133 | 2.0 | 624 | 0.5588 | 0.4982 |
| 0.7133 | 3.0 | 936 | 0.6273 | 0.4729 |
| 0.6364 | 4.0 | 1248 | 0.5976 | 0.4946 |
| 0.6219 | 5.0 | 1560 | 0.7382 | 0.5415 |
| 0.6219 | 6.0 | 1872 | 0.5328 | 0.6282 |
| 0.5974 | 7.0 | 2184 | 0.5253 | 0.6282 |
| 0.5974 | 8.0 | 2496 | 0.8677 | 0.5668 |
| 0.5614 | 9.0 | 2808 | 0.5249 | 0.5884 |
| 0.5732 | 10.0 | 3120 | 0.5113 | 0.6895 |
| 0.5732 | 11.0 | 3432 | 0.5092 | 0.6931 |
| 0.5559 | 12.0 | 3744 | 0.4693 | 0.7148 |
| 0.5301 | 13.0 | 4056 | 0.4781 | 0.7256 |
| 0.5301 | 14.0 | 4368 | 0.5693 | 0.6823 |
| 0.4999 | 15.0 | 4680 | 0.4649 | 0.7256 |
| 0.4999 | 16.0 | 4992 | 0.5702 | 0.6859 |
| 0.4712 | 17.0 | 5304 | 0.4598 | 0.7401 |
| 0.4431 | 18.0 | 5616 | 0.4750 | 0.7076 |
| 0.4431 | 19.0 | 5928 | 0.4782 | 0.7184 |
| 0.4348 | 20.0 | 6240 | 0.6236 | 0.6570 |
| 0.4113 | 21.0 | 6552 | 0.5125 | 0.7473 |
| 0.4113 | 22.0 | 6864 | 0.5703 | 0.6787 |
| 0.4035 | 23.0 | 7176 | 0.5080 | 0.7112 |
| 0.4035 | 24.0 | 7488 | 0.4619 | 0.7365 |
| 0.3898 | 25.0 | 7800 | 0.5639 | 0.7076 |
| 0.3736 | 26.0 | 8112 | 0.4968 | 0.7292 |
| 0.3736 | 27.0 | 8424 | 0.4483 | 0.7509 |
| 0.3708 | 28.0 | 8736 | 0.4929 | 0.7220 |
| 0.3656 | 29.0 | 9048 | 0.5168 | 0.7401 |
| 0.3656 | 30.0 | 9360 | 0.5618 | 0.7256 |
| 0.3545 | 31.0 | 9672 | 0.4900 | 0.7365 |
| 0.3545 | 32.0 | 9984 | 0.4676 | 0.7256 |
| 0.3474 | 33.0 | 10296 | 0.5222 | 0.7220 |
| 0.3326 | 34.0 | 10608 | 0.4861 | 0.7437 |
| 0.3326 | 35.0 | 10920 | 0.4560 | 0.7401 |
| 0.3313 | 36.0 | 11232 | 0.5375 | 0.7256 |
| 0.3209 | 37.0 | 11544 | 0.5606 | 0.7329 |
| 0.3209 | 38.0 | 11856 | 0.5173 | 0.7401 |
| 0.3169 | 39.0 | 12168 | 0.5060 | 0.7329 |
| 0.3169 | 40.0 | 12480 | 0.5250 | 0.7365 |
| 0.3096 | 41.0 | 12792 | 0.5133 | 0.7256 |
| 0.3097 | 42.0 | 13104 | 0.5012 | 0.7437 |
| 0.3097 | 43.0 | 13416 | 0.5274 | 0.7401 |
| 0.3049 | 44.0 | 13728 | 0.5086 | 0.7329 |
| 0.2929 | 45.0 | 14040 | 0.4934 | 0.7329 |
| 0.2929 | 46.0 | 14352 | 0.5667 | 0.7401 |
| 0.293 | 47.0 | 14664 | 0.5047 | 0.7437 |
| 0.293 | 48.0 | 14976 | 0.5353 | 0.7292 |
| 0.291 | 49.0 | 15288 | 0.5280 | 0.7401 |
| 0.2817 | 50.0 | 15600 | 0.5142 | 0.7365 |
| 0.2817 | 51.0 | 15912 | 0.5141 | 0.7329 |
| 0.2822 | 52.0 | 16224 | 0.4990 | 0.7329 |
| 0.2758 | 53.0 | 16536 | 0.5074 | 0.7292 |
| 0.2758 | 54.0 | 16848 | 0.5147 | 0.7329 |
| 0.2763 | 55.0 | 17160 | 0.5138 | 0.7365 |
| 0.2763 | 56.0 | 17472 | 0.5291 | 0.7365 |
| 0.2782 | 57.0 | 17784 | 0.5204 | 0.7329 |
| 0.272 | 58.0 | 18096 | 0.5093 | 0.7365 |
| 0.272 | 59.0 | 18408 | 0.5217 | 0.7365 |
| 0.2758 | 60.0 | 18720 | 0.5208 | 0.7365 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Polo123/llama2-qlora-finetunined-task
|
Polo123
| 2023-08-22T13:19:45Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-22T13:19:12Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
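These values map roughly onto a `transformers` `BitsAndBytesConfig` as in the sketch below (illustrative; the original training script is not part of this card):
```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization without double quantization, float16 compute, as listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```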
### Framework versions
- PEFT 0.6.0.dev0
|
dkqjrm/20230822202124
|
dkqjrm
| 2023-08-22T13:12:14Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-22T11:21:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230822202124'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230822202124
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4836
- Accuracy: 0.7437
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 156 | 0.5548 | 0.4693 |
| No log | 2.0 | 312 | 0.5565 | 0.4838 |
| No log | 3.0 | 468 | 0.5531 | 0.4729 |
| 0.6259 | 4.0 | 624 | 0.5810 | 0.4729 |
| 0.6259 | 5.0 | 780 | 0.6010 | 0.5596 |
| 0.6259 | 6.0 | 936 | 0.4969 | 0.6462 |
| 0.5907 | 7.0 | 1092 | 0.7982 | 0.5487 |
| 0.5907 | 8.0 | 1248 | 0.4883 | 0.6318 |
| 0.5907 | 9.0 | 1404 | 0.4714 | 0.6931 |
| 0.5602 | 10.0 | 1560 | 0.9236 | 0.5560 |
| 0.5602 | 11.0 | 1716 | 0.4972 | 0.6968 |
| 0.5602 | 12.0 | 1872 | 0.5116 | 0.6895 |
| 0.5015 | 13.0 | 2028 | 0.4913 | 0.7076 |
| 0.5015 | 14.0 | 2184 | 0.4683 | 0.7112 |
| 0.5015 | 15.0 | 2340 | 0.5265 | 0.6895 |
| 0.5015 | 16.0 | 2496 | 0.4616 | 0.7040 |
| 0.4782 | 17.0 | 2652 | 0.5788 | 0.6679 |
| 0.4782 | 18.0 | 2808 | 0.4471 | 0.7292 |
| 0.4782 | 19.0 | 2964 | 0.4588 | 0.7545 |
| 0.4628 | 20.0 | 3120 | 0.6477 | 0.6426 |
| 0.4628 | 21.0 | 3276 | 0.5305 | 0.6968 |
| 0.4628 | 22.0 | 3432 | 0.4549 | 0.7292 |
| 0.4248 | 23.0 | 3588 | 0.5101 | 0.7256 |
| 0.4248 | 24.0 | 3744 | 0.4763 | 0.7184 |
| 0.4248 | 25.0 | 3900 | 0.5809 | 0.6895 |
| 0.4067 | 26.0 | 4056 | 0.4461 | 0.7473 |
| 0.4067 | 27.0 | 4212 | 0.4460 | 0.7473 |
| 0.4067 | 28.0 | 4368 | 0.4454 | 0.7509 |
| 0.3941 | 29.0 | 4524 | 0.4664 | 0.7365 |
| 0.3941 | 30.0 | 4680 | 0.5039 | 0.7292 |
| 0.3941 | 31.0 | 4836 | 0.4548 | 0.7473 |
| 0.3941 | 32.0 | 4992 | 0.4484 | 0.7437 |
| 0.3749 | 33.0 | 5148 | 0.4924 | 0.7473 |
| 0.3749 | 34.0 | 5304 | 0.4569 | 0.7473 |
| 0.3749 | 35.0 | 5460 | 0.4604 | 0.7617 |
| 0.3586 | 36.0 | 5616 | 0.4448 | 0.7653 |
| 0.3586 | 37.0 | 5772 | 0.4768 | 0.7365 |
| 0.3586 | 38.0 | 5928 | 0.5052 | 0.7473 |
| 0.3521 | 39.0 | 6084 | 0.5167 | 0.7329 |
| 0.3521 | 40.0 | 6240 | 0.4425 | 0.7509 |
| 0.3521 | 41.0 | 6396 | 0.4730 | 0.7545 |
| 0.3407 | 42.0 | 6552 | 0.4624 | 0.7509 |
| 0.3407 | 43.0 | 6708 | 0.4847 | 0.7509 |
| 0.3407 | 44.0 | 6864 | 0.5371 | 0.7329 |
| 0.3329 | 45.0 | 7020 | 0.4841 | 0.7545 |
| 0.3329 | 46.0 | 7176 | 0.4815 | 0.7365 |
| 0.3329 | 47.0 | 7332 | 0.4678 | 0.7509 |
| 0.3329 | 48.0 | 7488 | 0.4918 | 0.7473 |
| 0.3235 | 49.0 | 7644 | 0.4592 | 0.7581 |
| 0.3235 | 50.0 | 7800 | 0.5005 | 0.7437 |
| 0.3235 | 51.0 | 7956 | 0.4777 | 0.7545 |
| 0.3193 | 52.0 | 8112 | 0.4558 | 0.7545 |
| 0.3193 | 53.0 | 8268 | 0.4870 | 0.7437 |
| 0.3193 | 54.0 | 8424 | 0.4792 | 0.7437 |
| 0.3132 | 55.0 | 8580 | 0.4673 | 0.7437 |
| 0.3132 | 56.0 | 8736 | 0.4943 | 0.7437 |
| 0.3132 | 57.0 | 8892 | 0.4970 | 0.7437 |
| 0.311 | 58.0 | 9048 | 0.4914 | 0.7401 |
| 0.311 | 59.0 | 9204 | 0.4887 | 0.7437 |
| 0.311 | 60.0 | 9360 | 0.4836 | 0.7437 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dkqjrm/20230822202056
|
dkqjrm
| 2023-08-22T13:11:55Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-22T11:21:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230822202056'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230822202056
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1724
- Accuracy: 0.7112
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 312 | 0.1785 | 0.5307 |
| 0.2552 | 2.0 | 624 | 0.1826 | 0.5054 |
| 0.2552 | 3.0 | 936 | 0.3328 | 0.4729 |
| 0.24 | 4.0 | 1248 | 0.2050 | 0.4729 |
| 0.2369 | 5.0 | 1560 | 0.1750 | 0.6065 |
| 0.2369 | 6.0 | 1872 | 0.1752 | 0.4765 |
| 0.2199 | 7.0 | 2184 | 0.1799 | 0.5921 |
| 0.2199 | 8.0 | 2496 | 0.1896 | 0.4729 |
| 0.1955 | 9.0 | 2808 | 0.1727 | 0.6245 |
| 0.185 | 10.0 | 3120 | 0.1734 | 0.5668 |
| 0.185 | 11.0 | 3432 | 0.1781 | 0.5812 |
| 0.184 | 12.0 | 3744 | 0.1711 | 0.6318 |
| 0.1819 | 13.0 | 4056 | 0.1783 | 0.4910 |
| 0.1819 | 14.0 | 4368 | 0.1703 | 0.6534 |
| 0.1793 | 15.0 | 4680 | 0.1697 | 0.6931 |
| 0.1793 | 16.0 | 4992 | 0.1710 | 0.6643 |
| 0.179 | 17.0 | 5304 | 0.1728 | 0.6534 |
| 0.1784 | 18.0 | 5616 | 0.1712 | 0.6498 |
| 0.1784 | 19.0 | 5928 | 0.1726 | 0.6065 |
| 0.1778 | 20.0 | 6240 | 0.1720 | 0.6679 |
| 0.1761 | 21.0 | 6552 | 0.1724 | 0.6606 |
| 0.1761 | 22.0 | 6864 | 0.1792 | 0.6534 |
| 0.1761 | 23.0 | 7176 | 0.1700 | 0.6715 |
| 0.1761 | 24.0 | 7488 | 0.1698 | 0.6679 |
| 0.1748 | 25.0 | 7800 | 0.1697 | 0.6968 |
| 0.1744 | 26.0 | 8112 | 0.1729 | 0.6859 |
| 0.1744 | 27.0 | 8424 | 0.1702 | 0.6570 |
| 0.1736 | 28.0 | 8736 | 0.1708 | 0.6931 |
| 0.1723 | 29.0 | 9048 | 0.1698 | 0.6787 |
| 0.1723 | 30.0 | 9360 | 0.1799 | 0.6462 |
| 0.1735 | 31.0 | 9672 | 0.1727 | 0.6751 |
| 0.1735 | 32.0 | 9984 | 0.1732 | 0.6498 |
| 0.1722 | 33.0 | 10296 | 0.1702 | 0.6751 |
| 0.1709 | 34.0 | 10608 | 0.1707 | 0.6968 |
| 0.1709 | 35.0 | 10920 | 0.1714 | 0.6968 |
| 0.1697 | 36.0 | 11232 | 0.1712 | 0.6751 |
| 0.1696 | 37.0 | 11544 | 0.1788 | 0.6570 |
| 0.1696 | 38.0 | 11856 | 0.1703 | 0.6787 |
| 0.1697 | 39.0 | 12168 | 0.1735 | 0.6751 |
| 0.1697 | 40.0 | 12480 | 0.1740 | 0.6787 |
| 0.1683 | 41.0 | 12792 | 0.1710 | 0.6895 |
| 0.1688 | 42.0 | 13104 | 0.1724 | 0.7076 |
| 0.1688 | 43.0 | 13416 | 0.1718 | 0.7004 |
| 0.1679 | 44.0 | 13728 | 0.1736 | 0.7040 |
| 0.1681 | 45.0 | 14040 | 0.1720 | 0.7040 |
| 0.1681 | 46.0 | 14352 | 0.1717 | 0.7076 |
| 0.1664 | 47.0 | 14664 | 0.1710 | 0.6895 |
| 0.1664 | 48.0 | 14976 | 0.1766 | 0.6895 |
| 0.1662 | 49.0 | 15288 | 0.1729 | 0.7040 |
| 0.1655 | 50.0 | 15600 | 0.1704 | 0.7076 |
| 0.1655 | 51.0 | 15912 | 0.1711 | 0.7184 |
| 0.1665 | 52.0 | 16224 | 0.1709 | 0.7040 |
| 0.1651 | 53.0 | 16536 | 0.1711 | 0.6931 |
| 0.1651 | 54.0 | 16848 | 0.1736 | 0.7040 |
| 0.1646 | 55.0 | 17160 | 0.1712 | 0.7112 |
| 0.1646 | 56.0 | 17472 | 0.1740 | 0.7076 |
| 0.1647 | 57.0 | 17784 | 0.1723 | 0.7076 |
| 0.1642 | 58.0 | 18096 | 0.1715 | 0.7004 |
| 0.1642 | 59.0 | 18408 | 0.1727 | 0.7076 |
| 0.1643 | 60.0 | 18720 | 0.1724 | 0.7112 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dkqjrm/20230822202110
|
dkqjrm
| 2023-08-22T13:09:57Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-22T11:21:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230822202110'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230822202110
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1679
- Accuracy: 0.7148
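The card does not document which SuperGLUE task or label mapping was used, so the following is only a minimal inference sketch with an assumed sentence-pair input:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("dkqjrm/20230822202110")
model = AutoModelForSequenceClassification.from_pretrained("dkqjrm/20230822202110")

# Sentence-pair input; the actual SuperGLUE task and label names are not stated in this card.
inputs = tokenizer("Is the sky blue?", "The sky appears blue on a clear day.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```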
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 156 | 0.4220 | 0.5271 |
| No log | 2.0 | 312 | 0.2767 | 0.4729 |
| No log | 3.0 | 468 | 0.4345 | 0.4729 |
| 0.2507 | 4.0 | 624 | 0.2006 | 0.5343 |
| 0.2507 | 5.0 | 780 | 0.1797 | 0.4729 |
| 0.2507 | 6.0 | 936 | 0.2180 | 0.5271 |
| 0.2023 | 7.0 | 1092 | 0.1726 | 0.5054 |
| 0.2023 | 8.0 | 1248 | 0.1811 | 0.4729 |
| 0.2023 | 9.0 | 1404 | 0.1828 | 0.5451 |
| 0.2077 | 10.0 | 1560 | 0.1921 | 0.5343 |
| 0.2077 | 11.0 | 1716 | 0.1772 | 0.4838 |
| 0.2077 | 12.0 | 1872 | 0.1724 | 0.6462 |
| 0.189 | 13.0 | 2028 | 0.1718 | 0.5379 |
| 0.189 | 14.0 | 2184 | 0.1728 | 0.5126 |
| 0.189 | 15.0 | 2340 | 0.1775 | 0.5126 |
| 0.189 | 16.0 | 2496 | 0.1813 | 0.5596 |
| 0.1803 | 17.0 | 2652 | 0.1739 | 0.6318 |
| 0.1803 | 18.0 | 2808 | 0.1718 | 0.6137 |
| 0.1803 | 19.0 | 2964 | 0.1711 | 0.6390 |
| 0.1791 | 20.0 | 3120 | 0.1797 | 0.5957 |
| 0.1791 | 21.0 | 3276 | 0.1710 | 0.6859 |
| 0.1791 | 22.0 | 3432 | 0.1729 | 0.6643 |
| 0.1781 | 23.0 | 3588 | 0.1701 | 0.6823 |
| 0.1781 | 24.0 | 3744 | 0.1706 | 0.6390 |
| 0.1781 | 25.0 | 3900 | 0.1708 | 0.6859 |
| 0.1765 | 26.0 | 4056 | 0.1697 | 0.6643 |
| 0.1765 | 27.0 | 4212 | 0.1698 | 0.6715 |
| 0.1765 | 28.0 | 4368 | 0.1710 | 0.6426 |
| 0.176 | 29.0 | 4524 | 0.1710 | 0.6931 |
| 0.176 | 30.0 | 4680 | 0.1703 | 0.6968 |
| 0.176 | 31.0 | 4836 | 0.1725 | 0.6570 |
| 0.176 | 32.0 | 4992 | 0.1699 | 0.6715 |
| 0.1749 | 33.0 | 5148 | 0.1710 | 0.6895 |
| 0.1749 | 34.0 | 5304 | 0.1694 | 0.7220 |
| 0.1749 | 35.0 | 5460 | 0.1700 | 0.6534 |
| 0.1739 | 36.0 | 5616 | 0.1690 | 0.7112 |
| 0.1739 | 37.0 | 5772 | 0.1685 | 0.7220 |
| 0.1739 | 38.0 | 5928 | 0.1696 | 0.7040 |
| 0.1738 | 39.0 | 6084 | 0.1688 | 0.7148 |
| 0.1738 | 40.0 | 6240 | 0.1692 | 0.7220 |
| 0.1738 | 41.0 | 6396 | 0.1683 | 0.7365 |
| 0.1726 | 42.0 | 6552 | 0.1690 | 0.6679 |
| 0.1726 | 43.0 | 6708 | 0.1679 | 0.7076 |
| 0.1726 | 44.0 | 6864 | 0.1691 | 0.7184 |
| 0.1719 | 45.0 | 7020 | 0.1692 | 0.7292 |
| 0.1719 | 46.0 | 7176 | 0.1685 | 0.7329 |
| 0.1719 | 47.0 | 7332 | 0.1684 | 0.7184 |
| 0.1719 | 48.0 | 7488 | 0.1690 | 0.7112 |
| 0.1712 | 49.0 | 7644 | 0.1690 | 0.7292 |
| 0.1712 | 50.0 | 7800 | 0.1685 | 0.6931 |
| 0.1712 | 51.0 | 7956 | 0.1680 | 0.7256 |
| 0.1705 | 52.0 | 8112 | 0.1687 | 0.7076 |
| 0.1705 | 53.0 | 8268 | 0.1685 | 0.7184 |
| 0.1705 | 54.0 | 8424 | 0.1689 | 0.7365 |
| 0.1705 | 55.0 | 8580 | 0.1677 | 0.7148 |
| 0.1705 | 56.0 | 8736 | 0.1694 | 0.7220 |
| 0.1705 | 57.0 | 8892 | 0.1682 | 0.7256 |
| 0.1692 | 58.0 | 9048 | 0.1684 | 0.7148 |
| 0.1692 | 59.0 | 9204 | 0.1679 | 0.7148 |
| 0.1692 | 60.0 | 9360 | 0.1679 | 0.7148 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
kaanhho/speecht5_finetuned_voxpopuli_it
|
kaanhho
| 2023-08-22T13:07:13Z | 86 | 0 |
transformers
|
[
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"dataset:voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-08-22T12:08:05Z |
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
- text-to-speech
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_it
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5724
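A minimal inference sketch is shown below; the random speaker embedding is only a placeholder for a real 512-dimensional x-vector, and `soundfile` is assumed for saving the audio:
```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("kaanhho/speecht5_finetuned_voxpopuli_it")
model = SpeechT5ForTextToSpeech.from_pretrained("kaanhho/speecht5_finetuned_voxpopuli_it")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Buongiorno a tutti", return_tensors="pt")
# Placeholder speaker embedding; replace with an x-vector extracted from a reference speaker.
speaker_embeddings = torch.randn(1, 512)
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```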
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5644 | 1.53 | 1000 | 0.5845 |
| 0.5521 | 3.07 | 2000 | 0.5724 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
anshudaur/chair_model_with_prior_preservation
|
anshudaur
| 2023-08-22T13:05:37Z | 12 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-22T12:48:20Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: photo of a <new1> chair
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - anshudaur/chair_model_with_prior_preservation
These are Custom Diffusion adaption weights for CompVis/stable-diffusion-v1-4. The weights were trained on photo of a <new1> chair using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images in the following.


For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
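A minimal inference sketch, assuming the default Custom Diffusion weight file names produced by the diffusers training script:
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16).to("cuda")
# The weight file names below are the diffusers defaults and may differ in this repository.
pipe.unet.load_attn_procs("anshudaur/chair_model_with_prior_preservation", weight_name="pytorch_custom_diffusion_weights.bin")
pipe.load_textual_inversion("anshudaur/chair_model_with_prior_preservation", weight_name="<new1>.bin")
image = pipe("photo of a <new1> chair", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("chair.png")
```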
|
akar49/detr-crack-II
|
akar49
| 2023-08-22T12:54:57Z | 34 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:crack_detection-merged-ii",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-08-22T11:09:26Z |
---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
datasets:
- crack_detection-merged-ii
model-index:
- name: detr-crack-II
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-crack-II
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the crack_detection-merged-ii dataset.
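Since no evaluation metrics or usage notes are given, the following is only a minimal inference sketch; the input image path and the 0.5 confidence threshold are placeholders:
```python
from transformers import AutoImageProcessor, AutoModelForObjectDetection
from PIL import Image
import torch

processor = AutoImageProcessor.from_pretrained("akar49/detr-crack-II")
model = AutoModelForObjectDetection.from_pretrained("akar49/detr-crack-II")

image = Image.open("wall.jpg")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)

# Keep detections above an assumed confidence threshold of 0.5.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```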
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
barbieheimer/MND_TweetEvalBert_model
|
barbieheimer
| 2023-08-22T12:40:58Z | 107 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tweet_eval",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-19T05:33:08Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- tweet_eval
model-index:
- name: MND_TweetEvalBert_model
results: []
language:
- en
pipeline_tag: text-classification
metrics:
- accuracy
widget:
- text: 'I loved Barbie and Oppenheimer'
example_title: Barbenheimer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MND_TweetEvalBert_model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7241
## Model description
This is how to use the model with the transformers library for a text-classification task.
The model was trained for sentiment analysis with a text-classification architecture.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("barbieheimer/MND_TweetEvalBert_model")
model = AutoModelForSequenceClassification.from_pretrained("barbieheimer/MND_TweetEvalBert_model")
# We can now use the model in the pipeline.
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
# Get some text to fool around with for a basic test.
text = "I loved Oppenheimer and Barbie "
classifier(text) # Let's see if the model works on our example text.
```
```
[{'label': 'JOY', 'score': 0.9845513701438904}]
```
## Training Evaluation Results
```python
{'eval_loss': 0.7240552306175232,
'eval_runtime': 3.7803,
'eval_samples_per_second': 375.896,
'eval_steps_per_second': 23.543,
'epoch': 5.0}
```
## Overall Model Evaluation Results
```python
{'accuracy': {'confidence_interval': (0.783, 0.832),
'standard_error': 0.01241992329458207,
'score': 0.808},
'total_time_in_seconds': 150.93268656500004,
'samples_per_second': 6.625470087086432,
'latency_in_seconds': 0.15093268656500003}
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
```python
{'training_loss': 0.3821827131159165}
{'train_runtime': 174.1546, 'train_samples_per_second': 93.509,
'train_steps_per_second': 5.857, 'total_flos': 351397804992312.0,
'train_loss': 0.3821827131159165, 'epoch': 5.0}
```
```
Step: 500
{training loss: 0.607100}
Step: 1000
{training loss: 0.169000}
```
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
GrantW65/ppo-Huggy
|
GrantW65
| 2023-08-22T12:40:40Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-08-22T12:40:35Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: GrantW65/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
natsusakiyomi/SakuraMix
|
natsusakiyomi
| 2023-08-22T12:30:44Z | 115 | 70 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"ja",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-03-17T17:37:21Z |
---
license: creativeml-openrail-m
language:
- ja
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
library_name: diffusers
---
<div class="flex justify-center">
<div class="container p-0 w-100">
<img class="mt-0 object-cover rounded-t-lg w-100"
style="height: 320px;"
src="https://pbs.twimg.com/media/Fwzt7HZaEAAkX2U?format=jpg"
width="100%"/>
<div class="flex px-4">
<div class="flex-auto">
<h1 class="mb-2 text-3xl font-bold leading-tight" style="color: rgb(252, 238, 235/var(--tw-text-opacity));">
SakuraMixSeries
</h1>
<p class="mb-4 text-base text-neutral-600 dark:text-neutral-200">
背景とキャラクタークオリティーを両立させたVAE内蔵型モデル<br>
Model with built-in VAE for both background and character quality
</p>
</div>
<div>
<a
href="https://twitter.com/min__san"
class="mb-2 inline-block rounded px-6 py-2.5 text-white shadow-md"
style="background-color: #1da1f2">
<svg xmlns="http://www.w3.org/2000/svg" class="h-3.5 w-3.5" fill="currentColor" viewBox="0 0 24 24">
<path d="M24 4.557c-.883.392-1.832.656-2.828.775 1.017-.609 1.798-1.574 2.165-2.724-.951.564-2.005.974-3.127 1.195-.897-.957-2.178-1.555-3.594-1.555-3.179 0-5.515 2.966-4.797 6.045-4.091-.205-7.719-2.165-10.148-5.144-1.29 2.213-.669 5.108 1.523 6.574-.806-.026-1.566-.247-2.229-.616-.054 2.281 1.581 4.415 3.949 4.89-.693.188-1.452.232-2.224.084.626 1.956 2.444 3.379 4.6 3.419-2.07 1.623-4.678 2.348-7.29 2.04 2.179 1.397 4.768 2.212 7.548 2.212 9.142 0 14.307-7.721 13.995-14.646.962-.695 1.797-1.562 2.457-2.549z" />
</svg>
</a>
</div>
</div>
</div>
</div>
---
<h4>📄 ライセンス / License</h4>
<div class="px-2">
<table class="table-fixed border mt-0 text-xs">
<tbody>
<tr>
<td class="px-4 text-base" colspan="2">
<a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">
修正 CreativeML OpenRAIL-M ライセンス / Modified CreativeML OpenRAIL-M license
</a>
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルのクレジットを入れずに使用する<br>
Use the model without crediting the creator
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルで生成した画像を商用利用する<br>
Sell images they generate
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデルを商用の画像生成サービスで利用する</br>
Run on services that generate images for money
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルを使用したマージモデルを共有する<br>
Share merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデル、またはこのモデルをマージしたモデルを販売する</br>
Sell this model or merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデルをマージしたモデルに異なる権限を設定する</br>
Have different permissions when sharing merges
</td>
</tr>
</tbody>
</table>
</div>
<h3 id="blue_pencil-v7" class="mt-0 text-2xl">
<code>SakuraMix-v4</code> <small></small>
</h3>
<div>
A revised version of v3.
Hands and general breakdowns occur less often overall.<br>
Fine detail seems slightly reduced, so long-time SakuraMix users are encouraged to use a flat LoRA.
<hr class="my-6 h-0.5 border-t-0 opacity-100 dark:opacity-50" style="background-color: rgb(245 245 245/var(--tw-bg-opacity));">
<h3 id="blue_pencil-v7" class="mt-0 text-2xl">
<code>SakuraMix-v3</code> <small></small>
</h3>
<div>
A revised version of v2.
Outfits and compositions feel more varied than before.
It breaks down more easily, but when it works it can produce very good images.<br>
Personally, I recommend v2.
<hr class="my-6 h-0.5 border-t-0 opacity-100 dark:opacity-50" style="background-color: rgb(245 245 245/var(--tw-bg-opacity));">
<h3 id="SakuraMix-v2" class="mt-0 text-2xl">
<code>SakuraMix-v2</code> <small></small>
</h3>
<div>
A model built by modifying HimawariMix-v2B (an unreleased draft).<br>
HimawariMix-v2 already emphasizes characters; this model strengthens them even further.
<hr class="my-6 h-0.5 border-t-0 opacity-100 dark:opacity-50" style="background-color: rgb(245 245 245/var(--tw-bg-opacity));">
<h3 id="SakuraMix-v1" class="mt-0 text-2xl">
<code>SakuraMix-v1</code> <small></small>
</h3>
<div>
The first SakuraMix.
I no longer remember its distinguishing features.<br>
---
# Author & Contact
Twitter: [@min__san](https://twitter.com/min__san)<br>
mail: ([email protected])
|
asenella/ms_config_1_alpha_10_beta_250_seed_1
|
asenella
| 2023-08-22T12:29:55Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-08-22T12:29:53Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="asenella/ms_config_1_alpha_10_beta_250_seed_1")
```
|
kevinassemi/WaterWizard
|
kevinassemi
| 2023-08-22T12:19:21Z | 0 | 0 | null |
[
"text-generation",
"en",
"license:llama2",
"region:us"
] |
text-generation
| 2023-08-22T12:17:16Z |
---
license: llama2
language:
- en
pipeline_tag: text-generation
---
|
Muhammadreza/mann-e-bitmap-revised-2
|
Muhammadreza
| 2023-08-22T12:14:16Z | 4 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-22T12:01:29Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### mann-e_bitmap_revised-2 Dreambooth model trained by Muhammadreza with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
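A minimal diffusers inference sketch; the instance token in the prompt is a guess based on the concept name and may need to be adjusted:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Muhammadreza/mann-e-bitmap-revised-2", torch_dtype=torch.float16).to("cuda")
# The trained instance token is not stated in the card; "mann-e_bitmap_revised-2" is assumed from the concept name.
image = pipe("a bitmap icon of a rocket, mann-e_bitmap_revised-2 style").images[0]
image.save("sample.png")
```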
Sample pictures of this concept:
|
Saurabh16100/distilgpt2-finetuned-wikitext2
|
Saurabh16100
| 2023-08-22T12:13:32Z | 204 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-22T11:28:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6421
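A minimal generation sketch with the fine-tuned checkpoint (the prompt and sampling settings are arbitrary):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Saurabh16100/distilgpt2-finetuned-wikitext2")
print(generator("The history of natural language processing", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```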
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7602 | 1.0 | 2334 | 3.6669 |
| 3.653 | 2.0 | 4668 | 3.6472 |
| 3.6006 | 3.0 | 7002 | 3.6421 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
loicspigeleer/PPO-LunarLander-v2
|
loicspigeleer
| 2023-08-22T11:56:15Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-22T11:55:51Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.06 +/- 16.19
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename inside the repository is an assumption:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is an assumption about how the repo is laid out.
checkpoint = load_from_hub(repo_id="loicspigeleer/PPO-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
EliKet/lora-trained-xl-colab
|
EliKet
| 2023-08-22T11:53:44Z | 2 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-0.9",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-0.9",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-08-17T09:17:36Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-0.9
instance_prompt: a photo of model
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - EliKet/lora-trained-xl-colab
These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-0.9. The weights were trained on a photo of model using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
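A minimal inference sketch, assuming access to the gated base checkpoint and loading the LoRA weights from this repository:
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-0.9", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("EliKet/lora-trained-xl-colab")
image = pipe("a photo of model", num_inference_steps=30).images[0]
image.save("sample.png")
```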
|
Arup-Dutta-Bappy/bert-large-cased-whole-word-masking-finetuned-squad
|
Arup-Dutta-Bappy
| 2023-08-22T11:50:57Z | 121 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:google-bert/bert-large-cased-whole-word-masking",
"base_model:finetune:google-bert/bert-large-cased-whole-word-masking",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-13T17:53:28Z |
---
license: apache-2.0
base_model: bert-large-cased-whole-word-masking
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-large-cased-whole-word-masking-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-whole-word-masking-finetuned-squad
This model is a fine-tuned version of [bert-large-cased-whole-word-masking](https://huggingface.co/bert-large-cased-whole-word-masking) on the squad dataset.
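A minimal question-answering sketch with the transformers pipeline (the question/context pair is only an example):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Arup-Dutta-Bappy/bert-large-cased-whole-word-masking-finetuned-squad")
result = qa(question="Where is the Eiffel Tower located?",
            context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.")
print(result["answer"], result["score"])
```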
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
dkqjrm/20230822185044
|
dkqjrm
| 2023-08-22T11:47:15Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-22T09:51:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230822185044'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230822185044
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3482
- Accuracy: 0.4729
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 312 | 0.3580 | 0.5379 |
| 0.5102 | 2.0 | 624 | 0.3670 | 0.5415 |
| 0.5102 | 3.0 | 936 | 0.4888 | 0.4765 |
| 0.4569 | 4.0 | 1248 | 0.3742 | 0.4982 |
| 0.4403 | 5.0 | 1560 | 0.3796 | 0.5379 |
| 0.4403 | 6.0 | 1872 | 0.3602 | 0.5776 |
| 0.4215 | 7.0 | 2184 | 0.4013 | 0.5415 |
| 0.4215 | 8.0 | 2496 | 0.3596 | 0.5884 |
| 0.4166 | 9.0 | 2808 | 0.3447 | 0.5487 |
| 0.3885 | 10.0 | 3120 | 0.3395 | 0.6101 |
| 0.3885 | 11.0 | 3432 | 0.3395 | 0.6354 |
| 0.3776 | 12.0 | 3744 | 0.3568 | 0.5343 |
| 0.4274 | 13.0 | 4056 | 0.5923 | 0.4729 |
| 0.4274 | 14.0 | 4368 | 0.3503 | 0.5668 |
| 0.4138 | 15.0 | 4680 | 0.3605 | 0.5523 |
| 0.4138 | 16.0 | 4992 | 0.3491 | 0.5451 |
| 0.4025 | 17.0 | 5304 | 0.3728 | 0.5379 |
| 0.394 | 18.0 | 5616 | 0.4029 | 0.4729 |
| 0.394 | 19.0 | 5928 | 0.3682 | 0.4729 |
| 0.3892 | 20.0 | 6240 | 0.3484 | 0.5054 |
| 0.3839 | 21.0 | 6552 | 0.3485 | 0.4765 |
| 0.3839 | 22.0 | 6864 | 0.3467 | 0.5343 |
| 0.3782 | 23.0 | 7176 | 0.3471 | 0.5307 |
| 0.3782 | 24.0 | 7488 | 0.3565 | 0.4693 |
| 0.3757 | 25.0 | 7800 | 0.3483 | 0.5343 |
| 0.3737 | 26.0 | 8112 | 0.3495 | 0.5271 |
| 0.3737 | 27.0 | 8424 | 0.3550 | 0.4729 |
| 0.3724 | 28.0 | 8736 | 0.3544 | 0.4729 |
| 0.3696 | 29.0 | 9048 | 0.3478 | 0.5307 |
| 0.3696 | 30.0 | 9360 | 0.3519 | 0.5271 |
| 0.3693 | 31.0 | 9672 | 0.3515 | 0.5271 |
| 0.3693 | 32.0 | 9984 | 0.3487 | 0.4729 |
| 0.3674 | 33.0 | 10296 | 0.3492 | 0.5379 |
| 0.3628 | 34.0 | 10608 | 0.3555 | 0.4729 |
| 0.3628 | 35.0 | 10920 | 0.3550 | 0.4729 |
| 0.3635 | 36.0 | 11232 | 0.3686 | 0.4729 |
| 0.3636 | 37.0 | 11544 | 0.3488 | 0.4801 |
| 0.3636 | 38.0 | 11856 | 0.3484 | 0.4874 |
| 0.3595 | 39.0 | 12168 | 0.3477 | 0.4910 |
| 0.3595 | 40.0 | 12480 | 0.3486 | 0.5307 |
| 0.3598 | 41.0 | 12792 | 0.3488 | 0.4801 |
| 0.3594 | 42.0 | 13104 | 0.3614 | 0.4729 |
| 0.3594 | 43.0 | 13416 | 0.3476 | 0.5199 |
| 0.3586 | 44.0 | 13728 | 0.3482 | 0.4729 |
| 0.3581 | 45.0 | 14040 | 0.3519 | 0.4729 |
| 0.3581 | 46.0 | 14352 | 0.3494 | 0.4729 |
| 0.3579 | 47.0 | 14664 | 0.3613 | 0.4729 |
| 0.3579 | 48.0 | 14976 | 0.3480 | 0.4729 |
| 0.3573 | 49.0 | 15288 | 0.3480 | 0.4729 |
| 0.3564 | 50.0 | 15600 | 0.3487 | 0.4729 |
| 0.3564 | 51.0 | 15912 | 0.3529 | 0.4729 |
| 0.3561 | 52.0 | 16224 | 0.3515 | 0.4729 |
| 0.3554 | 53.0 | 16536 | 0.3475 | 0.4946 |
| 0.3554 | 54.0 | 16848 | 0.3489 | 0.5271 |
| 0.3535 | 55.0 | 17160 | 0.3488 | 0.4729 |
| 0.3535 | 56.0 | 17472 | 0.3478 | 0.5018 |
| 0.3542 | 57.0 | 17784 | 0.3491 | 0.4729 |
| 0.354 | 58.0 | 18096 | 0.3485 | 0.4729 |
| 0.354 | 59.0 | 18408 | 0.3483 | 0.4729 |
| 0.3529 | 60.0 | 18720 | 0.3482 | 0.4729 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dkqjrm/20230822185237
|
dkqjrm
| 2023-08-22T11:44:15Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-22T09:52:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230822185237'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230822185237
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3335
- Accuracy: 0.6498
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 312 | 0.3589 | 0.5415 |
| 0.4381 | 2.0 | 624 | 0.3585 | 0.5560 |
| 0.4381 | 3.0 | 936 | 0.4824 | 0.4729 |
| 0.4251 | 4.0 | 1248 | 0.3497 | 0.5740 |
| 0.4013 | 5.0 | 1560 | 0.5515 | 0.5307 |
| 0.4013 | 6.0 | 1872 | 0.5300 | 0.5343 |
| 0.4064 | 7.0 | 2184 | 0.3515 | 0.4982 |
| 0.4064 | 8.0 | 2496 | 0.3456 | 0.5704 |
| 0.4121 | 9.0 | 2808 | 0.3522 | 0.5632 |
| 0.4048 | 10.0 | 3120 | 0.3437 | 0.5632 |
| 0.4048 | 11.0 | 3432 | 0.3483 | 0.5668 |
| 0.4035 | 12.0 | 3744 | 0.3952 | 0.4657 |
| 0.3797 | 13.0 | 4056 | 0.3535 | 0.4801 |
| 0.3797 | 14.0 | 4368 | 0.3443 | 0.5993 |
| 0.3657 | 15.0 | 4680 | 0.3431 | 0.5379 |
| 0.3657 | 16.0 | 4992 | 0.3478 | 0.5993 |
| 0.3615 | 17.0 | 5304 | 0.3475 | 0.6173 |
| 0.3573 | 18.0 | 5616 | 0.3539 | 0.6101 |
| 0.3573 | 19.0 | 5928 | 0.3384 | 0.6101 |
| 0.3552 | 20.0 | 6240 | 0.3483 | 0.6245 |
| 0.3545 | 21.0 | 6552 | 0.3359 | 0.6173 |
| 0.3545 | 22.0 | 6864 | 0.3844 | 0.5740 |
| 0.349 | 23.0 | 7176 | 0.3436 | 0.6498 |
| 0.349 | 24.0 | 7488 | 0.3422 | 0.6209 |
| 0.351 | 25.0 | 7800 | 0.3495 | 0.6318 |
| 0.3471 | 26.0 | 8112 | 0.3498 | 0.6101 |
| 0.3471 | 27.0 | 8424 | 0.3316 | 0.6462 |
| 0.3468 | 28.0 | 8736 | 0.3322 | 0.6751 |
| 0.3459 | 29.0 | 9048 | 0.3354 | 0.6390 |
| 0.3459 | 30.0 | 9360 | 0.3353 | 0.6390 |
| 0.344 | 31.0 | 9672 | 0.3383 | 0.6354 |
| 0.344 | 32.0 | 9984 | 0.3329 | 0.6245 |
| 0.3435 | 33.0 | 10296 | 0.3411 | 0.6390 |
| 0.3408 | 34.0 | 10608 | 0.3414 | 0.6354 |
| 0.3408 | 35.0 | 10920 | 0.3319 | 0.6534 |
| 0.3401 | 36.0 | 11232 | 0.3347 | 0.6282 |
| 0.3406 | 37.0 | 11544 | 0.3382 | 0.6137 |
| 0.3406 | 38.0 | 11856 | 0.3355 | 0.6245 |
| 0.3378 | 39.0 | 12168 | 0.3416 | 0.6245 |
| 0.3378 | 40.0 | 12480 | 0.3422 | 0.6209 |
| 0.3386 | 41.0 | 12792 | 0.3388 | 0.6390 |
| 0.3362 | 42.0 | 13104 | 0.3330 | 0.6390 |
| 0.3362 | 43.0 | 13416 | 0.3393 | 0.6282 |
| 0.3373 | 44.0 | 13728 | 0.3340 | 0.6282 |
| 0.3337 | 45.0 | 14040 | 0.3318 | 0.6390 |
| 0.3337 | 46.0 | 14352 | 0.3323 | 0.6354 |
| 0.3332 | 47.0 | 14664 | 0.3301 | 0.6643 |
| 0.3332 | 48.0 | 14976 | 0.3422 | 0.6282 |
| 0.3315 | 49.0 | 15288 | 0.3348 | 0.6570 |
| 0.33 | 50.0 | 15600 | 0.3366 | 0.6462 |
| 0.33 | 51.0 | 15912 | 0.3308 | 0.6570 |
| 0.331 | 52.0 | 16224 | 0.3298 | 0.6606 |
| 0.3295 | 53.0 | 16536 | 0.3377 | 0.6498 |
| 0.3295 | 54.0 | 16848 | 0.3439 | 0.6462 |
| 0.3282 | 55.0 | 17160 | 0.3326 | 0.6570 |
| 0.3282 | 56.0 | 17472 | 0.3356 | 0.6498 |
| 0.3291 | 57.0 | 17784 | 0.3309 | 0.6570 |
| 0.3278 | 58.0 | 18096 | 0.3333 | 0.6498 |
| 0.3278 | 59.0 | 18408 | 0.3324 | 0.6498 |
| 0.3292 | 60.0 | 18720 | 0.3335 | 0.6498 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dkqjrm/20230822185221
|
dkqjrm
| 2023-08-22T11:41:38Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-22T09:52:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230822185221'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230822185221
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3289
- Accuracy: 0.7329
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 312 | 0.5077 | 0.5307 |
| 0.4439 | 2.0 | 624 | 0.3971 | 0.4874 |
| 0.4439 | 3.0 | 936 | 0.3574 | 0.5379 |
| 0.4231 | 4.0 | 1248 | 0.3625 | 0.5776 |
| 0.4071 | 5.0 | 1560 | 0.4937 | 0.5343 |
| 0.4071 | 6.0 | 1872 | 0.3738 | 0.5668 |
| 0.3956 | 7.0 | 2184 | 0.4081 | 0.4729 |
| 0.3956 | 8.0 | 2496 | 0.3386 | 0.6209 |
| 0.3905 | 9.0 | 2808 | 0.4147 | 0.4729 |
| 0.3888 | 10.0 | 3120 | 0.3353 | 0.6354 |
| 0.3888 | 11.0 | 3432 | 0.3540 | 0.6282 |
| 0.3992 | 12.0 | 3744 | 0.3453 | 0.5848 |
| 0.372 | 13.0 | 4056 | 0.3265 | 0.6895 |
| 0.372 | 14.0 | 4368 | 0.3575 | 0.6426 |
| 0.3643 | 15.0 | 4680 | 0.3304 | 0.6498 |
| 0.3643 | 16.0 | 4992 | 0.3633 | 0.6715 |
| 0.3666 | 17.0 | 5304 | 0.5230 | 0.5343 |
| 0.3517 | 18.0 | 5616 | 0.3384 | 0.6462 |
| 0.3517 | 19.0 | 5928 | 0.3293 | 0.6823 |
| 0.3519 | 20.0 | 6240 | 0.3613 | 0.6823 |
| 0.338 | 21.0 | 6552 | 0.3242 | 0.7256 |
| 0.338 | 22.0 | 6864 | 0.3399 | 0.7184 |
| 0.3316 | 23.0 | 7176 | 0.3392 | 0.7004 |
| 0.3316 | 24.0 | 7488 | 0.3343 | 0.6534 |
| 0.3266 | 25.0 | 7800 | 0.3467 | 0.7112 |
| 0.3213 | 26.0 | 8112 | 0.3419 | 0.7040 |
| 0.3213 | 27.0 | 8424 | 0.3190 | 0.7112 |
| 0.3177 | 28.0 | 8736 | 0.3205 | 0.6931 |
| 0.3187 | 29.0 | 9048 | 0.3303 | 0.7076 |
| 0.3187 | 30.0 | 9360 | 0.3268 | 0.7148 |
| 0.3162 | 31.0 | 9672 | 0.3274 | 0.7148 |
| 0.3162 | 32.0 | 9984 | 0.3311 | 0.7112 |
| 0.3132 | 33.0 | 10296 | 0.3454 | 0.7148 |
| 0.3087 | 34.0 | 10608 | 0.3250 | 0.7076 |
| 0.3087 | 35.0 | 10920 | 0.3266 | 0.7076 |
| 0.3076 | 36.0 | 11232 | 0.3347 | 0.7292 |
| 0.3071 | 37.0 | 11544 | 0.3308 | 0.7112 |
| 0.3071 | 38.0 | 11856 | 0.3272 | 0.7220 |
| 0.3061 | 39.0 | 12168 | 0.3301 | 0.7148 |
| 0.3061 | 40.0 | 12480 | 0.3226 | 0.7256 |
| 0.3006 | 41.0 | 12792 | 0.3285 | 0.7365 |
| 0.3016 | 42.0 | 13104 | 0.3226 | 0.7148 |
| 0.3016 | 43.0 | 13416 | 0.3291 | 0.7220 |
| 0.2984 | 44.0 | 13728 | 0.3377 | 0.7112 |
| 0.2976 | 45.0 | 14040 | 0.3326 | 0.7220 |
| 0.2976 | 46.0 | 14352 | 0.3341 | 0.7292 |
| 0.2967 | 47.0 | 14664 | 0.3187 | 0.7184 |
| 0.2967 | 48.0 | 14976 | 0.3322 | 0.7148 |
| 0.2953 | 49.0 | 15288 | 0.3269 | 0.7365 |
| 0.2911 | 50.0 | 15600 | 0.3256 | 0.7365 |
| 0.2911 | 51.0 | 15912 | 0.3252 | 0.7256 |
| 0.2929 | 52.0 | 16224 | 0.3251 | 0.7292 |
| 0.2904 | 53.0 | 16536 | 0.3258 | 0.7256 |
| 0.2904 | 54.0 | 16848 | 0.3358 | 0.7220 |
| 0.2895 | 55.0 | 17160 | 0.3219 | 0.7329 |
| 0.2895 | 56.0 | 17472 | 0.3322 | 0.7329 |
| 0.2887 | 57.0 | 17784 | 0.3259 | 0.7365 |
| 0.2883 | 58.0 | 18096 | 0.3260 | 0.7292 |
| 0.2883 | 59.0 | 18408 | 0.3276 | 0.7365 |
| 0.2874 | 60.0 | 18720 | 0.3289 | 0.7329 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AhmedTaha012/finance-ner-v0.0.2-finetuned-ner
|
AhmedTaha012
| 2023-08-22T11:39:45Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-22T10:33:22Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finance-ner-v0.0.2-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finance-ner-v0.0.2-finetuned-ner
This model is a fine-tuned version of [dslim/bert-base-NER](https://huggingface.co/dslim/bert-base-NER) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0004
- Precision: 0.9945
- Recall: 1.0
- F1: 0.9972
- Accuracy: 0.9999
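A minimal token-classification sketch with the transformers pipeline; the entity labels of this finance NER model are not documented in the card, so the example sentence is arbitrary:
```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="AhmedTaha012/finance-ner-v0.0.2-finetuned-ner",
               aggregation_strategy="simple")
print(ner("Apple reported revenue of 81.8 billion dollars for the third quarter of 2023."))
```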
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0002 | 1.0 | 551 | 0.0011 | 0.9850 | 0.9940 | 0.9895 | 0.9997 |
| 0.0 | 2.0 | 1102 | 0.0006 | 0.9900 | 0.9991 | 0.9945 | 0.9999 |
| 0.0 | 3.0 | 1653 | 0.0005 | 0.9953 | 0.9991 | 0.9972 | 0.9999 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
viprav/llama2-quote-1-row
|
viprav
| 2023-08-22T11:38:59Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-22T11:38:51Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
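A sketch of how such a 4-bit QLoRA adapter is typically reloaded; the base model id is an assumption, since the card does not state which Llama-2 checkpoint the adapter was trained on:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"  # assumption: the actual base checkpoint is not documented
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "viprav/llama2-quote-1-row")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```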
### Framework versions
- PEFT 0.5.0.dev0
|
lengoctuong/gpt2-finetuned-chatbot
|
lengoctuong
| 2023-08-22T11:38:39Z | 139 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-22T11:34:55Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: gpt2-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-chatbot
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
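A minimal generation sketch; the "User/Bot" turn format is only a guess, since the training data and prompt format are not documented:
```python
from transformers import pipeline

chat = pipeline("text-generation", model="lengoctuong/gpt2-finetuned-chatbot")
prompt = "User: How are you today?\nBot:"  # assumed turn format
print(chat(prompt, max_new_tokens=40, do_sample=True)[0]["generated_text"])
```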
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
anshudaur/cat_model_small_lr
|
anshudaur
| 2023-08-22T11:29:13Z | 5 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-22T09:52:51Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: photo of a <new1> cat
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - anshudaur/cat_model_small_lr
These are Custom Diffusion adaption weights for CompVis/stable-diffusion-v1-4. The weights were trained on photo of a <new1> cat using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images in the following.


For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
|
Nebyx/LunarLanderunit8
|
Nebyx
| 2023-08-22T11:25:12Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-22T11:25:07Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -128.72 +/- 76.26
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Nebyx/LunarLanderunit8'
'batch_size': 512
'minibatch_size': 128}
```
|
stalker331333/my-pet-cat
|
stalker331333
| 2023-08-22T11:19:37Z | 41 | 0 |
diffusers
|
[
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-22T11:16:01Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat Dreambooth model trained by stalker331333 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:

|
elit333/newstable
|
elit333
| 2023-08-22T11:15:25Z | 5 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"arxiv:2207.12598",
"arxiv:2112.10752",
"arxiv:2103.00020",
"arxiv:2205.11487",
"arxiv:1910.09700",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-22T10:20:12Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
extra_gated_prompt: |-
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. CompVis claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
extra_gated_heading: Please read the LICENSE to access this model
---
# Stable Diffusion v1-5 Model Card
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion blog](https://huggingface.co/blog/stable_diffusion).
The **Stable-Diffusion-v1-5** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2)
checkpoint and subsequently fine-tuned on 595k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
You can use this both with the [🧨Diffusers library](https://github.com/huggingface/diffusers) and the [RunwayML GitHub repository](https://github.com/runwayml/stable-diffusion).
### Diffusers
```py
from diffusers import StableDiffusionPipeline
import torch
model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
For more detailed instructions, use-cases and examples in JAX follow the instructions [here](https://github.com/huggingface/diffusers#text-to-image-generation-with-stable-diffusion)
### Original GitHub Repository
1. Download the weights
- [v1-5-pruned-emaonly.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt) - 4.27GB, ema-only weight. uses less VRAM - suitable for inference
- [v1-5-pruned.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt) - 7.7GB, ema+non-ema weights. uses more VRAM - suitable for fine-tuning
2. Follow instructions [here](https://github.com/runwayml/stable-diffusion).
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
### Safety Module
The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
This checker works by checking model outputs against known hard-coded NSFW concepts.
The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images.
The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
**Training Procedure**
Stable Diffusion v1-5 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.
Currently six Stable Diffusion checkpoints are provided, which were trained as follows.
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`.
515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2` - 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) Resumed from `stable-diffusion-v1-2` - 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) Resumed from `stable-diffusion-v1-2` - 595,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-inpainting`](https://huggingface.co/runwayml/stable-diffusion-inpainting) Resumed from `stable-diffusion-v1-5` - then 440,000 steps of inpainting training at resolution 512x512 on “laion-aesthetics v2 5+” and 10% dropping of the text-conditioning. For inpainting, the UNet has 5 additional input channels (4 for the encoded masked-image and 1 for the mask itself) whose weights were zero-initialized after restoring the non-inpainting checkpoint. During training, we generate synthetic masks and, in 25% of cases, mask everything.
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 2
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PNDM/PLMS sampling
steps show the relative improvements of the checkpoints:

Evaluated using 50 PLMS sampling steps and 10000 random prompts from the COCO2017 validation set at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 150000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
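As a back-of-the-envelope check of that formula, the sketch below reproduces the stated figure; the per-GPU power draw and grid carbon intensity used here are assumed round numbers, not values reported above.
```python
# Illustrative arithmetic only: 250 W board power and ~0.3 kg CO2eq/kWh for the
# compute region are assumptions chosen to show how the formula yields the estimate.
gpu_hours = 150_000          # "Hours used" above
power_kw = 0.25              # assumed A100 PCIe 40GB board power
grid_kg_per_kwh = 0.30       # assumed carbon intensity of the power grid
print(gpu_hours * power_kw * grid_kg_per_kwh)  # 11250.0 kg CO2 eq.
```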
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
|
deepsdh99/llama2-qlora-finetunined-8
|
deepsdh99
| 2023-08-22T11:05:58Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-22T11:05:30Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
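A minimal loading sketch for this adapter is shown below; the base checkpoint is not named in this card, so `meta-llama/Llama-2-7b-hf` is an assumption inferred from the repository name, while the 4-bit settings mirror the quantization config listed above.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# 4-bit settings matching the quantization config above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # assumed base model; not stated in this card
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "deepsdh99/llama2-qlora-finetunined-8")
```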
|
MattStammers/dqn-SpaceInvadersNoFrameskip-v4
|
MattStammers
| 2023-08-22T11:04:08Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-22T11:03:00Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 710.50 +/- 398.67
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MattStammers -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MattStammers -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga MattStammers
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
OpenBuddy/openbuddy-openllama-3b-v10-bf16
|
OpenBuddy
| 2023-08-22T10:51:04Z | 1,568 | 8 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-08-10T13:37:46Z |
---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
pipeline_tag: text-generation
inference: false
library_name: transformers
license: apache-2.0
---
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)

# Copyright Notice
License: Apache 2.0.
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
## Disclaimer (Chinese)
All OpenBuddy models have inherent limitations and may produce erroneous, harmful, offensive, or otherwise undesirable outputs. Users should exercise caution and should not use these models in critical or high-risk scenarios, so as to avoid personal injury, property damage, or major losses. Examples of such scenarios include, but are not limited to, the medical field, the control of software and hardware systems that may cause harm, and important financial or legal decisions.
OpenBuddy is provided "as is" without any express or implied warranty of any kind, including but not limited to the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from the software or from the use of or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
|
roa7n/gpt2-human_nontata_promoters-rng_ep8
|
roa7n
| 2023-08-22T10:51:03Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-22T10:51:00Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
kasperchen/a2c-PandaReachDense-v3
|
kasperchen
| 2023-08-22T10:49:40Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-22T03:13:36Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.18 +/- 0.12
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
ankur24022002/test1
|
ankur24022002
| 2023-08-22T10:44:57Z | 0 | 0 |
peft
|
[
"peft",
"pytorch",
"opt",
"region:us"
] | null | 2023-08-22T08:27:08Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
jwhandley/setfit-gb-manifestos
|
jwhandley
| 2023-08-22T10:38:48Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-08-21T23:56:51Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# jwhandley/setfit-gb-manifestos
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("jwhandley/setfit-gb-manifestos")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
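Training followed the two-step recipe described above. A rough sketch with the pre-1.0 `setfit` API is shown below; the base encoder, the toy training examples, and the hyperparameters are placeholders rather than the actual setup used for this model.
```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Placeholder few-shot data; the real training data is not described in this card.
train_ds = Dataset.from_dict({
    "text": ["We will cut income tax.", "We will expand the national health service."],
    "label": [0, 1],
})

model = SetFitModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")  # assumed base encoder
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # step 1: contrastive fine-tuning of the encoder
    num_iterations=20,                # contrastive pairs generated per example
    batch_size=16,
)
trainer.train()                       # step 2 fits the classification head on the tuned embeddings
preds = trainer.model(["We will raise the minimum wage."])
```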
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
MayankAmrit/my-pet-dog
|
MayankAmrit
| 2023-08-22T10:26:54Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-22T10:13:31Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by MayankAmrit following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
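A minimal generation sketch is shown below; the concept token in the prompt is a guess based on the concept name, since the instance prompt used for training is not stated in this card.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "MayankAmrit/my-pet-dog", torch_dtype=torch.float16
).to("cuda")
# "my-pet-dog" is assumed to be the learned concept token.
image = pipe("a photo of my-pet-dog in a garden").images[0]
image.save("my-pet-dog.png")
```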
Sample pictures of this concept:

|
dkqjrm/20230822173808
|
dkqjrm
| 2023-08-22T10:26:48Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-22T08:38:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230822173808'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230822173808
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3493
- Accuracy: 0.6968
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.004
- train_batch_size: 8
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 312 | 0.3774 | 0.5162 |
| 0.5343 | 2.0 | 624 | 0.3506 | 0.5018 |
| 0.5343 | 3.0 | 936 | 0.4575 | 0.4729 |
| 0.4659 | 4.0 | 1248 | 0.3759 | 0.5307 |
| 0.4691 | 5.0 | 1560 | 0.3500 | 0.5812 |
| 0.4691 | 6.0 | 1872 | 0.3457 | 0.5993 |
| 0.4442 | 7.0 | 2184 | 0.3500 | 0.6101 |
| 0.4442 | 8.0 | 2496 | 0.3403 | 0.6173 |
| 0.4366 | 9.0 | 2808 | 0.3840 | 0.5776 |
| 0.4097 | 10.0 | 3120 | 0.4391 | 0.5487 |
| 0.4097 | 11.0 | 3432 | 0.3584 | 0.6029 |
| 0.3922 | 12.0 | 3744 | 0.3356 | 0.6498 |
| 0.3564 | 13.0 | 4056 | 0.3275 | 0.6931 |
| 0.3564 | 14.0 | 4368 | 0.3283 | 0.7076 |
| 0.3343 | 15.0 | 4680 | 0.3377 | 0.6462 |
| 0.3343 | 16.0 | 4992 | 0.3550 | 0.6390 |
| 0.335 | 17.0 | 5304 | 0.3370 | 0.6895 |
| 0.3233 | 18.0 | 5616 | 0.3256 | 0.6787 |
| 0.3233 | 19.0 | 5928 | 0.3174 | 0.7112 |
| 0.3232 | 20.0 | 6240 | 0.3440 | 0.6643 |
| 0.3102 | 21.0 | 6552 | 0.3375 | 0.6895 |
| 0.3102 | 22.0 | 6864 | 0.3433 | 0.6787 |
| 0.3064 | 23.0 | 7176 | 0.3690 | 0.6715 |
| 0.3064 | 24.0 | 7488 | 0.3394 | 0.6931 |
| 0.3004 | 25.0 | 7800 | 0.3377 | 0.7256 |
| 0.2962 | 26.0 | 8112 | 0.3435 | 0.6751 |
| 0.2962 | 27.0 | 8424 | 0.3182 | 0.7329 |
| 0.2937 | 28.0 | 8736 | 0.3306 | 0.7112 |
| 0.2905 | 29.0 | 9048 | 0.3362 | 0.7148 |
| 0.2905 | 30.0 | 9360 | 0.3675 | 0.6751 |
| 0.2865 | 31.0 | 9672 | 0.3406 | 0.7076 |
| 0.2865 | 32.0 | 9984 | 0.3343 | 0.7040 |
| 0.2812 | 33.0 | 10296 | 0.3472 | 0.6859 |
| 0.2727 | 34.0 | 10608 | 0.3372 | 0.7292 |
| 0.2727 | 35.0 | 10920 | 0.3575 | 0.7076 |
| 0.2735 | 36.0 | 11232 | 0.3300 | 0.7076 |
| 0.2701 | 37.0 | 11544 | 0.3585 | 0.6968 |
| 0.2701 | 38.0 | 11856 | 0.3422 | 0.7148 |
| 0.2688 | 39.0 | 12168 | 0.3579 | 0.6931 |
| 0.2688 | 40.0 | 12480 | 0.3326 | 0.7148 |
| 0.2644 | 41.0 | 12792 | 0.3464 | 0.7256 |
| 0.2637 | 42.0 | 13104 | 0.3579 | 0.6931 |
| 0.2637 | 43.0 | 13416 | 0.3489 | 0.7040 |
| 0.26 | 44.0 | 13728 | 0.3439 | 0.7076 |
| 0.2582 | 45.0 | 14040 | 0.3585 | 0.7004 |
| 0.2582 | 46.0 | 14352 | 0.3535 | 0.7076 |
| 0.2533 | 47.0 | 14664 | 0.3440 | 0.7148 |
| 0.2533 | 48.0 | 14976 | 0.3506 | 0.7040 |
| 0.2535 | 49.0 | 15288 | 0.3519 | 0.7040 |
| 0.2498 | 50.0 | 15600 | 0.3457 | 0.6931 |
| 0.2498 | 51.0 | 15912 | 0.3494 | 0.7112 |
| 0.2504 | 52.0 | 16224 | 0.3431 | 0.7040 |
| 0.2499 | 53.0 | 16536 | 0.3450 | 0.7040 |
| 0.2499 | 54.0 | 16848 | 0.3485 | 0.6895 |
| 0.2488 | 55.0 | 17160 | 0.3437 | 0.7004 |
| 0.2488 | 56.0 | 17472 | 0.3465 | 0.7004 |
| 0.2479 | 57.0 | 17784 | 0.3479 | 0.6895 |
| 0.247 | 58.0 | 18096 | 0.3447 | 0.7004 |
| 0.247 | 59.0 | 18408 | 0.3521 | 0.7004 |
| 0.2468 | 60.0 | 18720 | 0.3493 | 0.6968 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
thanhnew2001/bloom560m_grade7_2000_10kstep
|
thanhnew2001
| 2023-08-22T10:22:01Z | 31 | 0 |
peft
|
[
"peft",
"text-generation",
"region:us"
] |
text-generation
| 2023-08-22T09:50:28Z |
---
library_name: peft
pipeline_tag: text-generation
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
asenella/ms_config_1_alpha_10_beta_250_seed_0
|
asenella
| 2023-08-22T10:18:27Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-08-22T10:18:24Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
AddisuSeteye/speecht5_tts_amharic2
|
AddisuSeteye
| 2023-08-22T10:17:13Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"tags",
"generated_from_trainer",
"am",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-08-11T20:06:06Z |
---
language:
- am
license: mit
base_model: microsoft/speecht5_tts
tags:
- tags
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS amharic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS amharic
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the alfaa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3855
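A minimal inference sketch is shown below; the zero speaker embedding is a placeholder (a real x-vector should be supplied) and the HiFi-GAN vocoder is the usual companion checkpoint, not something stated in this card.
```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("AddisuSeteye/speecht5_tts_amharic2")
model = SpeechT5ForTextToSpeech.from_pretrained("AddisuSeteye/speecht5_tts_amharic2")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")  # assumed vocoder

inputs = processor(text="ሰላም", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder; use a real x-vector embedding
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```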
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4304 | 3.32 | 1000 | 0.3855 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
TobiTob/decision_transformer_merged3
|
TobiTob
| 2023-08-22T10:12:41Z | 31 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"decision_transformer",
"generated_from_trainer",
"dataset:city_learn",
"endpoints_compatible",
"region:us"
] | null | 2023-07-07T12:10:55Z |
---
tags:
- generated_from_trainer
datasets:
- city_learn
model-index:
- name: decision_transformer_merged3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# decision_transformer_merged3
This model is a fine-tuned version of [](https://huggingface.co/) on the city_learn dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 200
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
asenella/ms_config_1_alpha_10_beta_1_seed_2
|
asenella
| 2023-08-22T10:02:53Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-08-22T10:02:51Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
Tina-2005/sunset
|
Tina-2005
| 2023-08-22T09:50:39Z | 1 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-22T09:37:33Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Sunset Dreambooth model trained by Tina-2005 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: TSEC230
Sample pictures of this concept:

|
qgallouedec/window-close-v2
|
qgallouedec
| 2023-08-22T09:48:51Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-08T10:03:03Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: window-close-v2
type: window-close-v2
metrics:
- type: mean_reward
value: 593.18 +/- 40.45
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **window-close-v2** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r qgallouedec/window-close-v2
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m enjoy --algo=APPO --env=window-close-v2 --train_dir=./train_dir --experiment=window-close-v2
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m train --algo=APPO --env=window-close-v2 --train_dir=./train_dir --experiment=window-close-v2 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
Samuael/llama-2-7b-tebot-sharded
|
Samuael
| 2023-08-22T09:44:04Z | 0 | 0 |
peft
|
[
"peft",
"pytorch",
"llama",
"region:us"
] | null | 2023-08-17T16:00:53Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
qgallouedec/shelf-place-v2
|
qgallouedec
| 2023-08-22T09:43:18Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-08T10:01:57Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: shelf-place-v2
type: shelf-place-v2
metrics:
- type: mean_reward
value: 274.68 +/- 29.20
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **shelf-place-v2** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r qgallouedec/shelf-place-v2
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m enjoy --algo=APPO --env=shelf-place-v2 --train_dir=./train_dir --experiment=shelf-place-v2
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m train --algo=APPO --env=shelf-place-v2 --train_dir=./train_dir --experiment=shelf-place-v2 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
qgallouedec/reach-wall-v2
|
qgallouedec
| 2023-08-22T09:42:23Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-08T10:01:45Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: reach-wall-v2
type: reach-wall-v2
metrics:
- type: mean_reward
value: 794.75 +/- 72.50
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **reach-wall-v2** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r qgallouedec/reach-wall-v2
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m enjoy --algo=APPO --env=reach-wall-v2 --train_dir=./train_dir --experiment=reach-wall-v2
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m train --algo=APPO --env=reach-wall-v2 --train_dir=./train_dir --experiment=reach-wall-v2 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
qgallouedec/reach-v2
|
qgallouedec
| 2023-08-22T09:41:29Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-08T10:01:35Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: reach-v2
type: reach-v2
metrics:
- type: mean_reward
value: 686.43 +/- 166.95
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **reach-v2** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r qgallouedec/reach-v2
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m enjoy --algo=APPO --env=reach-v2 --train_dir=./train_dir --experiment=reach-v2
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m train --algo=APPO --env=reach-v2 --train_dir=./train_dir --experiment=reach-v2 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
qgallouedec/push-v2
|
qgallouedec
| 2023-08-22T09:39:39Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-08T10:01:14Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: push-v2
type: push-v2
metrics:
- type: mean_reward
value: 742.07 +/- 37.84
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **push-v2** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r qgallouedec/push-v2
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m enjoy --algo=APPO --env=push-v2 --train_dir=./train_dir --experiment=push-v2
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m train --algo=APPO --env=push-v2 --train_dir=./train_dir --experiment=push-v2 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
qgallouedec/plate-slide-side-v2
|
qgallouedec
| 2023-08-22T09:36:55Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-08T10:00:43Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: plate-slide-side-v2
type: plate-slide-side-v2
metrics:
- type: mean_reward
value: 711.01 +/- 56.29
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **plate-slide-side-v2** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r qgallouedec/plate-slide-side-v2
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m enjoy --algo=APPO --env=plate-slide-side-v2 --train_dir=./train_dir --experiment=plate-slide-side-v2
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m train --algo=APPO --env=plate-slide-side-v2 --train_dir=./train_dir --experiment=plate-slide-side-v2 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
qgallouedec/plate-slide-back-v2
|
qgallouedec
| 2023-08-22T09:35:57Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-08T10:00:34Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: plate-slide-back-v2
type: plate-slide-back-v2
metrics:
- type: mean_reward
value: 709.93 +/- 82.31
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **plate-slide-back-v2** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r qgallouedec/plate-slide-back-v2
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m enjoy --algo=APPO --env=plate-slide-back-v2 --train_dir=./train_dir --experiment=plate-slide-back-v2
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m train --algo=APPO --env=plate-slide-back-v2 --train_dir=./train_dir --experiment=plate-slide-back-v2 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
qgallouedec/pick-place-wall-v2
|
qgallouedec
| 2023-08-22T09:34:08Z | 0 | 1 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-08T10:00:14Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: pick-place-wall-v2
type: pick-place-wall-v2
metrics:
- type: mean_reward
value: 449.64 +/- 63.43
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **pick-place-wall-v2** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r qgallouedec/pick-place-wall-v2
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m enjoy --algo=APPO --env=pick-place-wall-v2 --train_dir=./train_dir --experiment=pick-place-wall-v2
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m train --algo=APPO --env=pick-place-wall-v2 --train_dir=./train_dir --experiment=pick-place-wall-v2 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
qgallouedec/peg-unplug-side-v2
|
qgallouedec
| 2023-08-22T09:31:20Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-08T09:59:45Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: peg-unplug-side-v2
type: peg-unplug-side-v2
metrics:
- type: mean_reward
value: 499.20 +/- 95.68
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **peg-unplug-side-v2** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r qgallouedec/peg-unplug-side-v2
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m enjoy --algo=APPO --env=peg-unplug-side-v2 --train_dir=./train_dir --experiment=peg-unplug-side-v2
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m train --algo=APPO --env=peg-unplug-side-v2 --train_dir=./train_dir --experiment=peg-unplug-side-v2 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
qgallouedec/peg-insert-side-v2
|
qgallouedec
| 2023-08-22T09:30:22Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-08T09:59:35Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: peg-insert-side-v2
type: peg-insert-side-v2
metrics:
- type: mean_reward
value: 308.94 +/- 175.97
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **peg-insert-side-v2** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r qgallouedec/peg-insert-side-v2
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m enjoy --algo=APPO --env=peg-insert-side-v2 --train_dir=./train_dir --experiment=peg-insert-side-v2
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m train --algo=APPO --env=peg-insert-side-v2 --train_dir=./train_dir --experiment=peg-insert-side-v2 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
qgallouedec/handle-pull-v2
|
qgallouedec
| 2023-08-22T09:28:29Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-08T09:59:15Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: handle-pull-v2
type: handle-pull-v2
metrics:
- type: mean_reward
value: 698.01 +/- 21.12
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **handle-pull-v2** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r qgallouedec/handle-pull-v2
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m enjoy --algo=APPO --env=handle-pull-v2 --train_dir=./train_dir --experiment=handle-pull-v2
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m train --algo=APPO --env=handle-pull-v2 --train_dir=./train_dir --experiment=handle-pull-v2 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
qgallouedec/handle-pull-side-v2
|
qgallouedec
| 2023-08-22T09:27:34Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-08T09:59:05Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: handle-pull-side-v2
type: handle-pull-side-v2
metrics:
- type: mean_reward
value: 462.12 +/- 95.86
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **handle-pull-side-v2** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r qgallouedec/handle-pull-side-v2
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m enjoy --algo=APPO --env=handle-pull-side-v2 --train_dir=./train_dir --experiment=handle-pull-side-v2
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m train --algo=APPO --env=handle-pull-side-v2 --train_dir=./train_dir --experiment=handle-pull-side-v2 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
qgallouedec/handle-press-v2
|
qgallouedec
| 2023-08-22T09:26:37Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-08T09:58:54Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: handle-press-v2
type: handle-press-v2
metrics:
- type: mean_reward
value: 862.58 +/- 32.94
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **handle-press-v2** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r qgallouedec/handle-press-v2
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m enjoy --algo=APPO --env=handle-press-v2 --train_dir=./train_dir --experiment=handle-press-v2
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m train --algo=APPO --env=handle-press-v2 --train_dir=./train_dir --experiment=handle-press-v2 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
qgallouedec/hand-insert-v2
|
qgallouedec
| 2023-08-22T09:24:46Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-08T09:58:36Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: hand-insert-v2
type: hand-insert-v2
metrics:
- type: mean_reward
value: 742.89 +/- 26.31
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **hand-insert-v2** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r qgallouedec/hand-insert-v2
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m enjoy --algo=APPO --env=hand-insert-v2 --train_dir=./train_dir --experiment=hand-insert-v2
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m train --algo=APPO --env=hand-insert-v2 --train_dir=./train_dir --experiment=hand-insert-v2 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
VK246/IC_ver6I_coco_swin_gpt2_50A_1e
|
VK246
| 2023-08-22T09:24:05Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:coco",
"base_model:VK246/IC_ver6H_coco_swin_gpt2_50B_1e",
"base_model:finetune:VK246/IC_ver6H_coco_swin_gpt2_50B_1e",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-08-22T07:16:44Z |
---
base_model: VK246/IC_ver6H_coco_swin_gpt2_50B_1e
tags:
- generated_from_trainer
datasets:
- coco
metrics:
- rouge
model-index:
- name: IC_ver6I_coco_swin_gpt2_50A_1e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IC_ver6I_coco_swin_gpt2_50A_1e
This model is a fine-tuned version of [VK246/IC_ver6H_coco_swin_gpt2_50B_1e](https://huggingface.co/VK246/IC_ver6H_coco_swin_gpt2_50B_1e) on the coco dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8003
- Cider: 36.4847
- Rouge1: 41.9392
- Rouge2: 16.4156
- Rougel: 38.0808
- Rougelsum: 38.0721
- Bleu-1: 42.8624
- Bleu-2: 24.8647
- Bleu-3: 15.7144
- Bleu-4: 10.4434
- Gen Len: 11.2806
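A minimal captioning sketch is shown below, assuming the repository ships the usual image processor and tokenizer files alongside the encoder-decoder weights; the image path is a placeholder.
```python
from PIL import Image
from transformers import AutoImageProcessor, AutoTokenizer, VisionEncoderDecoderModel

repo = "VK246/IC_ver6I_coco_swin_gpt2_50A_1e"
model = VisionEncoderDecoderModel.from_pretrained(repo)
image_processor = AutoImageProcessor.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)

image = Image.open("example.jpg").convert("RGB")  # placeholder image
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values
output_ids = model.generate(pixel_values, max_length=16, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```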
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cider | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu-1 | Bleu-2 | Bleu-3 | Bleu-4 | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|:-------:|:-------:|:-------:|:-------:|:-------:|
| 0.5622 | 0.34 | 1000 | 0.8598 | 16.5035 | 41.0303 | 15.4795 | 37.2917 | 37.2896 | 41.7661 | 23.7724 | 14.7804 | 9.5941 | 11.2806 |
| 0.639 | 0.68 | 2000 | 0.8003 | 36.4847 | 41.9392 | 16.4156 | 38.0808 | 38.0721 | 42.8624 | 24.8647 | 15.7144 | 10.4434 | 11.2806 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
qgallouedec/drawer-open-v2
|
qgallouedec
| 2023-08-22T09:21:02Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-08T09:57:55Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: drawer-open-v2
type: drawer-open-v2
metrics:
- type: mean_reward
value: 493.34 +/- 2.61
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **drawer-open-v2** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r qgallouedec/drawer-open-v2
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m enjoy --algo=APPO --env=drawer-open-v2 --train_dir=./train_dir --experiment=drawer-open-v2
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m train --algo=APPO --env=drawer-open-v2 --train_dir=./train_dir --experiment=drawer-open-v2 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
qgallouedec/door-open-v2
|
qgallouedec
| 2023-08-22T09:18:18Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-08T09:57:27Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: door-open-v2
type: door-open-v2
metrics:
- type: mean_reward
value: 579.89 +/- 31.92
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **door-open-v2** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r qgallouedec/door-open-v2
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m enjoy --algo=APPO --env=door-open-v2 --train_dir=./train_dir --experiment=door-open-v2
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m train --algo=APPO --env=door-open-v2 --train_dir=./train_dir --experiment=door-open-v2 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
IngeniousArtist/distilbert-finance
|
IngeniousArtist
| 2023-08-22T09:15:43Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:financial_phrasebank",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-31T00:31:08Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- financial_phrasebank
metrics:
- accuracy
model-index:
- name: distilbert-finance
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: financial_phrasebank
type: financial_phrasebank
config: sentences_50agree
split: train
args: sentences_50agree
metrics:
- name: Accuracy
type: accuracy
value: 0.7386363636363636
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-finance
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9962
- Accuracy: 0.7386
## Model description
More information needed
## Intended uses & limitations
More information needed
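As a minimal inference sketch with the `transformers` pipeline (the example sentence and its predicted label are illustrative only):
```python
from transformers import pipeline

# Load the fine-tuned classifier from this repo
classifier = pipeline("text-classification", model="IngeniousArtist/distilbert-finance")

# Illustrative input; the financial_phrasebank labels are sentiment classes
print(classifier("The company reported a 20% increase in quarterly revenue."))
```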
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.904 | 0.33 | 20 | 1.5959 | 0.4205 |
| 0.6562 | 0.66 | 40 | 1.6665 | 0.4143 |
| 0.539 | 0.98 | 60 | 1.6067 | 0.3936 |
| 0.4759 | 1.31 | 80 | 1.5079 | 0.4236 |
| 0.3882 | 1.64 | 100 | 1.4719 | 0.4298 |
| 0.3782 | 1.97 | 120 | 1.2392 | 0.4267 |
| 0.2729 | 2.3 | 140 | 1.0114 | 0.4928 |
| 0.2607 | 2.62 | 160 | 0.9514 | 0.5930 |
| 0.2889 | 2.95 | 180 | 0.8661 | 0.6477 |
| 0.181 | 3.28 | 200 | 0.7093 | 0.7417 |
| 0.1742 | 3.61 | 220 | 1.1042 | 0.5764 |
| 0.1904 | 3.93 | 240 | 0.7439 | 0.7510 |
| 0.1186 | 4.26 | 260 | 0.8587 | 0.7469 |
| 0.137 | 4.59 | 280 | 0.7408 | 0.7603 |
| 0.1166 | 4.92 | 300 | 1.0107 | 0.6705 |
| 0.0938 | 5.25 | 320 | 0.7883 | 0.7624 |
| 0.0881 | 5.57 | 340 | 1.0339 | 0.7056 |
| 0.0812 | 5.9 | 360 | 0.8409 | 0.7490 |
| 0.0586 | 6.23 | 380 | 0.9146 | 0.7345 |
| 0.0572 | 6.56 | 400 | 0.9000 | 0.7366 |
| 0.0527 | 6.89 | 420 | 0.9782 | 0.7335 |
| 0.045 | 7.21 | 440 | 1.0102 | 0.7262 |
| 0.0471 | 7.54 | 460 | 1.0322 | 0.7324 |
| 0.0508 | 7.87 | 480 | 0.9381 | 0.7448 |
| 0.039 | 8.2 | 500 | 0.9489 | 0.7459 |
| 0.0419 | 8.52 | 520 | 0.9779 | 0.7469 |
| 0.0256 | 8.85 | 540 | 0.9834 | 0.7407 |
| 0.0264 | 9.18 | 560 | 0.9963 | 0.7376 |
| 0.0378 | 9.51 | 580 | 0.9981 | 0.7376 |
| 0.0421 | 9.84 | 600 | 0.9962 | 0.7386 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
qgallouedec/coffee-push-v2
|
qgallouedec
| 2023-08-22T09:13:37Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-08T09:56:36Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: coffee-push-v2
type: coffee-push-v2
metrics:
- type: mean_reward
value: 526.27 +/- 120.28
name: mean_reward
verified: false
---
An **APPO** model trained on the **coffee-push-v2** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r qgallouedec/coffee-push-v2
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m enjoy --algo=APPO --env=coffee-push-v2 --train_dir=./train_dir --experiment=coffee-push-v2
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m train --algo=APPO --env=coffee-push-v2 --train_dir=./train_dir --experiment=coffee-push-v2 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
qgallouedec/coffee-pull-v2
|
qgallouedec
| 2023-08-22T09:12:42Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-08T09:56:25Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: coffee-pull-v2
type: coffee-pull-v2
metrics:
- type: mean_reward
value: 262.59 +/- 63.08
name: mean_reward
verified: false
---
An **APPO** model trained on the **coffee-pull-v2** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r qgallouedec/coffee-pull-v2
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m enjoy --algo=APPO --env=coffee-pull-v2 --train_dir=./train_dir --experiment=coffee-pull-v2
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m train --algo=APPO --env=coffee-pull-v2 --train_dir=./train_dir --experiment=coffee-pull-v2 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
922-CA/Modded-Berry
|
922-CA
| 2023-08-22T09:10:37Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-11-18T11:48:37Z |
---
license: creativeml-openrail-m
---
# INFO (11/18/2022)
Old Stable Diffusion 1.5 merge of Berry Mix + Anything V3 at roughly a 30/70 ratio, plus further merges (if I recall correctly).
|
qgallouedec/button-press-topdown-wall-v2
|
qgallouedec
| 2023-08-22T09:09:02Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-07T16:13:14Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: button-press-topdown-wall-v2
type: button-press-topdown-wall-v2
metrics:
- type: mean_reward
value: 497.31 +/- 37.73
name: mean_reward
verified: false
---
An **APPO** model trained on the **button-press-topdown-wall-v2** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r qgallouedec/button-press-topdown-wall-v2
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m enjoy --algo=APPO --env=button-press-topdown-wall-v2 --train_dir=./train_dir --experiment=button-press-topdown-wall-v2
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m train --algo=APPO --env=button-press-topdown-wall-v2 --train_dir=./train_dir --experiment=button-press-topdown-wall-v2 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
qgallouedec/button-press-topdown-v2
|
qgallouedec
| 2023-08-22T09:08:08Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-07T16:12:47Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: button-press-topdown-v2
type: button-press-topdown-v2
metrics:
- type: mean_reward
value: 486.12 +/- 38.04
name: mean_reward
verified: false
---
An **APPO** model trained on the **button-press-topdown-v2** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r qgallouedec/button-press-topdown-v2
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m enjoy --algo=APPO --env=button-press-topdown-v2 --train_dir=./train_dir --experiment=button-press-topdown-v2
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m train --algo=APPO --env=button-press-topdown-v2 --train_dir=./train_dir --experiment=button-press-topdown-v2 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
qgallouedec/bin-picking-v2
|
qgallouedec
| 2023-08-22T09:06:20Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-07T16:11:52Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: bin-picking-v2
type: bin-picking-v2
metrics:
- type: mean_reward
value: 452.37 +/- 36.53
name: mean_reward
verified: false
---
An **APPO** model trained on the **bin-picking-v2** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r qgallouedec/bin-picking-v2
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m enjoy --algo=APPO --env=bin-picking-v2 --train_dir=./train_dir --experiment=bin-picking-v2
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m train --algo=APPO --env=bin-picking-v2 --train_dir=./train_dir --experiment=bin-picking-v2 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
marksverdhei/t5-base-define
|
marksverdhei
| 2023-08-22T09:05:55Z | 123 | 6 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"en",
"dataset:marksverdhei/wordnet-definitions-en-2021",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-02T09:50:37Z |
---
language: en
widget:
- text: 'define "toecoin": toecoin rose by 200% after Elon Musk mentioned it in his tweet'
datasets:
- 'marksverdhei/wordnet-definitions-en-2021'
---
# T5-define
(This model is still a work in progress. If you use it for fine-tuning, make sure to save a local copy.)
This model is trained to generate word definitions from a word and a context sentence, using the subset of WordNet entries that have both an example and a definition.
The model uses task prompts of the format 'define "[word]": [example sentence]'.
For unseen words, the model acts as a one-shot learner: it has to infer the definition from a single example.
How to run:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("marksverdhei/t5-base-define")
model = T5ForConditionalGeneration.from_pretrained("marksverdhei/t5-base-define")

# Prompt format: define "[word]": [example sentence]
prompt = "define \"noseplow\": The children hid as the noseplow drove across the street"

# Generate a definition and strip the leading pad token and trailing EOS token
ids = tokenizer(prompt, return_tensors="pt").input_ids
generated_tokens = model.generate(ids)[0][1:-1]
print(tokenizer.decode(generated_tokens))
```
See this gist for the source code used to train the model:
https://gist.github.com/marksverdhei/0a13f67e65460b71c05fcf558a6a91ae
|
qgallouedec/basketball-v2
|
qgallouedec
| 2023-08-22T09:05:27Z | 0 | 1 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-07T16:11:22Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: basketball-v2
type: basketball-v2
metrics:
- type: mean_reward
value: 584.02 +/- 49.43
name: mean_reward
verified: false
---
An **APPO** model trained on the **basketball-v2** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r qgallouedec/basketball-v2
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m enjoy --algo=APPO --env=basketball-v2 --train_dir=./train_dir --experiment=basketball-v2
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m train --algo=APPO --env=basketball-v2 --train_dir=./train_dir --experiment=basketball-v2 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
qgallouedec/assembly-v2
|
qgallouedec
| 2023-08-22T09:04:34Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-07T13:44:05Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: assembly-v2
type: assembly-v2
metrics:
- type: mean_reward
value: 245.47 +/- 4.68
name: mean_reward
verified: false
---
An **APPO** model trained on the **assembly-v2** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r qgallouedec/assembly-v2
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m enjoy --algo=APPO --env=assembly-v2 --train_dir=./train_dir --experiment=assembly-v2
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m train --algo=APPO --env=assembly-v2 --train_dir=./train_dir --experiment=assembly-v2 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
newronai/lma2-7b-Chat-Adapter-N
|
newronai
| 2023-08-22T08:56:11Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-22T08:56:04Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (an equivalent `BitsAndBytesConfig` sketch follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
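For reference, a sketch of an equivalent `transformers` `BitsAndBytesConfig` (not the exact training script):
```python
import torch
from transformers import BitsAndBytesConfig

# Reproduces the quantization settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
)
```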
### Framework versions
- PEFT 0.5.0.dev0
|
Abhishek-Pathak/llama2_finetuned_demo
|
Abhishek-Pathak
| 2023-08-22T08:51:12Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"region:us"
] | null | 2023-08-22T06:28:44Z |
---
tags:
- generated_from_trainer
model-index:
- name: llama2_finetuned_demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2_finetuned_demo
This model is a fine-tuned version of [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
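A hedged sketch mapping these values onto `transformers.TrainingArguments` (the actual training script is not available; `output_dir` is a placeholder):
```python
from transformers import TrainingArguments

# Sketch reproducing the hyperparameters listed above
args = TrainingArguments(
    output_dir="llama2_finetuned_demo",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # effective train batch size: 8 * 4 = 32
    lr_scheduler_type="linear",
    num_train_epochs=2,
    seed=42,
)
```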
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
grv805/llama2-qlora-finetunined-13b-gcp
|
grv805
| 2023-08-22T08:47:03Z | 6 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-22T08:46:51Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
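A hedged loading sketch for this adapter (the base model below is an assumption inferred from the repo name; the adapter itself does not ship base weights):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Base model is assumed from the repo name ("llama2 ... 13b"); adjust to the model actually used
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-hf", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf")
model = PeftModel.from_pretrained(base, "grv805/llama2-qlora-finetunined-13b-gcp")
```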
|
tianpf/chinese-alpaca-2-qlora-finetunined-law2
|
tianpf
| 2023-08-22T08:46:11Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-22T08:46:07Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
linoyts/lora-xl-3d_icons-0.0001-5e-05-2000-1-5
|
linoyts
| 2023-08-22T08:45:51Z | 5 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-08-22T07:54:07Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: blb 3d icon
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - LinoyTsaban/lora-xl-3d_icons-0.0001-5e-05-2000-1-5
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on the instance prompt "blb 3d icon" using [DreamBooth](https://dreambooth.github.io/). Example images can be found below.
LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
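A hedged usage sketch with `diffusers` (the generation prompt is illustrative; the repo id matches this model page):
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base model and attach the LoRA weights from this repo
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("linoyts/lora-xl-3d_icons-0.0001-5e-05-2000-1-5")

# The instance prompt "blb 3d icon" triggers the learned style
image = pipe("blb 3d icon of a rocket ship").images[0]
image.save("rocket_icon.png")
```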
|
dkqjrm/20230822155557
|
dkqjrm
| 2023-08-22T08:44:13Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-22T06:56:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230822155557'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230822155557
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3488
- Accuracy: 0.5307
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 312 | 0.3548 | 0.4729 |
| 0.3737 | 2.0 | 624 | 0.3480 | 0.5199 |
| 0.3737 | 3.0 | 936 | 0.3486 | 0.5162 |
| 0.3718 | 4.0 | 1248 | 0.3495 | 0.5235 |
| 0.3714 | 5.0 | 1560 | 0.3505 | 0.4729 |
| 0.3714 | 6.0 | 1872 | 0.3487 | 0.5235 |
| 0.3686 | 7.0 | 2184 | 0.3496 | 0.4729 |
| 0.3686 | 8.0 | 2496 | 0.3505 | 0.4729 |
| 0.3684 | 9.0 | 2808 | 0.3502 | 0.5235 |
| 0.3679 | 10.0 | 3120 | 0.3491 | 0.5054 |
| 0.3679 | 11.0 | 3432 | 0.3515 | 0.4729 |
| 0.3659 | 12.0 | 3744 | 0.3496 | 0.5162 |
| 0.3649 | 13.0 | 4056 | 0.3517 | 0.4729 |
| 0.3649 | 14.0 | 4368 | 0.3543 | 0.4729 |
| 0.3651 | 15.0 | 4680 | 0.3513 | 0.4729 |
| 0.3651 | 16.0 | 4992 | 0.3489 | 0.5235 |
| 0.363 | 17.0 | 5304 | 0.3537 | 0.5235 |
| 0.3613 | 18.0 | 5616 | 0.3487 | 0.5307 |
| 0.3613 | 19.0 | 5928 | 0.3495 | 0.5126 |
| 0.3645 | 20.0 | 6240 | 0.3530 | 0.5199 |
| 0.359 | 21.0 | 6552 | 0.3497 | 0.5235 |
| 0.359 | 22.0 | 6864 | 0.3487 | 0.5235 |
| 0.3614 | 23.0 | 7176 | 0.3511 | 0.5235 |
| 0.3614 | 24.0 | 7488 | 0.3491 | 0.5271 |
| 0.3617 | 25.0 | 7800 | 0.3493 | 0.5199 |
| 0.3611 | 26.0 | 8112 | 0.3491 | 0.5271 |
| 0.3611 | 27.0 | 8424 | 0.3581 | 0.4729 |
| 0.3583 | 28.0 | 8736 | 0.3496 | 0.5343 |
| 0.3583 | 29.0 | 9048 | 0.3492 | 0.5162 |
| 0.3583 | 30.0 | 9360 | 0.3493 | 0.4404 |
| 0.3564 | 31.0 | 9672 | 0.3494 | 0.5343 |
| 0.3564 | 32.0 | 9984 | 0.3489 | 0.5199 |
| 0.3567 | 33.0 | 10296 | 0.3490 | 0.5343 |
| 0.3561 | 34.0 | 10608 | 0.3486 | 0.5271 |
| 0.3561 | 35.0 | 10920 | 0.3492 | 0.5307 |
| 0.3556 | 36.0 | 11232 | 0.3503 | 0.4765 |
| 0.3556 | 37.0 | 11544 | 0.3497 | 0.5307 |
| 0.3556 | 38.0 | 11856 | 0.3494 | 0.5379 |
| 0.3561 | 39.0 | 12168 | 0.3488 | 0.5235 |
| 0.3561 | 40.0 | 12480 | 0.3503 | 0.5271 |
| 0.3558 | 41.0 | 12792 | 0.3489 | 0.5343 |
| 0.3579 | 42.0 | 13104 | 0.3508 | 0.4729 |
| 0.3579 | 43.0 | 13416 | 0.3505 | 0.5271 |
| 0.3547 | 44.0 | 13728 | 0.3493 | 0.5379 |
| 0.3567 | 45.0 | 14040 | 0.3519 | 0.4729 |
| 0.3567 | 46.0 | 14352 | 0.3497 | 0.4729 |
| 0.3548 | 47.0 | 14664 | 0.3499 | 0.4729 |
| 0.3548 | 48.0 | 14976 | 0.3492 | 0.5343 |
| 0.3563 | 49.0 | 15288 | 0.3491 | 0.5307 |
| 0.3552 | 50.0 | 15600 | 0.3489 | 0.5235 |
| 0.3552 | 51.0 | 15912 | 0.3487 | 0.5162 |
| 0.3557 | 52.0 | 16224 | 0.3496 | 0.4513 |
| 0.3555 | 53.0 | 16536 | 0.3488 | 0.5307 |
| 0.3555 | 54.0 | 16848 | 0.3489 | 0.5271 |
| 0.3542 | 55.0 | 17160 | 0.3488 | 0.5162 |
| 0.3542 | 56.0 | 17472 | 0.3488 | 0.5343 |
| 0.3545 | 57.0 | 17784 | 0.3494 | 0.5379 |
| 0.3543 | 58.0 | 18096 | 0.3489 | 0.5126 |
| 0.3543 | 59.0 | 18408 | 0.3489 | 0.5162 |
| 0.3553 | 60.0 | 18720 | 0.3488 | 0.5307 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
xray1111/ppo-LunarLander-v2
|
xray1111
| 2023-08-22T08:41:09Z | 5 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-22T08:40:42Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 273.95 +/- 16.80
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repository's file listing):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is assumed, adjust if needed
checkpoint = load_from_hub("xray1111/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
newronai/llama-2-7b-Chat-QLoRA-New-1.0
|
newronai
| 2023-08-22T08:36:35Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-22T08:36:31Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
kaanhho/whisper-tiny-01
|
kaanhho
| 2023-08-22T08:29:58Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-22T01:07:48Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-01
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.33884297520661155
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-01
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6410
- Wer Ortho: 0.3430
- Wer: 0.3388
## Model description
More information needed
## Intended uses & limitations
More information needed
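As a minimal transcription sketch with the `transformers` pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

# Replace the path with a real audio file (decoding requires ffmpeg)
asr = pipeline("automatic-speech-recognition", model="kaanhho/whisper-tiny-01")
print(asr("sample_call.wav")["text"])
```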
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.001 | 17.24 | 500 | 0.6410 | 0.3430 | 0.3388 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
CodyNichol14/M_Shadows_HTTK
|
CodyNichol14
| 2023-08-22T08:28:22Z | 0 | 0 | null |
[
"license:artistic-2.0",
"region:us"
] | null | 2023-08-22T08:25:25Z |
---
license: artistic-2.0
Model Maker: CodyNichol14
Epoch: 250
Model: M. Shadows from Avenged Sevenfold
---
|