| modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-02 12:32:32) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 534 distinct values) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 distinct values) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-02 12:31:20) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
neseudin/nhabbb
|
neseudin
| 2023-07-20T16:01:57Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-20T16:00:27Z |
---
license: creativeml-openrail-m
---
|
Guilherme34/Jennifer2.0-Multiturn.Chat-BETATest0-Llama2-Lora-v0
|
Guilherme34
| 2023-07-20T16:01:25Z | 5 | 2 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-20T13:02:58Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
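The list above maps directly onto `transformers.BitsAndBytesConfig`. A minimal sketch of recreating it when loading a base model in 4-bit (the base model id below is only a placeholder, since this card does not name one):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirror the quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# "meta-llama/Llama-2-7b-hf" is a placeholder base model; the card does not state one.
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
```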
|
NasimB/cbt-guten-norm-rarity-log-rarity-mixed
|
NasimB
| 2023-07-20T16:01:13Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-20T13:55:13Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: cbt-guten-norm-rarity-log-rarity-mixed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cbt-guten-norm-rarity-log-rarity-mixed
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1158
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3451 | 0.29 | 500 | 5.3393 |
| 5.0397 | 0.58 | 1000 | 4.9206 |
| 4.711 | 0.87 | 1500 | 4.6904 |
| 4.4458 | 1.17 | 2000 | 4.5517 |
| 4.2933 | 1.46 | 2500 | 4.4297 |
| 4.2024 | 1.75 | 3000 | 4.3324 |
| 4.0876 | 2.04 | 3500 | 4.2618 |
| 3.8969 | 2.33 | 4000 | 4.2227 |
| 3.878 | 2.62 | 4500 | 4.1608 |
| 3.83 | 2.91 | 5000 | 4.1117 |
| 3.6525 | 3.21 | 5500 | 4.1057 |
| 3.5976 | 3.5 | 6000 | 4.0782 |
| 3.5747 | 3.79 | 6500 | 4.0477 |
| 3.4836 | 4.08 | 7000 | 4.0427 |
| 3.3253 | 4.37 | 7500 | 4.0383 |
| 3.314 | 4.66 | 8000 | 4.0252 |
| 3.3077 | 4.95 | 8500 | 4.0114 |
| 3.1614 | 5.24 | 9000 | 4.0230 |
| 3.1426 | 5.54 | 9500 | 4.0231 |
| 3.1402 | 5.83 | 10000 | 4.0225 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
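As a rough guide only, the hyperparameters reported above correspond to a `Trainer` setup along these lines (dataset loading, tokenization, and the data collator are omitted; the output directory name is illustrative and the evaluation cadence is inferred from the results table):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Mirrors the hyperparameters reported above; Adam betas/epsilon are the library defaults.
training_args = TrainingArguments(
    output_dir="cbt-guten-norm-rarity-log-rarity-mixed",
    learning_rate=5e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=6,
    fp16=True,  # "Native AMP" mixed precision
    evaluation_strategy="steps",
    eval_steps=500,
)

trainer = Trainer(
    model=model,
    args=training_args,
    # train_dataset=..., eval_dataset=..., data_collator=...  (not published with this card)
)
```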
|
ndtest/distilbert-base-uncased-finetuned-emotion
|
ndtest
| 2023-07-20T15:55:35Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-20T14:58:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9145
- name: F1
type: f1
value: 0.9142322884703892
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2761
- Accuracy: 0.9145
- F1: 0.9142
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 63 | 0.3414 | 0.901 | 0.8990 |
| No log | 2.0 | 126 | 0.2761 | 0.9145 | 0.9142 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
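A minimal way to try the checkpoint with the `transformers` pipeline API (the example sentence is illustrative; label names come from the emotion dataset):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ndtest/distilbert-base-uncased-finetuned-emotion",
)

# Returns the top label and its score, e.g. [{'label': ..., 'score': ...}]
print(classifier("I can't wait to see the results of this experiment!"))
```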
|
chandan9t8/a2c-PandaReachDense-v2
|
chandan9t8
| 2023-07-20T15:50:05Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-20T15:47:05Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.41 +/- 0.17
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
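Until the author adds their own code, a minimal loading sketch; the checkpoint filename below follows the usual `huggingface_sb3` naming convention and is an assumption:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is assumed; check the repository's file listing if the download fails.
checkpoint = load_from_hub(
    repo_id="chandan9t8/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",
)
model = A2C.load(checkpoint)

# Evaluating the policy additionally requires panda-gym to provide PandaReachDense-v2.
```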
|
NasimB/aochildes-norm-rarity-log-rarity-no-cut
|
NasimB
| 2023-07-20T15:46:28Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-20T13:39:47Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: aochildes-norm-rarity-log-rarity-no-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aochildes-norm-rarity-log-rarity-no-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1177
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3587 | 0.29 | 500 | 5.3374 |
| 5.0473 | 0.59 | 1000 | 4.9329 |
| 4.7166 | 0.88 | 1500 | 4.6905 |
| 4.4468 | 1.17 | 2000 | 4.5469 |
| 4.2996 | 1.47 | 2500 | 4.4324 |
| 4.2018 | 1.76 | 3000 | 4.3343 |
| 4.0845 | 2.05 | 3500 | 4.2599 |
| 3.9014 | 2.34 | 4000 | 4.2138 |
| 3.8737 | 2.64 | 4500 | 4.1624 |
| 3.8298 | 2.93 | 5000 | 4.1071 |
| 3.6371 | 3.22 | 5500 | 4.1078 |
| 3.5947 | 3.52 | 6000 | 4.0806 |
| 3.5728 | 3.81 | 6500 | 4.0497 |
| 3.467 | 4.1 | 7000 | 4.0487 |
| 3.3191 | 4.4 | 7500 | 4.0431 |
| 3.318 | 4.69 | 8000 | 4.0308 |
| 3.3039 | 4.98 | 8500 | 4.0207 |
| 3.1482 | 5.28 | 9000 | 4.0361 |
| 3.1394 | 5.57 | 9500 | 4.0345 |
| 3.1279 | 5.86 | 10000 | 4.0337 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
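A quick way to sample from the model with the `transformers` pipeline API (the prompt and generation settings are purely illustrative):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="NasimB/aochildes-norm-rarity-log-rarity-no-cut",
)

print(generator("Once upon a time", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```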
|
giovannidispoto/ppo-SnowballTarget
|
giovannidispoto
| 2023-07-20T15:45:29Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-07-20T15:45:22Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: giovannidispoto/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Wyzard1004/ppo-SnowballTarget
|
Wyzard1004
| 2023-07-20T15:42:55Z | 10 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-07-20T15:42:47Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Wyzard1004/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
kinkpunk/whisper-tiny-en-US
|
kinkpunk
| 2023-07-20T15:36:31Z | 81 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-20T15:14:27Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-en-US
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train[450:]
args: en-US
metrics:
- name: Wer
type: wer
value: 0.34297520661157027
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-en-US
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6362
- Wer Ortho: 0.3473
- Wer: 0.3430
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0013 | 17.54 | 500 | 0.6362 | 0.3473 | 0.3430 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.13.0+cu117
- Datasets 2.13.1
- Tokenizers 0.11.6
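A minimal transcription example with the `transformers` pipeline API (the audio path is a placeholder for any local recording):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="kinkpunk/whisper-tiny-en-US",
)

# "sample.wav" is a placeholder path; the pipeline handles decoding and resampling.
print(asr("sample.wav")["text"])
```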
|
tyavika/09-Distilbert-QA-Pytorch-FULL
|
tyavika
| 2023-07-20T15:35:21Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-20T12:56:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: 09-Distilbert-QA-Pytorch-FULL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 09-Distilbert-QA-Pytorch-FULL
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3262
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2659 | 1.0 | 3702 | 1.1538 |
| 0.948 | 2.0 | 7404 | 1.1383 |
| 0.6619 | 3.0 | 11106 | 1.1760 |
| 0.4642 | 4.0 | 14808 | 1.3262 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
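A minimal extractive question-answering example with the `transformers` pipeline API (the question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="tyavika/09-Distilbert-QA-Pytorch-FULL",
)

result = qa(
    question="What task was the model fine-tuned for?",
    context="This DistilBERT checkpoint was fine-tuned for extractive question answering.",
)
print(result["answer"], result["score"])
```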
|
gFulvio/moralstories-bart-consequences.context-action_gen
|
gFulvio
| 2023-07-20T15:31:09Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"dataset:demelin/moral_stories",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-20T15:17:37Z |
---
datasets:
- demelin/moral_stories
---
|
engkufizz/falcon-7b-qlora-datacom
|
engkufizz
| 2023-07-20T15:12:59Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-20T15:12:55Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
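A sketch of loading this adapter on top of a base model with `peft`; the base model id `tiiuae/falcon-7b` is inferred from the repository name and is not stated on the card:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Mirror the quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Base model is an assumption based on the repository name.
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")

model = PeftModel.from_pretrained(base, "engkufizz/falcon-7b-qlora-datacom")
```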
|
diffusers/lora-trained-xl-starbucks
|
diffusers
| 2023-07-20T15:08:48Z | 4 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:diffusers/stable-diffusion-xl-base-0.9",
"base_model:adapter:diffusers/stable-diffusion-xl-base-0.9",
"license:other",
"region:us"
] |
text-to-image
| 2023-06-29T10:02:44Z |
---
license: other
base_model: diffusers/stable-diffusion-xl-base-0.9
instance_prompt: a photo of sks logo
tags:
- 'stable-diffusion-xl'
- 'stable-diffusion-xl-diffusers'
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - sayakpaul/lora-trained-xl-starbucks
These are LoRA adaptation weights for diffusers/stable-diffusion-xl-base-0.9. The weights were trained on a photo of sks logo using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
## License
[SDXL 0.9 Research License](https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9/blob/main/LICENSE.md)
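A rough inference sketch with `diffusers` (access to the SDXL 0.9 base weights is gated by the research license above; the prompt is illustrative):
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "diffusers/stable-diffusion-xl-base-0.9", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA weights from this repository on top of the base pipeline.
pipe.load_lora_weights("diffusers/lora-trained-xl-starbucks")

image = pipe("a photo of sks logo on a coffee cup", num_inference_steps=30).images[0]
image.save("sks_logo.png")
```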
|
epsilonai/WashingtonRVB
|
epsilonai
| 2023-07-20T15:07:24Z | 0 | 0 | null |
[
"rooster teeth",
"rvb",
"redvsblue",
"music",
"en",
"region:us"
] | null | 2023-07-20T15:05:52Z |
---
language:
- en
tags:
- rooster teeth
- rvb
- redvsblue
- music
---
|
faezehsgh/finetuning-sentiment-model-3000-samples
|
faezehsgh
| 2023-07-20T15:06:26Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-20T14:58:23Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8566666666666667
- name: F1
type: f1
value: 0.8571428571428571
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3132
- Accuracy: 0.8567
- F1: 0.8571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
iBorrrrr/Yoru_TR
|
iBorrrrr
| 2023-07-20T15:06:05Z | 0 | 0 | null |
[
"license:c-uda",
"region:us"
] | null | 2023-07-20T14:58:30Z |
---
license: c-uda
---
I'm making Yoru content, so if you want, you can support my YouTube channel /// YouTube > iBorrrrr
---
I'm working on Yoru-related things, so if you'd like, you can support my YouTube channel /// YouTube > iBorrrrr
---
|
epsilonai/FelixRVB
|
epsilonai
| 2023-07-20T15:03:45Z | 0 | 0 | null |
[
"rvb",
"redvsblue",
"rooster teeth",
"music",
"en",
"region:us"
] | null | 2023-07-20T15:00:48Z |
---
language:
- en
tags:
- rvb
- redvsblue
- rooster teeth
- music
---
|
epsilonai/ChurchRVB
|
epsilonai
| 2023-07-20T15:02:13Z | 0 | 1 | null |
[
"rvb",
"redvsblue",
"rooster teeth",
"music",
"en",
"region:us"
] | null | 2023-07-20T14:58:44Z |
---
language:
- en
tags:
- rvb
- redvsblue
- rooster teeth
- music
---
|
Vasanth/llama-7b-finetuned-chatbot
|
Vasanth
| 2023-07-20T14:59:38Z | 0 | 1 | null |
[
"tensorboard",
"generated_from_trainer",
"region:us"
] | null | 2023-07-20T11:56:35Z |
---
tags:
- generated_from_trainer
model-index:
- name: llama-7b-finetuned-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-7b-finetuned-chatbot
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- lr_scheduler_warmup_steps: 2
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Kipsalo/Selma
|
Kipsalo
| 2023-07-20T14:56:50Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-07-20T14:56:13Z |
---
license: bigscience-openrail-m
---
|
UholoDala/tweet_sentiments_analysis_roberta
|
UholoDala
| 2023-07-20T14:54:44Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-20T13:44:28Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: tweet_sentiments_analysis_roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tweet_sentiments_analysis_roberta
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6039
- F1-score: 0.7454
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1-score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7293 | 1.0 | 1000 | 0.7054 | 0.6857 |
| 0.6175 | 2.0 | 2000 | 0.6039 | 0.7454 |
| 0.5132 | 3.0 | 3000 | 0.6426 | 0.7662 |
| 0.4113 | 4.0 | 4000 | 0.7244 | 0.7790 |
| 0.3092 | 5.0 | 5000 | 0.9855 | 0.7734 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
aware-ai/roberta-large-squad-classification
|
aware-ai
| 2023-07-20T14:45:04Z | 121 | 2 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"safetensors",
"roberta",
"text-classification",
"dataset:squad_v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
datasets:
- squad_v2
---
# Roberta-LARGE finetuned on SQuADv2
This is a roberta-large model fine-tuned on the SQuAD v2 dataset for question-answering answerability classification.
## Model details
This model is simply a sequence-classification model that takes two inputs (context and question) as a list.
The result is either [1] if the question is answerable or [0] if it is not.
It was trained for 4 epochs on the SQuAD v2 dataset and can be used to filter out contexts that are not worth passing to the QA model, avoiding bad answers.
## Model training
This model was trained with the following parameters using the simpletransformers wrapper:
```
train_args = {
'learning_rate': 1e-5,
'max_seq_length': 512,
'overwrite_output_dir': True,
'reprocess_input_data': False,
'train_batch_size': 4,
'num_train_epochs': 4,
'gradient_accumulation_steps': 2,
'no_cache': True,
'use_cached_eval_features': False,
'save_model_every_epoch': False,
'output_dir': "bart-squadv2",
'eval_batch_size': 8,
'fp16_opt_level': 'O2',
}
```
## Results
```{"accuracy": 90.48%}```
## Model in Action 🚀
```python3
from simpletransformers.classification import ClassificationModel
model = ClassificationModel('roberta', 'a-ware/roberta-large-squadv2', num_labels=2, args=train_args)
predictions, raw_outputs = model.predict([["my dog is an year old. he loves to go into the rain", "how old is my dog ?"]])
print(predictions)
==> [1]
```
> Created with ❤️ by A-ware UG [](https://github.com/aware-ai)
|
epsilonai/Dexter_Grif
|
epsilonai
| 2023-07-20T14:40:43Z | 0 | 1 | null |
[
"redvsblue",
"rvb",
"fictional characters",
"rooster teeth",
"en",
"region:us"
] | null | 2023-07-20T14:34:26Z |
---
language:
- en
tags:
- redvsblue
- rvb
- fictional characters
- rooster teeth
---
|
xian79/rl_course_vizdoom_health_gathering_supreme
|
xian79
| 2023-07-20T14:29:54Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-20T14:29:48Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.56 +/- 4.96
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r xian79/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
# Module path assumes the ViZDoom example scripts bundled with Sample Factory 2.0.
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
# Module path assumes the ViZDoom example scripts bundled with Sample Factory 2.0.
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it previously concluded at.
|
giovannidispoto/a2c-PandaReachDense-v2
|
giovannidispoto
| 2023-07-20T14:29:05Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-20T14:26:27Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.80 +/- 0.23
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Chat-Error/LLama2-13B-easylm
|
Chat-Error
| 2023-07-20T14:28:22Z | 5 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-2",
"en",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-07-20T13:56:45Z |
---
inference: false
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Meta's Llama 2 13B fp16
These files are fp16 format model files for [Meta's Llama 2 13B](https://huggingface.co/meta-llama/Llama-2-13b-hf).
They were produced by downloading the PTH files from Meta, and then converting to HF format using the latest Transformers 4.32.0.dev0, from Git, with the Llama 2 PR included: https://github.com/huggingface/transformers/pull/24891.
Command to convert was:
```
python3 /workspace/venv/pytorch2/lib/python3.10/site-packages/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir /workspace/git/llama/download --model_size 13B --output_dir /workspace/process/llama-2-13b/source
```
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-13B-GPTQ)
* [Original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/meta-llama/Llama-2-13b-hf)
* [My fp16 conversion of the unquantised PTH model files](https://huggingface.co/TheBloke/Llama-2-13B-fp16)
## Prompt template: None
```
{prompt}
```
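A minimal loading sketch with `transformers`, assuming access to the weights and enough GPU memory for a 13B fp16 model (`device_map="auto"` additionally requires the accelerate package; the prompt is illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Chat-Error/LLama2-13B-easylm"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# No special prompt template: plain text in, continuation out.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```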
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex , Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost , Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius , Imad Khwaja, Pierre Kircher, terasurfer , Asp the Wyvern, John Villwock, theTransient, zynix , Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Meta's Llama 2 13B
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
|
patebel/LunarLander
|
patebel
| 2023-07-20T14:25:11Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-20T13:58:32Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -70.75 +/- 91.73
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
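Until the author adds their own code, a sketch of loading and evaluating the checkpoint (assumes Stable-Baselines3 ≥ 2.0 with gymnasium; the filename follows the usual `huggingface_sb3` convention and is an assumption):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is assumed; check the repository's file listing if the download fails.
checkpoint = load_from_hub(repo_id="patebel/LunarLander", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# LunarLander-v2 needs the Box2D extra: pip install "gymnasium[box2d]"
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```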
|
MHRDYN7/my_awesome_food_model
|
MHRDYN7
| 2023-07-20T14:23:01Z | 195 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:food101",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-20T14:12:28Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- food101
metrics:
- accuracy
model-index:
- name: my_awesome_food_model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: food101
type: food101
config: default
split: train[:5000]
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.889
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6130
- Accuracy: 0.889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7036 | 0.99 | 62 | 2.4963 | 0.839 |
| 1.808 | 2.0 | 125 | 1.7523 | 0.875 |
| 1.5765 | 2.98 | 186 | 1.6130 | 0.889 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
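A minimal inference example with the `transformers` pipeline API (the image URL is a placeholder for any food photo):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="MHRDYN7/my_awesome_food_model",
)

# "https://example.com/pizza.jpg" is a placeholder; a local file path also works.
print(classifier("https://example.com/pizza.jpg", top_k=3))
```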
|
Kerz/bbc
|
Kerz
| 2023-07-20T14:14:40Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:yelp_review_full",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-20T13:09:43Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- yelp_review_full
metrics:
- accuracy
model-index:
- name: bbc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: yelp_review_full
type: yelp_review_full
config: yelp_review_full
split: test
args: yelp_review_full
metrics:
- name: Accuracy
type: accuracy
value: 0.499
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bbc
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1692
- Accuracy: 0.499
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 250 | 1.4265 | 0.391 |
| 1.4806 | 2.0 | 500 | 1.2233 | 0.458 |
| 1.4806 | 3.0 | 750 | 1.1692 | 0.499 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Jungang/path_to_saved_model
|
Jungang
| 2023-07-20T14:07:37Z | 29 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-20T13:13:32Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Jungang/path_to_saved_model
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
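A minimal generation sketch with `diffusers` (the prompt reuses the instance prompt above; sampler settings are illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Jungang/path_to_saved_model", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of sks dog in a bucket", num_inference_steps=50).images[0]
image.save("sks_dog.png")
```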
|
VFiona/opus-mt-it-en-finetuned_20000-it-to-en
|
VFiona
| 2023-07-20T13:55:52Z | 104 | 1 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-20T12:41:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-it-en-finetuned_20000-it-to-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-it-en-finetuned_20000-it-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-it-en](https://huggingface.co/Helsinki-NLP/opus-mt-it-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3483
- Bleu: 75.7583
- Gen Len: 21.996
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.3971 | 1.0 | 1125 | 0.3483 | 75.7583 | 21.996 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cpu
- Datasets 2.13.1
- Tokenizers 0.11.0
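A minimal inference example with the `transformers` pipeline API (the Italian sentence is illustrative):
```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="VFiona/opus-mt-it-en-finetuned_20000-it-to-en",
)

print(translator("Il modello traduce frasi dall'italiano all'inglese.")[0]["translation_text"])
```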
|
HiTZ/A2T_RoBERTa_SMFA_WikiEvents-arg
|
HiTZ
| 2023-07-20T13:45:10Z | 113 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"zero-shot-classification",
"dataset:snli",
"dataset:anli",
"dataset:multi_nli",
"dataset:multi_nli_mismatch",
"dataset:fever",
"arxiv:2104.14690",
"arxiv:2203.13602",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2022-05-02T12:25:23Z |
---
pipeline_tag: zero-shot-classification
datasets:
- snli
- anli
- multi_nli
- multi_nli_mismatch
- fever
---
# A2T Entailment model
**Important:** These pretrained entailment models are intended to be used with the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library but are also fully compatible with the `ZeroShotClassificationPipeline` from [Transformers](https://github.com/huggingface/Transformers).
Textual Entailment (or Natural Language Inference) has turned out to be a good choice for zero-shot text classification problems [(Yin et al., 2019](https://aclanthology.org/D19-1404/); [Wang et al., 2021](https://arxiv.org/abs/2104.14690); [Sainz and Rigau, 2021)](https://aclanthology.org/2021.gwc-1.6/). Recent research addressed Information Extraction problems with the same idea [(Lyu et al., 2021](https://aclanthology.org/2021.acl-short.42/); [Sainz et al., 2021](https://aclanthology.org/2021.emnlp-main.92/); [Sainz et al., 2022a](), [Sainz et al., 2022b)](https://arxiv.org/abs/2203.13602). The A2T entailment models are first trained with NLI datasets such as MNLI [(Williams et al., 2018)](), SNLI [(Bowman et al., 2015)]() or/and ANLI [(Nie et al., 2020)]() and then fine-tuned to specific tasks that were previously converted to textual entailment format.
For more information, please take a look at the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library or the following published papers:
- [Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction (Sainz et al., EMNLP 2021)](https://aclanthology.org/2021.emnlp-main.92/)
- [Textual Entailment for Event Argument Extraction: Zero- and Few-Shot with Multi-Source Learning (Sainz et al., Findings of NAACL-HLT 2022)]()
## About the model
The model name describes the configuration used for training as follows:
<!-- $$\text{HiTZ/A2T\_[pretrained\_model]\_[NLI\_datasets]\_[finetune\_datasets]}$$ -->
<h3 align="center">HiTZ/A2T_[pretrained_model]_[NLI_datasets]_[finetune_datasets]</h3>
- `pretrained_model`: The checkpoint used for initialization. For example: RoBERTa<sub>large</sub>.
- `NLI_datasets`: The NLI datasets used for pivot training.
- `S`: Stanford Natural Language Inference (SNLI) dataset.
- `M`: Multi Natural Language Inference (MNLI) dataset.
- `F`: Fever-nli dataset.
- `A`: Adversarial Natural Language Inference (ANLI) dataset.
- `finetune_datasets`: The datasets used for fine tuning the entailment model. Note that for more than 1 dataset the training was performed sequentially. For example: ACE-arg.
Some models, like `HiTZ/A2T_RoBERTa_SMFA_ACE-arg`, have been trained with some information (such as the event trigger span) marked between square brackets (`'[['` and `']]'`). Make sure you follow the same preprocessing in order to obtain the best results.
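For example, through the standard zero-shot classification pipeline (the premise, candidate labels, and hypothesis template below are purely illustrative; the Ask2Transformers library provides the task-specific templates used in the papers):
```python
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="HiTZ/A2T_RoBERTa_SMFA_WikiEvents-arg",
)

result = classifier(
    "The senator was [[ attacked ]] by protesters outside the courthouse.",
    candidate_labels=["victim", "attacker", "place"],
    hypothesis_template="The senator is the {} of the event.",
)
print(result["labels"][0], result["scores"][0])
```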
## Cite
If you use this model, consider citing the following publications:
```bibtex
@inproceedings{sainz-etal-2021-label,
title = "Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction",
author = "Sainz, Oscar and
Lopez de Lacalle, Oier and
Labaka, Gorka and
Barrena, Ander and
Agirre, Eneko",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.92",
doi = "10.18653/v1/2021.emnlp-main.92",
pages = "1199--1212",
}
```
|
casque/SuspendedCongressMS
|
casque
| 2023-07-20T13:41:33Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-20T13:39:55Z |
---
license: creativeml-openrail-m
---
|
casque/grab_thight_sex
|
casque
| 2023-07-20T13:36:15Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-20T13:35:15Z |
---
license: creativeml-openrail-m
---
|
NasimB/cbt-norm-rarity-log-rarity-end-p5k
|
NasimB
| 2023-07-20T13:29:04Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-20T11:19:04Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: cbt-norm-rarity-log-rarity-end-p5k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cbt-norm-rarity-log-rarity-end-p5k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1026
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3492 | 0.29 | 500 | 5.3375 |
| 5.0257 | 0.58 | 1000 | 4.9206 |
| 4.6972 | 0.88 | 1500 | 4.6826 |
| 4.4476 | 1.17 | 2000 | 4.5485 |
| 4.286 | 1.46 | 2500 | 4.4256 |
| 4.1861 | 1.75 | 3000 | 4.3203 |
| 4.073 | 2.04 | 3500 | 4.2493 |
| 3.8835 | 2.34 | 4000 | 4.2070 |
| 3.8576 | 2.63 | 4500 | 4.1491 |
| 3.8247 | 2.92 | 5000 | 4.0994 |
| 3.6292 | 3.21 | 5500 | 4.0973 |
| 3.5811 | 3.5 | 6000 | 4.0662 |
| 3.5613 | 3.8 | 6500 | 4.0335 |
| 3.4739 | 4.09 | 7000 | 4.0307 |
| 3.3065 | 4.38 | 7500 | 4.0279 |
| 3.3108 | 4.67 | 8000 | 4.0149 |
| 3.2959 | 4.96 | 8500 | 4.0015 |
| 3.1501 | 5.26 | 9000 | 4.0147 |
| 3.129 | 5.55 | 9500 | 4.0126 |
| 3.1254 | 5.84 | 10000 | 4.0124 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
casque/Tits_fuck
|
casque
| 2023-07-20T13:28:25Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-20T13:25:13Z |
---
license: creativeml-openrail-m
---
|
HalteroXHunter/distilbert-base-uncased-finetuned-emotion
|
HalteroXHunter
| 2023-07-20T13:22:09Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-20T06:57:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9335
- name: F1
type: f1
value: 0.9335622045808896
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1623
- Accuracy: 0.9335
- F1: 0.9336
## Model description
Labels (a short usage sketch follows this list):
- Label 0: sadness
- Label 1: joy
- Label 2: love
- Label 3: anger
- Label 4: fear
- Label 5: surprise
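A minimal usage sketch showing how these labels surface at inference time (the input sentence is illustrative; depending on the saved config, the pipeline may return `LABEL_i` names that map to the list above):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="HalteroXHunter/distilbert-base-uncased-finetuned-emotion",
)

# Illustrative input; the pipeline returns the highest-scoring label and its score.
print(classifier("I can't believe how well this turned out!"))
```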
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.206 | 1.0 | 250 | 0.1749 | 0.9235 | 0.9234 |
| 0.1433 | 2.0 | 500 | 0.1623 | 0.9335 | 0.9336 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cpu
- Datasets 2.13.1
- Tokenizers 0.13.3
|
casque/silly_fuck
|
casque
| 2023-07-20T13:21:29Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-20T13:19:41Z |
---
license: creativeml-openrail-m
---
|
NasimB/cbt-norm-rarity-log-rarity-no-cut
|
NasimB
| 2023-07-20T13:11:28Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-20T11:00:27Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: cbt-norm-rarity-log-rarity-no-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cbt-norm-rarity-log-rarity-no-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1028
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3453 | 0.29 | 500 | 5.3315 |
| 5.033 | 0.58 | 1000 | 4.9200 |
| 4.6991 | 0.87 | 1500 | 4.6842 |
| 4.4349 | 1.16 | 2000 | 4.5403 |
| 4.2892 | 1.46 | 2500 | 4.4287 |
| 4.1929 | 1.75 | 3000 | 4.3281 |
| 4.0816 | 2.04 | 3500 | 4.2542 |
| 3.8848 | 2.33 | 4000 | 4.2078 |
| 3.8614 | 2.62 | 4500 | 4.1532 |
| 3.8318 | 2.91 | 5000 | 4.1052 |
| 3.6429 | 3.2 | 5500 | 4.0981 |
| 3.58 | 3.49 | 6000 | 4.0665 |
| 3.569 | 3.79 | 6500 | 4.0380 |
| 3.4854 | 4.08 | 7000 | 4.0323 |
| 3.3124 | 4.37 | 7500 | 4.0285 |
| 3.3128 | 4.66 | 8000 | 4.0149 |
| 3.2978 | 4.95 | 8500 | 4.0026 |
| 3.1549 | 5.24 | 9000 | 4.0129 |
| 3.1259 | 5.53 | 9500 | 4.0130 |
| 3.132 | 5.82 | 10000 | 4.0115 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
casque/Breast_grab
|
casque
| 2023-07-20T13:07:21Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-20T13:02:46Z |
---
license: creativeml-openrail-m
---
|
SmellyKat/Taxi-v3
|
SmellyKat
| 2023-07-20T13:05:09Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-20T13:05:05Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.67
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="SmellyKat/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jinaai/falcon-7b-code-alpaca-lora
|
jinaai
| 2023-07-20T13:01:25Z | 0 | 3 | null |
[
"text-generation",
"en",
"dataset:stanford_alpaca",
"license:cc-by-nc-4.0",
"region:us"
] |
text-generation
| 2023-07-11T07:50:58Z |
---
license: cc-by-nc-4.0
language:
- en
tags:
- text-generation
datasets:
- stanford_alpaca
pipeline_tag: text-generation
---
<br><br>
<p align="center">
<img src="https://github.com/jina-ai/finetuner/blob/main/docs/_static/finetuner-logo-ani.svg?raw=true" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px">
</p>
<p align="center">
<b>LLM Generation models trained by Jina AI, Finetuner team.</b>
</p>

This repo contains the LoRA weights (8bit) for Falcon-7b
fit on the [Code Alpaca](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k) dataset.
## Reproduction
This version of the weights was trained with the following hyperparameters:
- Epochs: 6
- Batch size: 128
- Micro batch size: 8
- Learning rate: 3e-4
- Lora _r_: 8
- Lora target modules: query_key_value
You can reproduce using this repository:
https://github.com/jina-ai/jerboa
Make sure you install the requirements and finetune using the following command:
```
python finetune.py \
--base-model tiiuae/falcon-7b --lora-target-modules query_key_value \
--data-path sahil2801/CodeAlpaca-20k --output-dir ./lora-alpaca-code \
--batch-size 128 --micro-batch-size 8 --eval-limit 45 \
--eval-file code_eval.jsonl --wandb-project jerboa --wandb-log-model \
--wandb-watch gradients --num-epochs 6
```
## Inference
```Python
import torch
from peft import PeftModel
from transformers import AutoTokenizer, AutoModelForCausalLM
TOKENIZER_SOURCE = 'tiiuae/falcon-7b'
BASE_MODEL = 'tiiuae/falcon-7b'
LORA_REPO = 'jinaai/falcon-7b-code-alpaca-lora'
DEVICE = "cuda"
PROMPT = """
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
Write a for loop in python
### Input:
### Response:
"""
model = AutoModelForCausalLM.from_pretrained(
pretrained_model_name_or_path=BASE_MODEL,
torch_dtype=torch.float16,
trust_remote_code=True,
device_map='auto',
)
model = PeftModel.from_pretrained(
model=model,
model_id=LORA_REPO,
)
model.eval()
tokenizer = AutoTokenizer.from_pretrained(
TOKENIZER_SOURCE,
trust_remote_code=True,
padding_side='left',
)
tokenizer.pad_token = tokenizer.eos_token
inputs = tokenizer(PROMPT, return_tensors="pt")
input_ids = inputs["input_ids"].to(DEVICE)
input_attention_mask = inputs["attention_mask"].to(DEVICE)
with torch.no_grad():
generation_output = model.generate(
input_ids=input_ids,
attention_mask=input_attention_mask,
return_dict_in_generate=True,
max_new_tokens=32,
eos_token_id=tokenizer.eos_token_id,
)
generation_output = generation_output.sequences[0]
output = tokenizer.decode(generation_output, skip_special_tokens=True)
print(output)
```
## Contact
Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas.
|
jinaai/falcon-40b-code-alpaca
|
jinaai
| 2023-07-20T13:01:11Z | 17 | 3 |
transformers
|
[
"transformers",
"pytorch",
"RefinedWeb",
"feature-extraction",
"text-generation",
"custom_code",
"en",
"dataset:stanford_alpaca",
"license:cc-by-nc-4.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-11T15:05:46Z |
---
license: cc-by-nc-4.0
language:
- en
tags:
- text-generation
datasets:
- stanford_alpaca
pipeline_tag: text-generation
---
<br><br>
<p align="center">
<img src="https://github.com/jina-ai/finetuner/blob/main/docs/_static/finetuner-logo-ani.svg?raw=true" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px">
</p>
<p align="center">
<b>LLM Generation models trained by Jina AI, Finetuner team.</b>
</p>

This repo contains the full weights (16bit) for Falcon-40b
fit on the [Code Alpaca](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k) dataset.
## Reproduction
This version of the weights was trained with the following hyperparameters:
- Epochs: 2
- Batch size: 128
- Micro batch size: 4
- Learning rate: 3e-4
- Lora _r_: 8
- Lora target modules: query_key_value
You can reproduce using this repository:
https://github.com/jina-ai/jerboa
Make sure you install the requirements and finetune using the following command:
```
python finetune.py \
--base-model tiiuae/falcon-40b --lora-target-modules query_key_value \
--data-path sahil2801/CodeAlpaca-20k --output-dir ./lora-alpaca-code \
--batch-size 128 --micro-batch-size 4 --eval-limit 45 \
--eval-file code_eval.jsonl --wandb-project jerboa --wandb-log-model \
--wandb-watch gradients --num-epochs 2
```
## Inference
```Python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
TOKENIZER_SOURCE = 'tiiuae/falcon-40b'
BASE_MODEL = 'jinaai/falcon-40b-code-alpaca'
DEVICE = "cuda"
PROMPT = """
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
Write a for loop in python
### Input:
### Response:
"""
model = AutoModelForCausalLM.from_pretrained(
pretrained_model_name_or_path=BASE_MODEL,
torch_dtype=torch.float16,
trust_remote_code=True,
device_map='auto',
)
model.eval()
tokenizer = AutoTokenizer.from_pretrained(
TOKENIZER_SOURCE,
trust_remote_code=True,
padding_side='left',
)
tokenizer.pad_token = tokenizer.eos_token
inputs = tokenizer(PROMPT, return_tensors="pt")
input_ids = inputs["input_ids"].to(DEVICE)
input_attention_mask = inputs["attention_mask"].to(DEVICE)
with torch.no_grad():
generation_output = model.generate(
input_ids=input_ids,
attention_mask=input_attention_mask,
return_dict_in_generate=True,
max_new_tokens=32,
eos_token_id=tokenizer.eos_token_id,
)
generation_output = generation_output.sequences[0]
output = tokenizer.decode(generation_output, skip_special_tokens=True)
print(output)
```
## Contact
Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas.
|
jinaai/falcon-7b-code-alpaca
|
jinaai
| 2023-07-20T13:00:35Z | 22 | 3 |
transformers
|
[
"transformers",
"pytorch",
"RefinedWebModel",
"feature-extraction",
"text-generation",
"custom_code",
"en",
"dataset:stanford_alpaca",
"license:cc-by-nc-4.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-11T14:09:52Z |
---
license: cc-by-nc-4.0
language:
- en
tags:
- text-generation
datasets:
- stanford_alpaca
pipeline_tag: text-generation
---
<br><br>
<p align="center">
<img src="https://github.com/jina-ai/finetuner/blob/main/docs/_static/finetuner-logo-ani.svg?raw=true" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px">
</p>
<p align="center">
<b>LLM Generation models trained by Jina AI, Finetuner team.</b>
</p>
This repo contains the full weights (16bit) for Falcon-7b
fit on the [Code Alpaca](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k) dataset.
## Reproduction
This version of the weights was trained with the following hyperparameters:
- Epochs: 6
- Batch size: 128
- Micro batch size: 8
- Learning rate: 3e-4
- Lora _r_: 8
- Lora target modules: query_key_value
You can reproduce using this repository:
https://github.com/jina-ai/jerboa
Make sure you install the requirements and finetune using the following command:
```
python finetune.py \
--base-model tiiuae/falcon-7b --lora-target-modules query_key_value \
--data-path sahil2801/CodeAlpaca-20k --output-dir ./lora-alpaca-code \
--batch-size 128 --micro-batch-size 8 --eval-limit 45 \
--eval-file code_eval.jsonl --wandb-project jerboa --wandb-log-model \
--wandb-watch gradients --num-epochs 6
```
## Inference
```Python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
TOKENIZER_SOURCE = 'tiiuae/falcon-7b'
BASE_MODEL = 'jinaai/falcon-7b-code-alpaca'
DEVICE = "cuda"
PROMPT = """
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
Write a for loop in python
### Input:
### Response:
"""
model = AutoModelForCausalLM.from_pretrained(
pretrained_model_name_or_path=BASE_MODEL,
torch_dtype=torch.float16,
trust_remote_code=True,
device_map='auto',
)
model.eval()
tokenizer = AutoTokenizer.from_pretrained(
TOKENIZER_SOURCE,
trust_remote_code=True,
padding_side='left',
)
tokenizer.pad_token = tokenizer.eos_token
inputs = tokenizer(PROMPT, return_tensors="pt")
input_ids = inputs["input_ids"].to(DEVICE)
input_attention_mask = inputs["attention_mask"].to(DEVICE)
with torch.no_grad():
generation_output = model.generate(
input_ids=input_ids,
attention_mask=input_attention_mask,
return_dict_in_generate=True,
max_new_tokens=32,
eos_token_id=tokenizer.eos_token_id,
)
generation_output = generation_output.sequences[0]
output = tokenizer.decode(generation_output, skip_special_tokens=True)
print(output)
```
## Contact
Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas.
|
casque/The_Mating
|
casque
| 2023-07-20T13:00:31Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-20T12:58:38Z |
---
license: creativeml-openrail-m
---
|
casque/MS_Real_POV_Blowjob
|
casque
| 2023-07-20T12:55:54Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-20T12:54:03Z |
---
license: creativeml-openrail-m
---
|
casque/PSCowgirl
|
casque
| 2023-07-20T12:48:39Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-20T12:48:18Z |
---
license: creativeml-openrail-m
---
|
PeterBrendan/Prebid_Module_GPT2
|
PeterBrendan
| 2023-07-20T12:42:06Z | 152 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-19T23:43:57Z |
---
license: mit
widget:
- text: gptPreAuction
- text: price
- text: OpenX
---
**Model:** GPT-2
**Model name:** Prebid_Module_GPT2
**Model description:** This fine-tuned version of the GPT-2 model was trained on a dataset of 1100+ publisher domains' Prebid installed modules. The model aims to provide insights into what Prebid modules other publishers install with their Prebid set-up. Given a Prebid module, such as ***appnexusBidAdapter***, the model can generate a sample Prebid installed modules combination based on the collected data. This helps publishers gain an understanding of how different publishers use Prebid modules.
**Intended uses:** This model is intended to assist publishers in understanding and exploring how other publishers use Prebid modules. It serves as a reference to gain insights into common configurations, best practices, and different approaches used by publishers across various domains.
**Limitations:** It's important to note that the generated installed Prebid modules are based on the data from the training set and may not cover all possible combinations or reflect the specific requirements of a particular domain. Publishers should carefully review and adapt the generated installed Prebid modules to their specific needs and business rules.
**How to use:** To use this model, provide a Prebid module name, such as ***gptPreAuction***. Based on the collected data, the model will continue from that point and generate a sample combination of installed Prebid modules related to the input. To start generating a module list from the beginning, use ***[*** as the input.
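A minimal sketch of one way to query the model with the `transformers` text-generation pipeline (the prompt and sampling settings are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="PeterBrendan/Prebid_Module_GPT2")

# Seed with a Prebid module name, or with "[" to generate a module combination from the start.
output = generator("gptPreAuction", max_new_tokens=64, do_sample=True)
print(output[0]["generated_text"])
```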
**Training data:** This model was trained on a dataset consisting of over 1100+ publisher domains Prebid modules. The dataset was collected from a variety of publishers and represents a wide range of Prebid settings used in the industry.
**Training procedure:** The model was fine-tuned using the GPT-2 base model with the aforementioned dataset.
**Evaluation results:** The evaluation of this model focuses on its ability to generate coherent and valid Prebid configuration settings based on the provided Prebid config setting. Human evaluators reviewed the generated configurations for relevance and accuracy.
**Safety and bias considerations:** The model is trained on data from actual Prebid config files and aims to provide accurate insights into publishers' configurations. However, it's important to note that biases may exist in the original data itself, as the training data is based on real-world configurations. Users should review and validate the generated configurations to ensure they align with their specific requirements and guidelines.
Users are encouraged to exercise caution and use their expertise in interpreting and adapting the generated Prebid module combinations for their own use. The model should be seen as a helpful tool to gain inspiration and understanding of common Prebid settings but not as a substitute for thorough testing and manual review of the final configurations.
|
fedbor/settimo_modello
|
fedbor
| 2023-07-20T12:31:06Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-20T12:31:05Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
tobijen/distilgpt2_left_headings
|
tobijen
| 2023-07-20T12:25:00Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-19T15:06:25Z |
---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_keras_callback
model-index:
- name: tobijen/distilgpt2_left_headings
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tobijen/distilgpt2_left_headings
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.4455
- Validation Loss: 5.6434
- Epoch: 5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 6.0703 | 5.7752 | 0 |
| 5.5228 | 5.5932 | 1 |
| 5.1845 | 5.5286 | 2 |
| 4.9123 | 5.5338 | 3 |
| 4.6756 | 5.5673 | 4 |
| 4.4455 | 5.6434 | 5 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
atiiisham988/finetune-lora-stable-diffusion
|
atiiisham988
| 2023-07-20T12:20:55Z | 0 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-20T09:32:25Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - atiiisham988/finetune-lora-stable-diffusion
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images in the following.




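A minimal loading sketch with `diffusers` (the prompt is illustrative; older diffusers versions used `pipe.unet.load_attn_procs(...)` instead of `load_lora_weights`):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Apply the LoRA attention weights from this repository on top of the base model.
pipe.load_lora_weights("atiiisham988/finetune-lora-stable-diffusion")

image = pipe("A cartoon pokemon with blue eyes", num_inference_steps=30).images[0]
image.save("pokemon.png")
```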
|
marianafmedeiros/a2c-AntBulletEnv-v0
|
marianafmedeiros
| 2023-07-20T12:20:04Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-20T03:03:24Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 846.46 +/- 66.62
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below follows the usual SB3 naming convention and is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# The filename is assumed from the standard "<algo>-<env>.zip" convention and may differ in this repo.
checkpoint = load_from_hub(repo_id="marianafmedeiros/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
rafaym/DreamBoothAvatar
|
rafaym
| 2023-07-20T12:19:09Z | 0 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-20T09:44:33Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: Rafay
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - rafaym/DreamBoothAvatar
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on Rafay using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
|
RaffKhan/alpaca7B-lora
|
RaffKhan
| 2023-07-20T12:19:08Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-20T00:30:30Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Xxmlala/dqn-SpaceInvadersNoFrameskip-v4
|
Xxmlala
| 2023-07-20T12:11:41Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-20T12:11:00Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 691.50 +/- 262.91
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Xxmlala -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Xxmlala -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Xxmlala
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
sciarrilli/xgen-7b-tuned-alpaca-l1
|
sciarrilli
| 2023-07-20T12:10:07Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:Salesforce/xgen-7b-8k-base",
"base_model:finetune:Salesforce/xgen-7b-8k-base",
"license:apache-2.0",
"region:us"
] | null | 2023-07-20T09:45:57Z |
---
license: apache-2.0
base_model: Salesforce/xgen-7b-8k-base
tags:
- generated_from_trainer
model-index:
- name: xgen-7b-tuned-alpaca-l1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xgen-7b-tuned-alpaca-l1
This model is a fine-tuned version of [Salesforce/xgen-7b-8k-base](https://huggingface.co/Salesforce/xgen-7b-8k-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
udon2301/opencalm3b
|
udon2301
| 2023-07-20T12:05:08Z | 232 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-20T11:48:42Z |
---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: opencalm3b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opencalm3b
This model is a fine-tuned version of [cyberagent/open-calm-3b](https://huggingface.co/cyberagent/open-calm-3b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
laurent255/octave
|
laurent255
| 2023-07-20T12:01:02Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-20T12:00:57Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
|
mattmdjaga/segformer_b0_clothes
|
mattmdjaga
| 2023-07-20T11:58:04Z | 2,654 | 8 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"dataset:mattmdjaga/human_parsing_dataset",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2023-04-20T13:37:29Z |
---
license: mit
tags:
- vision
- image-segmentation
widget:
- src: https://images.unsplash.com/photo-1643310325061-2beef64926a5?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxzZWFyY2h8Nnx8cmFjb29uc3xlbnwwfHwwfHw%3D&w=1000&q=80
example_title: Person
- src: https://freerangestock.com/sample/139043/young-man-standing-and-leaning-on-car.jpg
example_title: Person
datasets:
- mattmdjaga/human_parsing_dataset
---
# Segformer B0 fine-tuned for clothes segmentation
SegFormer model fine-tuned on [ATR dataset](https://github.com/lemondan/HumanParsing-Dataset) for clothes segmentation.
The dataset on Hugging Face is called "mattmdjaga/human_parsing_dataset".
```python
from transformers import SegformerImageProcessor, AutoModelForSemanticSegmentation
from PIL import Image
import requests
import matplotlib.pyplot as plt
import torch.nn as nn
extractor = SegformerImageProcessor.from_pretrained("mattmdjaga/segformer_b0_clothes")
model = AutoModelForSemanticSegmentation.from_pretrained("mattmdjaga/segformer_b0_clothes")
url = "https://plus.unsplash.com/premium_photo-1673210886161-bfcc40f54d1f?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxzZWFyY2h8MXx8cGVyc29uJTIwc3RhbmRpbmd8ZW58MHx8MHx8&w=1000&q=80"
image = Image.open(requests.get(url, stream=True).raw)
inputs = extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits.cpu()
upsampled_logits = nn.functional.interpolate(
logits,
size=image.size[::-1],
mode="bilinear",
align_corners=False,
)
pred_seg = upsampled_logits.argmax(dim=1)[0]
plt.imshow(pred_seg)
```
|
Claaas/Reinforce-Cartpole
|
Claaas
| 2023-07-20T11:53:38Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-20T11:53:27Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
c72599/LunarLander-v2
|
c72599
| 2023-07-20T11:44:10Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-20T10:20:24Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 35.92 +/- 82.10
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'LunarLander-v2'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 500000
'learning_rate': 0.00025
'num_envs': 8
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'c72599/LunarLander-v2'
'batch_size': 1024
'minibatch_size': 256}
```
|
photonmz/xlm-roberta-base-finetuned-panx-all
|
photonmz
| 2023-07-20T11:40:34Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-17T22:52:07Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1466
- F1: 0.8656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.233 | 1.0 | 715 | 0.1639 | 0.8234 |
| 0.1016 | 2.0 | 1430 | 0.1435 | 0.8577 |
| 0.0581 | 3.0 | 2145 | 0.1466 | 0.8656 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
qddwudan/unit2_taxi_hw
|
qddwudan
| 2023-07-20T11:30:59Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-20T11:30:57Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: unit2_taxi_hw
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.75
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="qddwudan/unit2_taxi_hw", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
suchetajjw47/llama2_finetuned
|
suchetajjw47
| 2023-07-20T11:20:50Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-20T11:20:48Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
Shubham09/falcon_20072023_r16
|
Shubham09
| 2023-07-20T11:18:14Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-20T11:17:34Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
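For reference, a minimal sketch of how the quantization settings listed above would typically be expressed when loading a base model with `transformers` and `bitsandbytes` (the base model name is only a guess from the repository name, since this card does not state it):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the listed config: 4-bit NF4 quantization, double quantization, bfloat16 compute dtype.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# "tiiuae/falcon-7b" is an assumption; replace it with the actual base model used for this adapter.
base_model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
```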
|
Epl1/my_awesome_food_model
|
Epl1
| 2023-07-20T11:13:22Z | 217 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:food101",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-20T11:00:31Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- food101
metrics:
- accuracy
model-index:
- name: my_awesome_food_model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: food101
type: food101
config: default
split: train[:5000]
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.892
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6141
- Accuracy: 0.892
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7048 | 0.99 | 62 | 2.5361 | 0.823 |
| 1.8279 | 2.0 | 125 | 1.7878 | 0.875 |
| 1.5917 | 2.98 | 186 | 1.6141 | 0.892 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Jinouga/makima-chainsaw-manv1
|
Jinouga
| 2023-07-20T10:46:52Z | 3 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-11T00:06:03Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### makima_chainsaw_manV1 Dreambooth model trained by Jinouga with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
GMW123/finetuning-classification-model-3000-samples
|
GMW123
| 2023-07-20T10:44:53Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-20T10:39:46Z |
---
license: apache-2.0
base_model: sentence-transformers/all-MiniLM-L6-v2
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-classification-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.82
- name: F1
type: f1
value: 0.8211920529801323
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-classification-model-3000-samples
This model is a fine-tuned version of [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4086
- Accuracy: 0.82
- F1: 0.8212
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
NasimB/guten-norm-rarity-log-rarity-no-cut
|
NasimB
| 2023-07-20T10:34:08Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-20T08:31:52Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: guten-norm-rarity-log-rarity-no-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# guten-norm-rarity-log-rarity-no-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1059
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3491 | 0.29 | 500 | 5.3474 |
| 5.0344 | 0.58 | 1000 | 4.9307 |
| 4.6986 | 0.87 | 1500 | 4.6846 |
| 4.442 | 1.16 | 2000 | 4.5407 |
| 4.29 | 1.46 | 2500 | 4.4289 |
| 4.197 | 1.75 | 3000 | 4.3249 |
| 4.0736 | 2.04 | 3500 | 4.2531 |
| 3.8799 | 2.33 | 4000 | 4.2079 |
| 3.8675 | 2.62 | 4500 | 4.1508 |
| 3.8247 | 2.91 | 5000 | 4.1025 |
| 3.6446 | 3.2 | 5500 | 4.0995 |
| 3.5806 | 3.49 | 6000 | 4.0696 |
| 3.5597 | 3.79 | 6500 | 4.0359 |
| 3.4815 | 4.08 | 7000 | 4.0327 |
| 3.3091 | 4.37 | 7500 | 4.0278 |
| 3.3049 | 4.66 | 8000 | 4.0164 |
| 3.2916 | 4.95 | 8500 | 4.0023 |
| 3.1552 | 5.24 | 9000 | 4.0164 |
| 3.1256 | 5.53 | 9500 | 4.0151 |
| 3.1252 | 5.82 | 10000 | 4.0136 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
GMW123/finetuning-sentiment-model-3000-samples
|
GMW123
| 2023-07-20T10:27:43Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-20T10:21:25Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8766666666666667
- name: F1
type: f1
value: 0.877076411960133
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3097
- Accuracy: 0.8767
- F1: 0.8771
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
neseudin/nhabb
|
neseudin
| 2023-07-20T10:05:55Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-20T10:03:19Z |
---
license: creativeml-openrail-m
---
|
c72599/ppo-CartPole-v1
|
c72599
| 2023-07-20T10:05:43Z | 0 | 0 | null |
[
"tensorboard",
"CartPole-v1",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-20T10:05:36Z |
---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 213.40 +/- 62.92
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
# Hyperparameters
```python
{'exp_name': 'ppo-CartPole-v1'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'CartPole-v1'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'c72599/ppo-CartPole-v1'
'batch_size': 512
'minibatch_size': 128}
```
|
linkanjarad/PythiaChat-2.8B_v0.1
|
linkanjarad
| 2023-07-20T10:03:35Z | 5 | 0 |
peft
|
[
"peft",
"generated_from_trainer",
"dataset:linkanjarad/baize-chat-data",
"base_model:EleutherAI/pythia-2.8b-deduped",
"base_model:adapter:EleutherAI/pythia-2.8b-deduped",
"license:apache-2.0",
"region:us"
] | null | 2023-07-20T05:37:50Z |
---
license: apache-2.0
base_model: EleutherAI/pythia-2.8b-deduped
tags:
- generated_from_trainer
model-index:
- name: PythiaChat-2.8B_v0.1
results: []
library_name: peft
inference: false
datasets:
- linkanjarad/baize-chat-data
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PythiaChat-2.8B_v0.1
This model is a fine-tuned version of [EleutherAI/pythia-2.8b-deduped](https://huggingface.co/EleutherAI/pythia-2.8b-deduped) on the [Baize dataset](https://huggingface.co/datasets/linkanjarad/baize-chat-data/viewer/linkanjarad--baize-chat-data) with LoRA, trained for only 200+ steps for testing. This model is trained for "chat" style instruction following capabilities.
# Sample Use
Remember to mark the human messages with `[|Human|]` and AI messages with `[|AI|]` at the start.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel, PeftConfig
peft_model_id = "linkanjarad/PythiaChat-2.8B_v0.1"
model_id = "EleutherAI/pythia-2.8b-deduped"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True) # you can add `load_in_4bit=True` for faster inference
model = PeftModel.from_pretrained(model, peft_model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = model.to('cuda')
model.eval()
input_text = """The conversation between human and AI assistant.
[|Human|] How do I open a file with python?
[|AI|]"""
# Tokenize the input text
input_ids = tokenizer.encode(input_text, return_tensors='pt').to('cuda')
len_input = len(input_ids[0])
# Generate text using the model
with torch.no_grad():
output = model.generate(input_ids=input_ids, max_length=len_input+120, temperature=0.9, do_sample=True)
# Decode the generated output
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```
Example Output
```
The conversation between human and AI assistant.
[|Human|] How do I open a file with python?
[|AI|] To open a file with python, you can use the open function as follows:
>>> with open('filename.txt', 'w') as f:
... f.write(data)
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 80
- num_epochs: 1
### Framework versions
- PEFT 0.4.0
- Transformers 4.31.0
- Pytorch 2.0.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
lvwerra/test-gpt
|
lvwerra
| 2023-07-20T09:56:49Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"region:us"
] | null | 2023-07-20T09:20:29Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: test-gpt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-gpt
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
l3cube-pune/hing-gpt
|
l3cube-pune
| 2023-07-20T09:49:08Z | 196 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"hi",
"en",
"codemix",
"multilingual",
"dataset:L3Cube-HingCorpus",
"arxiv:2204.08398",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-15T17:32:29Z |
---
language:
- hi
- en
- multilingual
license: cc-by-4.0
tags:
- hi
- en
- codemix
datasets:
- L3Cube-HingCorpus
---
## HingGPT
HingGPT is a Hindi-English code-mixed GPT model trained on roman text. It is a GPT2 model trained on L3Cube-HingCorpus.
<br>
[dataset link](https://github.com/l3cube-pune/code-mixed-nlp)

More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2204.08398)
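A minimal usage sketch with the `transformers` text-generation pipeline (the code-mixed prompt is illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="l3cube-pune/hing-gpt")

# Roman-script Hindi-English code-mixed prompt (illustrative).
print(generator("mujhe lagta hai ki", max_new_tokens=30, do_sample=True)[0]["generated_text"])
```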
Other models from HingBERT family: <br>
<a href="https://huggingface.co/l3cube-pune/hing-bert"> HingBERT </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-mbert"> HingMBERT </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-mbert-mixed"> HingBERT-Mixed </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-mbert-mixed-v2"> HingBERT-Mixed-v2 </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-roberta"> HingRoBERTa </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-roberta-mixed"> HingRoBERTa-Mixed </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-gpt"> HingGPT </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-gpt-devanagari"> HingGPT-Devanagari </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-bert-lid"> HingBERT-LID </a> <br>
```
@inproceedings{nayak-joshi-2022-l3cube,
title = "{L}3{C}ube-{H}ing{C}orpus and {H}ing{BERT}: A Code Mixed {H}indi-{E}nglish Dataset and {BERT} Language Models",
author = "Nayak, Ravindra and Joshi, Raviraj",
booktitle = "Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.wildre-1.2",
pages = "7--12",
}
```
|
l3cube-pune/hing-roberta
|
l3cube-pune
| 2023-07-20T09:48:36Z | 310 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"hi",
"en",
"codemix",
"multilingual",
"dataset:L3Cube-HingCorpus",
"arxiv:2204.08398",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-04T19:00:50Z |
---
language:
- hi
- en
- multilingual
license: cc-by-4.0
tags:
- hi
- en
- codemix
datasets:
- L3Cube-HingCorpus
---
## HingRoBERTa
HingRoBERTa is a Hindi-English code-mixed RoBERTa model trained on roman text. It is an xlm-RoBERTa model fine-tuned on L3Cube-HingCorpus.
<br>
[dataset link](https://github.com/l3cube-pune/code-mixed-nlp)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2204.08398)
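A minimal masked-prediction sketch (illustrative only; HingRoBERTa is XLM-RoBERTa based, so the mask token is `<mask>`):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="l3cube-pune/hing-roberta")
# code-mixed roman-script input with the XLM-RoBERTa mask token
print(fill("mujhe yeh movie bahut <mask> lagi"))
```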
Other models from HingBERT family: <br>
<a href="https://huggingface.co/l3cube-pune/hing-bert"> HingBERT </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-mbert"> HingMBERT </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-mbert-mixed"> HingBERT-Mixed </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-mbert-mixed-v2"> HingBERT-Mixed-v2 </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-roberta"> HingRoBERTa </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-roberta-mixed"> HingRoBERTa-Mixed </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-gpt"> HingGPT </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-gpt-devanagari"> HingGPT-Devanagari </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-bert-lid"> HingBERT-LID </a> <br>
```
@inproceedings{nayak-joshi-2022-l3cube,
title = "{L}3{C}ube-{H}ing{C}orpus and {H}ing{BERT}: A Code Mixed {H}indi-{E}nglish Dataset and {BERT} Language Models",
author = "Nayak, Ravindra and Joshi, Raviraj",
booktitle = "Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.wildre-1.2",
pages = "7--12",
}
```
|
MredK/RyTiexv1
|
MredK
| 2023-07-20T09:47:58Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-07-20T09:45:38Z |
---
license: openrail
---
Made with a 5-minute dataset\
The training is mine\
150 epochs\
Turkish model
|
l3cube-pune/hing-mbert
|
l3cube-pune
| 2023-07-20T09:47:51Z | 188 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"hi",
"en",
"codemix",
"multilingual",
"dataset:L3Cube-HingCorpus",
"arxiv:2204.08398",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-04T18:45:09Z |
---
language:
- hi
- en
- multilingual
license: cc-by-4.0
tags:
- hi
- en
- codemix
datasets:
- L3Cube-HingCorpus
---
## HingMBERT
HingMBERT is a Hindi-English code-mixed BERT model trained on roman text. It is an mBERT model fine-tuned on L3Cube-HingCorpus.
<br>
[dataset link](https://github.com/l3cube-pune/code-mixed-nlp)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2204.08398)<br>
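A minimal masked-prediction sketch (illustrative only; HingMBERT is BERT based, so the mask token is `[MASK]`):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="l3cube-pune/hing-mbert")
# code-mixed roman-script input with the BERT mask token
print(fill("mujhe yeh movie bahut [MASK] lagi"))
```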
Other models from HingBERT family: <br>
<a href="https://huggingface.co/l3cube-pune/hing-bert"> HingBERT </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-mbert"> HingMBERT </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-mbert-mixed"> HingBERT-Mixed </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-mbert-mixed-v2"> HingBERT-Mixed-v2 </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-roberta"> HingRoBERTa </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-roberta-mixed"> HingRoBERTa-Mixed </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-gpt"> HingGPT </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-gpt-devanagari"> HingGPT-Devanagari </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-bert-lid"> HingBERT-LID </a> <br>
```
@inproceedings{nayak-joshi-2022-l3cube,
title = "{L}3{C}ube-{H}ing{C}orpus and {H}ing{BERT}: A Code Mixed {H}indi-{E}nglish Dataset and {BERT} Language Models",
author = "Nayak, Ravindra and Joshi, Raviraj",
booktitle = "Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.wildre-1.2",
pages = "7--12",
}
```
|
MredK/Akinv2
|
MredK
| 2023-07-20T09:47:16Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-07-17T17:57:44Z |
---
license: openrail
---
Made with a 4-minute dataset\
The training is mine\
200 epochs\
Turkish model
|
MredK/Viper
|
MredK
| 2023-07-20T09:46:35Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-07-16T17:45:24Z |
---
license: openrail
---
Made with a 10-minute dataset\
The training is mine\
150 epochs\
Turkish model
|
l3cube-pune/hing-mbert-mixed-v2
|
l3cube-pune
| 2023-07-20T09:46:22Z | 126 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"hi",
"en",
"codemix",
"multilingual",
"dataset:L3Cube-HingCorpus",
"arxiv:2204.08398",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-28T17:18:36Z |
---
language:
- hi
- en
- multilingual
license: cc-by-4.0
tags:
- hi
- en
- codemix
datasets:
- L3Cube-HingCorpus
---
## HingBERT-Mixed-v2
HingBERT-Mixed-v2 is a Hindi-English code-mixed BERT model trained on roman + devanagari text. It is a base MuRIL model fine-tuned on mixed script L3Cube-HingCorpus.
<br>
[dataset link](https://github.com/l3cube-pune/code-mixed-nlp)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2204.08398)
Other models from HingBERT family: <br>
<a href="https://huggingface.co/l3cube-pune/hing-bert"> HingBERT </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-mbert"> HingMBERT </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-mbert-mixed"> HingBERT-Mixed </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-mbert-mixed-v2"> HingBERT-Mixed-v2 </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-roberta"> HingRoBERTa </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-roberta-mixed"> HingRoBERTa-Mixed </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-gpt"> HingGPT </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-gpt-devanagari"> HingGPT-Devanagari </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-bert-lid"> HingBERT-LID </a> <br>
```
@inproceedings{nayak-joshi-2022-l3cube,
title = "{L}3{C}ube-{H}ing{C}orpus and {H}ing{BERT}: A Code Mixed {H}indi-{E}nglish Dataset and {BERT} Language Models",
author = "Nayak, Ravindra and Joshi, Raviraj",
booktitle = "Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.wildre-1.2",
pages = "7--12",
}
```
|
kingabzpro/DialoGPT-small-Rick-Bot
|
kingabzpro
| 2023-07-20T09:28:15Z | 164 | 4 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"gpt-2",
"conversational",
"en",
"dataset:ysharma/rickandmorty",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
datasets:
- ysharma/rickandmorty
language:
- en
metrics:
- perplexity
library_name: transformers
pipeline_tag: conversational
tags:
- gpt-2
---
# Source Code
[<img src="https://api.flatworld.co/wp-content/uploads/2020/10/DAGsHub-Logo.png" alt="dagshub" width="150"/>](https://dagshub.com/kingabzpro/DailoGPT-RickBot)
[](https://github.com/kingabzpro/DailoGPT-RickBot)
# Testing
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained('kingabzpro/DialoGPT-small-Rick-Bot')
model = AutoModelWithLMHead.from_pretrained('kingabzpro/DialoGPT-small-Rick-Bot')

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in Pytorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # print(new_user_input_ids)

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )

    # pretty print last output tokens from bot
    print("RickBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
**Result**
perplexity : 8.53
|
au2a/whisper-base-zh-20230718-1
|
au2a
| 2023-07-20T09:25:30Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"zh",
"dataset:-",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-18T12:22:30Z |
---
language:
- zh
license: apache-2.0
tags:
- whisper
- generated_from_trainer
datasets:
- '-'
model-index:
- name: whisper-base-zh-20230718-1 - au2a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-base-zh-20230718-1 - au2a
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on a Hakka audio dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4142
- Cer: 84.7926
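A minimal transcription sketch with the `transformers` pipeline (illustrative only; the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="au2a/whisper-base-zh-20230718-1")
# replace "sample.wav" with the path to a local audio file
print(asr("sample.wav")["text"])
```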
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0499 | 2.59 | 1000 | 0.3377 | 153.9019 |
| 0.0035 | 5.17 | 2000 | 0.3506 | 138.4528 |
| 0.0015 | 7.76 | 3000 | 0.3651 | 128.2541 |
| 0.001 | 10.35 | 4000 | 0.3754 | 105.1522 |
| 0.0005 | 12.94 | 5000 | 0.3841 | 90.0846 |
| 0.0004 | 15.52 | 6000 | 0.3925 | 92.5134 |
| 0.0002 | 18.11 | 7000 | 0.4011 | 86.3035 |
| 0.0002 | 20.7 | 8000 | 0.4070 | 80.0219 |
| 0.0001 | 23.29 | 9000 | 0.4118 | 82.5451 |
| 0.0001 | 25.87 | 10000 | 0.4142 | 84.7926 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
phatjk/bloomz-lora-vi-QA-NLLB-viquad_v4
|
phatjk
| 2023-07-20T09:14:32Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-20T09:14:25Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
NICFRU/bart-base-paraphrasing-story
|
NICFRU
| 2023-07-20T09:05:08Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-16T13:30:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-base-paraphrasing
results: []
language:
- en
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-paraphrasing
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.135500
- Rouge1: 32.399800
- Rouge2: 25.275900
- Rougel: 30.322200
- Rougelsum: 31.459500
- Gen Len: 20.0
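A minimal paraphrasing sketch with the `transformers` pipeline (illustrative only; the input sentence is arbitrary):
```python
from transformers import pipeline

paraphraser = pipeline("text2text-generation", model="NICFRU/bart-base-paraphrasing-story")
print(paraphraser("The old house at the end of the street had been empty for years.",
                  max_length=40)[0]["generated_text"])
```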
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 16
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.110900 | 1 | 20 | 0.649241 | 31.050300 | 23.973300 | 28.952200 | 30.114800 | 19.965200 |
| 0.707200 | 1 | 40 | 0.604967 | 30.421600 | 22.941900 | 28.194500 | 29.408600 | 19.998300 |
| 0.645800 | 1 | 60 | 0.577806 | 31.129700 | 24.142600 | 29.114900 | 30.237500 | 20.000000 |
| 0.657500 | 1 | 80 | 0.542420 | 31.140900 | 24.045000 | 29.186100 | 30.225700 | 19.998300 |
| 0.610200 | 1 | 100 | 0.555390 | 31.324400 | 24.399300 | 29.340700 | 30.487900 | 20.000000 |
| 0.612400 | 1 | 120 | 0.533283 | 31.907300 | 25.085300 | 29.983400 | 31.090100 | 20.000000 |
| 0.558100 | 1 | 140 | 0.503850 | 31.137500 | 24.259700 | 29.174700 | 30.297800 | 20.000000 |
| 0.617000 | 1 | 160 | 0.512676 | 31.575500 | 24.508400 | 29.472900 | 30.697700 | 20.000000 |
| 0.572100 | 1 | 180 | 0.470928 | 31.757700 | 24.963600 | 29.798800 | 30.949300 | 20.000000 |
| 0.563600 | 1 | 200 | 0.477484 | 31.277800 | 24.694900 | 29.496900 | 30.546500 | 19.998300 |
| 0.549000 | 1 | 220 | 0.464705 | 31.547900 | 24.783700 | 29.620100 | 30.717600 | 20.000000 |
| 0.545100 | 1 | 240 | 0.456029 | 31.406500 | 24.418500 | 29.394600 | 30.539200 | 20.000000 |
| 0.498000 | 1 | 260 | 0.420587 | 31.747000 | 24.919900 | 29.789400 | 30.891900 | 20.000000 |
| 0.497900 | 1 | 280 | 0.437126 | 31.403000 | 24.407800 | 29.435800 | 30.529800 | 20.000000 |
| 0.466600 | 2 | 300 | 0.416397 | 32.079200 | 25.387500 | 30.220300 | 31.262000 | 20.000000 |
| 0.446200 | 2 | 320 | 0.419514 | 32.079500 | 25.261900 | 30.184600 | 31.261500 | 20.000000 |
| 0.406300 | 2 | 340 | 0.417019 | 31.950400 | 25.306400 | 30.126200 | 31.165600 | 20.000000 |
| 0.411200 | 2 | 360 | 0.410052 | 32.384900 | 25.795600 | 30.624500 | 31.649300 | 20.000000 |
| 0.437800 | 2 | 380 | 0.412937 | 31.739100 | 24.850800 | 29.862100 | 30.982400 | 20.000000 |
| 0.419600 | 2 | 400 | 0.406854 | 31.489600 | 24.432500 | 29.517200 | 30.632900 | 20.000000 |
| 0.408400 | 2 | 420 | 0.404026 | 31.642200 | 24.846900 | 29.716100 | 30.829100 | 20.000000 |
| 0.438100 | 2 | 440 | 0.398692 | 31.769900 | 25.074000 | 29.930400 | 31.024500 | 20.000000 |
| 0.401300 | 2 | 460 | 0.400428 | 31.429500 | 24.790800 | 29.548700 | 30.704200 | 20.000000 |
| 0.395300 | 2 | 480 | 0.397955 | 31.831100 | 24.881600 | 29.782700 | 30.952200 | 20.000000 |
| 0.395500 | 2 | 500 | 0.400316 | 31.816000 | 25.031100 | 29.938200 | 31.070800 | 20.000000 |
| 0.427800 | 2 | 520 | 0.398385 | 32.100200 | 25.320300 | 30.257200 | 31.350800 | 20.000000 |
| 0.400900 | 2 | 540 | 0.397272 | 31.768100 | 24.850100 | 29.710100 | 30.932100 | 20.000000 |
| 0.427600 | 2 | 560 | 0.392695 | 31.930100 | 25.084900 | 29.957200 | 31.102900 | 20.000000 |
| 0.387900 | 3 | 580 | 0.399037 | 31.813900 | 24.913500 | 29.798900 | 31.021000 | 20.000000 |
| 0.323500 | 3 | 600 | 0.389272 | 31.992600 | 25.169800 | 30.063600 | 31.153100 | 19.995700 |
| 0.317300 | 3 | 620 | 0.386492 | 31.992100 | 25.154700 | 30.066900 | 31.195300 | 20.000000 |
| 0.330400 | 3 | 640 | 0.402186 | 31.302900 | 24.220400 | 29.340700 | 30.436700 | 20.000000 |
| 0.356700 | 3 | 660 | 0.389047 | 32.074300 | 25.118600 | 30.137500 | 31.212800 | 20.000000 |
| 0.338600 | 3 | 680 | 0.401531 | 31.940100 | 25.027400 | 29.987200 | 31.038700 | 20.000000 |
| 0.356700 | 3 | 700 | 0.376122 | 32.045000 | 25.249500 | 30.061100 | 31.190300 | 20.000000 |
| 0.344300 | 3 | 720 | 0.397580 | 32.053800 | 25.201600 | 30.177200 | 31.191000 | 20.000000 |
| 0.369000 | 3 | 740 | 0.382221 | 32.068400 | 25.105600 | 30.102400 | 31.223900 | 20.000000 |
| 0.310400 | 3 | 760 | 0.393573 | 31.869500 | 24.907200 | 29.931600 | 31.043200 | 20.000000 |
| 0.361200 | 3 | 780 | 0.383016 | 32.339000 | 25.427200 | 30.267200 | 31.476700 | 20.000000 |
| 0.321500 | 3 | 800 | 0.381312 | 31.966800 | 25.008000 | 30.008400 | 31.071000 | 20.000000 |
| 0.379600 | 3 | 820 | 0.389013 | 32.218900 | 25.378800 | 30.355500 | 31.413500 | 20.000000 |
| 0.346900 | 3 | 840 | 0.388966 | 31.900700 | 25.074500 | 30.031500 | 31.071600 | 20.000000 |
| 0.364500 | 3 | 860 | 0.382512 | 32.172200 | 25.309800 | 30.261900 | 31.279800 | 20.000000 |
| 0.279100 | 4 | 880 | 0.393970 | 31.498000 | 24.603300 | 29.558700 | 30.643000 | 20.000000 |
| 0.284700 | 4 | 900 | 0.391282 | 32.090000 | 25.168800 | 30.106400 | 31.227100 | 20.000000 |
| 0.301900 | 4 | 920 | 0.387117 | 32.137600 | 25.320700 | 30.234000 | 31.315000 | 20.000000 |
| 0.248700 | 4 | 940 | 0.393035 | 32.296800 | 25.379200 | 30.349900 | 31.486200 | 20.000000 |
| 0.302800 | 4 | 960 | 0.389426 | 32.488300 | 25.542800 | 30.532100 | 31.676000 | 20.000000 |
| 0.286500 | 4 | 980 | 0.405294 | 31.434500 | 24.362200 | 29.462400 | 30.605300 | 20.000000 |
| 0.282600 | 4 | 1000 | 0.391225 | 31.207100 | 24.081500 | 29.202400 | 30.333900 | 20.000000 |
| 0.258000 | 4 | 1020 | 0.392702 | 31.602000 | 24.586400 | 29.662000 | 30.753400 | 20.000000 |
| 0.276800 | 4 | 1040 | 0.385929 | 32.025900 | 24.915800 | 29.943700 | 31.137700 | 20.000000 |
| 0.280300 | 4 | 1060 | 0.395826 | 32.169600 | 25.301900 | 30.247000 | 31.350600 | 20.000000 |
| 0.307300 | 4 | 1080 | 0.391523 | 31.888500 | 24.968000 | 29.943900 | 31.068700 | 20.000000 |
| 0.290300 | 4 | 1100 | 0.378953 | 31.685000 | 24.868800 | 29.768200 | 30.838100 | 19.996500 |
| 0.285200 | 4 | 1120 | 0.384716 | 32.416400 | 25.605600 | 30.512500 | 31.584200 | 20.000000 |
| 0.280400 | 4 | 1140 | 0.383306 | 32.672600 | 25.866200 | 30.716700 | 31.826300 | 20.000000 |
| 0.301000 | 5 | 1160 | 0.388244 | 32.197300 | 25.377200 | 30.273700 | 31.360300 | 20.000000 |
| 0.259300 | 5 | 1180 | 0.394219 | 32.010500 | 24.821700 | 29.897100 | 31.045900 | 20.000000 |
| 0.229200 | 5 | 1200 | 0.399910 | 32.214100 | 25.272800 | 30.251200 | 31.332200 | 20.000000 |
| 0.265300 | 5 | 1220 | 0.399432 | 32.192400 | 25.345200 | 30.301100 | 31.364000 | 20.000000 |
| 0.265700 | 5 | 1240 | 0.400144 | 32.580400 | 25.887700 | 30.779800 | 31.800300 | 20.000000 |
| 0.235700 | 5 | 1260 | 0.389669 | 32.012300 | 25.066600 | 30.066500 | 31.202000 | 20.000000 |
| 0.268700 | 5 | 1280 | 0.385898 | 32.177500 | 25.199600 | 30.209200 | 31.331900 | 20.000000 |
| 0.240600 | 5 | 1300 | 0.384041 | 32.670000 | 25.872200 | 30.733200 | 31.822900 | 20.000000 |
| 0.240700 | 5 | 1320 | 0.387255 | 32.621700 | 25.810900 | 30.683900 | 31.748300 | 20.000000 |
| 0.242600 | 5 | 1340 | 0.393272 | 32.377200 | 25.487800 | 30.431500 | 31.525700 | 20.000000 |
| 0.267400 | 5 | 1360 | 0.390408 | 32.208000 | 25.233600 | 30.202300 | 31.362000 | 20.000000 |
| 0.241300 | 5 | 1380 | 0.387935 | 32.259800 | 25.131100 | 30.247800 | 31.354100 | 20.000000 |
| 0.238400 | 5 | 1400 | 0.403618 | 32.174700 | 25.150600 | 30.093700 | 31.344200 | 20.000000 |
| 0.259000 | 5 | 1420 | 0.396614 | 32.334800 | 25.372900 | 30.350300 | 31.511000 | 20.000000 |
| 0.243700 | 5 | 1440 | 0.397254 | 31.815800 | 24.716500 | 29.728900 | 30.880300 | 20.000000 |
| 0.201600 | 6 | 1460 | 0.395704 | 32.305900 | 25.363600 | 30.297500 | 31.433500 | 20.000000 |
| 0.205100 | 6 | 1480 | 0.396571 | 32.182700 | 25.160200 | 30.121500 | 31.282700 | 20.000000 |
| 0.224200 | 6 | 1500 | 0.398343 | 32.439600 | 25.334300 | 30.381300 | 31.509700 | 20.000000 |
| 0.224900 | 6 | 1520 | 0.395585 | 32.333800 | 25.477100 | 30.410600 | 31.494400 | 20.000000 |
| 0.216700 | 6 | 1540 | 0.404786 | 32.014500 | 25.052800 | 30.045800 | 31.203900 | 20.000000 |
| 0.227300 | 6 | 1560 | 0.397305 | 32.342300 | 25.545600 | 30.468800 | 31.516000 | 20.000000 |
| 0.211700 | 6 | 1580 | 0.401612 | 32.274000 | 25.443900 | 30.276200 | 31.440300 | 20.000000 |
| 0.210700 | 6 | 1600 | 0.399011 | 32.389400 | 25.518800 | 30.492800 | 31.613000 | 20.000000 |
| 0.230600 | 6 | 1620 | 0.393134 | 32.612400 | 25.817900 | 30.717100 | 31.801300 | 20.000000 |
| 0.201000 | 6 | 1640 | 0.401414 | 32.349800 | 25.302800 | 30.293100 | 31.457500 | 20.000000 |
| 0.211600 | 6 | 1660 | 0.391455 | 32.270900 | 25.483600 | 30.308200 | 31.460700 | 20.000000 |
| 0.198700 | 6 | 1680 | 0.396596 | 32.233000 | 25.212400 | 30.236400 | 31.355600 | 20.000000 |
| 0.226300 | 6 | 1700 | 0.401143 | 32.192100 | 25.220900 | 30.281400 | 31.391900 | 20.000000 |
| 0.238600 | 6 | 1720 | 0.391453 | 32.439000 | 25.479100 | 30.479800 | 31.613700 | 20.000000 |
| 0.200700 | 7 | 1740 | 0.398769 | 32.487600 | 25.642500 | 30.515500 | 31.662900 | 20.000000 |
| 0.186400 | 7 | 1760 | 0.400294 | 32.287400 | 25.251000 | 30.308100 | 31.437800 | 20.000000 |
| 0.176800 | 7 | 1780 | 0.406219 | 32.325100 | 25.401000 | 30.325000 | 31.505900 | 20.000000 |
| 0.190600 | 7 | 1800 | 0.398379 | 32.165700 | 25.140900 | 30.198300 | 31.349800 | 20.000000 |
| 0.177100 | 7 | 1820 | 0.406410 | 32.454800 | 25.475800 | 30.490200 | 31.540900 | 20.000000 |
| 0.198700 | 7 | 1840 | 0.396886 | 32.274000 | 25.247900 | 30.223400 | 31.407500 | 20.000000 |
| 0.196200 | 7 | 1860 | 0.407596 | 32.348300 | 25.156900 | 30.238300 | 31.413000 | 20.000000 |
| 0.167400 | 7 | 1880 | 0.405560 | 32.382000 | 25.506300 | 30.377600 | 31.519900 | 20.000000 |
| 0.198800 | 7 | 1900 | 0.409359 | 32.281700 | 25.331500 | 30.271900 | 31.423700 | 20.000000 |
| 0.202900 | 7 | 1920 | 0.405715 | 32.192000 | 25.054400 | 30.103300 | 31.341200 | 20.000000 |
| 0.210100 | 7 | 1940 | 0.402631 | 32.375500 | 25.331800 | 30.371700 | 31.527100 | 20.000000 |
| 0.199200 | 7 | 1960 | 0.403153 | 32.261700 | 25.227800 | 30.275300 | 31.404700 | 20.000000 |
| 0.192400 | 7 | 1980 | 0.406693 | 32.438400 | 25.486300 | 30.438700 | 31.580000 | 20.000000 |
| 0.210000 | 7 | 2000 | 0.397093 | 32.487200 | 25.537200 | 30.542800 | 31.687500 | 20.000000 |
| 0.186300 | 8 | 2020 | 0.403671 | 32.530700 | 25.529700 | 30.503900 | 31.651400 | 20.000000 |
| 0.171200 | 8 | 2040 | 0.406167 | 32.297300 | 25.244400 | 30.216400 | 31.406900 | 20.000000 |
| 0.159600 | 8 | 2060 | 0.413590 | 32.562300 | 25.551500 | 30.551000 | 31.677700 | 20.000000 |
| 0.191000 | 8 | 2080 | 0.406790 | 32.380000 | 25.326200 | 30.374900 | 31.476300 | 20.000000 |
| 0.149700 | 8 | 2100 | 0.419098 | 32.253200 | 25.283000 | 30.321300 | 31.422500 | 20.000000 |
| 0.174500 | 8 | 2120 | 0.410545 | 32.492700 | 25.497000 | 30.516600 | 31.623100 | 20.000000 |
| 0.178600 | 8 | 2140 | 0.405749 | 32.109100 | 25.057800 | 30.142800 | 31.178700 | 20.000000 |
| 0.172400 | 8 | 2160 | 0.413341 | 32.336500 | 25.260200 | 30.329000 | 31.456300 | 20.000000 |
| 0.199200 | 8 | 2180 | 0.402256 | 32.643900 | 25.630300 | 30.712600 | 31.744700 | 20.000000 |
| 0.182100 | 8 | 2200 | 0.401074 | 32.437400 | 25.420100 | 30.451300 | 31.558200 | 20.000000 |
| 0.165800 | 8 | 2220 | 0.408149 | 32.433600 | 25.306700 | 30.407500 | 31.537000 | 20.000000 |
| 0.164100 | 8 | 2240 | 0.407869 | 32.282900 | 25.398100 | 30.395100 | 31.471400 | 20.000000 |
| 0.174300 | 8 | 2260 | 0.412621 | 32.169700 | 25.171700 | 30.176700 | 31.304600 | 20.000000 |
| 0.178600 | 8 | 2280 | 0.407604 | 32.385700 | 25.380600 | 30.372900 | 31.494000 | 20.000000 |
| 0.160200 | 8 | 2300 | 0.408272 | 32.505100 | 25.568400 | 30.517300 | 31.657300 | 20.000000 |
| 0.166700 | 9 | 2320 | 0.405484 | 32.621300 | 25.726500 | 30.674400 | 31.786200 | 20.000000 |
| 0.148800 | 9 | 2340 | 0.413829 | 32.275700 | 25.185000 | 30.272300 | 31.355000 | 20.000000 |
| 0.161400 | 9 | 2360 | 0.413913 | 32.372700 | 25.201000 | 30.301500 | 31.506300 | 20.000000 |
| 0.155800 | 9 | 2380 | 0.414684 | 32.420600 | 25.395400 | 30.461500 | 31.533400 | 20.000000 |
| 0.170600 | 9 | 2400 | 0.403257 | 32.243600 | 25.152100 | 30.174700 | 31.333500 | 20.000000 |
| 0.162600 | 9 | 2420 | 0.408112 | 32.190200 | 25.136800 | 30.135900 | 31.295100 | 20.000000 |
| 0.160200 | 9 | 2440 | 0.413158 | 32.240100 | 25.255300 | 30.259300 | 31.391600 | 20.000000 |
| 0.165300 | 9 | 2460 | 0.408876 | 32.117800 | 24.999500 | 30.075400 | 31.187300 | 20.000000 |
| 0.157700 | 9 | 2480 | 0.418658 | 32.182700 | 25.065800 | 30.117200 | 31.251900 | 20.000000 |
| 0.152900 | 9 | 2500 | 0.412553 | 32.137700 | 25.021900 | 30.136700 | 31.234400 | 20.000000 |
| 0.153500 | 9 | 2520 | 0.411657 | 31.994400 | 24.742300 | 29.874600 | 31.051900 | 20.000000 |
| 0.152500 | 9 | 2540 | 0.404253 | 32.366500 | 25.086700 | 30.228600 | 31.393000 | 20.000000 |
| 0.163500 | 9 | 2560 | 0.406488 | 32.474000 | 25.284700 | 30.419900 | 31.541900 | 20.000000 |
| 0.175700 | 9 | 2580 | 0.406476 | 32.314300 | 25.101900 | 30.219300 | 31.342900 | 20.000000 |
| 0.156500 | 10 | 2600 | 0.411366 | 32.325400 | 25.088200 | 30.230000 | 31.382600 | 20.000000 |
| 0.147800 | 10 | 2620 | 0.411610 | 32.174600 | 24.935000 | 30.134600 | 31.225900 | 20.000000 |
| 0.154600 | 10 | 2640 | 0.416763 | 32.064800 | 24.824400 | 30.005100 | 31.147300 | 20.000000 |
| 0.147300 | 10 | 2660 | 0.413373 | 32.138200 | 24.856000 | 30.081100 | 31.209300 | 20.000000 |
| 0.140600 | 10 | 2680 | 0.416898 | 32.196500 | 25.032400 | 30.171700 | 31.282600 | 20.000000 |
| 0.146600 | 10 | 2700 | 0.414243 | 32.321500 | 25.131500 | 30.251500 | 31.376800 | 20.000000 |
| 0.154300 | 10 | 2720 | 0.411708 | 32.302400 | 25.028000 | 30.196800 | 31.338300 | 20.000000 |
| 0.146000 | 10 | 2740 | 0.412115 | 32.343600 | 25.191900 | 30.302900 | 31.403900 | 20.000000 |
| 0.140000 | 10 | 2760 | 0.414298 | 32.244000 | 25.085400 | 30.180300 | 31.292300 | 20.000000 |
| 0.150100 | 10 | 2780 | 0.416827 | 32.313100 | 25.206500 | 30.260700 | 31.390100 | 20.000000 |
| 0.153400 | 10 | 2800 | 0.415130 | 32.392200 | 25.266000 | 30.320600 | 31.461100 | 20.000000 |
| 0.143600 | 10 | 2820 | 0.414414 | 32.394300 | 25.249800 | 30.313800 | 31.445400 | 20.000000 |
| 0.153400 | 10 | 2840 | 0.414328 | 32.427100 | 25.294400 | 30.359600 | 31.485100 | 20.000000 |
| 0.145300 | 10 | 2860 | 0.414271 | 32.362800 | 25.219700 | 30.281900 | 31.420600 | 20.000000 |
| 0.135500 | 10 | 2880 | 0.414513 | 32.399800 | 25.275900 | 30.322200 | 31.459500 | 20.000000 |
|
tgamstaetter/mult_tf
|
tgamstaetter
| 2023-07-20T09:01:57Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"base_model:finetune:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-20T08:27:11Z |
---
license: mit
base_model: microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: mult_tf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mult_tf
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5180
- Accuracy: 0.8364
- F1: 0.8358
- Precision: 0.8355
- Recall: 0.8364
- Roc Auc: 0.9896
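A minimal classification sketch with the `transformers` pipeline (illustrative only; the label set depends on the fine-tuning data, which is not documented here):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="tgamstaetter/mult_tf")
print(clf("Randomized trial of a novel beta-blocker in patients with chronic heart failure."))
```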
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 640
- eval_batch_size: 1280
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------:|
| No log | 1.0 | 357 | 0.5694 | 0.8249 | 0.8243 | 0.8245 | 0.8249 | 0.9875 |
| 0.5397 | 2.0 | 714 | 0.5324 | 0.8324 | 0.8312 | 0.8313 | 0.8324 | 0.9890 |
| 0.523 | 3.0 | 1071 | 0.5193 | 0.8354 | 0.8348 | 0.8346 | 0.8354 | 0.9895 |
| 0.523 | 4.0 | 1428 | 0.5180 | 0.8364 | 0.8358 | 0.8355 | 0.8364 | 0.9896 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
junejae/pegasus-samsum
|
junejae
| 2023-07-20T09:00:31Z | 100 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:google/pegasus-cnn_dailymail",
"base_model:finetune:google/pegasus-cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-20T07:56:56Z |
---
base_model: google/pegasus-cnn_dailymail
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4858
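A minimal dialogue-summarization sketch with the `transformers` pipeline (illustrative only; the dialogue is made up):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="junejae/pegasus-samsum")
dialogue = "Anna: Are we still on for lunch tomorrow?\nBen: Yes, 12:30 at the usual place.\nAnna: Perfect, see you there."
print(summarizer(dialogue, max_length=40)[0]["summary_text"])
```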
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6297 | 0.54 | 500 | 1.4858 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Oslaw/rl_course_vizdoom_health_gathering_supreme
|
Oslaw
| 2023-07-20T08:42:41Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-20T07:42:15Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.98 +/- 4.39
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Oslaw/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
jackswie/Hadise
|
jackswie
| 2023-07-20T08:41:37Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-20T08:32:36Z |
[](discord.gg/ailab)


# Hadise AÇIKGÖZ - RVC V2 - Mangio Crepe - 330 Epoch
**This is the voice model of the singer Hadise AÇIKGÖZ,
trained with RVC V2 for 350 epochs.**
**A 30-minute dataset was used.**
**The dataset contains interview and singing voice samples.**
_The dataset and the training were made by me._
__Sharing this model outside the [Ai Lab Discord](discord.gg/ailab) server without permission is strictly forbidden; the model is under the openrail license.__
## Credits
**When sharing a cover made with this model on any platform, please give credits.**
- Discord: jackswie
- Reddit: u/jackk_m
- YouTube: 𝖏𝖆𝖈𝖐𝖘𝖑𝖜𝖐 (https://www.youtube.com/channel/UCZSMJToEeMuqMFDL318v3Xw)
- TikTok: jackss.aep (https://www.tiktok.com/@jackss.aep)
- Instagram: jackslwk (https://www.instagram.com/jackslwk/)

[](discord.gg/ailab)

|
siciai/vicunaprodtype
|
siciai
| 2023-07-20T08:41:25Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-20T08:33:44Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
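For reference, these settings correspond roughly to the following `BitsAndBytesConfig` (a sketch, not the exact training code):
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```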
### Framework versions
- PEFT 0.5.0.dev0
|
albagon/q-FrozenLake-v1-4x4-noSlippery
|
albagon
| 2023-07-20T08:34:07Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-20T08:33:40Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook;
# it downloads and unpickles the Q-table dictionary from the Hub.
model = load_from_hub(repo_id="albagon/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AlVrde/bloomz-560m_PROMPT_TUNING_CAUSAL_LM_0.001_0.04_30epochs
|
AlVrde
| 2023-07-20T08:29:02Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-20T08:28:59Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
Zywald/GenerAd-AI
|
Zywald
| 2023-07-20T08:06:34Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-20T08:06:29Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
DiTo97/binarization-segformer-b3
|
DiTo97
| 2023-07-20T08:05:47Z | 215 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"segformer",
"generated_from_trainer",
"document-image-binarization",
"image-segmentation",
"arxiv:2105.05521",
"arxiv:1901.06081",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2023-05-13T16:27:36Z |
---
license: openrail
tags:
- generated_from_trainer
- document-image-binarization
- image-segmentation
model-index:
- name: binarization-segformer-b3
results: []
---
# binarization-segformer-b3
This model is a fine-tuned version of [nvidia/segformer-b3-1024-1024](https://huggingface.co/nvidia/segformer-b3-finetuned-cityscapes-1024-1024)
on the same ensemble of 13 datasets as the [SauvolaNet](https://arxiv.org/pdf/2105.05521.pdf) work, publicly available
in their GitHub [repository](https://github.com/Leedeng/SauvolaNet#datasets).
It achieves the following results on the evaluation set on DIBCO metrics:
- loss: 0.0743
- DRD: 5.9548
- F-measure: 0.9840
- pseudo F-measure: 0.9740
- PSNR: 16.0119
with PSNR the peak signal-to-noise ratio and DRD the distance reciprocal distortion.
For more information on the above DIBCO metrics, see the 2017 introductory [paper](https://ieeexplore.ieee.org/document/8270159).
## Model description
This model is part of ongoing research on pure semantic segmentation models as a formulation of document image binarization (DIBCO).
This is in contrast to the recent trend of adapting classical binarization algorithms with neural networks,
such as [DeepOtsu](https://arxiv.org/abs/1901.06081) or [SauvolaNet](https://arxiv.org/pdf/2105.05521.pdf)
as extensions of Otsu's method and Sauvola thresholding algorithm, respectively.
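A rough inference sketch, assuming the standard SegFormer semantic-segmentation interface (the image path is a placeholder):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

processor = AutoImageProcessor.from_pretrained("DiTo97/binarization-segformer-b3")
model = SegformerForSemanticSegmentation.from_pretrained("DiTo97/binarization-segformer-b3")

image = Image.open("document.png").convert("RGB")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_labels, H/4, W/4)
binary_mask = logits.argmax(dim=1)[0]  # upsample to the original resolution if needed
```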
## Intended uses & limitations
TBC
## Training and evaluation data
TBC
## Training procedure
### Training hyperparameters
TBC
### Training results
| training loss | epoch | step | validation loss | DRD | F-measure | pseudo F-measure | PSNR |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:----------------:|:-------:|
| 0.6983 | 0.26 | 10 | 0.7079 | 199.5096 | 0.5945 | 0.5801 | 3.4552 |
| 0.6657 | 0.52 | 20 | 0.6755 | 149.2346 | 0.7006 | 0.6165 | 4.6752 |
| 0.6145 | 0.77 | 30 | 0.6433 | 109.7298 | 0.7831 | 0.6520 | 5.5489 |
| 0.5553 | 1.03 | 40 | 0.5443 | 53.7149 | 0.8952 | 0.8000 | 8.1736 |
| 0.4627 | 1.29 | 50 | 0.4896 | 32.7649 | 0.9321 | 0.8603 | 9.8706 |
| 0.3969 | 1.55 | 60 | 0.4327 | 21.5508 | 0.9526 | 0.8985 | 11.3400 |
| 0.3414 | 1.81 | 70 | 0.3002 | 11.0094 | 0.9732 | 0.9462 | 13.5901 |
| 0.2898 | 2.06 | 80 | 0.2839 | 10.1064 | 0.9748 | 0.9563 | 13.9796 |
| 0.2292 | 2.32 | 90 | 0.2427 | 9.4437 | 0.9761 | 0.9584 | 14.2161 |
| 0.2153 | 2.58 | 100 | 0.2095 | 8.8696 | 0.9771 | 0.9621 | 14.4319 |
| 0.1767 | 2.84 | 110 | 0.1916 | 8.6152 | 0.9776 | 0.9646 | 14.5528 |
| 0.1509 | 3.1 | 120 | 0.1704 | 8.0761 | 0.9791 | 0.9632 | 14.7961 |
| 0.1265 | 3.35 | 130 | 0.1561 | 8.5627 | 0.9784 | 0.9655 | 14.7400 |
| 0.132 | 3.61 | 140 | 0.1318 | 8.1849 | 0.9788 | 0.9670 | 14.8469 |
| 0.1115 | 3.87 | 150 | 0.1317 | 7.8438 | 0.9790 | 0.9657 | 14.9072 |
| 0.0983 | 4.13 | 160 | 0.1273 | 7.9405 | 0.9791 | 0.9673 | 14.9701 |
| 0.1001 | 4.39 | 170 | 0.1234 | 8.4132 | 0.9788 | 0.9691 | 14.8573 |
| 0.0862 | 4.65 | 180 | 0.1147 | 8.0838 | 0.9797 | 0.9678 | 15.0433 |
| 0.0713 | 4.9 | 190 | 0.1134 | 7.6027 | 0.9806 | 0.9687 | 15.2235 |
| 0.0905 | 5.16 | 200 | 0.1061 | 7.2973 | 0.9803 | 0.9699 | 15.1646 |
| 0.0902 | 5.42 | 210 | 0.1061 | 8.4049 | 0.9787 | 0.9699 | 14.8460 |
| 0.0759 | 5.68 | 220 | 0.1062 | 7.7147 | 0.9809 | 0.9695 | 15.2426 |
| 0.0638 | 5.94 | 230 | 0.1019 | 7.7449 | 0.9806 | 0.9695 | 15.2195 |
| 0.0852 | 6.19 | 240 | 0.0962 | 7.0221 | 0.9817 | 0.9693 | 15.4730 |
| 0.0677 | 6.45 | 250 | 0.0961 | 7.2520 | 0.9814 | 0.9710 | 15.3878 |
| 0.0668 | 6.71 | 260 | 0.0972 | 6.6658 | 0.9823 | 0.9689 | 15.6106 |
| 0.0701 | 6.97 | 270 | 0.0909 | 6.9454 | 0.9820 | 0.9713 | 15.5458 |
| 0.0567 | 7.23 | 280 | 0.0925 | 6.5498 | 0.9824 | 0.9718 | 15.5965 |
| 0.0624 | 7.48 | 290 | 0.0899 | 7.3125 | 0.9813 | 0.9717 | 15.3255 |
| 0.0649 | 7.74 | 300 | 0.0932 | 7.4915 | 0.9816 | 0.9684 | 15.5666 |
| 0.0524 | 8.0 | 310 | 0.0905 | 7.1666 | 0.9815 | 0.9711 | 15.4526 |
| 0.0693 | 8.26 | 320 | 0.0901 | 6.5627 | 0.9827 | 0.9704 | 15.7335 |
| 0.0528 | 8.52 | 330 | 0.0845 | 6.6690 | 0.9826 | 0.9734 | 15.5950 |
| 0.0632 | 8.77 | 340 | 0.0822 | 6.2661 | 0.9833 | 0.9723 | 15.8631 |
| 0.0522 | 9.03 | 350 | 0.0844 | 6.0073 | 0.9836 | 0.9715 | 15.9393 |
| 0.0568 | 9.29 | 360 | 0.0817 | 5.9460 | 0.9837 | 0.9721 | 15.9523 |
| 0.057 | 9.55 | 370 | 0.0900 | 7.9726 | 0.9812 | 0.9730 | 15.1229 |
| 0.052 | 9.81 | 380 | 0.0836 | 6.5444 | 0.9822 | 0.9712 | 15.6388 |
| 0.0568 | 10.06 | 390 | 0.0810 | 6.0359 | 0.9836 | 0.9714 | 15.9796 |
| 0.0481 | 10.32 | 400 | 0.0784 | 6.2110 | 0.9835 | 0.9724 | 15.9235 |
| 0.0513 | 10.58 | 410 | 0.0803 | 6.0990 | 0.9835 | 0.9715 | 15.9502 |
| 0.0595 | 10.84 | 420 | 0.0798 | 6.0829 | 0.9835 | 0.9720 | 15.9052 |
| 0.047 | 11.1 | 430 | 0.0779 | 5.8847 | 0.9838 | 0.9725 | 16.0043 |
| 0.0406 | 11.35 | 440 | 0.0802 | 5.7944 | 0.9838 | 0.9713 | 16.0620 |
| 0.0493 | 11.61 | 450 | 0.0781 | 6.0947 | 0.9836 | 0.9731 | 15.9033 |
| 0.064 | 11.87 | 460 | 0.0769 | 6.1257 | 0.9837 | 0.9736 | 15.9080 |
| 0.0622 | 12.13 | 470 | 0.0765 | 6.2964 | 0.9835 | 0.9739 | 15.8188 |
| 0.0457 | 12.39 | 480 | 0.0773 | 5.9826 | 0.9838 | 0.9728 | 16.0119 |
| 0.0447 | 12.65 | 490 | 0.0761 | 5.7977 | 0.9841 | 0.9728 | 16.0900 |
| 0.0515 | 12.9 | 500 | 0.0750 | 5.8569 | 0.9840 | 0.9729 | 16.0633 |
| 0.0357 | 13.16 | 510 | 0.0796 | 5.7990 | 0.9837 | 0.9713 | 16.0818 |
| 0.0503 | 13.42 | 520 | 0.0749 | 5.8323 | 0.9841 | 0.9736 | 16.0510 |
| 0.0508 | 13.68 | 530 | 0.0746 | 6.0361 | 0.9839 | 0.9735 | 15.9709 |
| 0.0533 | 13.94 | 540 | 0.0768 | 6.1596 | 0.9836 | 0.9740 | 15.9193 |
| 0.0503 | 14.19 | 550 | 0.0739 | 5.5900 | 0.9843 | 0.9723 | 16.1883 |
| 0.0515 | 14.45 | 560 | 0.0740 | 5.4660 | 0.9845 | 0.9727 | 16.2745 |
| 0.0502 | 14.71 | 570 | 0.0740 | 5.5895 | 0.9844 | 0.9736 | 16.2054 |
| 0.0401 | 14.97 | 580 | 0.0741 | 5.9694 | 0.9840 | 0.9747 | 15.9603 |
| 0.0495 | 15.23 | 590 | 0.0745 | 5.9136 | 0.9841 | 0.9740 | 16.0458 |
| 0.0413 | 15.48 | 600 | 0.0743 | 5.9548 | 0.9840 | 0.9740 | 16.0119 |
### Framework versions
- transformers 4.31.0
- torch 2.0.0
- datasets 2.13.1
- tokenizers 0.13.3
|
lianlian123/ppo-LunarLander-v2
|
lianlian123
| 2023-07-20T07:50:25Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-20T07:50:04Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 250.36 +/- 13.46
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's files):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (filename assumed)
checkpoint = load_from_hub(repo_id="lianlian123/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
4bit/Redmond-Puffin-13B
|
4bit
| 2023-07-20T07:47:15Z | 5 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-2",
"sft",
"eng",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-20T07:36:10Z |
---
language:
- eng
tags:
- llama-2
- sft
license: mit
---

## **Redmond-Puffin-13b-V1.3**
**The first commercially available language model released by Nous Research!**
Redmond-Puffin-13B is one of the world's first Llama-2 based, fine-tuned language models, leveraging a hand-curated set of 3K high-quality examples, many of which take full advantage of the 4096-token context length of Llama 2. This model was fine-tuned by Nous Research, with LDJ leading the training and dataset curation, along with significant dataset formation contributions by J-Supha.
Special thank you to Redmond AI for sponsoring the compute.
Special thank you to Emozilla for assisting with training experimentations and many issues encountered during training.
Notable mentions for assisting in some of the training issues goes to: Caseus and Teknium.
## Model Training
Redmond-Puffin-13B-V1.3 is a new model trained for multiple epochs on a dataset of 3,000 carefully curated GPT-4 examples, most of which are long context conversations between a real human and GPT-4.
Additional data came from carefully curated subsections of datasets such as CamelAI's Physics, Chemistry, Biology and Math.
## Prompt Format
The model follows the Vicuna ShareGPT prompt format:
```
### human:
### gpt:
```
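A minimal generation sketch using the prompt format above (illustrative only; it assumes this repository holds standard Hugging Face weights, and a 13B checkpoint needs substantial GPU memory):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("4bit/Redmond-Puffin-13B")
model = AutoModelForCausalLM.from_pretrained("4bit/Redmond-Puffin-13B", device_map="auto")

prompt = "### human: Explain what a puffin is in one sentence.\n### gpt:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```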
## Improvements over previous version:
The original Puffin model was loved by many; however, it was quickly discovered to have dataset errors in a significant number of the conversations.
The Puffin-V1.3 dataset solves this issue, and the resulting fixed model has now fully finished training!
## Notable Features:
- The first Llama-2 based fine-tuned model released by Nous Research.
- Ability to recall information up to 2023 without internet access (ChatGPT's cut-off date is in 2021)
- Pretrained on 2 trillion tokens of text. (This is double the amount of most open LLMs)
- Pretrained with a context length of 4096 tokens, and fine-tuned on a significant amount of multi-turn conversations reaching that full token limit.
- The first commercially available language model released by Nous Research.
## Current Limitations
Some token mismatch problems and formatting issues have been identified; these may well affect the current output quality.
We plan to have these solved in an updated Puffin model in the very near future, please stay tuned!
## Future Plans
This is a relatively early build amongst the grand plans for the future of Puffin!
Current limitations: some token mismatch problems have been identified; these may affect the current output quality. We plan to have this solved in Puffin V2, along with other improvements.
## How you can help!
In the near future we plan on leveraging the help of domain specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from our training curations.
If you have at least a bachelor's degree in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact ldj on discord!
## Benchmarks coming soon
benchmarks coming soon!
|
J3/speecht5_finetuned_voxpopuli_it
|
J3
| 2023-07-20T07:46:18Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"dataset:voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-07-19T10:00:22Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_it
results: []
pipeline_tag: text-to-speech
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_it
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4968
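A rough synthesis sketch (illustrative only; SpeechT5 needs a 512-dimensional speaker embedding, and the random one below is only a placeholder; a real speaker x-vector gives a far better voice):
```python
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("J3/speecht5_finetuned_voxpopuli_it")
model = SpeechT5ForTextToSpeech.from_pretrained("J3/speecht5_finetuned_voxpopuli_it")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Buongiorno, come stai?", return_tensors="pt")
speaker_embeddings = torch.randn(1, 512)  # placeholder; use a real speaker embedding in practice
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```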
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6707 | 1.0 | 108 | 0.5946 |
| 0.6625 | 2.0 | 217 | 0.6029 |
| 0.708 | 3.0 | 325 | 0.6118 |
| 0.6588 | 4.0 | 434 | 0.7109 |
| 0.6614 | 5.0 | 542 | 0.5799 |
| 0.6375 | 6.0 | 651 | 0.5714 |
| 0.619 | 7.0 | 759 | 0.5699 |
| 0.5806 | 8.0 | 868 | 0.5538 |
| 0.6024 | 9.0 | 976 | 0.5856 |
| 0.5728 | 10.0 | 1085 | 0.5446 |
| 0.5624 | 11.0 | 1193 | 0.5508 |
| 0.5711 | 12.0 | 1302 | 0.5376 |
| 0.5438 | 13.0 | 1410 | 0.5300 |
| 0.5308 | 14.0 | 1519 | 0.5206 |
| 0.5536 | 15.0 | 1627 | 0.5359 |
| 0.5285 | 16.0 | 1736 | 0.5264 |
| 0.525 | 17.0 | 1844 | 0.5108 |
| 0.4961 | 18.0 | 1953 | 0.5116 |
| 0.5111 | 19.0 | 2061 | 0.5042 |
| 0.4869 | 20.0 | 2170 | 0.5050 |
| 0.4864 | 21.0 | 2278 | 0.4994 |
| 0.4794 | 22.0 | 2387 | 0.5039 |
| 0.4787 | 23.0 | 2495 | 0.4975 |
| 0.4692 | 24.0 | 2604 | 0.4961 |
| 0.4656 | 24.88 | 2700 | 0.4968 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
EhsanElahi/pokemon-lora
|
EhsanElahi
| 2023-07-20T07:44:45Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-20T06:43:23Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - EhsanElahi/pokemon-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images below.




|
VFiona/opus-mt-it-en-finetuned_5000-it-to-en
|
VFiona
| 2023-07-20T07:42:25Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-19T22:30:32Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: opus-mt-it-en-finetuned_5000-it-to-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-it-en-finetuned_5000-it-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-it-en](https://huggingface.co/Helsinki-NLP/opus-mt-it-en) on an unknown dataset.
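A minimal translation sketch with the `transformers` pipeline (illustrative only):
```python
from transformers import pipeline

translator = pipeline("translation", model="VFiona/opus-mt-it-en-finetuned_5000-it-to-en")
print(translator("Il gatto dorme sul divano.")[0]["translation_text"])
```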
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 282 | 0.5054 | 71.2415 | 22.26 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.12.1
- Datasets 2.13.1
- Tokenizers 0.11.0
|
yancongwen/chatglm2-6b-pt-16-1e-2-20230720-2
|
yancongwen
| 2023-07-20T07:37:45Z | 0 | 0 | null |
[
"tensorboard",
"region:us"
] | null | 2023-07-20T07:35:33Z |
# ChatGLM2-6B Fine-tuned Model
Reference: [ChatGLM2-6B-PT](https://github.com/THUDM/ChatGLM2-6B/tree/main/ptuning)
## Parameters
```sh
PRE_SEQ_LEN=16
LR=1e-2
NUM_GPUS=1
torchrun --standalone --nnodes=1 --nproc-per-node=$NUM_GPUS main.py \
--do_train \
--train_file train_data/train_100k.json \
--validation_file train_data/dev_1k.json \
--preprocessing_num_workers 10 \
--prompt_column question \
--response_column answer \
--overwrite_cache \
--model_name_or_path THUDM/chatglm2-6b \
--output_dir output/chatglm2-6b-pt-$PRE_SEQ_LEN-$LR-20230720-2 \
--overwrite_output_dir \
--max_source_length 256 \
--max_target_length 128 \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 2 \
--gradient_accumulation_steps 8 \
--predict_with_generate \
--max_steps 1000 \
--logging_steps 10 \
--save_steps 1000 \
--learning_rate $LR \
--pre_seq_len $PRE_SEQ_LEN \
--quantization_bit 4
```
## train metrics
```
epoch = 0.2
train_loss = 0.1803
train_runtime = 1:44:48.92
train_samples = 78577
train_samples_per_second = 2.544
train_steps_per_second = 0.159
```
---
license: unlicense
---
|