| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-02 12:32:32 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 534 classes |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-02 12:31:20 |
| card | string | length 11 – 1.01M |
DeepakGautam/Gautam
|
DeepakGautam
| 2023-07-22T07:51:38Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-07-22T07:51:38Z |
---
license: bigscience-openrail-m
---
|
vineetsharma/a2c-AntBulletEnv-v0
|
vineetsharma
| 2023-07-22T07:47:29Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-22T07:46:54Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1480.41 +/- 128.67
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
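A minimal, hedged sketch of loading this checkpoint is shown below; the zip filename follows the usual huggingface_sb3 naming convention and is an assumption, as is the `pybullet_envs` import that registers AntBulletEnv-v0.
```python
# Hedged sketch -- not the author's code. The checkpoint filename is assumed
# from the usual huggingface_sb3 naming convention.
import gym
import pybullet_envs  # noqa: F401  (assumed: registers AntBulletEnv-v0 with gym)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="vineetsharma/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",  # assumed filename
)
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
obs = env.reset()  # classic gym API (pre-0.26)
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```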
|
ailabturkiye/kratosGOWRAGNAROK
|
ailabturkiye
| 2023-07-22T07:40:18Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-22T07:36:28Z |
[](discord.gg/ailab)


# Kratos (God Of War Ragnarök) - RVC V2 500 Epoch
**This is a voice model built from Kratos's voice recordings in Ragnarök, the latest game in the series.
It was trained as RVC V2 | 10-minute dataset | 500 epochs.**
_The dataset and training were done by me._
__Sharing the model outside the [Ai Lab Discord](discord.gg/ailab) server without permission is strictly forbidden; the model is under the OpenRAIL license.__
## Credits
**If you share a cover made with this model on any platform, you are kindly asked to give credits.**
- Discord: hydragee
- YouTube: CoverLai (https://www.youtube.com/@coverlai)

[](discord.gg/ailab)

|
josephrich/my_awesome_model_721_2
|
josephrich
| 2023-07-22T07:27:32Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-22T04:00:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: my_awesome_model_721_2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93228
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model_721_2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5942
- Accuracy: 0.9323
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
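As a reference, here is a hedged sketch of how the hyperparameters above map onto `transformers.TrainingArguments`; the `output_dir` and evaluation strategy are assumptions, and this is not the author's training script.
```python
# Hedged sketch, not the original training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="my_awesome_model_721_2",   # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    num_train_epochs=4,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",           # assumed: the card reports one eval per epoch
)
```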
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4604 | 1.0 | 12500 | 0.6389 | 0.8761 |
| 0.2442 | 2.0 | 25000 | 0.4233 | 0.9264 |
| 0.1495 | 3.0 | 37500 | 0.4755 | 0.9303 |
| 0.0516 | 4.0 | 50000 | 0.5942 | 0.9323 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Ryukijano/Mujoco_rl_halfcheetah_Decision_Trasformer
|
Ryukijano
| 2023-07-22T07:27:15Z | 62 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"Generated_From_Trainer",
"reinforcement-learning",
"Mujoco",
"dataset:decision_transformer_gym_replay",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2023-07-19T15:13:27Z |
---
base_model: ''
tags:
- Generated_From_Trainer
- reinforcement-learning
- Mujoco
datasets:
- decision_transformer_gym_replay
model-index:
- name: Mujoco_rl_halfcheetah_Decision_Trasformer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mujoco_rl_halfcheetah_Decision_Trasformer
This model is a fine-tuned version of [](https://huggingface.co/) on the decision_transformer_gym_replay dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 250
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
gokuls/hbertv2-wt-frz-48-emotion
|
gokuls
| 2023-07-22T07:19:29Z | 48 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48_frz",
"base_model:finetune:gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48_frz",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-22T07:09:53Z |
---
base_model: gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48_frz
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: hbertv2-wt-frz-48-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.927
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hbertv2-wt-frz-48-emotion
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48_frz](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48_frz) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2271
- Accuracy: 0.927
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6559 | 1.0 | 250 | 0.2760 | 0.9015 |
| 0.2565 | 2.0 | 500 | 0.2507 | 0.9035 |
| 0.1862 | 3.0 | 750 | 0.2221 | 0.919 |
| 0.1455 | 4.0 | 1000 | 0.2271 | 0.927 |
| 0.1218 | 5.0 | 1250 | 0.2059 | 0.9235 |
| 0.1003 | 6.0 | 1500 | 0.2576 | 0.9215 |
| 0.0812 | 7.0 | 1750 | 0.2603 | 0.92 |
| 0.0676 | 8.0 | 2000 | 0.2949 | 0.9215 |
| 0.0515 | 9.0 | 2250 | 0.3322 | 0.919 |
| 0.0411 | 10.0 | 2500 | 0.3375 | 0.924 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.1
- Tokenizers 0.13.3
|
qwerty8409/Medical_dataset
|
qwerty8409
| 2023-07-22T07:04:27Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-22T07:00:20Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
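For reference, a hedged sketch of recreating the quantization config above with `transformers` and attaching this adapter with `peft`; the base model is not named in this card, so `<base-model-id>` is a placeholder.
```python
# Hedged sketch -- the base model id is a placeholder, not stated in the card.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)

base = AutoModelForCausalLM.from_pretrained(
    "<base-model-id>",                 # placeholder: base model not given in the card
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "qwerty8409/Medical_dataset")
```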
### Framework versions
- PEFT 0.5.0.dev0
|
EXrRor3/ppo-Huggy
|
EXrRor3
| 2023-07-22T06:47:29Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-22T06:47:19Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: EXrRor3/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
AndrewL088/SpaceInvadersNoFrameskip-v4_20230722
|
AndrewL088
| 2023-07-22T06:31:47Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-22T06:31:18Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 29.00 +/- 64.30
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AndrewL088 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AndrewL088 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga AndrewL088
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.025),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 10000000.0),
('learning_starts', 100000),
('n_timesteps', 110000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
NasimB/guten-rarity-neg-log-rarity-end-19p1k
|
NasimB
| 2023-07-22T06:23:58Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-22T04:00:43Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: guten-rarity-neg-log-rarity-end-19p1k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# guten-rarity-neg-log-rarity-end-19p1k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1078
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3472 | 0.29 | 500 | 5.3359 |
| 5.0242 | 0.59 | 1000 | 4.9159 |
| 4.7018 | 0.88 | 1500 | 4.6868 |
| 4.4382 | 1.17 | 2000 | 4.5458 |
| 4.2888 | 1.47 | 2500 | 4.4338 |
| 4.1941 | 1.76 | 3000 | 4.3265 |
| 4.0652 | 2.05 | 3500 | 4.2631 |
| 3.8933 | 2.34 | 4000 | 4.2118 |
| 3.8664 | 2.64 | 4500 | 4.1589 |
| 3.8275 | 2.93 | 5000 | 4.1077 |
| 3.6287 | 3.22 | 5500 | 4.1006 |
| 3.5847 | 3.52 | 6000 | 4.0707 |
| 3.5697 | 3.81 | 6500 | 4.0389 |
| 3.4614 | 4.1 | 7000 | 4.0369 |
| 3.3179 | 4.4 | 7500 | 4.0323 |
| 3.307 | 4.69 | 8000 | 4.0175 |
| 3.3039 | 4.98 | 8500 | 4.0058 |
| 3.1413 | 5.28 | 9000 | 4.0177 |
| 3.132 | 5.57 | 9500 | 4.0172 |
| 3.1349 | 5.86 | 10000 | 4.0158 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
AndrewL088/SpaceInvadersNoFrameskip-v4_0722
|
AndrewL088
| 2023-07-22T06:18:45Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-22T06:09:45Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 257.00 +/- 38.81
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AndrewL088 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AndrewL088 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga AndrewL088
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.025),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 10000000.0),
('learning_starts', 100000),
('n_timesteps', 70000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
ebilal79/watsonx-falcon-7b
|
ebilal79
| 2023-07-22T06:04:59Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-19T19:42:58Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
4bit/Nous-Hermes-Llama2-13b-GPTQ
|
4bit
| 2023-07-22T05:32:28Z | 11 | 3 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-2",
"self-instruct",
"distillation",
"synthetic instruction",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-22T05:26:48Z |
---
license: llama2
language:
- en
tags:
- llama-2
- self-instruct
- distillation
- synthetic instruction
---
# Model Card: Nous-Hermes-Llama2-13b
Compute provided by our project sponsor Redmond AI, thank you! Follow RedmondAI on Twitter @RedmondAI.
## Model Description
Nous-Hermes-Llama2-13b is a state-of-the-art language model fine-tuned on over 300,000 instructions. This model was fine-tuned by Nous Research, with Teknium and Emozilla leading the fine tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors.
This Hermes model uses the exact same dataset as Hermes on Llama-1, to keep the old and new Hermes consistent for anyone who wanted a model as similar to the old one as possible, just more capable.
This model stands out for its long responses, lower hallucination rate, and absence of OpenAI censorship mechanisms. The fine-tuning was performed with a 4096 sequence length on an 8x A100 80GB DGX machine.
## Example Outputs:




## Model Training
The model was trained almost entirely on synthetic GPT-4 outputs. Curating high quality GPT-4 datasets enables incredibly high quality in knowledge, task completion, and style.
This includes data from diverse sources such as GPTeacher, the general, roleplay v1&2, and code instruct datasets, Nous Instruct & PDACTL (unpublished), and several others, detailed further below.
## Collaborators
The model fine-tuning and the datasets were a collaboration of efforts and resources between Teknium, Karan4D, Emozilla, Huemin Art, and Redmond AI.
Special mention goes to @winglian for assisting in some of the training issues.
Huge shoutout and acknowledgement is deserved for all the dataset creators who generously share their datasets openly.
Among the contributors of datasets:
- GPTeacher was made available by Teknium
- Wizard LM by nlpxucan
- Nous Research Instruct Dataset was provided by Karan4D and HueminArt.
- GPT4-LLM and Unnatural Instructions were provided by Microsoft
- Airoboros dataset by jondurbin
- Camel-AI's domain expert datasets are from Camel-AI
- CodeAlpaca dataset by Sahil 2801.
If anyone was left out, please open a thread in the community tab.
## Prompt Format
The model follows the Alpaca prompt format:
```
### Instruction:
<prompt>
### Response:
<leave a newline blank for model to respond>
```
or
```
### Instruction:
<prompt>
### Input:
<additional context>
### Response:
<leave a newline blank for model to respond>
```
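A small sketch of building prompts in this format (plain Python mirroring the templates above; it is not part of the original card, and loading the GPTQ weights themselves typically requires auto-gptq/optimum, which is omitted here):
```python
# Sketch: format a request in the Alpaca style shown above.
def build_prompt(instruction, context=None):
    if context is None:
        return f"### Instruction:\n{instruction}\n### Response:\n"
    return f"### Instruction:\n{instruction}\n### Input:\n{context}\n### Response:\n"

print(build_prompt("Summarize the plot of Hamlet in two sentences."))
```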
## Benchmark Results
AGI-Eval
```
| Task |Version| Metric |Value | |Stderr|
|agieval_aqua_rat | 0|acc |0.2362|± |0.0267|
| | |acc_norm|0.2480|± |0.0272|
|agieval_logiqa_en | 0|acc |0.3425|± |0.0186|
| | |acc_norm|0.3472|± |0.0187|
|agieval_lsat_ar | 0|acc |0.2522|± |0.0287|
| | |acc_norm|0.2087|± |0.0269|
|agieval_lsat_lr | 0|acc |0.3510|± |0.0212|
| | |acc_norm|0.3627|± |0.0213|
|agieval_lsat_rc | 0|acc |0.4647|± |0.0305|
| | |acc_norm|0.4424|± |0.0303|
|agieval_sat_en | 0|acc |0.6602|± |0.0331|
| | |acc_norm|0.6165|± |0.0340|
|agieval_sat_en_without_passage| 0|acc |0.4320|± |0.0346|
| | |acc_norm|0.4272|± |0.0345|
|agieval_sat_math | 0|acc |0.2909|± |0.0307|
| | |acc_norm|0.2727|± |0.0301|
```
GPT-4All Benchmark Set
```
| Task |Version| Metric |Value | |Stderr|
|arc_challenge| 0|acc |0.5102|± |0.0146|
| | |acc_norm|0.5213|± |0.0146|
|arc_easy | 0|acc |0.7959|± |0.0083|
| | |acc_norm|0.7567|± |0.0088|
|boolq | 1|acc |0.8394|± |0.0064|
|hellaswag | 0|acc |0.6164|± |0.0049|
| | |acc_norm|0.8009|± |0.0040|
|openbookqa | 0|acc |0.3580|± |0.0215|
| | |acc_norm|0.4620|± |0.0223|
|piqa | 0|acc |0.7992|± |0.0093|
| | |acc_norm|0.8069|± |0.0092|
|winogrande | 0|acc |0.7127|± |0.0127|
```
BigBench Reasoning Test
```
| Task |Version| Metric |Value | |Stderr|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.5526|± |0.0362|
|bigbench_date_understanding | 0|multiple_choice_grade|0.7344|± |0.0230|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.2636|± |0.0275|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.0195|± |0.0073|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.2760|± |0.0200|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2100|± |0.0154|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4400|± |0.0287|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.2440|± |0.0192|
|bigbench_navigate | 0|multiple_choice_grade|0.4950|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.5570|± |0.0111|
|bigbench_ruin_names | 0|multiple_choice_grade|0.3728|± |0.0229|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.1854|± |0.0123|
|bigbench_snarks | 0|multiple_choice_grade|0.6298|± |0.0360|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.6156|± |0.0155|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.3140|± |0.0147|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2032|± |0.0114|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1406|± |0.0083|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4400|± |0.0287|
```
These are the highest benchmarks Hermes has seen on every metric, achieving the following average scores:
- GPT4All benchmark average is now 70.0 - from 68.8 in Hermes-Llama1
- 0.3657 on BigBench, up from 0.328 on hermes-llama1
- 0.372 on AGIEval, up from 0.354 on Hermes-llama1
These benchmarks currently have us at #1 on ARC-c, ARC-e, Hellaswag, and OpenBookQA, and 2nd place on Winogrande, comparing to GPT4all's benchmarking list, supplanting Hermes 1 for the new top position.
## Resources for Applied Use Cases:
For an example of a back and forth chatbot using huggingface transformers and discord, check out: https://github.com/teknium1/alpaca-discord
For an example of a roleplaying discord chatbot, check out this: https://github.com/teknium1/alpaca-roleplay-discordbot
## Future Plans
We plan to continue to iterate on both more high quality data, and new data filtering techniques to eliminate lower quality data going forward.
## Model Usage
The model is available for download on Hugging Face. It is suitable for a wide range of language tasks, from generating creative text to understanding and following complex instructions.
|
Vsukiyaki/Shungiku-Mix
|
Vsukiyaki
| 2023-07-22T05:11:31Z | 0 | 23 | null |
[
"stable-diffusion",
"text-to-image",
"ja",
"en",
"license:other",
"region:us"
] |
text-to-image
| 2023-06-03T16:25:04Z |
---
license: other
language:
- ja
- en
tags:
- stable-diffusion
- text-to-image
---
# Shungiku-Mix
<img src="https://huggingface.co/Vsukiyaki/Shungiku-Mix/resolve/main/imgs/header.jpg" style="width: 640px;">
## 概要 / Overview
- **Shungiku-Mix**は、アニメ風の画風に特化したマージモデルです。 / **Shungiku-Mix** is a merge model that specializes in an anime-like painting style.
- 幻想的な空や光の表現が得意です。 / This model excels in the expression of fantastic skies and light.
- VAEはお好きなものをお使いください。VAEが無くても鮮やかな色合いで出力されますが、clearvaeを使用することを推奨しています。 / You can use whatever VAE you like. The output will be vividly tinted without VAE, but we recommend using clearvae.
- clearvaeを含んだモデルも提供しています。 / I also offer models that include clearvae.
=> **Shungiku-Mix_v1-better-vae-fp16.safetensors**
<hr>
## 更新 / UPDATE NOTE
- 2023/07/22:ライセンスを変更しました。 / License changed.
<hr>
## 推奨設定 / Recommended Settings
<pre style="margin: 1em 0; padding: 1em; border-radius: 5px; background: #25292f; color: #fff; white-space: pre-line;">
Steps: 20 ~ 60
Sampler: DPM++ SDE Karras
CFG scale: 7.5
Denoising strength: 0.55
Hires steps: 20
Hires upscaler: Latent
Clip skip: 2
Negative embeddings: EasyNegative, verybadimagenegative
</pre>
**Negative prompt**:
<pre style="margin: 1em 0; padding: 1em; border-radius: 5px; background: #25292f; color: #fff; white-space: pre-line;">
(easynegative:1.0),(worst quality,low quality:1.2),(bad anatomy:1.4),(realistic:1.1),nose,lips,adult,fat,sad,(inaccurate limb:1.2),extra digit,fewer digits,six fingers,(monochrome:0.95),verybadimagenegative_v1.3,
</pre>
<hr>
## 例 / Examples
<img src="https://huggingface.co/Vsukiyaki/Shungiku-Mix/resolve/main/imgs/sample1.png" style="width: 512px;">
<pre style="margin: 1em 0; padding: 1em; border-radius: 5px; background: #25292f; color: #fff; white-space: pre-line;">
((solo:1.2)),cute girl,(harbor),(blue sky:1.2),looking at viewer,dramatic,fantastic atmosphere,magnificent view,cumulonimbus,(cowboy shot:1.2),scenery,Mediterranean Buildings,silver hair
Negative prompt:
(easynegative:1.0),(worst quality,low quality:1.2),(bad anatomy:1.4),(realistic:1.1),nose,lips,adult,fat,sad,(inaccurate limb:1.2),extra digit,fewer digits,six fingers,(monochrome:0.95),verybadimagenegative_v1.3,
Steps: 60
Sampler: DPM++ SDE Karras
CFG scale: 7.5
Seed: 1896063174
Size: 768x768
Denoising strength: 0.58
Clip skip: 2
Hires upscale: 2
Hires steps: 20
Hires upscaler: Latent
</pre>
<br>
<img src="https://huggingface.co/Vsukiyaki/Shungiku-Mix/resolve/main/imgs/sample2.png" style="width: 640px;">
<pre style="margin: 1em 0; padding: 1em; border-radius: 5px; background: #25292f; color: #fff; white-space: pre-line;">
((solo:1.2)),cute little (1girl:1.3) walking,landscape,beautiful sky,village,head tilt,bloom effect,fantastic atmosphere,magnificent view,cowboy shot,pale-blonde hair,blue eyes,long twintails,blush,light smile,white dress,wind,(petals)
Negative prompt:
(easynegative:1.0),(worst quality,low quality:1.2),(bad anatomy:1.4),(realistic:1.1),nose,lips,adult,fat,sad,(inaccurate limb:1.2),extra digit,fewer digits,six fingers,(monochrome:0.95),verybadimagenegative_v1.3,
Steps: 60
Sampler: DPM++ SDE Karras
CFG scale: 7.5
Seed: 400031884
Size: 848x600
Denoising strength: 0.55
Clip skip: 2
Hires upscale: 2.5
Hires steps: 20
Hires upscaler: Latent
</pre>
<hr>
## ライセンス / License
<div class="px-2">
<table class="table-fixed border mt-0 text-xs">
<tbody>
<tr>
<td class="px-4 text-base text-bold" colspan="2">
<a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">
修正 CreativeML OpenRAIL-M ライセンス / Modified CreativeML OpenRAIL-M license
</a>
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span style="font-size: 18px;">
🚫
</span>
</td>
<td>
このモデルのクレジットを入れずに使用する<br>
Use the model without crediting the creator
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span style="font-size: 18px;">
🚫
</span>
</td>
<td>
このモデルで生成した画像を商用利用する<br>
Sell images they generate
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span style="font-size: 18px;">
🚫
</span>
</td>
<td>
このモデルを商用の画像生成サービスで利用する</br>
Run on services that generate images for money
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span style="font-size: 18px;">
✅
</span>
</td>
<td>
このモデルを使用したマージモデルを共有する<br>
Share merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span style="font-size: 18px;">
🚫
</span>
</td>
<td>
このモデル、またはこのモデルをマージしたモデルを販売する</br>
Sell this model or merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span style="font-size: 18px;">
🚫
</span>
</td>
<td>
このモデルをマージしたモデルに異なる権限を設定する</br>
Have different permissions when sharing merges
</td>
</tr>
</tbody>
</table>
</div>
<hr>
Twitter: [@Vsukiyaki_AIArt](https://twitter.com/Vsukiyaki_AIArt)
<a
href="https://twitter.com/Vsukiyaki_AIArt"
class="mb-2 inline-block rounded px-6 py-2.5 text-white shadow-md"
style="background-color: #1da1f2">
<svg xmlns="http://www.w3.org/2000/svg" class="h-3.5 w-3.5" fill="currentColor" viewBox="0 0 24 24">
<path d="M24 4.557c-.883.392-1.832.656-2.828.775 1.017-.609 1.798-1.574 2.165-2.724-.951.564-2.005.974-3.127 1.195-.897-.957-2.178-1.555-3.594-1.555-3.179 0-5.515 2.966-4.797 6.045-4.091-.205-7.719-2.165-10.148-5.144-1.29 2.213-.669 5.108 1.523 6.574-.806-.026-1.566-.247-2.229-.616-.054 2.281 1.581 4.415 3.949 4.89-.693.188-1.452.232-2.224.084.626 1.956 2.444 3.379 4.6 3.419-2.07 1.623-4.678 2.348-7.29 2.04 2.179 1.397 4.768 2.212 7.548 2.212 9.142 0 14.307-7.721 13.995-14.646.962-.695 1.797-1.562 2.457-2.549z" />
</svg>
</a>
|
nvidia/GCViT
|
nvidia
| 2023-07-22T04:47:32Z | 0 | 5 | null |
[
"arxiv:2206.09959",
"region:us"
] | null | 2023-07-21T19:28:35Z |
# Global Context Vision Transformer (GC ViT)
This model contains the official PyTorch implementation of **Global Context Vision Transformers** (ICML2023) \
\
[Global Context Vision
Transformers](https://arxiv.org/pdf/2206.09959.pdf) \
[Ali Hatamizadeh](https://research.nvidia.com/person/ali-hatamizadeh),
[Hongxu (Danny) Yin](https://scholar.princeton.edu/hongxu),
[Greg Heinrich](https://developer.nvidia.com/blog/author/gheinrich/),
[Jan Kautz](https://jankautz.com/),
and [Pavlo Molchanov](https://www.pmolchanov.com/).
GC ViT achieves state-of-the-art results across image classification, object detection and semantic segmentation tasks. On ImageNet-1K dataset for classification, GC ViT variants with `51M`, `90M` and `201M` parameters achieve `84.3`, `85.9` and `85.7` Top-1 accuracy, respectively, surpassing comparably-sized prior art such as CNN-based ConvNeXt and ViT-based Swin Transformer.
<p align="center">
<img src="https://github.com/NVlabs/GCVit/assets/26806394/d1820d6d-3aef-470e-a1d3-af370f1c1f77" width=63% height=63%
class="center">
</p>
The architecture of GC ViT is demonstrated in the following:

## Introduction
**GC ViT** leverages global context self-attention modules, joint with local self-attention, to effectively yet efficiently model both long and short-range spatial interactions, without the need for expensive
operations such as computing attention masks or shifting local windows.
<p align="center">
<img src="https://github.com/NVlabs/GCVit/assets/26806394/da64f22a-e7af-4577-8884-b08ba4e24e49" width=72% height=72%
class="center">
</p>
## ImageNet Benchmarks
**ImageNet-1K Pretrained Models**
<table>
<tr>
<th>Model Variant</th>
<th>Acc@1</th>
<th>#Params(M)</th>
<th>FLOPs(G)</th>
<th>Download</th>
</tr>
<tr>
<td>GC ViT-XXT</td>
<th>79.9</th>
<td>12</td>
<td>2.1</td>
<td><a href="https://drive.google.com/uc?export=download&id=1apSIWQCa5VhWLJws8ugMTuyKzyayw4Eh">model</a></td>
</tr>
<tr>
<td>GC ViT-XT</td>
<th>82.0</th>
<td>20</td>
<td>2.6</td>
<td><a href="https://drive.google.com/uc?export=download&id=1OgSbX73AXmE0beStoJf2Jtda1yin9t9m">model</a></td>
</tr>
<tr>
<td>GC ViT-T</td>
<th>83.5</th>
<td>28</td>
<td>4.7</td>
<td><a href="https://drive.google.com/uc?export=download&id=11M6AsxKLhfOpD12Nm_c7lOvIIAn9cljy">model</a></td>
</tr>
<tr>
<td>GC ViT-T2</td>
<th>83.7</th>
<td>34</td>
<td>5.5</td>
<td><a href="https://drive.google.com/uc?export=download&id=1cTD8VemWFiwAx0FB9cRMT-P4vRuylvmQ">model</a></td>
</tr>
<tr>
<td>GC ViT-S</td>
<th>84.3</th>
<td>51</td>
<td>8.5</td>
<td><a href="https://drive.google.com/uc?export=download&id=1Nn6ABKmYjylyWC0I41Q3oExrn4fTzO9Y">model</a></td>
</tr>
<tr>
<td>GC ViT-S2</td>
<th>84.8</th>
<td>68</td>
<td>10.7</td>
<td><a href="https://drive.google.com/uc?export=download&id=1E5TtYpTqILznjBLLBTlO5CGq343RbEan">model</a></td>
</tr>
<tr>
<td>GC ViT-B</td>
<th>85.0</th>
<td>90</td>
<td>14.8</td>
<td><a href="https://drive.google.com/uc?export=download&id=1PF7qfxKLcv_ASOMetDP75n8lC50gaqyH">model</a></td>
</tr>
<tr>
<td>GC ViT-L</td>
<th>85.7</th>
<td>201</td>
<td>32.6</td>
<td><a href="https://drive.google.com/uc?export=download&id=1Lkz1nWKTwCCUR7yQJM6zu_xwN1TR0mxS">model</a></td>
</tr>
</table>
**ImageNet-21K Pretrained Models**
<table>
<tr>
<th>Model Variant</th>
<th>Resolution</th>
<th>Acc@1</th>
<th>#Params(M)</th>
<th>FLOPs(G)</th>
<th>Download</th>
</tr>
<tr>
<td>GC ViT-L</td>
<td>224 x 224</td>
<th>86.6</th>
<td>201</td>
<td>32.6</td>
<td><a href="https://drive.google.com/uc?export=download&id=1maGDr6mJkLyRTUkspMzCgSlhDzNRFGEf">model</a></td>
</tr>
<tr>
<td>GC ViT-L</td>
<td>384 x 384</td>
<th>87.4</th>
<td>201</td>
<td>120.4</td>
<td><a href="https://drive.google.com/uc?export=download&id=1P-IEhvQbJ3FjnunVkM1Z9dEpKw-tsuWv">model</a></td>
</tr>
</table>
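As a convenience, here is a hedged sketch of running a GC ViT variant through `timm`; the model name `"gcvit_tiny"` is an assumption, and the timm weights may differ from the Google Drive checkpoints linked above.
```python
# Hedged sketch -- "gcvit_tiny" is an assumed timm model name; weights may differ
# from the official checkpoints linked in the tables above.
import timm
import torch

model = timm.create_model("gcvit_tiny", pretrained=True)
model.eval()

x = torch.randn(1, 3, 224, 224)   # ImageNet-sized input
with torch.no_grad():
    logits = model(x)             # (1, 1000) class logits
print(logits.shape)
```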
## Citation
Please consider citing GC ViT paper if it is useful for your work:
```
@inproceedings{hatamizadeh2023global,
title={Global context vision transformers},
author={Hatamizadeh, Ali and Yin, Hongxu and Heinrich, Greg and Kautz, Jan and Molchanov, Pavlo},
booktitle={International Conference on Machine Learning},
pages={12633--12646},
year={2023},
organization={PMLR}
}
```
## Licenses
Copyright © 2023, NVIDIA Corporation. All rights reserved.
This work is made available under the Nvidia Source Code License-NC. Click [here](LICENSE) to view a copy of this license.
The pre-trained models are shared under [CC-BY-NC-SA-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/). If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.
For license information regarding timm, please refer to its [repository](https://github.com/rwightman/pytorch-image-models).
For license information regarding the ImageNet dataset, please refer to the ImageNet [official website](https://www.image-net.org/).
|
EXrRor3/ppo-LunarLander-v2
|
EXrRor3
| 2023-07-22T03:46:54Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-22T03:40:27Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 251.86 +/- 17.63
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
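A hedged sketch of loading this checkpoint and reproducing the reported mean reward; the zip filename follows the usual huggingface_sb3 convention and is an assumption.
```python
# Hedged sketch -- not the author's code; the checkpoint filename is assumed.
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="EXrRor3/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```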
|
ittailup/lallama-13b-chat
|
ittailup
| 2023-07-22T03:20:47Z | 1 | 0 |
peft
|
[
"peft",
"pytorch",
"llama",
"region:us"
] | null | 2023-07-21T19:10:35Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
Jonathaniu/vicuna-breast-cancer-7b-mix-data-epoch-2_5
|
Jonathaniu
| 2023-07-22T03:09:35Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-22T03:09:21Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
### Framework versions
- PEFT 0.4.0.dev0
|
UNIST-Eunchan/Pegasus-x-base-govreport-12288-1024-numepoch-10
|
UNIST-Eunchan
| 2023-07-22T03:05:31Z | 93 | 0 |
transformers
|
[
"transformers",
"pytorch",
"pegasus_x",
"text2text-generation",
"generated_from_trainer",
"dataset:govreport-summarization",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-20T02:20:44Z |
---
tags:
- generated_from_trainer
datasets:
- govreport-summarization
model-index:
- name: Pegasus-x-base-govreport-12288-1024-numepoch-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Pegasus-x-base-govreport-12288-1024-numepoch-10
This model is a fine-tuned version of [google/pegasus-x-base](https://huggingface.co/google/pegasus-x-base) on the govreport-summarization dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6234
## Model description
More information needed
## Evaluation Score
**ROUGE**:
- rouge1: 0.5012
- rouge2: 0.2205
- rougeL: 0.2552
- rougeLsum: 0.2554

**BERTScore**:
- f1: 0.859
- precision: 0.8619
- recall: 0.8563
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1149 | 0.37 | 100 | 1.9237 |
| 1.9545 | 0.73 | 200 | 1.8380 |
| 1.8835 | 1.1 | 300 | 1.7574 |
| 1.862 | 1.46 | 400 | 1.7305 |
| 1.8536 | 1.83 | 500 | 1.7100 |
| 1.8062 | 2.19 | 600 | 1.6944 |
| 1.8161 | 2.56 | 700 | 1.6882 |
| 1.7611 | 2.92 | 800 | 1.6803 |
| 1.7878 | 3.29 | 900 | 1.6671 |
| 1.7299 | 3.65 | 1000 | 1.6599 |
| 1.7636 | 4.02 | 1100 | 1.6558 |
| 1.7262 | 4.38 | 1200 | 1.6547 |
| 1.715 | 4.75 | 1300 | 1.6437 |
| 1.7178 | 5.12 | 1400 | 1.6445 |
| 1.7163 | 5.48 | 1500 | 1.6386 |
| 1.7367 | 5.85 | 1600 | 1.6364 |
| 1.7114 | 6.21 | 1700 | 1.6365 |
| 1.6452 | 6.58 | 1800 | 1.6309 |
| 1.7251 | 6.94 | 1900 | 1.6301 |
| 1.6726 | 7.31 | 2000 | 1.6305 |
| 1.7104 | 7.67 | 2100 | 1.6285 |
| 1.6739 | 8.04 | 2200 | 1.6252 |
| 1.7082 | 8.4 | 2300 | 1.6246 |
| 1.6888 | 8.77 | 2400 | 1.6244 |
| 1.6609 | 9.13 | 2500 | 1.6256 |
| 1.6707 | 9.5 | 2600 | 1.6241 |
| 1.669 | 9.86 | 2700 | 1.6234 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Falcinspire/Reinforce-MLP-v1-Cartpole-v1
|
Falcinspire
| 2023-07-22T02:22:54Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-22T00:42:23Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-MLP-v1-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 491.10 +/- 26.70
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
LarryAIDraw/YelanV4-09
|
LarryAIDraw
| 2023-07-22T02:16:49Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-22T02:15:12Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/61470/yelan-lora-genshin-impact
|
Blackroot/Llama-2-13B-Storyweaver-LORA-Deprecated
|
Blackroot
| 2023-07-22T02:12:40Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-21T23:33:22Z |
Join the Coffee & AI Discord for AI Stuff and things!
[](https://discord.gg/2JhHVh7CGu)
## **Probably bad model**
Test results are showing that although this model does produce long outputs, the quality has generally degraded. I'm leaving this up for the time being but I would recommend one of my other loras instead. As an aside, this model is really, really funny, try it if you want a laugh.
## Get the base model here:
Base Model Quantizations by The Bloke here:
https://huggingface.co/TheBloke/Llama-2-13B-GGML
https://huggingface.co/TheBloke/Llama-2-13B-GPTQ
## Prompting for this model:
A brief warning: no alignment or attempts to sanitize or otherwise filter the dataset or the outputs have been done. This is a completely raw model and may behave unpredictably or create scenarios that are unpleasant.
The base Llama2 is a text completion model. That means it will continue writing from the story in whatever manner you direct it. This is not an instruct-tuned model, so don't try to give it instructions.
Correct prompting:
```
He grabbed his sword, his gleaming armor, he readied himself. The battle was coming, he walked into the dawn light and
```
Incorrect prompting:
```
Write a story about...
```
This model has been trained to generate as much text as possible, so you should use some mechanism to force it to stop after N tokens. For example, in one prompt I average about 7000 output tokens, so make sure you have a max sequence length set or it'll just keep going forever.
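A hedged sketch of capping output length with `transformers` and `peft` follows; how this LoRA attaches to a Llama-2-13B base is an assumption about the repo layout, not something the card spells out.
```python
# Hedged sketch -- base model choice and adapter loading are assumptions.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-13b-hf"        # assumed full-precision base
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, "Blackroot/Llama-2-13B-Storyweaver-LORA-Deprecated")

prompt = ("He grabbed his sword, his gleaming armor, he readied himself. "
          "The battle was coming, he walked into the dawn light and")
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```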
## Training procedure
22,000 steps @ 7 epochs. Final training loss of 1.8. Total training time was 30 hours on a single 3090 TI.
PEFT:
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
LarryAIDraw/ots-14
|
LarryAIDraw
| 2023-07-22T02:11:14Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-22T02:09:45Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/21833/girls-frontline-ots-14
|
LarryAIDraw/niloutest
|
LarryAIDraw
| 2023-07-22T01:50:09Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-22T01:49:28Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/101969/nilou-genshin-impact
|
LarryAIDraw/Genshin_Impact-Nilou_V2_nilou__genshin_impact_-000012
|
LarryAIDraw
| 2023-07-22T01:49:53Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-22T01:46:49Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/5367/tsumasaky-nilou-genshin-impact-lora
|
minhanhtuan/llama2-qlora-finetunined-french
|
minhanhtuan
| 2023-07-22T01:25:58Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-22T01:25:51Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
jonmay/ppo-LunarLander-v2
|
jonmay
| 2023-07-22T01:24:58Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-22T01:24:35Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.45 +/- 20.35
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Mel-Iza0/RedPajama-ZeroShot-20K-new_prompt_classe_bias
|
Mel-Iza0
| 2023-07-22T01:12:05Z | 2 | 0 |
peft
|
[
"peft",
"pytorch",
"gpt_neox",
"region:us"
] | null | 2023-07-21T21:11:26Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
Bainbridge/vilt-b32-mlm-mami
|
Bainbridge
| 2023-07-22T01:03:05Z | 38 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vilt",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-07-22T00:22:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: vilt-b32-mlm-mami
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vilt-b32-mlm-mami
This model is a fine-tuned version of [dandelin/vilt-b32-mlm](https://huggingface.co/dandelin/vilt-b32-mlm) on the MAMI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5796
- F1: 0.7899
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6898 | 0.48 | 100 | 0.6631 | 0.6076 |
| 0.5824 | 0.96 | 200 | 0.5055 | 0.7545 |
| 0.4306 | 1.44 | 300 | 0.4586 | 0.7861 |
| 0.4207 | 1.91 | 400 | 0.4439 | 0.7927 |
| 0.3055 | 2.39 | 500 | 0.4912 | 0.7949 |
| 0.2582 | 2.87 | 600 | 0.4921 | 0.7873 |
| 0.1875 | 3.35 | 700 | 0.5796 | 0.7899 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
NasimB/cbt-norm-rarity-neg-log-rarity
|
NasimB
| 2023-07-22T00:46:15Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-21T22:20:45Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: cbt-norm-rarity-neg-log-rarity
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cbt-norm-rarity-neg-log-rarity
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1046
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3494 | 0.29 | 500 | 5.3385 |
| 5.0263 | 0.58 | 1000 | 4.9258 |
| 4.7061 | 0.87 | 1500 | 4.6888 |
| 4.4468 | 1.16 | 2000 | 4.5463 |
| 4.2956 | 1.46 | 2500 | 4.4260 |
| 4.1947 | 1.75 | 3000 | 4.3302 |
| 4.0756 | 2.04 | 3500 | 4.2520 |
| 3.8921 | 2.33 | 4000 | 4.2106 |
| 3.8655 | 2.62 | 4500 | 4.1572 |
| 3.8345 | 2.91 | 5000 | 4.1064 |
| 3.6432 | 3.2 | 5500 | 4.1013 |
| 3.581 | 3.49 | 6000 | 4.0704 |
| 3.569 | 3.79 | 6500 | 4.0362 |
| 3.4919 | 4.08 | 7000 | 4.0338 |
| 3.3226 | 4.37 | 7500 | 4.0289 |
| 3.3106 | 4.66 | 8000 | 4.0166 |
| 3.297 | 4.95 | 8500 | 4.0046 |
| 3.1568 | 5.24 | 9000 | 4.0152 |
| 3.1358 | 5.53 | 9500 | 4.0145 |
| 3.1313 | 5.82 | 10000 | 4.0135 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
SaffalPoosh/falcon-7b-autogptq-custom
|
SaffalPoosh
| 2023-07-22T00:17:42Z | 6 | 0 |
transformers
|
[
"transformers",
"RefinedWebModel",
"text-generation",
"custom_code",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-07-22T00:02:03Z |
AutoGPTQ quantization logs:
```
>>> model.quantize(examples)
2023-07-21 16:54:47 INFO [auto_gptq.modeling._base] Start quantizing layer 1/32
2023-07-21 16:54:47 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 1/32...
2023-07-21 16:54:48 INFO [auto_gptq.quantization.gptq] duration: 0.8171646595001221
2023-07-21 16:54:48 INFO [auto_gptq.quantization.gptq] avg loss: 3.7546463012695312
2023-07-21 16:54:48 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 1/32...
2023-07-21 16:54:49 INFO [auto_gptq.quantization.gptq] duration: 0.8055715560913086
2023-07-21 16:54:49 INFO [auto_gptq.quantization.gptq] avg loss: 0.2164316177368164
2023-07-21 16:54:49 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 1/32...
2023-07-21 16:54:50 INFO [auto_gptq.quantization.gptq] duration: 0.8417620658874512
2023-07-21 16:54:50 INFO [auto_gptq.quantization.gptq] avg loss: 16.070518493652344
2023-07-21 16:54:50 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 1/32...
2023-07-21 16:54:53 INFO [auto_gptq.quantization.gptq] duration: 3.90244197845459
2023-07-21 16:54:53 INFO [auto_gptq.quantization.gptq] avg loss: 0.5676069855690002
2023-07-21 16:54:53 INFO [auto_gptq.modeling._base] Start quantizing layer 2/32
2023-07-21 16:54:54 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 2/32...
2023-07-21 16:54:54 INFO [auto_gptq.quantization.gptq] duration: 0.8373761177062988
2023-07-21 16:54:54 INFO [auto_gptq.quantization.gptq] avg loss: 4.066518783569336
2023-07-21 16:54:54 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 2/32...
2023-07-21 16:54:55 INFO [auto_gptq.quantization.gptq] duration: 0.8285796642303467
2023-07-21 16:54:55 INFO [auto_gptq.quantization.gptq] avg loss: 0.2558078169822693
2023-07-21 16:55:25 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 2/32...
2023-07-21 16:55:25 INFO [auto_gptq.quantization.gptq] duration: 0.8859198093414307
2023-07-21 16:55:25 INFO [auto_gptq.quantization.gptq] avg loss: 16.571727752685547
2023-07-21 16:55:26 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 2/32...
2023-07-21 16:55:29 INFO [auto_gptq.quantization.gptq] duration: 3.86962890625
2023-07-21 16:55:29 INFO [auto_gptq.quantization.gptq] avg loss: 0.34605544805526733
2023-07-21 16:55:30 INFO [auto_gptq.modeling._base] Start quantizing layer 3/32
2023-07-21 16:55:30 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 3/32...
2023-07-21 16:55:30 INFO [auto_gptq.quantization.gptq] duration: 0.8118832111358643
2023-07-21 16:55:30 INFO [auto_gptq.quantization.gptq] avg loss: 5.4185943603515625
2023-07-21 16:55:30 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 3/32...
2023-07-21 16:55:31 INFO [auto_gptq.quantization.gptq] duration: 0.8096959590911865
2023-07-21 16:55:31 INFO [auto_gptq.quantization.gptq] avg loss: 0.22585009038448334
2023-07-21 16:55:31 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 3/32...
2023-07-21 16:55:32 INFO [auto_gptq.quantization.gptq] duration: 0.8473665714263916
2023-07-21 16:55:32 INFO [auto_gptq.quantization.gptq] avg loss: 27.050426483154297
2023-07-21 16:55:32 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 3/32...
2023-07-21 16:55:36 INFO [auto_gptq.quantization.gptq] duration: 3.8430850505828857
2023-07-21 16:55:36 INFO [auto_gptq.quantization.gptq] avg loss: 0.6839203834533691
2023-07-21 16:55:36 INFO [auto_gptq.modeling._base] Start quantizing layer 4/32
2023-07-21 16:55:36 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 4/32...
2023-07-21 16:55:37 INFO [auto_gptq.quantization.gptq] duration: 0.7948899269104004
2023-07-21 16:55:37 INFO [auto_gptq.quantization.gptq] avg loss: 6.523550987243652
2023-07-21 16:55:37 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 4/32...
2023-07-21 16:55:38 INFO [auto_gptq.quantization.gptq] duration: 0.7990512847900391
2023-07-21 16:55:38 INFO [auto_gptq.quantization.gptq] avg loss: 0.21638213098049164
2023-07-21 16:55:38 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 4/32...
2023-07-21 16:55:39 INFO [auto_gptq.quantization.gptq] duration: 0.8403058052062988
2023-07-21 16:55:39 INFO [auto_gptq.quantization.gptq] avg loss: 36.57025146484375
2023-07-21 16:55:39 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 4/32...
2023-07-21 16:55:43 INFO [auto_gptq.quantization.gptq] duration: 3.856529474258423
2023-07-21 16:55:43 INFO [auto_gptq.quantization.gptq] avg loss: 9.424503326416016
2023-07-21 16:55:43 INFO [auto_gptq.modeling._base] Start quantizing layer 5/32
2023-07-21 16:55:43 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 5/32...
2023-07-21 16:55:44 INFO [auto_gptq.quantization.gptq] duration: 0.7926647663116455
2023-07-21 16:55:44 INFO [auto_gptq.quantization.gptq] avg loss: 6.277029037475586
2023-07-21 16:55:44 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 5/32...
2023-07-21 16:55:44 INFO [auto_gptq.quantization.gptq] duration: 0.7987856864929199
2023-07-21 16:55:44 INFO [auto_gptq.quantization.gptq] avg loss: 0.1324760764837265
2023-07-21 16:55:44 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 5/32...
2023-07-21 16:55:45 INFO [auto_gptq.quantization.gptq] duration: 0.8394050598144531
2023-07-21 16:55:45 INFO [auto_gptq.quantization.gptq] avg loss: 36.26388168334961
2023-07-21 16:55:45 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 5/32...
2023-07-21 16:55:49 INFO [auto_gptq.quantization.gptq] duration: 3.849104166030884
2023-07-21 16:55:49 INFO [auto_gptq.quantization.gptq] avg loss: 2.376619338989258
2023-07-21 16:55:49 INFO [auto_gptq.modeling._base] Start quantizing layer 6/32
2023-07-21 16:55:49 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 6/32...
2023-07-21 16:55:50 INFO [auto_gptq.quantization.gptq] duration: 0.7964150905609131
2023-07-21 16:55:50 INFO [auto_gptq.quantization.gptq] avg loss: 8.479263305664062
2023-07-21 16:55:50 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 6/32...
2023-07-21 16:55:51 INFO [auto_gptq.quantization.gptq] duration: 0.7951827049255371
2023-07-21 16:55:51 INFO [auto_gptq.quantization.gptq] avg loss: 0.14170163869857788
2023-07-21 16:56:21 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 6/32...
2023-07-21 16:56:22 INFO [auto_gptq.quantization.gptq] duration: 0.8720560073852539
2023-07-21 16:56:22 INFO [auto_gptq.quantization.gptq] avg loss: 42.756919860839844
2023-07-21 16:56:22 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 6/32...
2023-07-21 16:56:25 INFO [auto_gptq.quantization.gptq] duration: 3.8685550689697266
2023-07-21 16:56:25 INFO [auto_gptq.quantization.gptq] avg loss: 0.8117952346801758
2023-07-21 16:56:26 INFO [auto_gptq.modeling._base] Start quantizing layer 7/32
2023-07-21 16:56:26 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 7/32...
2023-07-21 16:56:26 INFO [auto_gptq.quantization.gptq] duration: 0.7976808547973633
2023-07-21 16:56:26 INFO [auto_gptq.quantization.gptq] avg loss: 7.019394397735596
2023-07-21 16:56:26 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 7/32...
2023-07-21 16:56:27 INFO [auto_gptq.quantization.gptq] duration: 0.803225040435791
2023-07-21 16:56:27 INFO [auto_gptq.quantization.gptq] avg loss: 0.21443051099777222
2023-07-21 16:56:27 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 7/32...
2023-07-21 16:56:28 INFO [auto_gptq.quantization.gptq] duration: 0.8342931270599365
2023-07-21 16:56:28 INFO [auto_gptq.quantization.gptq] avg loss: 39.33504104614258
2023-07-21 16:56:28 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 7/32...
2023-07-21 16:56:32 INFO [auto_gptq.quantization.gptq] duration: 3.8671581745147705
2023-07-21 16:56:32 INFO [auto_gptq.quantization.gptq] avg loss: 0.9214520454406738
2023-07-21 16:56:32 INFO [auto_gptq.modeling._base] Start quantizing layer 8/32
2023-07-21 16:56:32 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 8/32...
2023-07-21 16:56:33 INFO [auto_gptq.quantization.gptq] duration: 0.7989864349365234
2023-07-21 16:56:33 INFO [auto_gptq.quantization.gptq] avg loss: 7.602280616760254
2023-07-21 16:56:33 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 8/32...
2023-07-21 16:56:34 INFO [auto_gptq.quantization.gptq] duration: 0.8112733364105225
2023-07-21 16:56:34 INFO [auto_gptq.quantization.gptq] avg loss: 0.11391645669937134
2023-07-21 16:56:34 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 8/32...
2023-07-21 16:56:35 INFO [auto_gptq.quantization.gptq] duration: 0.8388988971710205
2023-07-21 16:56:35 INFO [auto_gptq.quantization.gptq] avg loss: 34.74957275390625
2023-07-21 16:56:35 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 8/32...
2023-07-21 16:56:39 INFO [auto_gptq.quantization.gptq] duration: 3.8561182022094727
2023-07-21 16:56:39 INFO [auto_gptq.quantization.gptq] avg loss: 1.1289432048797607
2023-07-21 16:56:39 INFO [auto_gptq.modeling._base] Start quantizing layer 9/32
2023-07-21 16:56:39 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 9/32...
2023-07-21 16:56:40 INFO [auto_gptq.quantization.gptq] duration: 0.7969386577606201
2023-07-21 16:56:40 INFO [auto_gptq.quantization.gptq] avg loss: 6.806826591491699
2023-07-21 16:56:40 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 9/32...
2023-07-21 16:56:41 INFO [auto_gptq.quantization.gptq] duration: 0.7953078746795654
2023-07-21 16:56:41 INFO [auto_gptq.quantization.gptq] avg loss: 0.2318212240934372
2023-07-21 16:56:41 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 9/32...
2023-07-21 16:56:41 INFO [auto_gptq.quantization.gptq] duration: 0.8294937610626221
2023-07-21 16:56:41 INFO [auto_gptq.quantization.gptq] avg loss: 35.324676513671875
2023-07-21 16:56:41 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 9/32...
2023-07-21 16:56:45 INFO [auto_gptq.quantization.gptq] duration: 3.8630259037017822
2023-07-21 16:56:45 INFO [auto_gptq.quantization.gptq] avg loss: 1.4622347354888916
2023-07-21 16:56:45 INFO [auto_gptq.modeling._base] Start quantizing layer 10/32
2023-07-21 16:56:46 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 10/32...
2023-07-21 16:56:46 INFO [auto_gptq.quantization.gptq] duration: 0.8029708862304688
2023-07-21 16:56:46 INFO [auto_gptq.quantization.gptq] avg loss: 6.056252956390381
2023-07-21 16:56:46 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 10/32...
2023-07-21 16:56:47 INFO [auto_gptq.quantization.gptq] duration: 0.8028323650360107
2023-07-21 16:56:47 INFO [auto_gptq.quantization.gptq] avg loss: 1.092197060585022
2023-07-21 16:56:47 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 10/32...
2023-07-21 16:56:48 INFO [auto_gptq.quantization.gptq] duration: 0.8335537910461426
2023-07-21 16:56:48 INFO [auto_gptq.quantization.gptq] avg loss: 30.71457290649414
2023-07-21 16:56:48 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 10/32...
2023-07-21 16:56:52 INFO [auto_gptq.quantization.gptq] duration: 3.8703184127807617
2023-07-21 16:56:52 INFO [auto_gptq.quantization.gptq] avg loss: 1.2208330631256104
2023-07-21 16:56:52 INFO [auto_gptq.modeling._base] Start quantizing layer 11/32
2023-07-21 16:56:52 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 11/32...
2023-07-21 16:56:53 INFO [auto_gptq.quantization.gptq] duration: 0.814570426940918
2023-07-21 16:56:53 INFO [auto_gptq.quantization.gptq] avg loss: 6.145627021789551
2023-07-21 16:56:53 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 11/32...
2023-07-21 16:56:54 INFO [auto_gptq.quantization.gptq] duration: 0.8268287181854248
2023-07-21 16:56:54 INFO [auto_gptq.quantization.gptq] avg loss: 0.24324843287467957
2023-07-21 16:56:54 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 11/32...
2023-07-21 16:56:55 INFO [auto_gptq.quantization.gptq] duration: 0.8359119892120361
2023-07-21 16:56:55 INFO [auto_gptq.quantization.gptq] avg loss: 30.847026824951172
2023-07-21 16:56:55 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 11/32...
2023-07-21 16:56:58 INFO [auto_gptq.quantization.gptq] duration: 3.831470489501953
2023-07-21 16:56:58 INFO [auto_gptq.quantization.gptq] avg loss: 1.3961751461029053
2023-07-21 16:57:26 INFO [auto_gptq.modeling._base] Start quantizing layer 12/32
2023-07-21 16:57:26 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 12/32...
2023-07-21 16:57:27 INFO [auto_gptq.quantization.gptq] duration: 0.7964096069335938
2023-07-21 16:57:27 INFO [auto_gptq.quantization.gptq] avg loss: 6.053964614868164
2023-07-21 16:57:27 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 12/32...
2023-07-21 16:57:28 INFO [auto_gptq.quantization.gptq] duration: 0.799691915512085
2023-07-21 16:57:28 INFO [auto_gptq.quantization.gptq] avg loss: 0.2671034336090088
2023-07-21 16:57:28 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 12/32...
2023-07-21 16:57:29 INFO [auto_gptq.quantization.gptq] duration: 0.8342888355255127
2023-07-21 16:57:29 INFO [auto_gptq.quantization.gptq] avg loss: 29.729408264160156
2023-07-21 16:57:29 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 12/32...
2023-07-21 16:57:33 INFO [auto_gptq.quantization.gptq] duration: 3.8561949729919434
2023-07-21 16:57:33 INFO [auto_gptq.quantization.gptq] avg loss: 1.495622158050537
2023-07-21 16:57:33 INFO [auto_gptq.modeling._base] Start quantizing layer 13/32
2023-07-21 16:57:33 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 13/32...
2023-07-21 16:57:34 INFO [auto_gptq.quantization.gptq] duration: 0.7953364849090576
2023-07-21 16:57:34 INFO [auto_gptq.quantization.gptq] avg loss: 5.408998489379883
2023-07-21 16:57:34 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 13/32...
2023-07-21 16:57:34 INFO [auto_gptq.quantization.gptq] duration: 0.7990250587463379
2023-07-21 16:57:34 INFO [auto_gptq.quantization.gptq] avg loss: 0.5066410303115845
2023-07-21 16:57:34 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 13/32...
2023-07-21 16:57:35 INFO [auto_gptq.quantization.gptq] duration: 0.8330769538879395
2023-07-21 16:57:35 INFO [auto_gptq.quantization.gptq] avg loss: 27.790515899658203
2023-07-21 16:57:35 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 13/32...
2023-07-21 16:57:39 INFO [auto_gptq.quantization.gptq] duration: 3.861015558242798
2023-07-21 16:57:39 INFO [auto_gptq.quantization.gptq] avg loss: 1.3019633293151855
2023-07-21 16:57:39 INFO [auto_gptq.modeling._base] Start quantizing layer 14/32
2023-07-21 16:57:39 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 14/32...
2023-07-21 16:57:40 INFO [auto_gptq.quantization.gptq] duration: 0.8011329174041748
2023-07-21 16:57:40 INFO [auto_gptq.quantization.gptq] avg loss: 6.027165412902832
2023-07-21 16:57:40 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 14/32...
2023-07-21 16:57:41 INFO [auto_gptq.quantization.gptq] duration: 0.7977538108825684
2023-07-21 16:57:41 INFO [auto_gptq.quantization.gptq] avg loss: 0.28969255089759827
2023-07-21 16:57:41 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 14/32...
2023-07-21 16:57:42 INFO [auto_gptq.quantization.gptq] duration: 0.8305981159210205
2023-07-21 16:57:42 INFO [auto_gptq.quantization.gptq] avg loss: 28.996891021728516
2023-07-21 16:57:42 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 14/32...
2023-07-21 16:57:46 INFO [auto_gptq.quantization.gptq] duration: 3.874257802963257
2023-07-21 16:57:46 INFO [auto_gptq.quantization.gptq] avg loss: 1.6258554458618164
2023-07-21 16:57:46 INFO [auto_gptq.modeling._base] Start quantizing layer 15/32
2023-07-21 16:57:46 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 15/32...
2023-07-21 16:57:47 INFO [auto_gptq.quantization.gptq] duration: 0.7982082366943359
2023-07-21 16:57:47 INFO [auto_gptq.quantization.gptq] avg loss: 5.937747001647949
2023-07-21 16:57:47 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 15/32...
2023-07-21 16:57:48 INFO [auto_gptq.quantization.gptq] duration: 0.8004462718963623
2023-07-21 16:57:48 INFO [auto_gptq.quantization.gptq] avg loss: 0.3830963373184204
2023-07-21 16:57:48 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 15/32...
2023-07-21 16:57:48 INFO [auto_gptq.quantization.gptq] duration: 0.8347995281219482
2023-07-21 16:57:48 INFO [auto_gptq.quantization.gptq] avg loss: 30.339778900146484
2023-07-21 16:57:48 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 15/32...
2023-07-21 16:57:52 INFO [auto_gptq.quantization.gptq] duration: 3.8794045448303223
2023-07-21 16:57:52 INFO [auto_gptq.quantization.gptq] avg loss: 1.618453025817871
2023-07-21 16:57:52 INFO [auto_gptq.modeling._base] Start quantizing layer 16/32
2023-07-21 16:57:53 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 16/32...
2023-07-21 16:57:53 INFO [auto_gptq.quantization.gptq] duration: 0.802685022354126
2023-07-21 16:57:53 INFO [auto_gptq.quantization.gptq] avg loss: 5.992144584655762
2023-07-21 16:57:53 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 16/32...
2023-07-21 16:57:54 INFO [auto_gptq.quantization.gptq] duration: 0.8001143932342529
2023-07-21 16:57:54 INFO [auto_gptq.quantization.gptq] avg loss: 0.3652211129665375
2023-07-21 16:57:54 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 16/32...
2023-07-21 16:57:55 INFO [auto_gptq.quantization.gptq] duration: 0.843254566192627
2023-07-21 16:57:55 INFO [auto_gptq.quantization.gptq] avg loss: 29.359691619873047
2023-07-21 16:57:55 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 16/32...
2023-07-21 16:57:59 INFO [auto_gptq.quantization.gptq] duration: 3.8731229305267334
2023-07-21 16:57:59 INFO [auto_gptq.quantization.gptq] avg loss: 1.8666539192199707
2023-07-21 16:57:59 INFO [auto_gptq.modeling._base] Start quantizing layer 17/32
2023-07-21 16:57:59 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 17/32...
2023-07-21 16:58:00 INFO [auto_gptq.quantization.gptq] duration: 0.79642653465271
2023-07-21 16:58:00 INFO [auto_gptq.quantization.gptq] avg loss: 6.463171482086182
2023-07-21 16:58:00 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 17/32...
2023-07-21 16:58:01 INFO [auto_gptq.quantization.gptq] duration: 0.8078687191009521
2023-07-21 16:58:01 INFO [auto_gptq.quantization.gptq] avg loss: 0.24540238082408905
2023-07-21 16:58:01 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 17/32...
2023-07-21 16:58:02 INFO [auto_gptq.quantization.gptq] duration: 0.829270601272583
2023-07-21 16:58:02 INFO [auto_gptq.quantization.gptq] avg loss: 30.825468063354492
2023-07-21 16:58:02 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 17/32...
2023-07-21 16:58:05 INFO [auto_gptq.quantization.gptq] duration: 3.855315923690796
2023-07-21 16:58:05 INFO [auto_gptq.quantization.gptq] avg loss: 1.957414150238037
2023-07-21 16:58:06 INFO [auto_gptq.modeling._base] Start quantizing layer 18/32
2023-07-21 16:58:06 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 18/32...
2023-07-21 16:58:07 INFO [auto_gptq.quantization.gptq] duration: 0.8099801540374756
2023-07-21 16:58:07 INFO [auto_gptq.quantization.gptq] avg loss: 6.510787010192871
2023-07-21 16:58:07 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 18/32...
2023-07-21 16:58:07 INFO [auto_gptq.quantization.gptq] duration: 0.8008811473846436
2023-07-21 16:58:07 INFO [auto_gptq.quantization.gptq] avg loss: 0.3201957941055298
2023-07-21 16:58:07 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 18/32...
2023-07-21 16:58:08 INFO [auto_gptq.quantization.gptq] duration: 0.8365602493286133
2023-07-21 16:58:08 INFO [auto_gptq.quantization.gptq] avg loss: 31.26324462890625
2023-07-21 16:58:08 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 18/32...
2023-07-21 16:58:12 INFO [auto_gptq.quantization.gptq] duration: 3.8536572456359863
2023-07-21 16:58:12 INFO [auto_gptq.quantization.gptq] avg loss: 2.0843615531921387
2023-07-21 16:58:12 INFO [auto_gptq.modeling._base] Start quantizing layer 19/32
2023-07-21 16:58:12 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 19/32...
2023-07-21 16:58:13 INFO [auto_gptq.quantization.gptq] duration: 0.7980837821960449
2023-07-21 16:58:13 INFO [auto_gptq.quantization.gptq] avg loss: 6.686659812927246
2023-07-21 16:58:13 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 19/32...
2023-07-21 16:58:14 INFO [auto_gptq.quantization.gptq] duration: 0.7951889038085938
2023-07-21 16:58:14 INFO [auto_gptq.quantization.gptq] avg loss: 0.3053201138973236
2023-07-21 16:58:14 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 19/32...
2023-07-21 16:58:15 INFO [auto_gptq.quantization.gptq] duration: 0.8315420150756836
2023-07-21 16:58:15 INFO [auto_gptq.quantization.gptq] avg loss: 31.97283935546875
2023-07-21 16:58:15 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 19/32...
2023-07-21 16:58:19 INFO [auto_gptq.quantization.gptq] duration: 3.868382215499878
2023-07-21 16:58:19 INFO [auto_gptq.quantization.gptq] avg loss: 2.382962703704834
2023-07-21 16:58:19 INFO [auto_gptq.modeling._base] Start quantizing layer 20/32
2023-07-21 16:58:19 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 20/32...
2023-07-21 16:58:20 INFO [auto_gptq.quantization.gptq] duration: 0.797062873840332
2023-07-21 16:58:20 INFO [auto_gptq.quantization.gptq] avg loss: 6.721341133117676
2023-07-21 16:58:20 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 20/32...
2023-07-21 16:58:20 INFO [auto_gptq.quantization.gptq] duration: 0.806023120880127
2023-07-21 16:58:20 INFO [auto_gptq.quantization.gptq] avg loss: 0.5635891556739807
2023-07-21 16:58:20 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 20/32...
2023-07-21 16:58:21 INFO [auto_gptq.quantization.gptq] duration: 0.841651201248169
2023-07-21 16:58:21 INFO [auto_gptq.quantization.gptq] avg loss: 33.371273040771484
2023-07-21 16:58:21 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 20/32...
2023-07-21 16:58:25 INFO [auto_gptq.quantization.gptq] duration: 3.8724091053009033
2023-07-21 16:58:25 INFO [auto_gptq.quantization.gptq] avg loss: 2.5540378093719482
2023-07-21 16:58:25 INFO [auto_gptq.modeling._base] Start quantizing layer 21/32
2023-07-21 16:58:25 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 21/32...
2023-07-21 16:58:26 INFO [auto_gptq.quantization.gptq] duration: 0.8135292530059814
2023-07-21 16:58:26 INFO [auto_gptq.quantization.gptq] avg loss: 7.383816242218018
2023-07-21 16:58:26 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 21/32...
2023-07-21 16:58:27 INFO [auto_gptq.quantization.gptq] duration: 0.8004577159881592
2023-07-21 16:58:27 INFO [auto_gptq.quantization.gptq] avg loss: 0.2988166809082031
2023-07-21 16:58:27 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 21/32...
2023-07-21 16:58:28 INFO [auto_gptq.quantization.gptq] duration: 0.8346357345581055
2023-07-21 16:58:28 INFO [auto_gptq.quantization.gptq] avg loss: 34.46820068359375
2023-07-21 16:58:28 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 21/32...
2023-07-21 16:58:32 INFO [auto_gptq.quantization.gptq] duration: 3.8698837757110596
2023-07-21 16:58:32 INFO [auto_gptq.quantization.gptq] avg loss: 2.538421154022217
2023-07-21 16:58:32 INFO [auto_gptq.modeling._base] Start quantizing layer 22/32
2023-07-21 16:58:32 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 22/32...
2023-07-21 16:58:33 INFO [auto_gptq.quantization.gptq] duration: 0.7975707054138184
2023-07-21 16:58:33 INFO [auto_gptq.quantization.gptq] avg loss: 7.026803970336914
2023-07-21 16:58:33 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 22/32...
2023-07-21 16:58:34 INFO [auto_gptq.quantization.gptq] duration: 0.7988865375518799
2023-07-21 16:58:34 INFO [auto_gptq.quantization.gptq] avg loss: 0.5440877079963684
2023-07-21 16:58:34 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 22/32...
2023-07-21 16:58:35 INFO [auto_gptq.quantization.gptq] duration: 0.847116231918335
2023-07-21 16:58:35 INFO [auto_gptq.quantization.gptq] avg loss: 33.8814582824707
2023-07-21 16:58:35 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 22/32...
2023-07-21 16:58:38 INFO [auto_gptq.quantization.gptq] duration: 3.851823091506958
2023-07-21 16:58:38 INFO [auto_gptq.quantization.gptq] avg loss: 2.612248182296753
2023-07-21 16:58:39 INFO [auto_gptq.modeling._base] Start quantizing layer 23/32
2023-07-21 16:58:39 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 23/32...
2023-07-21 16:58:39 INFO [auto_gptq.quantization.gptq] duration: 0.7956225872039795
2023-07-21 16:58:39 INFO [auto_gptq.quantization.gptq] avg loss: 7.3217453956604
2023-07-21 16:58:39 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 23/32...
2023-07-21 16:58:40 INFO [auto_gptq.quantization.gptq] duration: 0.8155944347381592
2023-07-21 16:58:40 INFO [auto_gptq.quantization.gptq] avg loss: 0.3978100121021271
2023-07-21 16:58:40 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 23/32...
2023-07-21 16:58:41 INFO [auto_gptq.quantization.gptq] duration: 0.8472270965576172
2023-07-21 16:58:41 INFO [auto_gptq.quantization.gptq] avg loss: 33.613494873046875
2023-07-21 16:58:41 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 23/32...
2023-07-21 16:58:45 INFO [auto_gptq.quantization.gptq] duration: 3.877121925354004
2023-07-21 16:58:45 INFO [auto_gptq.quantization.gptq] avg loss: 3.0234107971191406
2023-07-21 16:58:45 INFO [auto_gptq.modeling._base] Start quantizing layer 24/32
2023-07-21 16:58:45 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 24/32...
2023-07-21 16:58:46 INFO [auto_gptq.quantization.gptq] duration: 0.8478920459747314
2023-07-21 16:58:46 INFO [auto_gptq.quantization.gptq] avg loss: 7.490325927734375
2023-07-21 16:58:46 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 24/32...
2023-07-21 16:58:47 INFO [auto_gptq.quantization.gptq] duration: 0.8023700714111328
2023-07-21 16:58:47 INFO [auto_gptq.quantization.gptq] avg loss: 0.6462091207504272
2023-07-21 16:58:47 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 24/32...
2023-07-21 16:58:48 INFO [auto_gptq.quantization.gptq] duration: 0.8271210193634033
2023-07-21 16:58:48 INFO [auto_gptq.quantization.gptq] avg loss: 35.156715393066406
2023-07-21 16:58:48 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 24/32...
2023-07-21 16:58:52 INFO [auto_gptq.quantization.gptq] duration: 3.8558664321899414
2023-07-21 16:58:52 INFO [auto_gptq.quantization.gptq] avg loss: 3.4150047302246094
2023-07-21 16:58:52 INFO [auto_gptq.modeling._base] Start quantizing layer 25/32
2023-07-21 16:58:52 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 25/32...
2023-07-21 16:58:53 INFO [auto_gptq.quantization.gptq] duration: 0.804887056350708
2023-07-21 16:58:53 INFO [auto_gptq.quantization.gptq] avg loss: 7.842990875244141
2023-07-21 16:58:53 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 25/32...
2023-07-21 16:58:53 INFO [auto_gptq.quantization.gptq] duration: 0.7986440658569336
2023-07-21 16:58:53 INFO [auto_gptq.quantization.gptq] avg loss: 0.5917433500289917
2023-07-21 16:58:53 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 25/32...
2023-07-21 16:58:54 INFO [auto_gptq.quantization.gptq] duration: 0.8256046772003174
2023-07-21 16:58:54 INFO [auto_gptq.quantization.gptq] avg loss: 36.299095153808594
2023-07-21 16:58:54 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 25/32...
2023-07-21 16:58:58 INFO [auto_gptq.quantization.gptq] duration: 3.86680006980896
2023-07-21 16:58:58 INFO [auto_gptq.quantization.gptq] avg loss: 4.292586326599121
2023-07-21 16:58:58 INFO [auto_gptq.modeling._base] Start quantizing layer 26/32
2023-07-21 16:58:58 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 26/32...
2023-07-21 16:58:59 INFO [auto_gptq.quantization.gptq] duration: 0.7961215972900391
2023-07-21 16:58:59 INFO [auto_gptq.quantization.gptq] avg loss: 8.335006713867188
2023-07-21 16:58:59 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 26/32...
2023-07-21 16:59:00 INFO [auto_gptq.quantization.gptq] duration: 0.7967922687530518
2023-07-21 16:59:00 INFO [auto_gptq.quantization.gptq] avg loss: 0.5929185152053833
2023-07-21 16:59:00 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 26/32...
2023-07-21 16:59:01 INFO [auto_gptq.quantization.gptq] duration: 0.8355779647827148
2023-07-21 16:59:01 INFO [auto_gptq.quantization.gptq] avg loss: 39.31059265136719
2023-07-21 16:59:01 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 26/32...
2023-07-21 16:59:05 INFO [auto_gptq.quantization.gptq] duration: 3.859668731689453
2023-07-21 16:59:05 INFO [auto_gptq.quantization.gptq] avg loss: 5.2629475593566895
2023-07-21 16:59:05 INFO [auto_gptq.modeling._base] Start quantizing layer 27/32
2023-07-21 16:59:05 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 27/32...
2023-07-21 16:59:06 INFO [auto_gptq.quantization.gptq] duration: 0.7974636554718018
2023-07-21 16:59:06 INFO [auto_gptq.quantization.gptq] avg loss: 8.194433212280273
2023-07-21 16:59:06 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 27/32...
2023-07-21 16:59:07 INFO [auto_gptq.quantization.gptq] duration: 0.8030986785888672
2023-07-21 16:59:07 INFO [auto_gptq.quantization.gptq] avg loss: 0.7090796828269958
2023-07-21 16:59:07 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 27/32...
2023-07-21 16:59:07 INFO [auto_gptq.quantization.gptq] duration: 0.8322622776031494
2023-07-21 16:59:07 INFO [auto_gptq.quantization.gptq] avg loss: 39.4634895324707
2023-07-21 16:59:07 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 27/32...
2023-07-21 16:59:11 INFO [auto_gptq.quantization.gptq] duration: 3.878126859664917
2023-07-21 16:59:11 INFO [auto_gptq.quantization.gptq] avg loss: 6.581557750701904
2023-07-21 16:59:11 INFO [auto_gptq.modeling._base] Start quantizing layer 28/32
2023-07-21 16:59:12 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 28/32...
2023-07-21 16:59:12 INFO [auto_gptq.quantization.gptq] duration: 0.7974464893341064
2023-07-21 16:59:12 INFO [auto_gptq.quantization.gptq] avg loss: 9.201988220214844
2023-07-21 16:59:12 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 28/32...
2023-07-21 16:59:13 INFO [auto_gptq.quantization.gptq] duration: 0.8018836975097656
2023-07-21 16:59:13 INFO [auto_gptq.quantization.gptq] avg loss: 1.193915605545044
2023-07-21 16:59:13 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 28/32...
2023-07-21 16:59:14 INFO [auto_gptq.quantization.gptq] duration: 0.832056999206543
2023-07-21 16:59:14 INFO [auto_gptq.quantization.gptq] avg loss: 39.874481201171875
2023-07-21 16:59:14 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 28/32...
2023-07-21 16:59:18 INFO [auto_gptq.quantization.gptq] duration: 3.8739585876464844
2023-07-21 16:59:18 INFO [auto_gptq.quantization.gptq] avg loss: 7.8150634765625
2023-07-21 16:59:18 INFO [auto_gptq.modeling._base] Start quantizing layer 29/32
2023-07-21 16:59:18 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 29/32...
2023-07-21 16:59:19 INFO [auto_gptq.quantization.gptq] duration: 0.7971282005310059
2023-07-21 16:59:19 INFO [auto_gptq.quantization.gptq] avg loss: 8.788995742797852
2023-07-21 16:59:19 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 29/32...
2023-07-21 16:59:20 INFO [auto_gptq.quantization.gptq] duration: 0.8014233112335205
2023-07-21 16:59:20 INFO [auto_gptq.quantization.gptq] avg loss: 0.9004578590393066
2023-07-21 16:59:20 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 29/32...
2023-07-21 16:59:21 INFO [auto_gptq.quantization.gptq] duration: 0.8585555553436279
2023-07-21 16:59:21 INFO [auto_gptq.quantization.gptq] avg loss: 40.52891159057617
2023-07-21 16:59:21 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 29/32...
2023-07-21 16:59:24 INFO [auto_gptq.quantization.gptq] duration: 3.886247396469116
2023-07-21 16:59:24 INFO [auto_gptq.quantization.gptq] avg loss: 7.627683639526367
2023-07-21 16:59:25 INFO [auto_gptq.modeling._base] Start quantizing layer 30/32
2023-07-21 16:59:25 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 30/32...
2023-07-21 16:59:26 INFO [auto_gptq.quantization.gptq] duration: 0.8017170429229736
2023-07-21 16:59:26 INFO [auto_gptq.quantization.gptq] avg loss: 7.885834217071533
2023-07-21 16:59:26 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 30/32...
2023-07-21 16:59:26 INFO [auto_gptq.quantization.gptq] duration: 0.8006551265716553
2023-07-21 16:59:26 INFO [auto_gptq.quantization.gptq] avg loss: 1.0838208198547363
2023-07-21 16:59:26 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 30/32...
2023-07-21 16:59:27 INFO [auto_gptq.quantization.gptq] duration: 0.8757197856903076
2023-07-21 16:59:27 INFO [auto_gptq.quantization.gptq] avg loss: 38.54998779296875
2023-07-21 16:59:27 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 30/32...
2023-07-21 16:59:31 INFO [auto_gptq.quantization.gptq] duration: 3.8700709342956543
2023-07-21 16:59:31 INFO [auto_gptq.quantization.gptq] avg loss: 10.26675796508789
2023-07-21 16:59:31 INFO [auto_gptq.modeling._base] Start quantizing layer 31/32
2023-07-21 16:59:31 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 31/32...
2023-07-21 16:59:32 INFO [auto_gptq.quantization.gptq] duration: 0.7995920181274414
2023-07-21 16:59:32 INFO [auto_gptq.quantization.gptq] avg loss: 7.922703266143799
2023-07-21 16:59:32 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 31/32...
2023-07-21 16:59:33 INFO [auto_gptq.quantization.gptq] duration: 0.7997887134552002
2023-07-21 16:59:33 INFO [auto_gptq.quantization.gptq] avg loss: 0.6395642757415771
2023-07-21 16:59:33 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 31/32...
2023-07-21 16:59:34 INFO [auto_gptq.quantization.gptq] duration: 0.8389708995819092
2023-07-21 16:59:34 INFO [auto_gptq.quantization.gptq] avg loss: 38.0499153137207
2023-07-21 16:59:34 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 31/32...
2023-07-21 16:59:38 INFO [auto_gptq.quantization.gptq] duration: 3.8527672290802
2023-07-21 16:59:38 INFO [auto_gptq.quantization.gptq] avg loss: 14.685250282287598
2023-07-21 16:59:38 INFO [auto_gptq.modeling._base] Start quantizing layer 32/32
2023-07-21 16:59:38 INFO [auto_gptq.modeling._base] Quantizing self_attention.query_key_value in layer 32/32...
2023-07-21 16:59:39 INFO [auto_gptq.quantization.gptq] duration: 0.7899763584136963
2023-07-21 16:59:39 INFO [auto_gptq.quantization.gptq] avg loss: 6.566901206970215
2023-07-21 17:00:08 INFO [auto_gptq.modeling._base] Quantizing self_attention.dense in layer 32/32...
2023-07-21 17:00:09 INFO [auto_gptq.quantization.gptq] duration: 0.890770673751831
2023-07-21 17:00:09 INFO [auto_gptq.quantization.gptq] avg loss: 0.2703491747379303
2023-07-21 17:00:09 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_h_to_4h in layer 32/32...
2023-07-21 17:00:10 INFO [auto_gptq.quantization.gptq] duration: 0.8699018955230713
2023-07-21 17:00:10 INFO [auto_gptq.quantization.gptq] avg loss: 33.582237243652344
2023-07-21 17:00:10 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 32/32...
2023-07-21 17:00:14 INFO [auto_gptq.quantization.gptq] duration: 3.8666820526123047
2023-07-21 17:00:14 INFO [auto_gptq.quantization.gptq] avg loss: 26.30276107788086
2023-07-21 17:00:14 INFO [auto_gptq.modeling._utils] Packing model...
2023-07-21 17:00:14 INFO [auto_gptq.modeling._utils] transformer.h.0.self_attention.dense
2023-07-21 17:00:15 INFO [auto_gptq.modeling._utils] transformer.h.0.self_attention.query_key_value
2023-07-21 17:00:15 INFO [auto_gptq.modeling._utils] transformer.h.0.mlp.dense_4h_to_h
2023-07-21 17:00:18 INFO [auto_gptq.modeling._utils] transformer.h.0.mlp.dense_h_to_4h
2023-07-21 17:00:19 INFO [auto_gptq.modeling._utils] transformer.h.1.self_attention.dense
2023-07-21 17:00:19 INFO [auto_gptq.modeling._utils] transformer.h.1.self_attention.query_key_value
2023-07-21 17:00:20 INFO [auto_gptq.modeling._utils] transformer.h.1.mlp.dense_4h_to_h
2023-07-21 17:00:22 INFO [auto_gptq.modeling._utils] transformer.h.1.mlp.dense_h_to_4h
2023-07-21 17:00:23 INFO [auto_gptq.modeling._utils] transformer.h.2.self_attention.dense
2023-07-21 17:00:23 INFO [auto_gptq.modeling._utils] transformer.h.2.self_attention.query_key_value
2023-07-21 17:00:24 INFO [auto_gptq.modeling._utils] transformer.h.2.mlp.dense_4h_to_h
2023-07-21 17:00:26 INFO [auto_gptq.modeling._utils] transformer.h.2.mlp.dense_h_to_4h
2023-07-21 17:00:27 INFO [auto_gptq.modeling._utils] transformer.h.3.self_attention.dense
2023-07-21 17:00:28 INFO [auto_gptq.modeling._utils] transformer.h.3.self_attention.query_key_value
2023-07-21 17:00:28 INFO [auto_gptq.modeling._utils] transformer.h.3.mlp.dense_4h_to_h
2023-07-21 17:00:30 INFO [auto_gptq.modeling._utils] transformer.h.3.mlp.dense_h_to_4h
2023-07-21 17:00:31 INFO [auto_gptq.modeling._utils] transformer.h.4.self_attention.dense
2023-07-21 17:00:32 INFO [auto_gptq.modeling._utils] transformer.h.4.self_attention.query_key_value
2023-07-21 17:00:32 INFO [auto_gptq.modeling._utils] transformer.h.4.mlp.dense_4h_to_h
2023-07-21 17:00:34 INFO [auto_gptq.modeling._utils] transformer.h.4.mlp.dense_h_to_4h
2023-07-21 17:00:35 INFO [auto_gptq.modeling._utils] transformer.h.5.self_attention.dense
2023-07-21 17:00:35 INFO [auto_gptq.modeling._utils] transformer.h.5.self_attention.query_key_value
2023-07-21 17:00:36 INFO [auto_gptq.modeling._utils] transformer.h.5.mlp.dense_4h_to_h
2023-07-21 17:00:38 INFO [auto_gptq.modeling._utils] transformer.h.5.mlp.dense_h_to_4h
2023-07-21 17:00:39 INFO [auto_gptq.modeling._utils] transformer.h.6.self_attention.dense
2023-07-21 17:00:39 INFO [auto_gptq.modeling._utils] transformer.h.6.self_attention.query_key_value
2023-07-21 17:00:40 INFO [auto_gptq.modeling._utils] transformer.h.6.mlp.dense_4h_to_h
2023-07-21 17:00:41 INFO [auto_gptq.modeling._utils] transformer.h.6.mlp.dense_h_to_4h
2023-07-21 17:00:42 INFO [auto_gptq.modeling._utils] transformer.h.7.self_attention.dense
2023-07-21 17:00:43 INFO [auto_gptq.modeling._utils] transformer.h.7.self_attention.query_key_value
2023-07-21 17:00:43 INFO [auto_gptq.modeling._utils] transformer.h.7.mlp.dense_4h_to_h
2023-07-21 17:00:45 INFO [auto_gptq.modeling._utils] transformer.h.7.mlp.dense_h_to_4h
2023-07-21 17:00:46 INFO [auto_gptq.modeling._utils] transformer.h.8.self_attention.dense
2023-07-21 17:00:47 INFO [auto_gptq.modeling._utils] transformer.h.8.self_attention.query_key_value
2023-07-21 17:00:47 INFO [auto_gptq.modeling._utils] transformer.h.8.mlp.dense_4h_to_h
2023-07-21 17:00:49 INFO [auto_gptq.modeling._utils] transformer.h.8.mlp.dense_h_to_4h
2023-07-21 17:00:50 INFO [auto_gptq.modeling._utils] transformer.h.9.self_attention.dense
2023-07-21 17:00:50 INFO [auto_gptq.modeling._utils] transformer.h.9.self_attention.query_key_value
2023-07-21 17:00:51 INFO [auto_gptq.modeling._utils] transformer.h.9.mlp.dense_4h_to_h
2023-07-21 17:00:53 INFO [auto_gptq.modeling._utils] transformer.h.9.mlp.dense_h_to_4h
2023-07-21 17:00:54 INFO [auto_gptq.modeling._utils] transformer.h.10.self_attention.dense
2023-07-21 17:00:54 INFO [auto_gptq.modeling._utils] transformer.h.10.self_attention.query_key_value
2023-07-21 17:00:55 INFO [auto_gptq.modeling._utils] transformer.h.10.mlp.dense_4h_to_h
2023-07-21 17:00:56 INFO [auto_gptq.modeling._utils] transformer.h.10.mlp.dense_h_to_4h
2023-07-21 17:00:57 INFO [auto_gptq.modeling._utils] transformer.h.11.self_attention.dense
2023-07-21 17:00:58 INFO [auto_gptq.modeling._utils] transformer.h.11.self_attention.query_key_value
2023-07-21 17:00:58 INFO [auto_gptq.modeling._utils] transformer.h.11.mlp.dense_4h_to_h
2023-07-21 17:01:00 INFO [auto_gptq.modeling._utils] transformer.h.11.mlp.dense_h_to_4h
2023-07-21 17:01:01 INFO [auto_gptq.modeling._utils] transformer.h.12.self_attention.dense
2023-07-21 17:01:02 INFO [auto_gptq.modeling._utils] transformer.h.12.self_attention.query_key_value
2023-07-21 17:01:02 INFO [auto_gptq.modeling._utils] transformer.h.12.mlp.dense_4h_to_h
2023-07-21 17:01:04 INFO [auto_gptq.modeling._utils] transformer.h.12.mlp.dense_h_to_4h
2023-07-21 17:01:05 INFO [auto_gptq.modeling._utils] transformer.h.13.self_attention.dense
2023-07-21 17:01:06 INFO [auto_gptq.modeling._utils] transformer.h.13.self_attention.query_key_value
2023-07-21 17:01:06 INFO [auto_gptq.modeling._utils] transformer.h.13.mlp.dense_4h_to_h
2023-07-21 17:01:08 INFO [auto_gptq.modeling._utils] transformer.h.13.mlp.dense_h_to_4h
2023-07-21 17:01:09 INFO [auto_gptq.modeling._utils] transformer.h.14.self_attention.dense
2023-07-21 17:01:10 INFO [auto_gptq.modeling._utils] transformer.h.14.self_attention.query_key_value
2023-07-21 17:01:10 INFO [auto_gptq.modeling._utils] transformer.h.14.mlp.dense_4h_to_h
2023-07-21 17:01:12 INFO [auto_gptq.modeling._utils] transformer.h.14.mlp.dense_h_to_4h
2023-07-21 17:01:13 INFO [auto_gptq.modeling._utils] transformer.h.15.self_attention.dense
2023-07-21 17:01:13 INFO [auto_gptq.modeling._utils] transformer.h.15.self_attention.query_key_value
2023-07-21 17:01:14 INFO [auto_gptq.modeling._utils] transformer.h.15.mlp.dense_4h_to_h
2023-07-21 17:01:16 INFO [auto_gptq.modeling._utils] transformer.h.15.mlp.dense_h_to_4h
2023-07-21 17:01:17 INFO [auto_gptq.modeling._utils] transformer.h.16.self_attention.dense
2023-07-21 17:01:17 INFO [auto_gptq.modeling._utils] transformer.h.16.self_attention.query_key_value
2023-07-21 17:01:18 INFO [auto_gptq.modeling._utils] transformer.h.16.mlp.dense_4h_to_h
2023-07-21 17:01:19 INFO [auto_gptq.modeling._utils] transformer.h.16.mlp.dense_h_to_4h
2023-07-21 17:01:21 INFO [auto_gptq.modeling._utils] transformer.h.17.self_attention.dense
2023-07-21 17:01:21 INFO [auto_gptq.modeling._utils] transformer.h.17.self_attention.query_key_value
2023-07-21 17:01:21 INFO [auto_gptq.modeling._utils] transformer.h.17.mlp.dense_4h_to_h
2023-07-21 17:01:23 INFO [auto_gptq.modeling._utils] transformer.h.17.mlp.dense_h_to_4h
2023-07-21 17:01:24 INFO [auto_gptq.modeling._utils] transformer.h.18.self_attention.dense
2023-07-21 17:01:25 INFO [auto_gptq.modeling._utils] transformer.h.18.self_attention.query_key_value
2023-07-21 17:01:25 INFO [auto_gptq.modeling._utils] transformer.h.18.mlp.dense_4h_to_h
2023-07-21 17:01:27 INFO [auto_gptq.modeling._utils] transformer.h.18.mlp.dense_h_to_4h
2023-07-21 17:01:28 INFO [auto_gptq.modeling._utils] transformer.h.19.self_attention.dense
2023-07-21 17:01:29 INFO [auto_gptq.modeling._utils] transformer.h.19.self_attention.query_key_value
2023-07-21 17:01:29 INFO [auto_gptq.modeling._utils] transformer.h.19.mlp.dense_4h_to_h
2023-07-21 17:01:31 INFO [auto_gptq.modeling._utils] transformer.h.19.mlp.dense_h_to_4h
2023-07-21 17:01:32 INFO [auto_gptq.modeling._utils] transformer.h.20.self_attention.dense
2023-07-21 17:01:33 INFO [auto_gptq.modeling._utils] transformer.h.20.self_attention.query_key_value
2023-07-21 17:01:33 INFO [auto_gptq.modeling._utils] transformer.h.20.mlp.dense_4h_to_h
2023-07-21 17:01:35 INFO [auto_gptq.modeling._utils] transformer.h.20.mlp.dense_h_to_4h
2023-07-21 17:01:36 INFO [auto_gptq.modeling._utils] transformer.h.21.self_attention.dense
2023-07-21 17:01:37 INFO [auto_gptq.modeling._utils] transformer.h.21.self_attention.query_key_value
2023-07-21 17:01:37 INFO [auto_gptq.modeling._utils] transformer.h.21.mlp.dense_4h_to_h
2023-07-21 17:01:39 INFO [auto_gptq.modeling._utils] transformer.h.21.mlp.dense_h_to_4h
2023-07-21 17:01:40 INFO [auto_gptq.modeling._utils] transformer.h.22.self_attention.dense
2023-07-21 17:01:40 INFO [auto_gptq.modeling._utils] transformer.h.22.self_attention.query_key_value
2023-07-21 17:01:41 INFO [auto_gptq.modeling._utils] transformer.h.22.mlp.dense_4h_to_h
2023-07-21 17:01:43 INFO [auto_gptq.modeling._utils] transformer.h.22.mlp.dense_h_to_4h
2023-07-21 17:01:44 INFO [auto_gptq.modeling._utils] transformer.h.23.self_attention.dense
2023-07-21 17:01:44 INFO [auto_gptq.modeling._utils] transformer.h.23.self_attention.query_key_value
2023-07-21 17:01:45 INFO [auto_gptq.modeling._utils] transformer.h.23.mlp.dense_4h_to_h
2023-07-21 17:01:46 INFO [auto_gptq.modeling._utils] transformer.h.23.mlp.dense_h_to_4h
2023-07-21 17:01:48 INFO [auto_gptq.modeling._utils] transformer.h.24.self_attention.dense
2023-07-21 17:01:48 INFO [auto_gptq.modeling._utils] transformer.h.24.self_attention.query_key_value
2023-07-21 17:01:49 INFO [auto_gptq.modeling._utils] transformer.h.24.mlp.dense_4h_to_h
2023-07-21 17:01:51 INFO [auto_gptq.modeling._utils] transformer.h.24.mlp.dense_h_to_4h
2023-07-21 17:01:52 INFO [auto_gptq.modeling._utils] transformer.h.25.self_attention.dense
2023-07-21 17:01:52 INFO [auto_gptq.modeling._utils] transformer.h.25.self_attention.query_key_value
2023-07-21 17:01:53 INFO [auto_gptq.modeling._utils] transformer.h.25.mlp.dense_4h_to_h
2023-07-21 17:01:54 INFO [auto_gptq.modeling._utils] transformer.h.25.mlp.dense_h_to_4h
2023-07-21 17:01:55 INFO [auto_gptq.modeling._utils] transformer.h.26.self_attention.dense
2023-07-21 17:01:56 INFO [auto_gptq.modeling._utils] transformer.h.26.self_attention.query_key_value
2023-07-21 17:01:56 INFO [auto_gptq.modeling._utils] transformer.h.26.mlp.dense_4h_to_h
2023-07-21 17:01:58 INFO [auto_gptq.modeling._utils] transformer.h.26.mlp.dense_h_to_4h
2023-07-21 17:02:00 INFO [auto_gptq.modeling._utils] transformer.h.27.self_attention.dense
2023-07-21 17:02:00 INFO [auto_gptq.modeling._utils] transformer.h.27.self_attention.query_key_value
2023-07-21 17:02:00 INFO [auto_gptq.modeling._utils] transformer.h.27.mlp.dense_4h_to_h
2023-07-21 17:02:02 INFO [auto_gptq.modeling._utils] transformer.h.27.mlp.dense_h_to_4h
2023-07-21 17:02:03 INFO [auto_gptq.modeling._utils] transformer.h.28.self_attention.dense
2023-07-21 17:02:04 INFO [auto_gptq.modeling._utils] transformer.h.28.self_attention.query_key_value
2023-07-21 17:02:04 INFO [auto_gptq.modeling._utils] transformer.h.28.mlp.dense_4h_to_h
2023-07-21 17:02:06 INFO [auto_gptq.modeling._utils] transformer.h.28.mlp.dense_h_to_4h
2023-07-21 17:02:07 INFO [auto_gptq.modeling._utils] transformer.h.29.self_attention.dense
2023-07-21 17:02:08 INFO [auto_gptq.modeling._utils] transformer.h.29.self_attention.query_key_value
2023-07-21 17:02:08 INFO [auto_gptq.modeling._utils] transformer.h.29.mlp.dense_4h_to_h
2023-07-21 17:02:10 INFO [auto_gptq.modeling._utils] transformer.h.29.mlp.dense_h_to_4h
2023-07-21 17:02:11 INFO [auto_gptq.modeling._utils] transformer.h.30.self_attention.dense
2023-07-21 17:02:12 INFO [auto_gptq.modeling._utils] transformer.h.30.self_attention.query_key_value
2023-07-21 17:02:12 INFO [auto_gptq.modeling._utils] transformer.h.30.mlp.dense_4h_to_h
2023-07-21 17:02:14 INFO [auto_gptq.modeling._utils] transformer.h.30.mlp.dense_h_to_4h
2023-07-21 17:02:15 INFO [auto_gptq.modeling._utils] transformer.h.31.self_attention.dense
2023-07-21 17:02:16 INFO [auto_gptq.modeling._utils] transformer.h.31.self_attention.query_key_value
2023-07-21 17:02:16 INFO [auto_gptq.modeling._utils] transformer.h.31.mlp.dense_4h_to_h
2023-07-21 17:02:18 INFO [auto_gptq.modeling._utils] transformer.h.31.mlp.dense_h_to_4h
2023-07-21 17:02:19 INFO [auto_gptq.modeling._utils] Model packed.
```
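For reference, the log above is what AutoGPTQ prints while quantizing each of the 32 transformer blocks and then packing the weights. A minimal sketch of how such a run is typically launched is shown below; the base model name, bit width, and calibration text are placeholders rather than values taken from this log:

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

base_model = "tiiuae/falcon-7b"  # placeholder; the log does not name the model
tokenizer = AutoTokenizer.from_pretrained(base_model, use_fast=True)

# 4-bit with group size 128 is a common choice; adjust to match your target config
quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)
model = AutoGPTQForCausalLM.from_pretrained(base_model, quantize_config)

# a real run would pass a few hundred calibration samples, not a single sentence
examples = [tokenizer("auto-gptq is an easy-to-use quantization library.", return_tensors="pt")]

model.quantize(examples)                 # produces the per-layer duration / avg loss lines
model.save_quantized("falcon-7b-gptq")   # triggers the "Packing model..." phase at the end
```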
|
Villagerindo/tts-bluearchive
|
Villagerindo
| 2023-07-21T23:28:07Z | 0 | 1 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-07-21T14:59:23Z |
---
title: Vits Models
emoji: 🏃
colorFrom: pink
colorTo: indigo
sdk: gradio
sdk_version: 3.17.0
app_file: app.py
pinned: false
license: apache-2.0
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
asapp/sew-tiny-100k
|
asapp
| 2023-07-21T23:05:12Z | 2,256 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"sew",
"feature-extraction",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# SEW-tiny
[SEW by ASAPP Research](https://github.com/asappresearch/sew)
The base model was pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. Note that this model should be fine-tuned on a downstream task, such as Automatic Speech Recognition, Speaker Identification, Intent Classification, or Emotion Recognition.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWForCTC`.
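Not part of the original card: a minimal feature-extraction sketch for this checkpoint, assuming the repository ships a Wav2Vec2-style preprocessor config; the silent waveform is only a stand-in for real 16 kHz speech.

```python
import numpy as np
import torch
from transformers import AutoFeatureExtractor, SEWModel

feature_extractor = AutoFeatureExtractor.from_pretrained("asapp/sew-tiny-100k")
model = SEWModel.from_pretrained("asapp/sew-tiny-100k")

waveform = np.zeros(16000, dtype=np.float32)  # one second of silence at 16 kHz
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
print(hidden_states.shape)
```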
|
asapp/sew-d-tiny-100k
|
asapp
| 2023-07-21T23:05:03Z | 2,248 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"sew-d",
"feature-extraction",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# SEW-D-tiny
[SEW-D by ASAPP Research](https://github.com/asappresearch/sew)
The base model was pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. Note that this model should be fine-tuned on a downstream task, such as Automatic Speech Recognition, Speaker Identification, Intent Classification, or Emotion Recognition.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
|
youw/modelodogiela
|
youw
| 2023-07-21T22:51:59Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"music",
"pt",
"license:openrail",
"region:us"
] | null | 2023-07-21T22:39:02Z |
---
language:
- pt
library_name: adapter-transformers
tags:
- music
license: openrail
---
|
Emperor-WS/q-FrozenLake-v1-4x4-noSlippery
|
Emperor-WS
| 2023-07-21T22:44:11Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-21T22:44:09Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# Note: `gym` must be imported, and `load_from_hub` is a helper expected to be
# defined in your environment (it is not part of a published package).
model = load_from_hub(repo_id="Emperor-WS/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
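Continuing the snippet above, and assuming the pickled dict also exposes a `qtable` key and that the environment follows the Gymnasium step API, a greedy evaluation episode might look like this:

```python
import numpy as np

state, _ = env.reset()            # `env` comes from the snippet above
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))            # greedy action
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("Episode return:", total_reward)
```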
|
ashercn97/OpenOrcaUpload
|
ashercn97
| 2023-07-21T22:28:43Z | 7 | 0 |
peft
|
[
"peft",
"text-generation",
"dataset:ashercn97/OpenOrcaPleaseWork",
"region:us"
] |
text-generation
| 2023-07-21T14:43:58Z |
---
library_name: peft
pipeline_tag: text-generation
datasets:
- ashercn97/OpenOrcaPleaseWork
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
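Not part of the original card: one possible way to load this adapter for generation, assuming the base model recorded in the adapter's `adapter_config.json` is accessible and that an Alpaca-style instruction prompt is appropriate.

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "ashercn97/OpenOrcaUpload"
config = PeftConfig.from_pretrained(adapter_id)

base = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path, load_in_8bit=True, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "### Instruction:\nSay hello.\n\n### Response:\n"  # assumed prompt format
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```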
|
brunoboat/ppo-Huggy
|
brunoboat
| 2023-07-21T22:02:00Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-21T22:01:49Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: brunoboat/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Oburaco/llama2-qlora-finetunined-ptbr
|
Oburaco
| 2023-07-21T21:43:25Z | 1 | 1 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-21T21:43:16Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a minimal `BitsAndBytesConfig` sketch follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
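As a minimal sketch (not from the original card), the list above corresponds to roughly the following `BitsAndBytesConfig`; the base model name is a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",   # placeholder base model, not confirmed by this card
    quantization_config=bnb_config,
    device_map="auto",
)
```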
### Framework versions
- PEFT 0.5.0.dev0
|
Pedrampd/NLP-HW5-NerTaggerModel
|
Pedrampd
| 2023-07-21T21:25:17Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-21T20:10:05Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: NLP-HW5-NerTaggerModel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP-HW5-NerTaggerModel
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0218
- Accuracy: 0.9947
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a minimal `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
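As a minimal sketch (not from the original card), these settings map onto `TrainingArguments` roughly as follows; `output_dir` is a placeholder and the Adam betas/epsilon listed above are the library defaults:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="nlp-hw5-ner-tagger",   # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```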
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1891 | 1.0 | 878 | 0.0342 | 0.9909 |
| 0.0377 | 2.0 | 1756 | 0.0218 | 0.9947 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Aspik101/guanaco-7B-HF-pl-lora_adapter_model
|
Aspik101
| 2023-07-21T21:15:58Z | 0 | 0 | null |
[
"facebook",
"meta",
"pytorch",
"llama",
"llama-2",
"text-generation",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:other",
"region:us"
] |
text-generation
| 2023-07-21T21:15:57Z |
---
language:
- pl
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
license: other
model_type: llama-2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
|
Mel-Iza0/RedPajama-ZeroShot-20K-new_prompt_classe_nenhuma
|
Mel-Iza0
| 2023-07-21T21:07:19Z | 2 | 0 |
peft
|
[
"peft",
"pytorch",
"gpt_neox",
"region:us"
] | null | 2023-07-21T18:40:27Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
davidkariuki/RentPredictorSouthAfrica
|
davidkariuki
| 2023-07-21T20:40:37Z | 0 | 0 | null |
[
"joblib",
"license:apache-2.0",
"region:us"
] | null | 2023-07-20T19:29:11Z |
---
license: apache-2.0
---
# RentPredictorSouthAfrica
## Introduction
This repository contains a Gradient Boosting Regressor model trained to predict house rents. The model was trained on a dataset that was preprocessed and cleaned to ensure the best possible predictions.
## Getting Started
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.
### Prerequisites
You need Python 3.7 or later to run the scripts. You can have multiple Python versions (2.x and 3.x) installed on the same system without problems.
On Ubuntu, Mint and Debian you can install Python 3 like this: `sudo apt-get install python3 python3-pip`
For other Linux flavors, macOS, and Windows, packages are available at https://www.python.org/getit/
### Required Python Packages
You will also need the following Python packages:
- pandas
- scikit-learn
- joblib
These can be installed using pip: `pip install pandas scikit-learn joblib`
### Cloning the Repository
To clone this repository, run the following command in your terminal:
`git clone <repository-link>`
### Running the Script
To use the model to predict house rents, run the `predict.py` script. You will be asked to input data for 'Area' and 'Suburb'. The script will then print the predicted rent.
To run the script: `python test.py`
The model you'll be interacting with is a machine-learning model specifically designed to predict house rent prices based on various property features. It has been trained on a dataset of housing information and uses what it learned to make predictions for new, unseen houses. The features are:
- Rent: the existing rent of the house.
- Property Type: the type of property, such as apartment, house, etc.
- Area: the area where the house is located.
- Suburb: the suburb within the area where the house is located.
- Bedrooms: the number of bedrooms in the house.
- Bathrooms: the number of bathrooms in the house.
- Garages: the number of garages the house has.
- nGparking: the number of non-garage parking spaces the house has.
- Floor Size: the size of the house in square feet or meters.
- Pool: whether the house has a pool (1 if yes, 0 if no).
- Garden: whether the house has a garden (1 if yes, 0 if no).
- Study: whether the house has a study or office room (1 if yes, 0 if no).
- Pets: whether pets are allowed in the house (1 if yes, 0 if no).
- Furnished: whether the house is furnished (1 if yes, 0 if no).
- Fibre: whether the house has a fibre internet connection (1 if yes, 0 if no).
Based on the information you provide for a house, the model will give an estimate of what it thinks the house's rent would be.
Please note that while the model tries its best to make accurate predictions, there is some error in its estimates.
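As a rough illustration, the saved regressor could also be loaded and queried directly from Python as sketched below; the model filename, the exact column names, and the assumption that the saved object is a full preprocessing-plus-regression pipeline (so it accepts raw text categories) are all guesses based on the description above, so adjust them to match the repository contents.
```python
import joblib
import pandas as pd
# Load the trained Gradient Boosting Regressor (filename is an assumption).
model = joblib.load("rent_model.joblib")
# One example property; column names follow the feature list above, values are illustrative.
sample = pd.DataFrame([{
    "Property Type": "apartment",
    "Area": "Gauteng",
    "Suburb": "Sandton",
    "Bedrooms": 2,
    "Bathrooms": 1,
    "Garages": 1,
    "nGparking": 0,
    "Floor Size": 85,
    "Pool": 0,
    "Garden": 1,
    "Study": 0,
    "Pets": 1,
    "Furnished": 0,
    "Fibre": 1,
}])
predicted_rent = model.predict(sample)
print(f"Predicted rent: {predicted_rent[0]:.2f}")
```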
|
gokuls/hbertv2-Massive-intent-48-emb-comp-gelu
|
gokuls
| 2023-07-21T20:39:48Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:massive",
"base_model:gokuls/bert_12_layer_model_v2_complete_training_new_emb_compress_48_gelu",
"base_model:finetune:gokuls/bert_12_layer_model_v2_complete_training_new_emb_compress_48_gelu",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-21T20:31:51Z |
---
base_model: gokuls/bert_12_layer_model_v2_complete_training_new_emb_compress_48_gelu
tags:
- generated_from_trainer
datasets:
- massive
metrics:
- accuracy
model-index:
- name: hbertv2-Massive-intent-48-emb-comp-gelu
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: massive
type: massive
config: en-US
split: validation
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.8421052631578947
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hbertv2-Massive-intent-48-emb-comp-gelu
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_emb_compress_48_gelu](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_emb_compress_48_gelu) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0025
- Accuracy: 0.8421
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9202 | 1.0 | 180 | 1.0767 | 0.7068 |
| 0.9104 | 2.0 | 360 | 0.9209 | 0.7482 |
| 0.6425 | 3.0 | 540 | 0.8343 | 0.7821 |
| 0.4854 | 4.0 | 720 | 0.8159 | 0.7954 |
| 0.3682 | 5.0 | 900 | 0.8154 | 0.8077 |
| 0.272 | 6.0 | 1080 | 0.8417 | 0.7993 |
| 0.204 | 7.0 | 1260 | 0.7931 | 0.8155 |
| 0.1363 | 8.0 | 1440 | 0.8740 | 0.8195 |
| 0.1016 | 9.0 | 1620 | 0.8993 | 0.8205 |
| 0.0689 | 10.0 | 1800 | 0.9309 | 0.8210 |
| 0.0478 | 11.0 | 1980 | 0.9877 | 0.8318 |
| 0.0254 | 12.0 | 2160 | 1.0041 | 0.8293 |
| 0.0133 | 13.0 | 2340 | 0.9982 | 0.8396 |
| 0.0068 | 14.0 | 2520 | 1.0049 | 0.8406 |
| 0.005 | 15.0 | 2700 | 1.0025 | 0.8421 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.1
- Tokenizers 0.13.3
|
gokuls/hbertv1-Massive-intent-48-emb-comp-gelu
|
gokuls
| 2023-07-21T20:22:27Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:massive",
"base_model:gokuls/bert_12_layer_model_v1_complete_training_new_emb_compress_48_gelu",
"base_model:finetune:gokuls/bert_12_layer_model_v1_complete_training_new_emb_compress_48_gelu",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-21T20:14:12Z |
---
base_model: gokuls/bert_12_layer_model_v1_complete_training_new_emb_compress_48_gelu
tags:
- generated_from_trainer
datasets:
- massive
metrics:
- accuracy
model-index:
- name: hbertv1-Massive-intent-48-emb-comp-gelu
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: massive
type: massive
config: en-US
split: validation
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.8027545499262174
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hbertv1-Massive-intent-48-emb-comp-gelu
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_emb_compress_48_gelu](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_emb_compress_48_gelu) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9566
- Accuracy: 0.8028
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.4622 | 1.0 | 180 | 3.0181 | 0.2169 |
| 2.7526 | 2.0 | 360 | 2.4760 | 0.3168 |
| 2.188 | 3.0 | 540 | 1.9627 | 0.4368 |
| 1.7069 | 4.0 | 720 | 1.5603 | 0.5568 |
| 1.3045 | 5.0 | 900 | 1.3354 | 0.6345 |
| 1.0621 | 6.0 | 1080 | 1.1726 | 0.6862 |
| 0.8745 | 7.0 | 1260 | 1.0703 | 0.7226 |
| 0.7286 | 8.0 | 1440 | 0.9905 | 0.7516 |
| 0.6005 | 9.0 | 1620 | 0.9881 | 0.7644 |
| 0.5021 | 10.0 | 1800 | 0.9661 | 0.7732 |
| 0.4208 | 11.0 | 1980 | 0.9621 | 0.7787 |
| 0.3524 | 12.0 | 2160 | 0.9480 | 0.7939 |
| 0.282 | 13.0 | 2340 | 0.9614 | 0.7924 |
| 0.2327 | 14.0 | 2520 | 0.9525 | 0.7969 |
| 0.1912 | 15.0 | 2700 | 0.9566 | 0.8028 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.1
- Tokenizers 0.13.3
|
gokuls/hbertv1-tiny-wt-48-Massive-intent-emb-comp
|
gokuls
| 2023-07-21T20:06:02Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:massive",
"base_model:gokuls/model_v1_complete_training_wt_init_48_tiny_emb_comp",
"base_model:finetune:gokuls/model_v1_complete_training_wt_init_48_tiny_emb_comp",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-21T20:02:45Z |
---
base_model: gokuls/model_v1_complete_training_wt_init_48_tiny_emb_comp
tags:
- generated_from_trainer
datasets:
- massive
metrics:
- accuracy
model-index:
- name: hbertv1-tiny-wt-48-Massive-intent-emb-comp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: massive
type: massive
config: en-US
split: validation
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.7899655681259223
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hbertv1-tiny-wt-48-Massive-intent-emb-comp
This model is a fine-tuned version of [gokuls/model_v1_complete_training_wt_init_48_tiny_emb_comp](https://huggingface.co/gokuls/model_v1_complete_training_wt_init_48_tiny_emb_comp) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8545
- Accuracy: 0.7900
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.6847 | 1.0 | 180 | 3.2207 | 0.2710 |
| 2.7795 | 2.0 | 360 | 2.3154 | 0.4471 |
| 2.0459 | 3.0 | 540 | 1.7680 | 0.5627 |
| 1.5874 | 4.0 | 720 | 1.4363 | 0.6734 |
| 1.2902 | 5.0 | 900 | 1.2306 | 0.7127 |
| 1.0905 | 6.0 | 1080 | 1.1068 | 0.7373 |
| 0.9468 | 7.0 | 1260 | 1.0113 | 0.7545 |
| 0.844 | 8.0 | 1440 | 0.9661 | 0.7580 |
| 0.7684 | 9.0 | 1620 | 0.9333 | 0.7649 |
| 0.7086 | 10.0 | 1800 | 0.9018 | 0.7772 |
| 0.6629 | 11.0 | 1980 | 0.8807 | 0.7831 |
| 0.6244 | 12.0 | 2160 | 0.8747 | 0.7796 |
| 0.5965 | 13.0 | 2340 | 0.8591 | 0.7875 |
| 0.5731 | 14.0 | 2520 | 0.8634 | 0.7875 |
| 0.5633 | 15.0 | 2700 | 0.8545 | 0.7900 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.1
- Tokenizers 0.13.3
|
gokuls/hbertv1-small-wt-frz-48-Massive-intent-emb-comp
|
gokuls
| 2023-07-21T20:02:17Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:massive",
"base_model:gokuls/model_v1_complete_training_wt_init_48_small_emb_comp_frz",
"base_model:finetune:gokuls/model_v1_complete_training_wt_init_48_small_emb_comp_frz",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-21T19:59:48Z |
---
base_model: gokuls/model_v1_complete_training_wt_init_48_small_emb_comp_frz
tags:
- generated_from_trainer
datasets:
- massive
metrics:
- accuracy
model-index:
- name: hbertv1-small-wt-frz-48-Massive-intent-emb-comp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: massive
type: massive
config: en-US
split: validation
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.838170191834727
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hbertv1-small-wt-frz-48-Massive-intent-emb-comp
This model is a fine-tuned version of [gokuls/model_v1_complete_training_wt_init_48_small_emb_comp_frz](https://huggingface.co/gokuls/model_v1_complete_training_wt_init_48_small_emb_comp_frz) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6512
- Accuracy: 0.8382
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.3044 | 1.0 | 180 | 1.1025 | 0.7167 |
| 0.8662 | 2.0 | 360 | 0.7731 | 0.7934 |
| 0.5469 | 3.0 | 540 | 0.6981 | 0.8224 |
| 0.357 | 4.0 | 720 | 0.6512 | 0.8382 |
| 0.228 | 5.0 | 900 | 0.6980 | 0.8254 |
| 0.1435 | 6.0 | 1080 | 0.7169 | 0.8278 |
| 0.0863 | 7.0 | 1260 | 0.7441 | 0.8323 |
| 0.0534 | 8.0 | 1440 | 0.7516 | 0.8382 |
| 0.0334 | 9.0 | 1620 | 0.8162 | 0.8357 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ALM-AHME/convnextv2-large-1k-224-finetuned-Lesion-Classification-HAM10000-AH-60-20-20-Shuffled
|
ALM-AHME
| 2023-07-21T20:00:08Z | 14 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"convnextv2",
"image-classification",
"generated_from_trainer",
"base_model:facebook/convnextv2-large-1k-224",
"base_model:finetune:facebook/convnextv2-large-1k-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-21T14:42:59Z |
---
license: apache-2.0
base_model: facebook/convnextv2-large-1k-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: convnextv2-large-1k-224-finetuned-Lesion-Classification-HAM10000-AH-60-20-20-Shuffled
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnextv2-large-1k-224-finetuned-Lesion-Classification-HAM10000-AH-60-20-20-Shuffled
This model is a fine-tuned version of [facebook/convnextv2-large-1k-224](https://huggingface.co/facebook/convnextv2-large-1k-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0721
- Accuracy: 0.9869
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.9
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.8937 | 1.0 | 114 | 1.9040 | 0.3144 |
| 1.7208 | 2.0 | 229 | 1.6891 | 0.5632 |
| 1.3822 | 3.0 | 343 | 1.3554 | 0.6897 |
| 1.1497 | 4.0 | 458 | 1.2437 | 0.5755 |
| 0.8979 | 5.0 | 572 | 0.8548 | 0.7701 |
| 0.6382 | 6.0 | 687 | 0.6359 | 0.8424 |
| 0.583 | 7.0 | 801 | 0.4687 | 0.8966 |
| 0.6295 | 8.0 | 916 | 0.5029 | 0.8456 |
| 0.5367 | 9.0 | 1030 | 0.4742 | 0.8670 |
| 0.5091 | 10.0 | 1145 | 0.3038 | 0.9212 |
| 0.3521 | 11.0 | 1259 | 0.1855 | 0.9606 |
| 0.318 | 12.0 | 1374 | 0.1893 | 0.9573 |
| 0.2725 | 13.0 | 1488 | 0.2292 | 0.9409 |
| 0.2937 | 14.0 | 1603 | 0.0866 | 0.9836 |
| 0.1185 | 14.93 | 1710 | 0.0721 | 0.9869 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
gokuls/hbertv1-small-wt-48-Massive-intent-emb-comp
|
gokuls
| 2023-07-21T19:59:16Z | 50 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:massive",
"base_model:gokuls/model_v1_complete_training_wt_init_48_small_emb_comp",
"base_model:finetune:gokuls/model_v1_complete_training_wt_init_48_small_emb_comp",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-21T19:55:19Z |
---
base_model: gokuls/model_v1_complete_training_wt_init_48_small_emb_comp
tags:
- generated_from_trainer
datasets:
- massive
metrics:
- accuracy
model-index:
- name: hbertv1-small-wt-48-Massive-intent-emb-comp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: massive
type: massive
config: en-US
split: validation
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.8504672897196262
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hbertv1-small-wt-48-Massive-intent-emb-comp
This model is a fine-tuned version of [gokuls/model_v1_complete_training_wt_init_48_small_emb_comp](https://huggingface.co/gokuls/model_v1_complete_training_wt_init_48_small_emb_comp) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8462
- Accuracy: 0.8505
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1467 | 1.0 | 180 | 1.0602 | 0.7393 |
| 0.8554 | 2.0 | 360 | 0.7646 | 0.7964 |
| 0.5593 | 3.0 | 540 | 0.6846 | 0.8239 |
| 0.3868 | 4.0 | 720 | 0.6673 | 0.8278 |
| 0.2613 | 5.0 | 900 | 0.6909 | 0.8259 |
| 0.1681 | 6.0 | 1080 | 0.7123 | 0.8278 |
| 0.1096 | 7.0 | 1260 | 0.7193 | 0.8318 |
| 0.0687 | 8.0 | 1440 | 0.7653 | 0.8337 |
| 0.0405 | 9.0 | 1620 | 0.7966 | 0.8308 |
| 0.0255 | 10.0 | 1800 | 0.8047 | 0.8441 |
| 0.0145 | 11.0 | 1980 | 0.8415 | 0.8426 |
| 0.0092 | 12.0 | 2160 | 0.8462 | 0.8505 |
| 0.0053 | 13.0 | 2340 | 0.8635 | 0.8465 |
| 0.0031 | 14.0 | 2520 | 0.8625 | 0.8475 |
| 0.0023 | 15.0 | 2700 | 0.8632 | 0.8480 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.1
- Tokenizers 0.13.3
|
theanupdas/llama2-qlora-finetuned-french
|
theanupdas
| 2023-07-21T19:57:09Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-21T19:56:51Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
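For readers who want to reproduce this setup, the quantization flags listed above correspond roughly to the following `BitsAndBytesConfig`; this is a reconstruction from the logged values rather than the original training script, and the base model identifier is a placeholder since it is not stated in this card.
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
# Reconstruction of the logged bitsandbytes settings (4-bit NF4, fp16 compute).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
# "base-model-id" is a placeholder; the adapter's base model is not given in this card.
base_model = AutoModelForCausalLM.from_pretrained(
    "base-model-id",
    quantization_config=bnb_config,
    device_map="auto",
)
```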
|
ailabturkiye/incesemicenk
|
ailabturkiye
| 2023-07-21T19:53:49Z | 0 | 0 | null |
[
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-21T19:32:32Z |
---
license: openrail
language:
- tr
tags:
- music
---
Semicenk (Musician) - RVC V2 450 Epoch. A voice model created from the studio vocals of the singer Semicenk. Trained as RVC V2 | 4-minute dataset | 450 epochs.
The dataset and training were done by me.
Sharing the model outside the Ai Lab Discord server without permission is strictly forbidden; the model is licensed under openrail.
Credits: if you share a cover made with this model on any platform, you are kindly asked to give credits.
Discord: onurkilot YouTube: onur (https://youtube.com/@onurkilot)
|
marouni/miniDolly
|
marouni
| 2023-07-21T19:51:51Z | 169 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"en",
"dataset:databricks/databricks-dolly-15k",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-14T12:21:43Z |
---
license: apache-2.0
datasets:
- databricks/databricks-dolly-15k
language:
- en
widget:
- text: 'What is the capital of France ?'
example_title: Basic question
group: Python
---
# Summary
An instruction-following large language model based on [pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) and trained on [Databricks' 15k instruction dataset](https://huggingface.co/datasets/databricks/databricks-dolly-15k),
covering the capability domains from the InstructGPT paper: brainstorming, classification, closed QA, generation, information extraction, open QA and summarization.
This model is an experiment in using a small base model ([pythia-70m](https://huggingface.co/EleutherAI/pythia-70m)) to build a model similar to Databricks' [dolly model](https://huggingface.co/databricks/dolly-v2-12b).
# Usage
To use the model with the transformers library, first make sure you have the transformers and accelerate libraries installed:
```python
%pip install "accelerate>=0.16.0,<1" "transformers[torch]>=4.28.1,<5" "torch>=1.13.1,<2"
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(model="marouni/miniDolly", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")
res = generate_text("What is the capital of France ?")
print(res[0]["generated_text"])
```
# Training
The model was trained using [Databricks' 15k instruction dataset](https://huggingface.co/datasets/databricks/databricks-dolly-15k) on a recent Dell PC with 32 GB of RAM and a Core i7 CPU.
The training took around 12 hours!
# Accuracy
As expected, the model's performance is very poor, especially when compared to the [Databricks dolly v2 12b model](https://huggingface.co/databricks/dolly-v2-12b).
When prompted with `What is the capital of France ?`, the model answers with:
```
"The World". It is an artwork for "working time" called «The Middle East Today". It comes from Paris, Belgium, in local variation, including large cities as described in English language photographs which portray a crescent and sunrise of late note, Bangourt before Paris.
“Countries like Pakistan and throughout East Africa close to Australia have constructed a watered havock which can be felt ever longer. Bombardment and booby traps tend to occupy space by wind and water, as were effectively used for material and equipment which have a green signal leading in the images."
```
Compared with the following answer from the [Databricks dolly v2 3b model](https://huggingface.co/databricks/dolly-v2-12b)
```
The capital of France is Paris.
```
# Conclusion
The gap between the base model used here (pythia-70m) and the base models used by Databricks (pythia-2.8b and pythia-12b) is huge, and it makes all the difference in terms of accuracy.
The only thing worth mentioning here is the model's size: at around 160M it is orders of magnitude smaller than the Databricks ones.
|
chh6/Reinforce_pixelcopter
|
chh6
| 2023-07-21T19:41:05Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-21T19:40:59Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce_pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 29.20 +/- 23.66
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
dnarqq/dqn-SpaceInvadersNoFrameskip-v4
|
dnarqq
| 2023-07-21T19:33:07Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-21T19:32:37Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 303.50 +/- 78.49
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga dnarqq -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga dnarqq -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga dnarqq
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 200000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
josh-salako/ai_generated_image_detector
|
josh-salako
| 2023-07-21T19:27:04Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"dataset:competitions/aiornot",
"region:us"
] | null | 2023-03-13T19:22:12Z |
---
library_name: keras
datasets:
- competitions/aiornot
metrics:
- accuracy
---
## Model description
A model that detects AI-generated images.
## Intended uses & limitations
Intended for use cases where real images are needed rather than AI-generated ones. This model, however, cannot reliably distinguish an AI-generated image whenever it closely resembles a real image.
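As a rough illustration of that intended use, the sketch below loads the Keras model from the Hub and scores a single image; the preprocessing (resizing to the model's declared input shape and scaling pixels to [0, 1]) and the interpretation of the output score are assumptions, not details taken from the original training code.
```python
import numpy as np
from PIL import Image
from huggingface_hub import from_pretrained_keras
# Load the Keras model from the Hub.
model = from_pretrained_keras("josh-salako/ai_generated_image_detector")
# Assumed preprocessing: resize to the model's input shape and scale to [0, 1].
_, height, width, _ = model.input_shape
image = Image.open("example.jpg").convert("RGB").resize((width, height))
batch = np.expand_dims(np.asarray(image, dtype=np.float32) / 255.0, axis=0)
score = model.predict(batch)
print(score)  # assumed to be the probability that the image is AI generated
```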
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
mann-e/mann-e_5-new-merge-1
|
mann-e
| 2023-07-21T19:24:29Z | 0 | 2 |
diffusers
|
[
"diffusers",
"text-to-image",
"license:mit",
"region:us"
] |
text-to-image
| 2023-07-21T19:12:34Z |
---
license: mit
library_name: diffusers
pipeline_tag: text-to-image
---
# Mann-E 5 Merge 1
This is only the checkpoint file and will be deprecated soon.
|
ByteExplorer/Pixelcopter-PLE-v0
|
ByteExplorer
| 2023-07-21T19:18:40Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-18T18:59:06Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 42.30 +/- 33.15
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Naruke/taxi-v3
|
Naruke
| 2023-07-21T19:12:38Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-21T19:12:35Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Naruke/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Naruke/q-FrozenLake-v1-8x8-randommap-noSlippery
|
Naruke
| 2023-07-21T19:09:08Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-21T19:09:05Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-randommap-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Naruke/q-FrozenLake-v1-8x8-randommap-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ailabturkiye/semicenkroportaj
|
ailabturkiye
| 2023-07-21T18:57:19Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-21T18:54:42Z |
[](discord.gg/ailab)


# Semicenk (Singer) - RVC V2 300 Epoch
**A voice model created from interview clips of the singer Semicenk; it does not represent his singing voice!
Trained as RVC V2 | 6-minute dataset | 300 epochs.**
_The dataset and training were done by me._
__Sharing the model outside the [Ai Lab Discord](discord.gg/ailab) server without permission is strictly forbidden; the model is licensed under openrail.__
## Credits
**If you share a cover made with this model on any platform, you are kindly asked to give credits.**
- Discord: hydragee
- YouTube: CoverLai (https://www.youtube.com/@coverlai)

[](discord.gg/ailab)

|
jaygdesai/Reinforce-Jay-cartpole
|
jaygdesai
| 2023-07-21T18:53:27Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-21T18:12:34Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Jay-cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 482.50 +/- 52.50
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
NasimB/guten-rarity-neg-log-rarity-no-cut
|
NasimB
| 2023-07-21T18:40:43Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-21T15:16:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: guten-rarity-neg-log-rarity-no-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# guten-rarity-neg-log-rarity-no-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1048
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3421 | 0.29 | 500 | 5.3363 |
| 5.0357 | 0.58 | 1000 | 4.9250 |
| 4.7084 | 0.87 | 1500 | 4.6857 |
| 4.4492 | 1.16 | 2000 | 4.5455 |
| 4.2984 | 1.46 | 2500 | 4.4301 |
| 4.1972 | 1.75 | 3000 | 4.3258 |
| 4.0832 | 2.04 | 3500 | 4.2503 |
| 3.8934 | 2.33 | 4000 | 4.2116 |
| 3.8607 | 2.62 | 4500 | 4.1533 |
| 3.8323 | 2.91 | 5000 | 4.1090 |
| 3.6419 | 3.2 | 5500 | 4.0989 |
| 3.5834 | 3.49 | 6000 | 4.0699 |
| 3.5762 | 3.79 | 6500 | 4.0398 |
| 3.4864 | 4.08 | 7000 | 4.0350 |
| 3.3174 | 4.37 | 7500 | 4.0295 |
| 3.3153 | 4.66 | 8000 | 4.0165 |
| 3.304 | 4.95 | 8500 | 4.0047 |
| 3.1667 | 5.24 | 9000 | 4.0159 |
| 3.1375 | 5.53 | 9500 | 4.0149 |
| 3.1343 | 5.82 | 10000 | 4.0139 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
adarsha30735/3_alpaca-heart-status-dataset
|
adarsha30735
| 2023-07-21T18:32:21Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-21T18:32:18Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
gokuls/hbertv1-small-wt-frz-48-emotion-emb-comp
|
gokuls
| 2023-07-21T18:31:54Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:gokuls/model_v1_complete_training_wt_init_48_small_emb_comp_frz",
"base_model:finetune:gokuls/model_v1_complete_training_wt_init_48_small_emb_comp_frz",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-21T18:28:37Z |
---
base_model: gokuls/model_v1_complete_training_wt_init_48_small_emb_comp_frz
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: hbertv1-small-wt-frz-48-emotion-emb-comp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.887
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hbertv1-small-wt-frz-48-emotion-emb-comp
This model is a fine-tuned version of [gokuls/model_v1_complete_training_wt_init_48_small_emb_comp_frz](https://huggingface.co/gokuls/model_v1_complete_training_wt_init_48_small_emb_comp_frz) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4013
- Accuracy: 0.887
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1127 | 1.0 | 250 | 0.6028 | 0.798 |
| 0.4399 | 2.0 | 500 | 0.4066 | 0.855 |
| 0.2726 | 3.0 | 750 | 0.3762 | 0.866 |
| 0.1907 | 4.0 | 1000 | 0.3649 | 0.876 |
| 0.1412 | 5.0 | 1250 | 0.4169 | 0.8755 |
| 0.1065 | 6.0 | 1500 | 0.4013 | 0.887 |
| 0.0761 | 7.0 | 1750 | 0.4679 | 0.884 |
| 0.0548 | 8.0 | 2000 | 0.5221 | 0.8775 |
| 0.0379 | 9.0 | 2250 | 0.5458 | 0.8835 |
| 0.0233 | 10.0 | 2500 | 0.5586 | 0.8805 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jaimevera1107/all-MiniLM-L6-v2-similarity-es
|
jaimevera1107
| 2023-07-21T18:26:31Z | 4,970 | 3 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"es",
"dataset:jaimevera1107/similarity-sentences-spanish",
"license:mit",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-07-21T17:15:03Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
license: mit
datasets:
- jaimevera1107/similarity-sentences-spanish
language:
- es
library_name: sentence-transformers
---
# All-MiniLM-L6-v2 Fine Tuned - Sentence Transformers - Embedding Model (Spanish-Español)
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Esta es una frase para ser comparada", "Esta es otra oración"]
model = SentenceTransformer('jaimevera1107/all-MiniLM-L6-v2-similarity-es')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["Esta es una frase para ser comparada", "Esta es otra oración"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jaimevera1107/all-MiniLM-L6-v2-similarity-es')
model = AutoModel.from_pretrained('jaimevera1107/all-MiniLM-L6-v2-similarity-es')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
| Model | R squared | Spearman Correlation |
|----------------------------|--------------|-------------------------|
| Roberta Fine tuned | 70.67 % | 80.1 % |
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 767 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
The data used was the one in the [Similarity Sentences Spanish Dataset](https://huggingface.co/datasets/jaimevera1107/similarity-sentences-spanish)
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 500,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 383,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
|
tyzp-INC/bench1-paraphrase-multilingual-MiniLM-L12-v2
|
tyzp-INC
| 2023-07-21T18:25:57Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-07-21T18:25:32Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# tyzp-INC/bench1-paraphrase-multilingual-MiniLM-L12-v2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("tyzp-INC/bench1-paraphrase-multilingual-MiniLM-L12-v2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
gokuls/hbertv1-mini-wt-48-Massive-intent
|
gokuls
| 2023-07-21T18:23:43Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:massive",
"base_model:gokuls/model_v1_complete_training_wt_init_48_mini",
"base_model:finetune:gokuls/model_v1_complete_training_wt_init_48_mini",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-21T18:20:04Z |
---
base_model: gokuls/model_v1_complete_training_wt_init_48_mini
tags:
- generated_from_trainer
datasets:
- massive
metrics:
- accuracy
model-index:
- name: hbertv1-mini-wt-48-Massive-intent
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: massive
type: massive
config: en-US
split: validation
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.8544023610427939
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hbertv1-mini-wt-48-Massive-intent
This model is a fine-tuned version of [gokuls/model_v1_complete_training_wt_init_48_mini](https://huggingface.co/gokuls/model_v1_complete_training_wt_init_48_mini) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6406
- Accuracy: 0.8544
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.16 | 1.0 | 180 | 2.1089 | 0.4934 |
| 1.6964 | 2.0 | 360 | 1.2208 | 0.6916 |
| 1.1107 | 3.0 | 540 | 0.9116 | 0.7703 |
| 0.8493 | 4.0 | 720 | 0.7717 | 0.8155 |
| 0.692 | 5.0 | 900 | 0.7166 | 0.8155 |
| 0.5849 | 6.0 | 1080 | 0.6754 | 0.8288 |
| 0.5133 | 7.0 | 1260 | 0.6491 | 0.8392 |
| 0.4541 | 8.0 | 1440 | 0.6406 | 0.8451 |
| 0.4074 | 9.0 | 1620 | 0.6346 | 0.8480 |
| 0.3615 | 10.0 | 1800 | 0.6403 | 0.8460 |
| 0.3304 | 11.0 | 1980 | 0.6452 | 0.8446 |
| 0.3021 | 12.0 | 2160 | 0.6390 | 0.8495 |
| 0.2792 | 13.0 | 2340 | 0.6412 | 0.8515 |
| 0.2584 | 14.0 | 2520 | 0.6406 | 0.8544 |
| 0.2483 | 15.0 | 2700 | 0.6394 | 0.8529 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.1
- Tokenizers 0.13.3
|
gokuls/hbertv1-tiny-wt-48-Massive-intent
|
gokuls
| 2023-07-21T18:19:49Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:massive",
"base_model:gokuls/model_v1_complete_training_wt_init_48_tiny",
"base_model:finetune:gokuls/model_v1_complete_training_wt_init_48_tiny",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-21T18:16:50Z |
---
base_model: gokuls/model_v1_complete_training_wt_init_48_tiny
tags:
- generated_from_trainer
datasets:
- massive
metrics:
- accuracy
model-index:
- name: hbertv1-tiny-wt-48-Massive-intent
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: massive
type: massive
config: en-US
split: validation
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.7722577471716675
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hbertv1-tiny-wt-48-Massive-intent
This model is a fine-tuned version of [gokuls/model_v1_complete_training_wt_init_48_tiny](https://huggingface.co/gokuls/model_v1_complete_training_wt_init_48_tiny) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8676
- Accuracy: 0.7723
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.7161 | 1.0 | 180 | 3.1936 | 0.2499 |
| 2.8544 | 2.0 | 360 | 2.3660 | 0.4058 |
| 2.2122 | 3.0 | 540 | 1.8566 | 0.5430 |
| 1.7979 | 4.0 | 720 | 1.5269 | 0.6370 |
| 1.5083 | 5.0 | 900 | 1.3016 | 0.6911 |
| 1.3044 | 6.0 | 1080 | 1.1672 | 0.7098 |
| 1.1652 | 7.0 | 1260 | 1.0709 | 0.7270 |
| 1.0703 | 8.0 | 1440 | 1.0045 | 0.7432 |
| 0.996 | 9.0 | 1620 | 0.9595 | 0.7511 |
| 0.9323 | 10.0 | 1800 | 0.9276 | 0.7550 |
| 0.8832 | 11.0 | 1980 | 0.9183 | 0.7565 |
| 0.8521 | 12.0 | 2160 | 0.8953 | 0.7649 |
| 0.8246 | 13.0 | 2340 | 0.8829 | 0.7649 |
| 0.8072 | 14.0 | 2520 | 0.8676 | 0.7723 |
| 0.7947 | 15.0 | 2700 | 0.8657 | 0.7708 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.1
- Tokenizers 0.13.3
|
tyzp-INC/few-shot-multilingual-e5-large-xnli
|
tyzp-INC
| 2023-07-21T18:17:56Z | 44 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-07-21T18:16:01Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# tyzp-INC/few-shot-multilingual-e5-large-xnli
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("tyzp-INC/few-shot-multilingual-e5-large-xnli")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
gokuls/hbertv1-small-wt-48-Massive-intent
|
gokuls
| 2023-07-21T18:05:44Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:massive",
"base_model:gokuls/model_v1_complete_training_wt_init_48_small",
"base_model:finetune:gokuls/model_v1_complete_training_wt_init_48_small",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-21T18:01:57Z |
---
base_model: gokuls/model_v1_complete_training_wt_init_48_small
tags:
- generated_from_trainer
datasets:
- massive
metrics:
- accuracy
model-index:
- name: hbertv1-small-wt-48-Massive-intent
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: massive
type: massive
config: en-US
split: validation
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.8671913428430891
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hbertv1-small-wt-48-Massive-intent
This model is a fine-tuned version of [gokuls/model_v1_complete_training_wt_init_48_small](https://huggingface.co/gokuls/model_v1_complete_training_wt_init_48_small) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6540
- Accuracy: 0.8672
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0435 | 1.0 | 180 | 0.8648 | 0.7693 |
| 0.7809 | 2.0 | 360 | 0.6523 | 0.8190 |
| 0.5432 | 3.0 | 540 | 0.5795 | 0.8441 |
| 0.4035 | 4.0 | 720 | 0.5657 | 0.8539 |
| 0.2976 | 5.0 | 900 | 0.5547 | 0.8618 |
| 0.22 | 6.0 | 1080 | 0.5735 | 0.8598 |
| 0.1639 | 7.0 | 1260 | 0.5905 | 0.8554 |
| 0.1281 | 8.0 | 1440 | 0.5916 | 0.8618 |
| 0.0893 | 9.0 | 1620 | 0.6186 | 0.8642 |
| 0.0722 | 10.0 | 1800 | 0.6370 | 0.8642 |
| 0.0513 | 11.0 | 1980 | 0.6540 | 0.8672 |
| 0.039 | 12.0 | 2160 | 0.6762 | 0.8637 |
| 0.0307 | 13.0 | 2340 | 0.6796 | 0.8637 |
| 0.0223 | 14.0 | 2520 | 0.6895 | 0.8657 |
| 0.0169 | 15.0 | 2700 | 0.6918 | 0.8652 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.1
- Tokenizers 0.13.3
|
christinezh/squad-bloom-3b
|
christinezh
| 2023-07-21T17:59:48Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-21T17:52:06Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
gokuls/hbertv1-mini-wt-48-emotion
|
gokuls
| 2023-07-21T17:59:00Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:gokuls/model_v1_complete_training_wt_init_48_mini",
"base_model:finetune:gokuls/model_v1_complete_training_wt_init_48_mini",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-21T17:55:32Z |
---
base_model: gokuls/model_v1_complete_training_wt_init_48_mini
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: hbertv1-mini-wt-48-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.908
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hbertv1-mini-wt-48-emotion
This model is a fine-tuned version of [gokuls/model_v1_complete_training_wt_init_48_mini](https://huggingface.co/gokuls/model_v1_complete_training_wt_init_48_mini) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2561
- Accuracy: 0.908
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0852 | 1.0 | 250 | 0.5567 | 0.8195 |
| 0.4522 | 2.0 | 500 | 0.3409 | 0.8775 |
| 0.3152 | 3.0 | 750 | 0.3007 | 0.8885 |
| 0.2646 | 4.0 | 1000 | 0.2999 | 0.9045 |
| 0.23 | 5.0 | 1250 | 0.2842 | 0.8945 |
| 0.205 | 6.0 | 1500 | 0.2658 | 0.9035 |
| 0.1871 | 7.0 | 1750 | 0.2674 | 0.902 |
| 0.1623 | 8.0 | 2000 | 0.2561 | 0.908 |
| 0.1488 | 9.0 | 2250 | 0.2529 | 0.9075 |
| 0.1379 | 10.0 | 2500 | 0.2523 | 0.908 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Glen/sd-class-butterflies-32
|
Glen
| 2023-07-21T17:55:59Z | 30 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-07-21T17:55:48Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Glen/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
gokuls/hbertv1-tiny-wt-48-emotion
|
gokuls
| 2023-07-21T17:55:14Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:gokuls/model_v1_complete_training_wt_init_48_tiny",
"base_model:finetune:gokuls/model_v1_complete_training_wt_init_48_tiny",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-21T17:52:35Z |
---
base_model: gokuls/model_v1_complete_training_wt_init_48_tiny
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: hbertv1-tiny-wt-48-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.8985
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hbertv1-tiny-wt-48-emotion
This model is a fine-tuned version of [gokuls/model_v1_complete_training_wt_init_48_tiny](https://huggingface.co/gokuls/model_v1_complete_training_wt_init_48_tiny) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2695
- Accuracy: 0.8985
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4321 | 1.0 | 250 | 1.0203 | 0.6475 |
| 0.8329 | 2.0 | 500 | 0.5954 | 0.814 |
| 0.5347 | 3.0 | 750 | 0.4146 | 0.8645 |
| 0.398 | 4.0 | 1000 | 0.3496 | 0.8805 |
| 0.3418 | 5.0 | 1250 | 0.3091 | 0.889 |
| 0.2932 | 6.0 | 1500 | 0.2864 | 0.8945 |
| 0.2646 | 7.0 | 1750 | 0.2782 | 0.8965 |
| 0.2532 | 8.0 | 2000 | 0.2695 | 0.8985 |
| 0.2342 | 9.0 | 2250 | 0.2632 | 0.898 |
| 0.225 | 10.0 | 2500 | 0.2617 | 0.897 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.1
- Tokenizers 0.13.3
|
gokuls/hbertv1-wt-frz-48-emotion
|
gokuls
| 2023-07-21T17:50:29Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48_frz",
"base_model:finetune:gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48_frz",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-21T17:41:30Z |
---
base_model: gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48_frz
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: hbertv1-wt-frz-48-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9195
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hbertv1-wt-frz-48-emotion
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48_frz](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48_frz) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3306
- Accuracy: 0.9195
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8962 | 1.0 | 250 | 0.3587 | 0.872 |
| 0.3328 | 2.0 | 500 | 0.3154 | 0.889 |
| 0.2269 | 3.0 | 750 | 0.2463 | 0.913 |
| 0.1687 | 4.0 | 1000 | 0.3033 | 0.912 |
| 0.1319 | 5.0 | 1250 | 0.2559 | 0.9105 |
| 0.1091 | 6.0 | 1500 | 0.2657 | 0.913 |
| 0.0809 | 7.0 | 1750 | 0.3015 | 0.913 |
| 0.0686 | 8.0 | 2000 | 0.3306 | 0.9195 |
| 0.0498 | 9.0 | 2250 | 0.3532 | 0.9195 |
| 0.0389 | 10.0 | 2500 | 0.3960 | 0.9175 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.1
- Tokenizers 0.13.3
|
vineetsharma/ppo-Pyramids
|
vineetsharma
| 2023-07-21T17:46:48Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-07-21T17:45:56Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: vineetsharma/ppo-Pyramids
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
gokuls/hbertv1-small-wt-48-emotion
|
gokuls
| 2023-07-21T17:40:32Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:gokuls/model_v1_complete_training_wt_init_48_small",
"base_model:finetune:gokuls/model_v1_complete_training_wt_init_48_small",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-21T17:36:58Z |
---
base_model: gokuls/model_v1_complete_training_wt_init_48_small
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: hbertv1-small-wt-48-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9375
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hbertv1-small-wt-48-emotion
This model is a fine-tuned version of [gokuls/model_v1_complete_training_wt_init_48_small](https://huggingface.co/gokuls/model_v1_complete_training_wt_init_48_small) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1738
- Accuracy: 0.9375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.653 | 1.0 | 250 | 0.2924 | 0.8935 |
| 0.2315 | 2.0 | 500 | 0.2199 | 0.9175 |
| 0.1722 | 3.0 | 750 | 0.1918 | 0.9235 |
| 0.1263 | 4.0 | 1000 | 0.1738 | 0.9375 |
| 0.1087 | 5.0 | 1250 | 0.1898 | 0.9295 |
| 0.0889 | 6.0 | 1500 | 0.1812 | 0.932 |
| 0.0756 | 7.0 | 1750 | 0.1978 | 0.9315 |
| 0.0652 | 8.0 | 2000 | 0.2070 | 0.931 |
| 0.0506 | 9.0 | 2250 | 0.2277 | 0.9345 |
| 0.0398 | 10.0 | 2500 | 0.2356 | 0.9335 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.1
- Tokenizers 0.13.3
|
subset-data/falcon-7b-bt
|
subset-data
| 2023-07-21T17:38:14Z | 6 | 0 |
transformers
|
[
"transformers",
"RefinedWeb",
"text-generation",
"custom_code",
"autotrain_compatible",
"4-bit",
"region:us"
] |
text-generation
| 2023-07-20T15:22:03Z |
---
pipeline_tag: text-generation
library_name: transformers
---
|
ethannhzhouu/EthanHorror5
|
ethannhzhouu
| 2023-07-21T17:37:57Z | 173 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/gpt-neo-125m",
"base_model:finetune:EleutherAI/gpt-neo-125m",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-21T17:37:18Z |
---
license: mit
base_model: EleutherAI/gpt-neo-125M
tags:
- generated_from_trainer
model-index:
- name: EthanHorror5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EthanHorror5
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0747
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 1.1872 |
| No log | 2.0 | 2 | 0.5813 |
| No log | 3.0 | 3 | 0.2518 |
| No log | 4.0 | 4 | 0.1155 |
| No log | 5.0 | 5 | 0.0747 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
helenai/deepset-xlm-roberta-large-squad2-ov
|
helenai
| 2023-07-21T17:36:52Z | 4 | 0 |
transformers
|
[
"transformers",
"openvino",
"xlm-roberta",
"question-answering",
"en",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-21T17:36:03Z |
---
language:
- en
tags:
- openvino
---
# deepset/xlm-roberta-large-squad2
This is the [deepset/xlm-roberta-large-squad2](https://huggingface.co/deepset/xlm-roberta-large-squad2) model converted to [OpenVINO](https://openvino.ai) for accelerated inference.
An example of how to do inference on this model:
```python
from optimum.intel.openvino import OVModelForQuestionAnswering
from transformers import AutoTokenizer, pipeline
# model_id should be set to either a local directory or a model available on the HuggingFace hub.
model_id = "helenai/deepset-xlm-roberta-large-squad2-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForQuestionAnswering.from_pretrained(model_id)
pipe = pipeline("question-answering", model=model, tokenizer=tokenizer)
result = pipe("What is OpenVINO?", "OpenVINO is a framework that accelerates deep learning inferencing")
print(result)
```
|
ethannhzhouu/EthanHorror4
|
ethannhzhouu
| 2023-07-21T17:34:14Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/gpt-neo-125m",
"base_model:finetune:EleutherAI/gpt-neo-125m",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-21T17:32:21Z |
---
license: mit
base_model: EleutherAI/gpt-neo-125M
tags:
- generated_from_trainer
model-index:
- name: EthanHorror4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EthanHorror4
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0187
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 3.3778 |
| No log | 2.0 | 2 | 2.7985 |
| No log | 3.0 | 3 | 2.4210 |
| No log | 4.0 | 4 | 2.1587 |
| No log | 5.0 | 5 | 2.0187 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ethannhzhouu/EthanHorror3
|
ethannhzhouu
| 2023-07-21T17:29:35Z | 216 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-21T17:28:58Z |
---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: EthanHorror3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EthanHorror3
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3888
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 4.8164 |
| No log | 2.0 | 2 | 4.5105 |
| No log | 3.0 | 3 | 4.3888 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
aarnphm/llama-2-dolly-qlora
|
aarnphm
| 2023-07-21T17:29:12Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-21T17:29:09Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
au2a/whisper-base-zh-20230721
|
au2a
| 2023-07-21T17:29:04Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"zh",
"dataset:-",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-21T09:41:35Z |
---
language:
- zh
license: apache-2.0
tags:
- whisper
- generated_from_trainer
datasets:
- '-'
model-index:
- name: whisper-base-zh-20230721 - au2a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-base-zh-20230721 - au2a
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on a Hakka audio dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4546
- Cer: 16.5974
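A minimal inference sketch with the `transformers` pipeline (the audio path is a placeholder; decoding options are left at their defaults):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for speech-to-text transcription.
asr = pipeline(
    "automatic-speech-recognition",
    model="au2a/whisper-base-zh-20230721",
)

# "sample.wav" is a placeholder path to a local audio file.
print(asr("sample.wav")["text"])
```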
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.4669 | 0.65 | 1000 | 0.6528 | 25.0548 |
| 0.2208 | 1.29 | 2000 | 0.5006 | 19.8761 |
| 0.1452 | 1.94 | 3000 | 0.4546 | 17.9497 |
| 0.0951 | 2.59 | 4000 | 0.4431 | 17.4511 |
| 0.0526 | 3.24 | 5000 | 0.4450 | 17.3113 |
| 0.0422 | 3.88 | 6000 | 0.4440 | 16.6201 |
| 0.0271 | 4.53 | 7000 | 0.4471 | 17.0658 |
| 0.0179 | 5.18 | 8000 | 0.4509 | 16.5823 |
| 0.0166 | 5.83 | 9000 | 0.4535 | 16.8543 |
| 0.0129 | 6.47 | 10000 | 0.4546 | 16.5974 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
an-atlas/moreHorror
|
an-atlas
| 2023-07-21T17:26:45Z | 150 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-21T17:22:15Z |
---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: moreHorror
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# moreHorror
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3888
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 4.8164 |
| No log | 2.0 | 2 | 4.5105 |
| No log | 3.0 | 3 | 4.3888 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ethannhzhouu/EthanHorror2
|
ethannhzhouu
| 2023-07-21T17:26:16Z | 150 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-21T17:23:56Z |
---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: EthanHorror2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EthanHorror2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3888
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 4.8164 |
| No log | 2.0 | 2 | 4.5105 |
| No log | 3.0 | 3 | 4.3888 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jorgelzn/ppo-SnowballTarget
|
jorgelzn
| 2023-07-21T17:21:56Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-07-21T16:26:04Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: jorgelzn/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
akash0/py-code-complete
|
akash0
| 2023-07-21T17:06:34Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-21T14:14:08Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_keras_callback
model-index:
- name: akash0/py-code-complete
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# akash0/py-code-complete
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.1922
- Validation Loss: 3.7943
- Epoch: 0
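A hypothetical completion example with the `transformers` pipeline; `framework="tf"` is passed because the repository ships TensorFlow weights, and the prompt is an arbitrary illustration:

```python
from transformers import pipeline

# Text-generation pipeline backed by the TensorFlow weights in this repository.
generator = pipeline(
    "text-generation",
    model="akash0/py-code-complete",
    framework="tf",
)

prompt = "def fibonacci(n):"
print(generator(prompt, max_new_tokens=40)[0]["generated_text"])
```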
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 6150, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 100, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.1922 | 3.7943 | 0 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
naimul011/finetuned_tweet_sentiment_llama-7b-100-hf
|
naimul011
| 2023-07-21T17:06:05Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-21T13:49:08Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
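A hedged loading sketch is shown below. The base checkpoint (`huggyllama/llama-7b`) and the tweet/sentiment prompt layout are assumptions, since the card does not document them; 8-bit loading matches the quantization config above and requires `bitsandbytes` and `accelerate`.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the adapter was trained on a LLaMA-7B HF checkpoint.
base_id = "huggyllama/llama-7b"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, load_in_8bit=True, device_map="auto")

# Attach the PEFT adapter weights from this repository.
model = PeftModel.from_pretrained(base, "naimul011/finetuned_tweet_sentiment_llama-7b-100-hf")

prompt = "Tweet: I love this phone!\nSentiment:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```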
|
nermine123/layoutlmv3-finetuned-cord_100
|
nermine123
| 2023-07-21T16:59:48Z | 80 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"dataset:cord-layoutlmv3",
"base_model:microsoft/layoutlmv3-base",
"base_model:finetune:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-21T16:08:24Z |
---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
datasets:
- cord-layoutlmv3
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-cord_100
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: cord-layoutlmv3
type: cord-layoutlmv3
config: cord
split: test
args: cord
metrics:
- name: Precision
type: precision
value: 0.9296817172464841
- name: Recall
type: recall
value: 0.9401197604790419
- name: F1
type: f1
value: 0.9348716040193524
- name: Accuracy
type: accuracy
value: 0.9435483870967742
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-cord_100
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the cord-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2908
- Precision: 0.9297
- Recall: 0.9401
- F1: 0.9349
- Accuracy: 0.9435
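A minimal inference sketch (the image path is a placeholder; `apply_ocr=True` uses the processor's built-in OCR and requires `pytesseract`):

```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForTokenClassification

processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)
model = AutoModelForTokenClassification.from_pretrained("nermine123/layoutlmv3-finetuned-cord_100")

# "receipt.png" is a placeholder for a document image.
image = Image.open("receipt.png").convert("RGB")
encoding = processor(image, return_tensors="pt")

outputs = model(**encoding)
predictions = outputs.logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[p] for p in predictions])
```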
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 4.17 | 250 | 1.0995 | 0.6869 | 0.7635 | 0.7231 | 0.7789 |
| 1.4568 | 8.33 | 500 | 0.5676 | 0.8382 | 0.8765 | 0.8569 | 0.8773 |
| 1.4568 | 12.5 | 750 | 0.4044 | 0.8920 | 0.9147 | 0.9032 | 0.9202 |
| 0.3562 | 16.67 | 1000 | 0.3518 | 0.9086 | 0.9229 | 0.9157 | 0.9270 |
| 0.3562 | 20.83 | 1250 | 0.3060 | 0.9245 | 0.9349 | 0.9297 | 0.9372 |
| 0.1509 | 25.0 | 1500 | 0.3032 | 0.9261 | 0.9379 | 0.9319 | 0.9419 |
| 0.1509 | 29.17 | 1750 | 0.2980 | 0.9261 | 0.9386 | 0.9323 | 0.9368 |
| 0.0848 | 33.33 | 2000 | 0.2996 | 0.9226 | 0.9371 | 0.9298 | 0.9385 |
| 0.0848 | 37.5 | 2250 | 0.2924 | 0.9276 | 0.9394 | 0.9334 | 0.9440 |
| 0.0619 | 41.67 | 2500 | 0.2908 | 0.9297 | 0.9401 | 0.9349 | 0.9435 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ByteExplorer/rl_course_vizdoom_health_gathering_supreme
|
ByteExplorer
| 2023-07-21T16:53:47Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-20T19:48:03Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 9.71 +/- 4.60
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r ByteExplorer/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy_module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train_module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
leopuv/cats_vs_dogs_classifier
|
leopuv
| 2023-07-21T16:45:22Z | 84 | 0 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"dataset:lewtun/dog_food",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-20T16:19:13Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: leopuv/cats_vs_dogs_classifier
results: []
datasets:
- lewtun/dog_food
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# leopuv/cats_vs_dogs_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0285
- Train Accuracy: 0.9865
- Validation Loss: 0.0340
- Validation Accuracy: 0.9865
- Epoch: 9
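A hypothetical inference sketch with the `transformers` image-classification pipeline; `framework="tf"` is used because the repository ships TensorFlow weights, and the image path is a placeholder:

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="leopuv/cats_vs_dogs_classifier",
    framework="tf",
)

# "photo.jpg" is a placeholder path to a local image.
print(classifier("photo.jpg"))
```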
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 80000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.1739 | 0.9715 | 0.0787 | 0.9715 | 0 |
| 0.0744 | 0.984 | 0.0432 | 0.9840 | 1 |
| 0.0543 | 0.9895 | 0.0365 | 0.9895 | 2 |
| 0.0420 | 0.9885 | 0.0346 | 0.9885 | 3 |
| 0.0402 | 0.9855 | 0.0414 | 0.9855 | 4 |
| 0.0378 | 0.9885 | 0.0307 | 0.9885 | 5 |
| 0.0306 | 0.9855 | 0.0375 | 0.9855 | 6 |
| 0.0343 | 0.987 | 0.0402 | 0.9870 | 7 |
| 0.0283 | 0.9875 | 0.0381 | 0.9875 | 8 |
| 0.0285 | 0.9865 | 0.0340 | 0.9865 | 9 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jphme/vicuna-13b-v1.3-ger-GGML
|
jphme
| 2023-07-21T16:40:44Z | 0 | 0 |
transformers
|
[
"transformers",
"text-generation",
"de",
"en",
"license:cc-by-nc-sa-4.0",
"region:us"
] |
text-generation
| 2023-07-11T11:03:11Z |
---
inference: false
license: cc-by-nc-sa-4.0
language:
- de
- en
library_name: transformers
pipeline_tag: text-generation
---
# Vicuna 13b v1.3 German GGML
These files are GGML format model files for [Vicuna 13b v1.3 German](https://huggingface.co/jphme/vicuna-13b-v1.3-ger). Please find all information about the model in the original repository.
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Prompt template:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: Hello!
ASSISTANT: Hello!</s>
USER: How are you?
ASSISTANT: I am good.</s>
```
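As a rough sketch, the template can be used from Python via `llama-cpp-python` (one of the libraries listed above). The file name must match the quantization you downloaded, and the generation settings are illustrative only:

```python
from llama_cpp import Llama

# Load the local GGML file downloaded from this repository.
llm = Llama(model_path="vicuna-13b-v1.3-ger.ggmlv3.q4_0.bin", n_ctx=2048)

prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions.\n"
    "USER: Wie geht es dir?\n"
    "ASSISTANT:"
)
out = llm(prompt, max_tokens=128, stop=["USER:"])
print(out["choices"][0]["text"])
```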
## Compatibility
### `q4_0` + `q5_1`
So far, I only quantized a `q4_0` and `q5_1` version for my own use. Please let me know if there is demand for other quantizations.
These should be compatible with any UIs, tools, and libraries released since late May.
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| vicuna-13b-v1.3-ger.ggmlv3.q4_0.bin | q4_0 | 4 | 7.37 GB | ~9.8 GB | Original llama.cpp quant method, 4-bit. |
| vicuna-13b-v1.3-ger.ggmlv3.q5_1.bin | q5_1 | 5 | 9.78 GB | ~12.3 GB | Original llama.cpp quant method, 5-bit. Higher accuracy than q4_0, at the cost of higher resource usage and slower inference. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m vicuna-13b-v1.3-ger.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "You are a story writing assistant who writes very long, detailed and interesting stories\n\nUser:\nWrite a story about llamas\nAssistant:\n"
```
If you're able to use full GPU offloading, you should use `-t 1` to get best performance.
If not able to fully offload to GPU, you should use more cores. Change `-t 10` to the number of physical CPU cores you have, or a lower number depending on what gives best performance.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
## Thanks
Special thanks to [LMSYS](https://huggingface.co/lmsys) for the great Vicuna base model and [TheBloke](https://huggingface.co/TheBloke) for his great work quantizing billions of models (and for his template for this README).
|
jphme/vicuna-13b-v1.3-ger
|
jphme
| 2023-07-21T16:36:09Z | 10 | 8 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"de",
"en",
"arxiv:2302.13971",
"arxiv:2306.05685",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-11T11:02:48Z |
---
language:
- de
- en
pipeline_tag: text-generation
inference: false
---
# Vicuna 13b v1.3 German
vicuna-13b-v1.3-ger is a variant of [LMSYS](https://huggingface.co/lmsys)'s [Vicuna 13b v1.3](https://huggingface.co/lmsys/vicuna-13b-v1.3) model, finetuned on an additional dataset in the German language. The original Vicuna model was fine-tuned from LLaMA on user-shared conversations collected from ShareGPT.
This model is optimized for German text, providing proficiency in understanding, generating, and interacting with German-language content. However, the model is not yet fully optimized for German, as it has been finetuned on a small, experimental dataset and has limited capabilities due to the small parameter count.
Some of the finetuning data also targets factual retrieval (answer only from information given in the context and refuse to hallucinate), so the model should perform better on these tasks than the original Vicuna.
I am working on improving the model's capabilities and will update the model if there is sufficient interest.
A quantized GGML version for use with llama.cpp, kobold.cpp and other GUIs for CPU inference can be found [here](https://huggingface.co/jphme/vicuna-13b-v1.3-ger-GGML).
## Prompt Template
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: Hello!
ASSISTANT: Hello!</s>
USER: How are you?
ASSISTANT: I am good.</s>
```
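A minimal `transformers` sketch following this template (generation settings are illustrative, not tuned recommendations):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jphme/vicuna-13b-v1.3-ger"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions.\n"
    "USER: Was ist die Hauptstadt von Deutschland?\n"
    "ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```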
## Results
I only evaluated the output on a small, handcrafted set of German test prompts, confirming that the model's ability to understand and generate German text is above the base model's in many situations.
## Problems
There might be inconsistencies in multi-turn chat applications, as there was a small problem with the `<eos>` tokens during preparation of the finetuning dataset.
Please report any problems so I can fix this for the next version.
---------------------------
# Original Vicuna Model Card
## Model Details
Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).
### Model Sources
- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/
## Uses
The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## How to Get Started with the Model
- Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights.
- APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api.
## Training Details
Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning.
The training data is around 140K conversations collected from ShareGPT.com.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).
## Evaluation
Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).
## Difference between different versions of Vicuna
See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
|
Darisian/taxi-v3
|
Darisian
| 2023-07-21T16:35:55Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-21T16:35:52Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # or: import gymnasium as gym, depending on your setup

# `load_from_hub` is the helper defined in the Deep RL course notebook (it downloads
# the pickled Q-table from the Hub); it is not a function from a published library.
model = load_from_hub(repo_id="Darisian/taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
OPERFIND/step2
|
OPERFIND
| 2023-07-21T16:34:52Z | 8 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-20T18:19:04Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### step2 Dreambooth model trained by OPERFIND with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
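A hedged `diffusers` sketch for local inference; the instance token `step2` is assumed from the repository name, so check the prompt actually used during training in the notebook:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("OPERFIND/step2", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "a photo of step2" assumes the instance token matches the repo name.
image = pipe("a photo of step2", num_inference_steps=30).images[0]
image.save("step2.png")
```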
Sample pictures of this concept:
|
tyzp-INC/few-mjwong
|
tyzp-INC
| 2023-07-21T16:29:00Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-07-21T16:25:39Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# tyzp-INC/few-mjwong
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("tyzp-INC/few-mjwong")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
michaelfeil/ct2fast-Llama-2-13b-chat-hf
|
michaelfeil
| 2023-07-21T16:19:27Z | 6 | 4 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"ctranslate2",
"int8",
"float16",
"facebook",
"meta",
"pytorch",
"llama-2",
"en",
"arxiv:2307.09288",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-07-18T21:22:27Z |
---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
This is a form to enable access to Llama 2 on Hugging Face after you have been
granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our
license terms and acceptable use policy before submitting this form. Requests
will be processed in 1-2 days.
extra_gated_prompt: "**Your Hugging Face account email address MUST match the email you provide on the Meta website, or your request will not be approved.**"
extra_gated_button_content: Submit
extra_gated_fields:
I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- ctranslate2
- int8
- float16
- facebook
- meta
- pytorch
- llama
- llama-2
---
# Fast-Inference with CTranslate2
Speed up inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
quantized version of [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)
```bash
pip install hf-hub-ctranslate2>=2.12.0 ctranslate2>=3.17.1
```
```python
# from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-Llama-2-13b-chat-hf"
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
# tokenizer=AutoTokenizer.from_pretrained("{ORG}/{NAME}")
)
outputs = model.generate(
text=["def fibonnaci(", "User: How are you doing? Bot:"],
max_length=64,
include_prompt_in_result=False
)
print(outputs)
```
Checkpoint compatible with [ctranslate2>=3.17.1](https://github.com/OpenNMT/CTranslate2)
and [hf-hub-ctranslate2>=2.12.0](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
Converted on 2023-07-21 using
```
LLama-2 -> removed <pad> token.
```
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to the original Hugging Face repo.
# Original description
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
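As a rough illustration of that formatting (the system prompt below is a placeholder, not Meta's reference system prompt), a single-turn prompt can be assembled like this:

```python
# Illustrative single-turn prompt assembly for the chat variants.
system_prompt = "You are a helpful assistant."
user_message = "What is the capital of France?"

prompt = f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_message.strip()} [/INST]"
print(prompt)
```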
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat-hf)|
|
michaelfeil/ct2fast-Llama-2-13b-hf
|
michaelfeil
| 2023-07-21T16:17:17Z | 6 | 1 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"ctranslate2",
"int8",
"float16",
"facebook",
"meta",
"pytorch",
"llama-2",
"en",
"arxiv:2307.09288",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-07-18T21:56:56Z |
---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
This is a form to enable access to Llama 2 on Hugging Face after you have been
granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our
license terms and acceptable use policy before submitting this form. Requests
will be processed in 1-2 days.
extra_gated_prompt: "**Your Hugging Face account email address MUST match the email you provide on the Meta website, or your request will not be approved.**"
extra_gated_button_content: Submit
extra_gated_fields:
I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- ctranslate2
- int8
- float16
- facebook
- meta
- pytorch
- llama
- llama-2
---
# Fast-Inference with CTranslate2
Speed up inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
This is a quantized version of [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf).
```bash
pip install "hf-hub-ctranslate2>=2.12.0" "ctranslate2>=3.17.1"
```
```python
# from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-Llama-2-13b-hf"
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
# tokenizer=AutoTokenizer.from_pretrained("{ORG}/{NAME}")
)
outputs = model.generate(
text=["def fibonnaci(", "User: How are you doing? Bot:"],
max_length=64,
include_prompt_in_result=False
)
print(outputs)
```
Checkpoint compatible to [ctranslate2>=3.17.1](https://github.com/OpenNMT/CTranslate2)
and [hf-hub-ctranslate2>=2.12.0](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
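For example, a minimal sketch of loading the same checkpoint on CPU with int8 weights, using the `GeneratorCT2fromHfHub` API shown above (the prompt text and `max_length` are illustrative):
```python
# CPU variant of the snippet above: int8 weights, no GPU required.
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub

model = GeneratorCT2fromHfHub(
    model_name_or_path="michaelfeil/ct2fast-Llama-2-13b-hf",
    device="cpu",          # run on CPU ...
    compute_type="int8",   # ... with int8 quantization, as listed above
)
outputs = model.generate(
    text=["The capital of France is"],
    max_length=32,
    include_prompt_in_result=False,
)
print(outputs)
```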
Converted on 2023-07-21. During conversion, the `<pad>` token was removed from the Llama-2 vocabulary.
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to the original Hugging Face repo.
# Original description
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespace and line breaks in between (we recommend calling `strip()` on inputs to avoid double spaces). See our reference code in GitHub for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat-hf)|
|
michaelfeil/ct2fast-Llama-2-7b-chat-hf
|
michaelfeil
| 2023-07-21T16:14:52Z | 13 | 4 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"ctranslate2",
"int8",
"float16",
"facebook",
"meta",
"pytorch",
"llama-2",
"en",
"arxiv:2307.09288",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-07-18T20:39:17Z |
---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
This is a form to enable access to Llama 2 on Hugging Face after you have been
granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our
license terms and acceptable use policy before submitting this form. Requests
will be processed in 1-2 days.
extra_gated_prompt: "**Your Hugging Face account email address MUST match the email you provide on the Meta website, or your request will not be approved.**"
extra_gated_button_content: Submit
extra_gated_fields:
I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox
language:
- en
pipeline_tag: text-generation
inference: false
arxiv: 2307.09288
tags:
- ctranslate2
- int8
- float16
- facebook
- meta
- pytorch
- llama
- llama-2
---
# Fast-Inference with CTranslate2
Speed up inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
This is a quantized version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf).
```bash
pip install "hf-hub-ctranslate2>=2.12.0" "ctranslate2>=3.17.1"
```
```python
# from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-Llama-2-7b-chat-hf"
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
# tokenizer=AutoTokenizer.from_pretrained("{ORG}/{NAME}")
)
outputs = model.generate(
text=["def fibonnaci(", "User: How are you doing? Bot:"],
max_length=64,
include_prompt_in_result=False
)
print(outputs)
```
Checkpoint compatible to [ctranslate2>=3.17.1](https://github.com/OpenNMT/CTranslate2)
and [hf-hub-ctranslate2>=2.12.0](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
Converted on 2023-07-21. During conversion, the `<pad>` token was removed from the Llama-2 vocabulary.
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to the original Hugging Face repo.
# Original description
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespace and line breaks in between (we recommend calling `strip()` on inputs to avoid double spaces). See our reference code in GitHub for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
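As a rough single-turn sketch of that layout (the `build_prompt` helper is illustrative; for multi-turn dialogue and exact token handling, defer to the linked `chat_completion` reference):
```python
# Sketch of the single-turn Llama-2-Chat prompt format described above.
# BOS/EOS tokens are added by the tokenizer/generator; this string only
# covers the [INST] and <<SYS>> wrapping. Multi-turn handling is omitted.
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def build_prompt(system_prompt: str, user_message: str) -> str:
    # strip() guards against double spaces, as recommended above
    return f"{B_INST} {B_SYS}{system_prompt}{E_SYS}{user_message.strip()} {E_INST}"

prompt = build_prompt(
    "You are a helpful, respectful and honest assistant.",
    "How are you doing?",
)
# The resulting string can then be passed to model.generate(text=[prompt], ...)
```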
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat-hf)|
|