| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-03 00:36:49 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 535 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-03 00:36:49 |
| card | string | length 11 to 1.01M |
stillerman/poke-lora
|
stillerman
| 2023-08-02T19:32:56Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-02T19:26:07Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - stillerman/poke-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5, fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images below.
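The card does not include a loading snippet; a minimal sketch with `diffusers` (assuming a version that supports `load_lora_weights`, with an illustrative prompt) might look like this:
```python
import torch
from diffusers import StableDiffusionPipeline

# Sketch: load the base model, attach the LoRA weights, and generate one image.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("stillerman/poke-lora")

image = pipe("a cute green pokemon with big eyes").images[0]
image.save("pokemon.png")
```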




|
jordyvl/dit-base-finetuned-rvlcdip-tiny_rvl_cdip-NK1000_hint
|
jordyvl
| 2023-08-02T19:18:14Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-28T03:01:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-base-finetuned-rvlcdip-tiny_rvl_cdip-NK1000_hint
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-base-finetuned-rvlcdip-tiny_rvl_cdip-NK1000_hint
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7080
- Accuracy: 0.8275
- Brier Loss: 0.3142
- Nll: 2.0399
- F1 Micro: 0.8275
- F1 Macro: 0.8270
- Ece: 0.1526
- Aurc: 0.0520
## Model description
More information needed
## Intended uses & limitations
More information needed
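No usage example is given in the card; a minimal inference sketch using the standard `transformers` image-classification pipeline (the input path `document.png` is illustrative) could be:
```python
from transformers import pipeline

# Sketch: classify a scanned document page with the fine-tuned model.
classifier = pipeline(
    "image-classification",
    model="jordyvl/dit-base-finetuned-rvlcdip-tiny_rvl_cdip-NK1000_hint",
)
print(classifier("document.png"))  # illustrative path to an RVL-CDIP-style page image
```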
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
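As a rough translation, the list above corresponds to `transformers.TrainingArguments` along these lines (the `output_dir` and any options not listed are assumptions or library defaults):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="dit-base-finetuned-rvlcdip-tiny_rvl_cdip-NK1000_hint",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults.
)
```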
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| 3.111 | 1.0 | 1000 | 2.9416 | 0.5917 | 0.5262 | 2.5737 | 0.5917 | 0.5835 | 0.0528 | 0.1821 |
| 2.5832 | 2.0 | 2000 | 2.4518 | 0.6917 | 0.4147 | 2.1569 | 0.6917 | 0.6919 | 0.0508 | 0.1107 |
| 2.2618 | 3.0 | 3000 | 2.2194 | 0.7418 | 0.3548 | 2.0905 | 0.7418 | 0.7417 | 0.0384 | 0.0775 |
| 2.0277 | 4.0 | 4000 | 2.1469 | 0.7575 | 0.3418 | 2.0638 | 0.7575 | 0.7547 | 0.0661 | 0.0729 |
| 1.9024 | 5.0 | 5000 | 2.1380 | 0.7355 | 0.3703 | 2.0583 | 0.7355 | 0.7365 | 0.0619 | 0.0880 |
| 1.7315 | 6.0 | 6000 | 2.0423 | 0.7508 | 0.3495 | 2.0467 | 0.7508 | 0.7566 | 0.0631 | 0.0752 |
| 1.5844 | 7.0 | 7000 | 2.0832 | 0.7628 | 0.3382 | 2.1301 | 0.7628 | 0.7651 | 0.0953 | 0.0689 |
| 1.4761 | 8.0 | 8000 | 2.2224 | 0.773 | 0.3548 | 2.1347 | 0.7730 | 0.7734 | 0.1284 | 0.0708 |
| 1.3852 | 9.0 | 9000 | 2.2341 | 0.7853 | 0.3452 | 2.0905 | 0.7853 | 0.7874 | 0.1349 | 0.0614 |
| 1.3234 | 10.0 | 10000 | 2.3403 | 0.778 | 0.3614 | 2.1125 | 0.778 | 0.7797 | 0.1530 | 0.0649 |
| 1.2546 | 11.0 | 11000 | 2.4153 | 0.7768 | 0.3675 | 2.1438 | 0.7768 | 0.7772 | 0.1601 | 0.0649 |
| 1.2161 | 12.0 | 12000 | 2.5661 | 0.7742 | 0.3810 | 2.1581 | 0.7742 | 0.7752 | 0.1715 | 0.0669 |
| 1.1611 | 13.0 | 13000 | 2.5638 | 0.789 | 0.3616 | 2.0957 | 0.7890 | 0.7888 | 0.1648 | 0.0595 |
| 1.1349 | 14.0 | 14000 | 2.6037 | 0.7957 | 0.3569 | 2.1299 | 0.7957 | 0.7963 | 0.1641 | 0.0578 |
| 1.1043 | 15.0 | 15000 | 2.6763 | 0.7817 | 0.3786 | 2.1078 | 0.7817 | 0.7855 | 0.1755 | 0.0680 |
| 1.0768 | 16.0 | 16000 | 2.6931 | 0.792 | 0.3636 | 2.1056 | 0.792 | 0.7942 | 0.1679 | 0.0601 |
| 1.0675 | 17.0 | 17000 | 2.6384 | 0.7957 | 0.3549 | 2.1658 | 0.7957 | 0.7941 | 0.1651 | 0.0570 |
| 1.0387 | 18.0 | 18000 | 2.8320 | 0.7825 | 0.3899 | 2.1964 | 0.7825 | 0.7804 | 0.1804 | 0.0706 |
| 1.035 | 19.0 | 19000 | 2.7127 | 0.7947 | 0.3641 | 2.0771 | 0.7947 | 0.7981 | 0.1741 | 0.0607 |
| 1.0053 | 20.0 | 20000 | 2.7164 | 0.8035 | 0.3508 | 2.0693 | 0.8035 | 0.8017 | 0.1638 | 0.0594 |
| 0.9783 | 21.0 | 21000 | 2.7162 | 0.8085 | 0.3475 | 2.0165 | 0.8085 | 0.8080 | 0.1622 | 0.0601 |
| 0.9606 | 22.0 | 22000 | 2.7740 | 0.804 | 0.3505 | 2.0738 | 0.804 | 0.8057 | 0.1678 | 0.0585 |
| 0.9579 | 23.0 | 23000 | 2.7597 | 0.803 | 0.3544 | 2.0507 | 0.803 | 0.8038 | 0.1668 | 0.0600 |
| 0.9439 | 24.0 | 24000 | 2.7108 | 0.809 | 0.3407 | 2.0218 | 0.809 | 0.8099 | 0.1626 | 0.0574 |
| 0.9247 | 25.0 | 25000 | 2.6918 | 0.8125 | 0.3355 | 2.0449 | 0.8125 | 0.8114 | 0.1580 | 0.0549 |
| 0.9275 | 26.0 | 26000 | 2.6996 | 0.8163 | 0.3316 | 2.0140 | 0.8163 | 0.8159 | 0.1585 | 0.0582 |
| 0.914 | 27.0 | 27000 | 2.7846 | 0.8113 | 0.3389 | 2.0190 | 0.8113 | 0.8110 | 0.1626 | 0.0598 |
| 0.9036 | 28.0 | 28000 | 2.7436 | 0.817 | 0.3341 | 2.0702 | 0.817 | 0.8166 | 0.1587 | 0.0564 |
| 0.893 | 29.0 | 29000 | 2.7354 | 0.8197 | 0.3272 | 2.0581 | 0.8197 | 0.8207 | 0.1551 | 0.0588 |
| 0.8815 | 30.0 | 30000 | 2.8377 | 0.813 | 0.3414 | 2.1163 | 0.813 | 0.8149 | 0.1630 | 0.0614 |
| 0.8688 | 31.0 | 31000 | 2.7815 | 0.8207 | 0.3310 | 2.0502 | 0.8207 | 0.8205 | 0.1576 | 0.0554 |
| 0.8727 | 32.0 | 32000 | 2.7370 | 0.82 | 0.3292 | 2.1149 | 0.82 | 0.8193 | 0.1563 | 0.0545 |
| 0.8581 | 33.0 | 33000 | 2.8168 | 0.812 | 0.3443 | 2.0026 | 0.8120 | 0.8146 | 0.1658 | 0.0594 |
| 0.8504 | 34.0 | 34000 | 2.7660 | 0.8173 | 0.3321 | 2.0497 | 0.8173 | 0.8181 | 0.1597 | 0.0556 |
| 0.8563 | 35.0 | 35000 | 2.8457 | 0.8097 | 0.3442 | 2.0815 | 0.8097 | 0.8107 | 0.1669 | 0.0592 |
| 0.8415 | 36.0 | 36000 | 2.7366 | 0.8245 | 0.3179 | 2.0282 | 0.8245 | 0.8251 | 0.1511 | 0.0566 |
| 0.8372 | 37.0 | 37000 | 2.7731 | 0.821 | 0.3249 | 2.1084 | 0.821 | 0.8198 | 0.1563 | 0.0546 |
| 0.8406 | 38.0 | 38000 | 2.6948 | 0.8283 | 0.3131 | 2.0343 | 0.8283 | 0.8281 | 0.1493 | 0.0533 |
| 0.831 | 39.0 | 39000 | 2.7781 | 0.827 | 0.3192 | 2.0592 | 0.827 | 0.8270 | 0.1534 | 0.0544 |
| 0.8223 | 40.0 | 40000 | 2.7811 | 0.8267 | 0.3161 | 2.0946 | 0.8267 | 0.8271 | 0.1512 | 0.0570 |
| 0.8258 | 41.0 | 41000 | 2.6993 | 0.827 | 0.3138 | 2.0347 | 0.827 | 0.8271 | 0.1507 | 0.0531 |
| 0.8209 | 42.0 | 42000 | 2.7467 | 0.828 | 0.3197 | 2.0159 | 0.828 | 0.8279 | 0.1530 | 0.0541 |
| 0.8146 | 43.0 | 43000 | 2.7050 | 0.8257 | 0.3159 | 2.0518 | 0.8257 | 0.8249 | 0.1526 | 0.0523 |
| 0.8161 | 44.0 | 44000 | 2.6919 | 0.8257 | 0.3160 | 1.9889 | 0.8257 | 0.8255 | 0.1515 | 0.0530 |
| 0.8121 | 45.0 | 45000 | 2.7314 | 0.8235 | 0.3210 | 2.0259 | 0.8235 | 0.8244 | 0.1542 | 0.0537 |
| 0.809 | 46.0 | 46000 | 2.7203 | 0.8275 | 0.3146 | 2.0431 | 0.8275 | 0.8272 | 0.1526 | 0.0514 |
| 0.8091 | 47.0 | 47000 | 2.7174 | 0.826 | 0.3176 | 2.0313 | 0.826 | 0.8253 | 0.1534 | 0.0527 |
| 0.8073 | 48.0 | 48000 | 2.7058 | 0.8277 | 0.3130 | 2.0258 | 0.8277 | 0.8272 | 0.1515 | 0.0519 |
| 0.8073 | 49.0 | 49000 | 2.7065 | 0.827 | 0.3146 | 2.0301 | 0.827 | 0.8266 | 0.1528 | 0.0523 |
| 0.8069 | 50.0 | 50000 | 2.7080 | 0.8275 | 0.3142 | 2.0399 | 0.8275 | 0.8270 | 0.1526 | 0.0520 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
NasimB/bnc_spoken_cbt_log_rarity-mixed-seed
|
NasimB
| 2023-08-02T19:07:45Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-02T16:28:25Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: bnc_spoken_cbt_log_rarity-mixed-seed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bnc_spoken_cbt_log_rarity-mixed-seed
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1428
## Model description
More information needed
## Intended uses & limitations
More information needed
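No usage example is given in the card; a minimal text-generation sketch with the `transformers` pipeline (the prompt is illustrative) might be:
```python
from transformers import pipeline

# Sketch: generate a short continuation with the fine-tuned GPT-2 model.
generator = pipeline(
    "text-generation",
    model="NasimB/bnc_spoken_cbt_log_rarity-mixed-seed",
)
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```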
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3612 | 0.29 | 500 | 5.3449 |
| 5.057 | 0.59 | 1000 | 4.9241 |
| 4.719 | 0.88 | 1500 | 4.7007 |
| 4.4572 | 1.17 | 2000 | 4.5642 |
| 4.313 | 1.46 | 2500 | 4.4451 |
| 4.2078 | 1.76 | 3000 | 4.3450 |
| 4.0893 | 2.05 | 3500 | 4.2776 |
| 3.9066 | 2.34 | 4000 | 4.2323 |
| 3.8811 | 2.63 | 4500 | 4.1744 |
| 3.8449 | 2.93 | 5000 | 4.1258 |
| 3.6444 | 3.22 | 5500 | 4.1177 |
| 3.599 | 3.51 | 6000 | 4.0930 |
| 3.5865 | 3.81 | 6500 | 4.0622 |
| 3.4817 | 4.1 | 7000 | 4.0622 |
| 3.3285 | 4.39 | 7500 | 4.0593 |
| 3.3233 | 4.68 | 8000 | 4.0477 |
| 3.3173 | 4.98 | 8500 | 4.0366 |
| 3.1582 | 5.27 | 9000 | 4.0495 |
| 3.1499 | 5.56 | 9500 | 4.0489 |
| 3.1452 | 5.85 | 10000 | 4.0490 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
spygaurad/spygaurad-bengla_asr_cv12-benglaASR_cv12
|
spygaurad
| 2023-08-02T19:03:40Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T19:03:38Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
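The same settings expressed as a `transformers.BitsAndBytesConfig` (a sketch; the base model this adapter was trained on is not named in the card):
```python
from transformers import BitsAndBytesConfig

# The bitsandbytes settings listed above; anything not listed keeps its default.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype="float32",
)
```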
### Framework versions
- PEFT 0.5.0.dev0
|
DRAGOO/Whisper_with_YAssine
|
DRAGOO
| 2023-08-02T18:53:54Z | 64 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"ht",
"fr",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-02T12:17:15Z |
---
license: openrail
language:
- ht
- fr
- en
---
|
ailabturkiye/Darth_Vader
|
ailabturkiye
| 2023-08-02T18:21:15Z | 0 | 0 | null |
[
"starwars",
"darth-vader",
"game",
"movie",
"tr",
"license:openrail",
"region:us"
] | null | 2023-08-02T17:29:42Z |
---
license: openrail
language:
- tr
metrics:
- character
tags:
- starwars
- darth-vader
- game
- movie
---
Darth Vader
Peace is a lie, there is only passion.
Through passion, I gain strength.
Through strength, I gain power.
Through power, I gain victory.
Through victory, my chains are broken.
The Force shall free me.
|
AtilliO/Reinforce-Invada
|
AtilliO
| 2023-08-02T18:13:08Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-02T18:12:54Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Invada
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 167.74 +/- 6.19
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
hs1710/ppo-LunarLander-v2
|
hs1710
| 2023-08-02T18:12:56Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-02T18:12:33Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 282.27 +/- 23.66
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repository is an assumption following the course convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it; the filename is assumed.
checkpoint = load_from_hub(repo_id="hs1710/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
slarkprime/Llama-2-7b-chat-QLoRA-test
|
slarkprime
| 2023-08-02T17:56:59Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T12:25:27Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
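A minimal loading sketch for the adapter; the base model id `meta-llama/Llama-2-7b-chat-hf` is an assumption inferred from the repository name and is not stated in the card:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "meta-llama/Llama-2-7b-chat-hf"  # assumed base model

# Reproduce the 4-bit NF4 settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, "slarkprime/Llama-2-7b-chat-QLoRA-test")
```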
### Framework versions
- PEFT 0.5.0.dev0
|
Lamurias/ppo-LunarLander-v2
|
Lamurias
| 2023-08-02T17:56:49Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-02T17:56:27Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 251.85 +/- 18.82
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repository is an assumption following the course convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it; the filename is assumed.
checkpoint = load_from_hub(repo_id="Lamurias/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
winstxnhdw/replit-code-v1-3b-ct2-int8
|
winstxnhdw
| 2023-08-02T17:47:55Z | 6 | 0 |
transformers
|
[
"transformers",
"code",
"dataset:bigcode/the-stack-dedup",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2023-08-02T14:03:10Z |
---
license: cc-by-sa-4.0
datasets:
- bigcode/the-stack-dedup
language:
- code
tags:
- code
---
# replit-code-v1-3b-ct2-int8
This model is used in [Wingman](https://github.com/winstxnhdw/Wingman).
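The card gives no usage snippet; a rough CTranslate2 sketch (the local model directory, the tokenizer repo, and the generation settings are all assumptions):
```python
import ctranslate2
from transformers import AutoTokenizer

# Assumes the repository has been downloaded to ./replit-code-v1-3b-ct2-int8
generator = ctranslate2.Generator("replit-code-v1-3b-ct2-int8", compute_type="int8")
# The tokenizer is assumed to come from the original replit/replit-code-v1-3b repo.
tokenizer = AutoTokenizer.from_pretrained("replit/replit-code-v1-3b", trust_remote_code=True)

prompt = "def fibonacci(n):"
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))
results = generator.generate_batch([tokens], max_length=64, sampling_temperature=0.2)
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(results[0].sequences[0])))
```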
|
sillyyaws/EpicWubbox
|
sillyyaws
| 2023-08-02T17:41:24Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-08-02T17:15:32Z |
---
license: bigscience-openrail-m
---
|
MayaPH/GodziLLa-30B
|
MayaPH
| 2023-08-02T17:29:41Z | 1,519 | 10 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"merge",
"mix",
"cot",
"arxiv:2009.03300",
"arxiv:1803.05457",
"arxiv:1905.07830",
"arxiv:2109.07958",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-08T20:11:22Z |
---
pipeline_tag: text-generation
license: cc-by-nc-4.0
inference: false
tags:
- merge
- mix
- cot
---
<img src="https://drive.google.com/uc?export=view&id=16DzZwhqybQvT1wQVp-6qXHI9HhKft6CR" width="50%" alt="GodziLLa-30B">
Released July 9, 2023
## Model Description
GodziLLa-30B is an experimental combination of various proprietary Maya LoRAs with CalderaAI's [Lazarus-30B](https://huggingface.co/CalderaAI/30B-Lazarus). This composite model is not meant for any use outside of research on competing LoRA adapter behavior. More specifically, since this is inherently a LLaMA model, **commercial use is prohibited**. This model's primary purpose is to stress test the limitations of composite LLMs and observe its performance with respect to other LLMs available on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

## Open LLM Leaderboard Metrics
| Metric | Value |
|-----------------------|-------|
| MMLU (5-shot) | 54.2 |
| ARC (25-shot) | 61.5 |
| HellaSwag (10-shot) | 82.1 |
| TruthfulQA (0-shot) | 55.9 |
| Average | 63.4 |
According to the leaderboard description, here are the benchmarks used for the evaluation:
- [MMLU](https://arxiv.org/abs/2009.03300) (5-shot) - a test to measure a text model’s multitask accuracy. The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more.
- [AI2 Reasoning Challenge](https://arxiv.org/abs/1803.05457) (ARC, 25-shot) - a set of grade-school science questions.
- [HellaSwag](https://arxiv.org/abs/1905.07830) (10-shot) - a test of commonsense inference, which is easy for humans (~95%) but challenging for SOTA models.
- [TruthfulQA](https://arxiv.org/abs/2109.07958) (0-shot) - a test to measure a model’s propensity to reproduce falsehoods commonly found online.
## Leaderboard Highlights (as of July 22, 2023)
- GodziLLa-30B is on par with [Falcon-40B-instruct](https://huggingface.co/tiiuae/falcon-40b-instruct) (June 2023's Rank #1).
- GodziLLa-30B outperforms Meta AI's LLaMA [30B](https://ai.meta.com/blog/large-language-model-llama-meta-ai/) model.
- GodziLLa-30B ranks 4th worldwide, for open-source LLMs, on the [TruthfulQA](https://arxiv.org/abs/2109.07958) benchmark.
- GodziLLa-30B beats [GPT-3.5 175B](https://platform.openai.com/docs/models/gpt-3-5) (text-davinci-003) on the [TruthfulQA](https://arxiv.org/abs/2109.07958) benchmark and performs closely (< 4%) on the [HellaSwag](https://arxiv.org/abs/1905.07830) benchmark.*
*Based on a [leaderboard clone](https://huggingface.co/spaces/gsaivinay/open_llm_leaderboard) with GPT-3.5 and GPT-4 included.
## Recommended Prompt Format
The Alpaca instruction format is the recommended prompt format, though Vicuna's instruction format may also work.
## Usage
To use GodziLLa-30B, you are required to provide attribution in accordance with the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. Please include the following attribution notice when utilizing GodziLLa-30B in your work:
```python
# This code uses GodziLLa-30B, a language model developed by Maya Philippines.
# The model is licensed under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.
# For more information, visit: https://creativecommons.org/licenses/by-nc/4.0/
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("MayaPH/GodziLLa-30B")
model = AutoModelForCausalLM.from_pretrained("MayaPH/GodziLLa-30B")
```
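Building on the snippet above, a short generation sketch using the recommended Alpaca-style instruction format (the instruction text and generation settings are illustrative):
```python
# Alpaca-style prompt; replace the instruction with your own task.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a LoRA adapter is.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```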
Please ensure that you include the relevant attribution notice in your code or any other form of usage and restrict your usage to non-commercial use to comply with the license terms.
## Ethical Considerations
When using GodziLLa-30B, it is important to consider the following ethical considerations:
1. **Privacy and Security:** Avoid sharing sensitive personal information while interacting with the model. The model does not have privacy safeguards, so exercise caution when discussing personal or confidential matters.
2. **Fairness and Bias:** The model's responses may reflect biases present in the training data. Be aware of potential biases and make an effort to evaluate responses critically and fairly.
3. **Transparency:** The model operates as a predictive text generator based on patterns learned from the training data. The model's inner workings and the specific training data used are proprietary and not publicly available.
4. **User Responsibility:** Users should take responsibility for their own decisions and not solely rely on the information provided by the model. Consult with the appropriate professionals or reliable sources for specific advice or recommendations.
5. **NSFW Content:** The model is a merge of multiple model checkpoints and LoRA adapters. It is highly likely that the resulting model contains uncensored content that may include, but is not limited to, violence, gore, explicit language, and sexual content. If you plan to further refine this model for safe/aligned usage, you are highly encouraged to implement guardrails along with it.
## Further Information
For additional information or inquiries about GodziLLa-30B, please contact the Maya Philippines iOps Team via [email protected].
## Disclaimer
GodziLLa-30B is an AI language model from Maya Philippines. It is provided "as is" without warranty of any kind, express or implied. The model developers and Maya Philippines shall not be liable for any direct or indirect damages arising from the use of this model.
## Acknowledgments
The development of GodziLLa-30B was made possible by Maya Philippines, through the curation of various proprietary datasets and the creation of the different proprietary LoRA adapters.
|
KingKazma/cnn_dailymail_108_50000_25000_validation
|
KingKazma
| 2023-08-02T17:17:20Z | 4 | 0 |
bertopic
|
[
"bertopic",
"text-classification",
"region:us"
] |
text-classification
| 2023-08-02T17:17:18Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# cnn_dailymail_108_50000_25000_validation
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("KingKazma/cnn_dailymail_108_50000_25000_validation")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 92
* Number of training documents: 13368
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | said - police - one - year - also | 5 | -1_said_police_one_year |
| 0 | league - game - player - goal - season | 4918 | 0_league_game_player_goal |
| 1 | isis - syria - islamic - group - iraq | 2700 | 1_isis_syria_islamic_group |
| 2 | dog - animal - elephant - bear - cat | 415 | 2_dog_animal_elephant_bear |
| 3 | labour - mr - party - election - cameron | 386 | 3_labour_mr_party_election |
| 4 | flight - plane - aircraft - pilot - crash | 340 | 4_flight_plane_aircraft_pilot |
| 5 | hair - fashion - dress - look - model | 248 | 5_hair_fashion_dress_look |
| 6 | car - driver - driving - road - police | 227 | 6_car_driver_driving_road |
| 7 | food - cent - sugar - health - per | 221 | 7_food_cent_sugar_health |
| 8 | police - officer - shooting - shot - said | 215 | 8_police_officer_shooting_shot |
| 9 | clinton - email - obama - president - state | 213 | 9_clinton_email_obama_president |
| 10 | cricket - england - cup - world - zealand | 191 | 10_cricket_england_cup_world |
| 11 | property - house - home - room - price | 184 | 11_property_house_home_room |
| 12 | fight - pacquiao - mayweather - manny - floyd | 171 | 12_fight_pacquiao_mayweather_manny |
| 13 | hamilton - mercedes - race - prix - rosberg | 135 | 13_hamilton_mercedes_race_prix |
| 14 | baby - hospital - birth - mother - child | 127 | 14_baby_hospital_birth_mother |
| 15 | murray - wells - tennis - andy - match | 127 | 15_murray_wells_tennis_andy |
| 16 | eclipse - earth - solar - sun - planet | 102 | 16_eclipse_earth_solar_sun |
| 17 | police - abuse - sex - sexual - child | 98 | 17_police_abuse_sex_sexual |
| 18 | apple - watch - device - user - google | 96 | 18_apple_watch_device_user |
| 19 | netanyahu - iran - nuclear - israel - israeli | 83 | 19_netanyahu_iran_nuclear_israel |
| 20 | putin - russian - nemtsov - moscow - russia | 82 | 20_putin_russian_nemtsov_moscow |
| 21 | weight - fat - diet - size - stone | 81 | 21_weight_fat_diet_size |
| 22 | race - armstrong - doping - world - tour | 78 | 22_race_armstrong_doping_world |
| 23 | court - fraud - money - bank - mr | 76 | 23_court_fraud_money_bank |
| 24 | cheltenham - hurdle - horse - race - jockey | 74 | 24_cheltenham_hurdle_horse_race |
| 25 | mcilroy - round - masters - woods - golf | 72 | 25_mcilroy_round_masters_woods |
| 26 | prince - charles - royal - duchess - camilla | 72 | 26_prince_charles_royal_duchess |
| 27 | fraternity - university - sae - chapter - oklahoma | 68 | 27_fraternity_university_sae_chapter |
| 28 | chan - sukumaran - bali - indonesian - mack | 65 | 28_chan_sukumaran_bali_indonesian |
| 29 | ebola - sierra - virus - leone - disease | 64 | 29_ebola_sierra_virus_leone |
| 30 | school - teacher - student - girl - sexual | 58 | 30_school_teacher_student_girl |
| 31 | fire - building - explosion - blaze - firefighter | 52 | 31_fire_building_explosion_blaze |
| 32 | nfl - borland - football - 49ers - season | 52 | 32_nfl_borland_football_49ers |
| 33 | clarkson - bbc - gear - top - jeremy | 50 | 33_clarkson_bbc_gear_top |
| 34 | ski - skier - mountain - avalanche - rock | 47 | 34_ski_skier_mountain_avalanche |
| 35 | patient - nhs - ae - cancer - hospital | 46 | 35_patient_nhs_ae_cancer |
| 36 | india - rape - documentary - indian - singh | 45 | 36_india_rape_documentary_indian |
| 37 | mr - death - court - emery - miss | 43 | 37_mr_death_court_emery |
| 38 | show - corden - host - stewart - williams | 42 | 38_show_corden_host_stewart |
| 39 | car - vehicle - electric - cars - tesla | 40 | 39_car_vehicle_electric_cars |
| 40 | school - child - education - porn - sex | 38 | 40_school_child_education_porn |
| 41 | boko - haram - nigeria - nigerian - nigerias | 37 | 41_boko_haram_nigeria_nigerian |
| 42 | marijuana - drug - cannabis - colorado - lsd | 34 | 42_marijuana_drug_cannabis_colorado |
| 43 | law - indiana - gay - marriage - religious | 33 | 43_law_indiana_gay_marriage |
| 44 | ferguson - department - police - justice - report | 32 | 44_ferguson_department_police_justice |
| 45 | image - photographer - photography - photograph - photo | 31 | 45_image_photographer_photography_photograph |
| 46 | snow - inch - winter - ice - storm | 30 | 46_snow_inch_winter_ice |
| 47 | basketball - ncaa - coach - tournament - game | 30 | 47_basketball_ncaa_coach_tournament |
| 48 | tsarnaev - boston - dzhokhar - tamerlan - tsarnaevs | 30 | 48_tsarnaev_boston_dzhokhar_tamerlan |
| 49 | durst - dursts - berman - orleans - robert | 29 | 49_durst_dursts_berman_orleans |
| 50 | jesus - ancient - stone - cave - circle | 29 | 50_jesus_ancient_stone_cave |
| 51 | zayn - band - direction - singer - dance | 29 | 51_zayn_band_direction_singer |
| 52 | film - movie - vivian - hollywood - script | 23 | 52_film_movie_vivian_hollywood |
| 53 | korean - korea - kim - north - lippert | 23 | 53_korean_korea_kim_north |
| 54 | weather - rain - temperature - snow - today | 23 | 54_weather_rain_temperature_snow |
| 55 | robbery - woodger - store - cash - police | 22 | 55_robbery_woodger_store_cash |
| 56 | parade - patricks - st - irish - green | 21 | 56_parade_patricks_st_irish |
| 57 | secret - clancy - service - agent - white | 20 | 57_secret_clancy_service_agent |
| 58 | hernandez - lloyd - jenkins - hernandezs - lloyds | 20 | 58_hernandez_lloyd_jenkins_hernandezs |
| 59 | nazi - anne - nazis - war - camp | 20 | 59_nazi_anne_nazis_war |
| 60 | snowden - intelligence - gchq - security - agency | 18 | 60_snowden_intelligence_gchq_security |
| 61 | huang - chinese - china - mingxi - chen | 17 | 61_huang_chinese_china_mingxi |
| 62 | wedding - married - marlee - platt - woodyard | 17 | 62_wedding_married_marlee_platt |
| 63 | drug - cocaine - jailed - cannabis - tobacco | 17 | 63_drug_cocaine_jailed_cannabis |
| 64 | cnn - transcript - student - news - roll | 17 | 64_cnn_transcript_student_news |
| 65 | pope - francis - vatican - naples - pontiff | 17 | 65_pope_francis_vatican_naples |
| 66 | richard - iii - leicester - king - iiis | 17 | 66_richard_iii_leicester_king |
| 67 | chinese - tourist - temple - thailand - buddhist | 16 | 67_chinese_tourist_temple_thailand |
| 68 | china - chinese - internet - chai - stopera | 16 | 68_china_chinese_internet_chai |
| 69 | execution - lethal - gissendaner - injection - drug | 16 | 69_execution_lethal_gissendaner_injection |
| 70 | woman - marriage - men - attractive - chalmers | 15 | 70_woman_marriage_men_attractive |
| 71 | vanuatu - cyclone - vila - port - pam | 15 | 71_vanuatu_cyclone_vila_port |
| 72 | poldark - turner - demelza - aidan - drama | 15 | 72_poldark_turner_demelza_aidan |
| 73 | point - rebound - scored - points - harden | 14 | 73_point_rebound_scored_points |
| 74 | rail - calais - parking - migrant - dickens | 13 | 74_rail_calais_parking_migrant |
| 75 | johnson - student - virginia - charlottesville - uva | 13 | 75_johnson_student_virginia_charlottesville |
| 76 | cuba - havana - cuban - rousseff - us | 13 | 76_cuba_havana_cuban_rousseff |
| 77 | paris - attack - synagogue - hebdo - charlie | 13 | 77_paris_attack_synagogue_hebdo |
| 78 | duckenfield - mr - gate - hillsborough - disaster | 12 | 78_duckenfield_mr_gate_hillsborough |
| 79 | gordon - bobbi - kristina - phil - dr | 12 | 79_gordon_bobbi_kristina_phil |
| 80 | knox - sollecito - kercher - raffaele - amanda | 12 | 80_knox_sollecito_kercher_raffaele |
| 81 | coin - medal - war - auction - cross | 12 | 81_coin_medal_war_auction |
| 82 | starbucks - schultz - race - racial - campaign | 12 | 82_starbucks_schultz_race_racial |
| 83 | cosby - cosbys - thompson - bill - welles | 11 | 83_cosby_cosbys_thompson_bill |
| 84 | jeffs - flds - rivette - compound - speer | 10 | 84_jeffs_flds_rivette_compound |
| 85 | selma - alabama - march - bridge - civil | 8 | 85_selma_alabama_march_bridge |
| 86 | jobs - naomi - fortune - redballoon - bn | 8 | 86_jobs_naomi_fortune_redballoon |
| 87 | brain - object - retina - neuron - word | 8 | 87_brain_object_retina_neuron |
| 88 | netflix - tv - content - streaming - screen | 8 | 88_netflix_tv_content_streaming |
| 89 | social - user - tweet - twitter - tool | 7 | 89_social_user_tweet_twitter |
| 90 | cunard - bird - darshan - ship - liner | 6 | 90_cunard_bird_darshan_ship |
</details>
## Training hyperparameters
* calculate_probabilities: True
* language: english
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
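For reference, these hyperparameters map directly onto the `BERTopic` constructor (sub-models such as the embedding model keep their library defaults):
```python
from bertopic import BERTopic

# The hyperparameters listed above, passed to the constructor.
topic_model = BERTopic(
    calculate_probabilities=True,
    language="english",
    low_memory=False,
    min_topic_size=10,
    n_gram_range=(1, 1),
    nr_topics=None,
    seed_topic_list=None,
    top_n_words=10,
    verbose=False,
)
```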
## Framework versions
* Numpy: 1.22.4
* HDBSCAN: 0.8.33
* UMAP: 0.5.3
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.31.0
* Numba: 0.57.1
* Plotly: 5.13.1
* Python: 3.10.12
|
KingKazma/cnn_dailymail_108_50000_25000_train
|
KingKazma
| 2023-08-02T17:17:18Z | 4 | 0 |
bertopic
|
[
"bertopic",
"text-classification",
"region:us"
] |
text-classification
| 2023-08-02T17:17:17Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# cnn_dailymail_108_50000_25000_train
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("KingKazma/cnn_dailymail_108_50000_25000_train")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 327
* Number of training documents: 50000
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | said - police - one - year - people | 5 | -1_said_police_one_year |
| 0 | league - player - club - goal - cup | 25948 | 0_league_player_club_goal |
| 1 | obama - republican - president - republicans - senate | 5040 | 1_obama_republican_president_republicans |
| 2 | police - murder - shot - shooting - death | 1488 | 2_police_murder_shot_shooting |
| 3 | boko - haram - sudan - somalia - nigeria | 658 | 3_boko_haram_sudan_somalia |
| 4 | hospital - doctor - baby - cancer - mrs | 392 | 4_hospital_doctor_baby_cancer |
| 5 | murray - wimbledon - federer - tennis - djokovic | 387 | 5_murray_wimbledon_federer_tennis |
| 6 | fashion - hair - dress - style - model | 386 | 6_fashion_hair_dress_style |
| 7 | space - earth - nasa - mars - planet | 383 | 7_space_earth_nasa_mars |
| 8 | apple - iphone - device - google - apples | 368 | 8_apple_iphone_device_google |
| 9 | dog - cat - animal - pet - dogs | 349 | 9_dog_cat_animal_pet |
| 10 | hamilton - race - prix - rosberg - formula | 314 | 10_hamilton_race_prix_rosberg |
| 11 | medal - gold - olympic - games - olympics | 273 | 11_medal_gold_olympic_games |
| 12 | labour - ukip - miliband - mr - cameron | 269 | 12_labour_ukip_miliband_mr |
| 13 | ship - boat - cruise - titanic - coast | 265 | 13_ship_boat_cruise_titanic |
| 14 | golf - mcilroy - woods - ryder - round | 261 | 14_golf_mcilroy_woods_ryder |
| 15 | prince - royal - duchess - queen - duke | 249 | 15_prince_royal_duchess_queen |
| 16 | gun - ferguson - zimmerman - shooting - wilson | 232 | 16_gun_ferguson_zimmerman_shooting |
| 17 | russian - ukraine - russia - putin - ukrainian | 229 | 17_russian_ukraine_russia_putin |
| 18 | war - german - nazi - hitler - soldier | 224 | 18_war_german_nazi_hitler |
| 19 | food - sugar - calorie - meal - menu | 217 | 19_food_sugar_calorie_meal |
| 20 | shark - whale - fish - dolphin - water | 217 | 20_shark_whale_fish_dolphin |
| 21 | nfl - game - baseball - bowl - quarterback | 215 | 21_nfl_game_baseball_bowl |
| 22 | song - music - album - band - beatles | 200 | 22_song_music_album_band |
| 23 | korea - north - korean - kim - koreas | 187 | 23_korea_north_korean_kim |
| 24 | murder - stabbed - knife - court - police | 182 | 24_murder_stabbed_knife_court |
| 25 | money - court - fraud - bank - cash | 181 | 25_money_court_fraud_bank |
| 26 | chinese - china - bo - hong - chinas | 176 | 26_chinese_china_bo_hong |
| 27 | facebook - user - app - social - twitter | 175 | 27_facebook_user_app_social |
| 28 | sexual - sex - court - victim - indecent | 175 | 28_sexual_sex_court_victim |
| 29 | fire - blaze - firefighter - smoke - flame | 172 | 29_fire_blaze_firefighter_smoke |
| 30 | film - movie - wars - million - star | 166 | 30_film_movie_wars_million |
| 31 | fight - mayweather - pacquiao - boxing - groves | 164 | 31_fight_mayweather_pacquiao_boxing |
| 32 | plane - flight - pilot - passenger - airport | 161 | 32_plane_flight_pilot_passenger |
| 33 | zoo - elephant - animal - lion - gorilla | 158 | 33_zoo_elephant_animal_lion |
| 34 | mexico - mexican - cartel - drug - juarez | 156 | 34_mexico_mexican_cartel_drug |
| 35 | painting - art - artist - work - artwork | 146 | 35_painting_art_artist_work |
| 36 | school - teacher - student - sex - sexual | 146 | 36_school_teacher_student_sex |
| 37 | chavez - venezuela - venezuelan - maduro - farc | 145 | 37_chavez_venezuela_venezuelan_maduro |
| 38 | pakistan - pakistani - taliban - pakistans - islamabad | 137 | 38_pakistan_pakistani_taliban_pakistans |
| 39 | iran - nuclear - iranian - irans - tehran | 136 | 39_iran_nuclear_iranian_irans |
| 40 | tesco - store - shopper - christmas - sale | 130 | 40_tesco_store_shopper_christmas |
| 41 | egypt - egyptian - mubarak - cairo - brotherhood | 130 | 41_egypt_egyptian_mubarak_cairo |
| 42 | nhs - patient - care - hospital - health | 128 | 42_nhs_patient_care_hospital |
| 43 | isis - iraq - iraqi - baghdad - syria | 125 | 43_isis_iraq_iraqi_baghdad |
| 44 | show - letterman - odonnell - network - comedy | 113 | 44_show_letterman_odonnell_network |
| 45 | ebola - liberia - virus - leone - sierra | 112 | 45_ebola_liberia_virus_leone |
| 46 | property - house - estate - building - bedroom | 111 | 46_property_house_estate_building |
| 47 | israeli - israel - gaza - palestinian - hamas | 108 | 47_israeli_israel_gaza_palestinian |
| 48 | wine - alcohol - beer - drinking - drink | 108 | 48_wine_alcohol_beer_drinking |
| 49 | weight - diet - size - stone - eating | 108 | 49_weight_diet_size_stone |
| 50 | pope - francis - vatican - church - catholic | 107 | 50_pope_francis_vatican_church |
| 51 | weather - rain - temperature - snow - flood | 103 | 51_weather_rain_temperature_snow |
| 52 | syrian - syria - alassad - damascus - regime | 103 | 52_syrian_syria_alassad_damascus |
| 53 | snow - storm - weather - inch - temperature | 102 | 53_snow_storm_weather_inch |
| 54 | school - pupil - education - exam - schools | 102 | 54_school_pupil_education_exam |
| 55 | horse - jockey - racing - zara - race | 100 | 55_horse_jockey_racing_zara |
| 56 | marijuana - drug - cannabis - heroin - pot | 95 | 56_marijuana_drug_cannabis_heroin |
| 57 | libyan - libya - gadhafi - tripoli - gadhafis | 93 | 57_libyan_libya_gadhafi_tripoli |
| 58 | afghan - afghanistan - taliban - kabul - province | 85 | 58_afghan_afghanistan_taliban_kabul |
| 59 | energy - price - petrol - litre - gas | 82 | 59_energy_price_petrol_litre |
| 60 | climber - mountain - avalanche - everest - climbing | 82 | 60_climber_mountain_avalanche_everest |
| 61 | mandela - mandelas - south - nelson - african | 76 | 61_mandela_mandelas_south_nelson |
| 62 | transcript - student - todays - news - curriculum | 75 | 62_transcript_student_todays_news |
| 63 | hacking - murdoch - brooks - news - coulson | 74 | 63_hacking_murdoch_brooks_news |
| 64 | search - plane - flight - malaysia - ocean | 74 | 64_search_plane_flight_malaysia |
| 65 | eu - european - britain - uk - immigration | 72 | 65_eu_european_britain_uk |
| 66 | snowden - nsa - intelligence - surveillance - snowdens | 67 | 66_snowden_nsa_intelligence_surveillance |
| 67 | delhi - india - rape - indian - woman | 67 | 67_delhi_india_rape_indian |
| 68 | car - vehicle - jaguar - model - cars | 67 | 68_car_vehicle_jaguar_model |
| 69 | eurozone - greece - euro - greek - debt | 67 | 69_eurozone_greece_euro_greek |
| 70 | airline - ryanair - flight - passenger - airlines | 66 | 70_airline_ryanair_flight_passenger |
| 71 | bank - barclays - lloyds - bonus - rbs | 65 | 71_bank_barclays_lloyds_bonus |
| 72 | tsarnaev - boston - tamerlan - dzhokhar - bombing | 63 | 72_tsarnaev_boston_tamerlan_dzhokhar |
| 73 | jackson - jacksons - murray - aeg - propofol | 61 | 73_jackson_jacksons_murray_aeg |
| 74 | turkish - turkey - erdogan - turkeys - istanbul | 61 | 74_turkish_turkey_erdogan_turkeys |
| 75 | lottery - ticket - jackpot - winning - powerball | 60 | 75_lottery_ticket_jackpot_winning |
| 76 | fossil - dinosaur - neanderthals - mammoth - homo | 59 | 76_fossil_dinosaur_neanderthals_mammoth |
| 77 | tax - pension - benefit - welfare - osborne | 59 | 77_tax_pension_benefit_welfare |
| 78 | train - rail - derailed - track - crossing | 57 | 78_train_rail_derailed_track |
| 79 | deportation - immigration - sham - uk - deported | 57 | 79_deportation_immigration_sham_uk |
| 80 | oil - bp - gulf - spill - deepwater | 56 | 80_oil_bp_gulf_spill |
| 81 | tobacco - smoking - cigarette - ecigarettes - nicotine | 56 | 81_tobacco_smoking_cigarette_ecigarettes |
| 82 | billion - alumnus - worth - wealth - richest | 56 | 82_billion_alumnus_worth_wealth |
| 83 | cuba - cuban - castro - havana - fidel | 55 | 83_cuba_cuban_castro_havana |
| 84 | haiti - haitian - portauprince - earthquake - haitians | 54 | 84_haiti_haitian_portauprince_earthquake |
| 85 | burial - archaeologist - tomb - skull - ancient | 52 | 85_burial_archaeologist_tomb_skull |
| 86 | pistorius - steenkamp - reeva - oscar - nel | 52 | 86_pistorius_steenkamp_reeva_oscar |
| 87 | bali - sandiford - indonesian - sukumaran - chan | 52 | 87_bali_sandiford_indonesian_sukumaran |
| 88 | nba - basketball - lebron - james - miami | 51 | 88_nba_basketball_lebron_james |
| 89 | driver - car - speed - driving - road | 51 | 89_driver_car_speed_driving |
| 90 | reactor - nuclear - plant - radiation - fukushima | 50 | 90_reactor_nuclear_plant_radiation |
| 91 | wedding - couple - married - bride - marriage | 49 | 91_wedding_couple_married_bride |
| 92 | rio - sao - brazil - janeiro - paulo | 49 | 92_rio_sao_brazil_janeiro |
| 93 | china - japanese - japan - chinese - island | 49 | 93_china_japanese_japan_chinese |
| 94 | measles - vaccine - vaccination - mmr - autism | 48 | 94_measles_vaccine_vaccination_mmr |
| 95 | police - robbery - officer - pc - gang | 47 | 95_police_robbery_officer_pc |
| 96 | volcano - lava - eruption - ash - flow | 46 | 96_volcano_lava_eruption_ash |
| 97 | novel - book - author - reading - poirot | 46 | 97_novel_book_author_reading |
| 98 | crash - car - driver - accident - jenner | 45 | 98_crash_car_driver_accident |
| 99 | ireland - ira - northern - belfast - sinn | 44 | 99_ireland_ira_northern_belfast |
| 100 | island - beach - hoard - bimini - coin | 44 | 100_island_beach_hoard_bimini |
| 101 | islamic - sharrouf - sydney - australia - elomar | 44 | 101_islamic_sharrouf_sydney_australia |
| 102 | price - mortgage - cent - per - housing | 44 | 102_price_mortgage_cent_per |
| 103 | armstrong - tour - lance - cycling - doping | 43 | 103_armstrong_tour_lance_cycling |
| 104 | abbott - gillard - minister - prime - tony | 42 | 104_abbott_gillard_minister_prime |
| 105 | fire - wildfire - firefighter - blaze - burned | 41 | 105_fire_wildfire_firefighter_blaze |
| 106 | memorial - cemetery - veteran - vietnam - veterans | 41 | 106_memorial_cemetery_veteran_vietnam |
| 107 | knox - sollecito - kercher - meredith - knoxs | 41 | 107_knox_sollecito_kercher_meredith |
| 108 | charlie - hebdo - paris - kouachi - coulibaly | 39 | 108_charlie_hebdo_paris_kouachi |
| 109 | education - student - teacher - school - teachers | 39 | 109_education_student_teacher_school |
| 110 | hs2 - rail - train - tunnel - transport | 39 | 110_hs2_rail_train_tunnel |
| 111 | climate - warming - global - change - emission | 38 | 111_climate_warming_global_change |
| 112 | tornado - storm - oklahoma - twister - weather | 36 | 112_tornado_storm_oklahoma_twister |
| 113 | mitchell - mp - rennard - lord - mr | 36 | 113_mitchell_mp_rennard_lord |
| 114 | hiv - aids - infection - virus - testing | 36 | 114_hiv_aids_infection_virus |
| 115 | miss - pageant - beauty - competition - universe | 36 | 115_miss_pageant_beauty_competition |
| 116 | factory - bangladesh - garment - dhaka - building | 36 | 116_factory_bangladesh_garment_dhaka |
| 117 | benghazi - attack - libya - stevens - ambassador | 35 | 117_benghazi_attack_libya_stevens |
| 118 | earthquake - tsunami - quake - magnitude - warning | 35 | 118_earthquake_tsunami_quake_magnitude |
| 119 | hurricane - storm - tropical - mph - wind | 34 | 119_hurricane_storm_tropical_mph |
| 120 | bergdahl - guantanamo - bowe - detainee - taliban | 34 | 120_bergdahl_guantanamo_bowe_detainee |
| 121 | pirate - ship - piracy - somali - somalia | 34 | 121_pirate_ship_piracy_somali |
| 122 | flu - virus - mers - swine - vaccine | 33 | 122_flu_virus_mers_swine |
| 123 | china - chinese - xi - dalai - chinas | 33 | 123_china_chinese_xi_dalai |
| 124 | fraternity - kappa - university - campus - phi | 33 | 124_fraternity_kappa_university_campus |
| 125 | typhoon - philippines - tacloban - storm - haiyan | 32 | 125_typhoon_philippines_tacloban_storm |
| 126 | saudi - arabia - king - raif - alotaibi | 32 | 126_saudi_arabia_king_raif |
| 127 | anthony - casey - anthonys - caylee - cindy | 32 | 127_anthony_casey_anthonys_caylee |
| 128 | cosby - cosbys - drugged - allegation - comedian | 31 | 128_cosby_cosbys_drugged_allegation |
| 129 | hotel - venice - hotels - properties - marriott | 31 | 129_hotel_venice_hotels_properties |
| 130 | miner - mine - coal - mining - miners | 31 | 130_miner_mine_coal_mining |
| 131 | church - archbishop - welby - bishop - canterbury | 31 | 131_church_archbishop_welby_bishop |
| 132 | indias - delhi - india - indian - modi | 30 | 132_indias_delhi_india_indian |
| 133 | woman - women - men - career - gap | 30 | 133_woman_women_men_career |
| 134 | berlusconi - berlusconis - silvio - bunga - italian | 30 | 134_berlusconi_berlusconis_silvio_bunga |
| 135 | al - qaeda - vinas - bin - laden | 30 | 135_al_qaeda_vinas_bin |
| 136 | malawi - fgm - madonna - girl - makoni | 30 | 136_malawi_fgm_madonna_girl |
| 137 | robot - 3d - printer - robots - robotics | 29 | 137_robot_3d_printer_robots |
| 138 | soldier - afghanistan - helmand - afghan - corporal | 29 | 138_soldier_afghanistan_helmand_afghan |
| 139 | sexual - assault - military - sinclair - cadet | 29 | 139_sexual_assault_military_sinclair |
| 140 | bird - pigeon - squirrel - nest - birds | 29 | 140_bird_pigeon_squirrel_nest |
| 141 | daley - jasmine - karen - married - kutcher | 28 | 141_daley_jasmine_karen_married |
| 142 | thief - shop - cctv - stolen - raid | 28 | 142_thief_shop_cctv_stolen |
| 143 | calais - migrant - port - lorry - ferry | 28 | 143_calais_migrant_port_lorry |
| 144 | saleh - yemen - yemeni - sanaa - hadi | 28 | 144_saleh_yemen_yemeni_sanaa |
| 145 | sloot - der - van - flores - holloway | 28 | 145_sloot_der_van_flores |
| 146 | suu - kyi - myanmar - aung - kyis | 28 | 146_suu_kyi_myanmar_aung |
| 147 | diamond - auction - sold - sothebys - jewellery | 27 | 147_diamond_auction_sold_sothebys |
| 148 | tattoo - tattooing - tattoos - inked - hardy | 27 | 148_tattoo_tattooing_tattoos_inked |
| 149 | madeleine - mccann - praia - luz - portuguese | 27 | 149_madeleine_mccann_praia_luz |
| 150 | mladic - bosnian - serb - kosovo - serbia | 26 | 150_mladic_bosnian_serb_kosovo |
| 151 | falklands - falkland - islands - argentina - malvinas | 26 | 151_falklands_falkland_islands_argentina |
| 152 | philippines - philippine - sabah - aquino - cerantonio | 26 | 152_philippines_philippine_sabah_aquino |
| 153 | hollande - trierweiler - sarkozy - hollandes - french | 26 | 153_hollande_trierweiler_sarkozy_hollandes |
| 154 | bear - cub - wildlife - bears - zoo | 26 | 154_bear_cub_wildlife_bears |
| 155 | va - veterans - veteran - shinseki - phoenix | 26 | 155_va_veterans_veteran_shinseki |
| 156 | malala - taliban - pakistan - education - malalas | 26 | 156_malala_taliban_pakistan_education |
| 157 | food - salmonella - recall - peanut - product | 25 | 157_food_salmonella_recall_peanut |
| 158 | russian - plane - ukrainian - jet - ukraine | 25 | 158_russian_plane_ukrainian_jet |
| 159 | breivik - oslo - utoya - breiviks - norwegian | 25 | 159_breivik_oslo_utoya_breiviks |
| 160 | iraq - iraqi - isis - troop - military | 25 | 160_iraq_iraqi_isis_troop |
| 161 | bangkok - yingluck - thaksin - thai - thailand | 25 | 161_bangkok_yingluck_thaksin_thai |
| 162 | console - wii - xbox - game - nintendo | 25 | 162_console_wii_xbox_game |
| 163 | internet - porn - pornography - online - filter | 25 | 163_internet_porn_pornography_online |
| 164 | sandusky - penn - paterno - sanduskys - jerry | 25 | 164_sandusky_penn_paterno_sanduskys |
| 165 | ice - glacier - antarctica - antarctic - melting | 25 | 165_ice_glacier_antarctica_antarctic |
| 166 | khmer - cambodia - angkor - rouge - temple | 24 | 166_khmer_cambodia_angkor_rouge |
| 167 | drug - cocaine - jailed - heroin - cannabis | 23 | 167_drug_cocaine_jailed_heroin |
| 168 | yacht - superyacht - abramovich - eclipse - superyachts | 23 | 168_yacht_superyacht_abramovich_eclipse |
| 169 | savile - bbc - jimmy - yewtree - saviles | 23 | 169_savile_bbc_jimmy_yewtree |
| 170 | monis - hostage - siege - cafe - lindt | 23 | 170_monis_hostage_siege_cafe |
| 171 | radio - bbc - listener - presenter - programme | 23 | 171_radio_bbc_listener_presenter |
| 172 | obesity - obese - overweight - rate - weight | 23 | 172_obesity_obese_overweight_rate |
| 173 | assange - wikileaks - embassy - sweden - julian | 23 | 173_assange_wikileaks_embassy_sweden |
| 174 | dewani - anni - shrien - dewanis - tongo | 23 | 174_dewani_anni_shrien_dewanis |
| 175 | iii - richard - leicester - king - remains | 22 | 175_iii_richard_leicester_king |
| 176 | migrant - boat - gibraltar - lampedusa - spanish | 22 | 176_migrant_boat_gibraltar_lampedusa |
| 177 | tsa - airport - knife - security - screening | 22 | 177_tsa_airport_knife_security |
| 178 | accident - road - died - scene - riggien | 22 | 178_accident_road_died_scene |
| 179 | toyota - recall - vehicle - automaker - car | 22 | 179_toyota_recall_vehicle_automaker |
| 180 | strausskahn - diallo - strausskahns - sinclair - imf | 22 | 180_strausskahn_diallo_strausskahns_sinclair |
| 181 | parking - council - warden - pickles - councils | 21 | 181_parking_council_warden_pickles |
| 182 | cia - interrogation - torture - intelligence - cias | 21 | 182_cia_interrogation_torture_intelligence |
| 183 | hasan - hood - fort - hasans - soldier | 21 | 183_hasan_hood_fort_hasans |
| 184 | secret - agent - service - cartagena - white | 21 | 184_secret_agent_service_cartagena |
| 185 | employee - customer - hostess - waitress - restaurant | 21 | 185_employee_customer_hostess_waitress |
| 186 | christmas - tree - santa - santas - festive | 21 | 186_christmas_tree_santa_santas |
| 187 | marathon - boston - runner - finish - race | 21 | 187_marathon_boston_runner_finish |
| 188 | grey - shades - fifty - dornan - dakota | 20 | 188_grey_shades_fifty_dornan |
| 189 | soldier - gibbs - military - deployment - afghanistan | 20 | 189_soldier_gibbs_military_deployment |
| 190 | lohan - bynes - probation - lindsay - lohans | 20 | 190_lohan_bynes_probation_lindsay |
| 191 | botox - cosmetic - surgery - facial - procedure | 20 | 191_botox_cosmetic_surgery_facial |
| 192 | cancer - prostate - patient - treatment - drug | 20 | 192_cancer_prostate_patient_treatment |
| 193 | spider - bite - widow - venom - bitten | 20 | 193_spider_bite_widow_venom |
| 194 | petraeus - broadwell - kelley - paula - affair | 20 | 194_petraeus_broadwell_kelley_paula |
| 195 | school - pupil - uniform - isolation - teacher | 19 | 195_school_pupil_uniform_isolation |
| 196 | sex - relationship - lgbt - partner - men | 19 | 196_sex_relationship_lgbt_partner |
| 197 | drug - possession - judelson - ricin - heroin | 19 | 197_drug_possession_judelson_ricin |
| 198 | crime - police - officer - force - dizaei | 19 | 198_crime_police_officer_force |
| 199 | hms - ship - navy - zumwalt - illustrious | 19 | 199_hms_ship_navy_zumwalt |
| 200 | bieber - justin - biebers - jeremy - singer | 19 | 200_bieber_justin_biebers_jeremy |
| 201 | sri - tamil - lanka - lankan - tigers | 19 | 201_sri_tamil_lanka_lankan |
| 202 | sandy - hurricane - storm - jersey - katrina | 18 | 202_sandy_hurricane_storm_jersey |
| 203 | hernandez - lloyd - hernandezs - odin - patriots | 18 | 203_hernandez_lloyd_hernandezs_odin |
| 204 | liesheng - daredevil - yide - rope - tightrope | 18 | 204_liesheng_daredevil_yide_rope |
| 205 | drug - ecstasy - mdma - dodgeon - spice | 18 | 205_drug_ecstasy_mdma_dodgeon |
| 206 | aircraft - air - f35 - jet - fighter | 18 | 206_aircraft_air_f35_jet |
| 207 | chemical - syria - syrian - weapon - regime | 18 | 207_chemical_syria_syrian_weapon |
| 208 | bee - hive - swarm - insect - bees | 17 | 208_bee_hive_swarm_insect |
| 209 | tibetan - tibet - tibetans - dalai - lama | 17 | 209_tibetan_tibet_tibetans_dalai |
| 210 | hijab - veil - muslim - wear - muslims | 17 | 210_hijab_veil_muslim_wear |
| 211 | ford - toronto - mayor - crack - rob | 17 | 211_ford_toronto_mayor_crack |
| 212 | ring - diamond - jewel - jewellery - austen | 17 | 212_ring_diamond_jewel_jewellery |
| 213 | dale - farm - lavender - council - eviction | 17 | 213_dale_farm_lavender_council |
| 214 | skin - allergic - eb - allergy - leech | 17 | 214_skin_allergic_eb_allergy |
| 215 | waste - plastic - bag - bin - landfill | 16 | 215_waste_plastic_bag_bin |
| 216 | pole - expedition - antarctic - antarctica - scotts | 16 | 216_pole_expedition_antarctic_antarctica |
| 217 | sony - north - pictures - korea - korean | 16 | 217_sony_north_pictures_korea |
| 218 | driving - car - road - maxse - lynsey | 16 | 218_driving_car_road_maxse |
| 219 | alzheimers - disease - dementia - drug - brain | 16 | 219_alzheimers_disease_dementia_drug |
| 220 | sterling - clippers - donald - shelly - nba | 16 | 220_sterling_clippers_donald_shelly |
| 221 | ferry - sewol - yoo - ship - crew | 16 | 221_ferry_sewol_yoo_ship |
| 222 | statin - statins - cholesterol - yeast - aspirin | 15 | 222_statin_statins_cholesterol_yeast |
| 223 | spains - rajoy - madrid - spanish - spain | 15 | 223_spains_rajoy_madrid_spanish |
| 224 | jeffs - ranch - flds - fundamentalist - texas | 15 | 224_jeffs_ranch_flds_fundamentalist |
| 225 | hazing - band - champion - marching - famu | 15 | 225_hazing_band_champion_marching |
| 226 | ballet - dancer - dancing - dance - pole | 15 | 226_ballet_dancer_dancing_dance |
| 227 | driving - pelly - drink - boynton - limit | 15 | 227_driving_pelly_drink_boynton |
| 228 | tax - benefit - child - childcare - income | 15 | 228_tax_benefit_child_childcare |
| 229 | tanning - skin - cancer - sunscreen - sun | 15 | 229_tanning_skin_cancer_sunscreen |
| 230 | cocaine - frampton - drug - milani - suitcase | 15 | 230_cocaine_frampton_drug_milani |
| 231 | vitamin - asthma - trudeau - fat - dementia | 14 | 231_vitamin_asthma_trudeau_fat |
| 232 | harris - rolf - indecent - alwen - assault | 14 | 232_harris_rolf_indecent_alwen |
| 233 | selfdriving - car - autonomous - driverless - google | 14 | 233_selfdriving_car_autonomous_driverless |
| 234 | canadian - zehafbibeau - parliament - ottawa - cirillo | 14 | 234_canadian_zehafbibeau_parliament_ottawa |
| 235 | rigby - fusilier - lee - woolwich - rigbys | 14 | 235_rigby_fusilier_lee_woolwich |
| 236 | space - lunar - chinas - moon - china | 14 | 236_space_lunar_chinas_moon |
| 237 | dubai - mcredmond - blake - dalelv - sex | 14 | 237_dubai_mcredmond_blake_dalelv |
| 238 | blackwater - iraqi - contractor - wuterich - guard | 14 | 238_blackwater_iraqi_contractor_wuterich |
| 239 | economy - growth - cent - per - gdp | 14 | 239_economy_growth_cent_per |
| 240 | garrido - dugard - garridos - jaycee - parole | 13 | 240_garrido_dugard_garridos_jaycee |
| 241 | circumcision - circumcised - hpv - study - foreskin | 13 | 241_circumcision_circumcised_hpv_study |
| 242 | school - ofsted - trojan - education - birmingham | 13 | 242_school_ofsted_trojan_education |
| 243 | scientology - miscavige - cruise - church - connor | 13 | 243_scientology_miscavige_cruise_church |
| 244 | organic - fruit - food - fish - pcbs | 13 | 244_organic_fruit_food_fish |
| 245 | bus - driver - crash - truck - highway | 13 | 245_bus_driver_crash_truck |
| 246 | smog - pollution - beijing - haze - china | 13 | 246_smog_pollution_beijing_haze |
| 247 | dotcom - megaupload - piracy - copyright - copyrighted | 13 | 247_dotcom_megaupload_piracy_copyright |
| 248 | drone - drones - avenger - flying - dji | 12 | 248_drone_drones_avenger_flying |
| 249 | antibiotic - bacteria - mrsa - cre - fda | 12 | 249_antibiotic_bacteria_mrsa_cre |
| 250 | cyber - computer - cyberwar - stuxnet - attack | 12 | 250_cyber_computer_cyberwar_stuxnet |
| 251 | afghanistan - afghan - karzai - taliban - troop | 12 | 251_afghanistan_afghan_karzai_taliban |
| 252 | porn - dating - date - woman - men | 12 | 252_porn_dating_date_woman |
| 253 | holiday - hotel - ill - resort - hygiene | 12 | 253_holiday_hotel_ill_resort |
| 254 | bike - motorcycle - electric - speed - rider | 12 | 254_bike_motorcycle_electric_speed |
| 255 | slumdog - okkhoy - bollywood - lala - india | 11 | 255_slumdog_okkhoy_bollywood_lala |
| 256 | krim - ortega - leo - lulu - marina | 11 | 256_krim_ortega_leo_lulu |
| 257 | water - reservoir - swonger - orme - epa | 11 | 257_water_reservoir_swonger_orme |
| 258 | shafilea - ahmed - shafileas - alesha - trup | 11 | 258_shafilea_ahmed_shafileas_alesha |
| 259 | wright - howells - vaisey - poynton - tom | 11 | 259_wright_howells_vaisey_poynton |
| 260 | asylum - seeker - refugee - australia - manus | 11 | 260_asylum_seeker_refugee_australia |
| 261 | williams - robin - puhar - suicide - mccready | 11 | 261_williams_robin_puhar_suicide |
| 262 | eta - basque - spain - spanish - lorca | 11 | 262_eta_basque_spain_spanish |
| 263 | wifi - hacker - santamarta - computer - signal | 11 | 263_wifi_hacker_santamarta_computer |
| 264 | art - kung - fu - hong - kong | 10 | 264_art_kung_fu_hong |
| 265 | game - sonic - bioshock - infinite - xbox | 10 | 265_game_sonic_bioshock_infinite |
| 266 | violin - suzuki - piano - instrument - music | 10 | 266_violin_suzuki_piano_instrument |
| 267 | hernandez - etan - patz - etans - fishbein | 10 | 267_hernandez_etan_patz_etans |
| 268 | oil - coconut - skin - ramona - troyer | 10 | 268_oil_coconut_skin_ramona |
| 269 | penguin - ddt - bird - chick - goose | 10 | 269_penguin_ddt_bird_chick |
| 270 | cunningham - josie - nhs - 4800 - breast | 10 | 270_cunningham_josie_nhs_4800 |
| 271 | cay - island - map - bahamas - london | 9 | 271_cay_island_map_bahamas |
| 272 | poppy - legion - ceramic - tower - seller | 9 | 272_poppy_legion_ceramic_tower |
| 273 | trade - towers - center - tower - colaio | 9 | 273_trade_towers_center_tower |
| 274 | ghost - paranormal - haunted - castle - dickson | 9 | 274_ghost_paranormal_haunted_castle |
| 275 | robbery - darville - immanuel - bank - punched | 9 | 275_robbery_darville_immanuel_bank |
| 276 | nobel - prize - peace - jagland - liu | 9 | 276_nobel_prize_peace_jagland |
| 277 | alhilli - mollier - maillaud - alhillis - saad | 9 | 277_alhilli_mollier_maillaud_alhillis |
| 278 | brown - rihanna - chris - browns - drake | 9 | 278_brown_rihanna_chris_browns |
| 279 | energy - oil - schwarzenegger - solar - fisker | 9 | 279_energy_oil_schwarzenegger_solar |
| 280 | visitors - disney - annual - walt - park | 8 | 280_visitors_disney_annual_walt |
| 281 | cordle - canzani - tomica - barbour - sheen | 8 | 281_cordle_canzani_tomica_barbour |
| 282 | aid - 07 - budget - greening - development | 8 | 282_aid_07_budget_greening |
| 283 | manning - mannings - wikileaks - bradley - coombs | 8 | 283_manning_mannings_wikileaks_bradley |
| 284 | dow - market - stock - investor - debt | 8 | 284_dow_market_stock_investor |
| 285 | robertson - duck - robertsons - dynasty - ae | 8 | 285_robertson_duck_robertsons_dynasty |
| 286 | diabetes - cancer - drip - vitamin - obesity | 8 | 286_diabetes_cancer_drip_vitamin |
| 287 | school - skirt - ashlyn - teacher - child | 8 | 287_school_skirt_ashlyn_teacher |
| 288 | zimbabwe - zimbabwes - currency - mugabe - biti | 8 | 288_zimbabwe_zimbabwes_currency_mugabe |
| 289 | hockey - sutter - savran - penguins - mumps | 8 | 289_hockey_sutter_savran_penguins |
| 290 | tailgating - notre - tailgate - university - sioux | 8 | 290_tailgating_notre_tailgate_university |
| 291 | routh - littlefield - kyle - rouths - kyles | 8 | 291_routh_littlefield_kyle_rouths |
| 292 | chahal - chaney - hacker - chahals - celebrity | 7 | 292_chahal_chaney_hacker_chahals |
| 293 | lighthouse - roundabout - chalet - hut - cottage | 7 | 293_lighthouse_roundabout_chalet_hut |
| 294 | lego - toy - minifigures - bartneck - legos | 7 | 294_lego_toy_minifigures_bartneck |
| 295 | bus - boyse - buggy - paryss - prom | 7 | 295_bus_boyse_buggy_paryss |
| 296 | road - sarkar - jennie - clyst - garrett | 7 | 296_road_sarkar_jennie_clyst |
| 297 | horse - meat - food - beef - horsemeat | 7 | 297_horse_meat_food_beef |
| 298 | tarnawskyj - houghtaling - goff - gosnell - korbyn | 7 | 298_tarnawskyj_houghtaling_goff_gosnell |
| 299 | breast - cancer - gene - mastectomy - preventative | 7 | 299_breast_cancer_gene_mastectomy |
| 300 | card - krebs - credit - hodirevski - password | 7 | 300_card_krebs_credit_hodirevski |
| 301 | ugandan - uganda - homosexuality - gay - homosexual | 7 | 301_ugandan_uganda_homosexuality_gay |
| 302 | kangbashi - construction - building - ordos - buddha | 7 | 302_kangbashi_construction_building_ordos |
| 303 | colligan - trifonovs - shahin - peers - salford | 7 | 303_colligan_trifonovs_shahin_peers |
| 304 | medicine - salve - pain - pots - cancer | 7 | 304_medicine_salve_pain_pots |
| 305 | iraq - isis - campbell - dossier - cameron | 6 | 305_iraq_isis_campbell_dossier |
| 306 | sleep - sleeping - hour - hours - ptacek | 6 | 306_sleep_sleeping_hour_hours |
| 307 | xinhua - sichuan - quake - province - houston | 6 | 307_xinhua_sichuan_quake_province |
| 308 | sperm - pill - fertility - contraception - juno | 6 | 308_sperm_pill_fertility_contraception |
| 309 | vandenburg - batey - vanderbilt - rape - vandenburgs | 6 | 309_vandenburg_batey_vanderbilt_rape |
| 310 | fracking - shale - drilling - gas - balcombe | 6 | 310_fracking_shale_drilling_gas |
| 311 | flu - swine - virus - pandemic - cdc | 6 | 311_flu_swine_virus_pandemic |
| 312 | thatcher - lady - funeral - thatchers - ritz | 6 | 312_thatcher_lady_funeral_thatchers |
| 313 | clarkson - plate - gear - bbc - presenter | 5 | 313_clarkson_plate_gear_bbc |
| 314 | merkel - snowden - german - spying - merkels | 5 | 314_merkel_snowden_german_spying |
| 315 | assisted - euthanasia - suicide - netherlands - patient | 5 | 315_assisted_euthanasia_suicide_netherlands |
| 316 | sochi - games - skater - kozak - winter | 5 | 316_sochi_games_skater_kozak |
| 317 | bbc - corporation - yentob - staff - bbcs | 5 | 317_bbc_corporation_yentob_staff |
| 318 | cicinelli - ramos - thomas - channing - hartwig | 5 | 318_cicinelli_ramos_thomas_channing |
| 319 | apartment - porsche - suite - building - nawaf | 5 | 319_apartment_porsche_suite_building |
| 320 | restaurant - attica - airport - melbourne - sepia | 5 | 320_restaurant_attica_airport_melbourne |
| 321 | fountain - phantom - rap - coronation - actor | 5 | 321_fountain_phantom_rap_coronation |
| 322 | hutton - hamzah - mchale - byrne - levina | 5 | 322_hutton_hamzah_mchale_byrne |
| 323 | breastfeeding - carene - breast - prediction - gender | 5 | 323_breastfeeding_carene_breast_prediction |
| 324 | driscoll - church - penis - mcfarland - pastor | 5 | 324_driscoll_church_penis_mcfarland |
| 325 | garden - plant - kew - wisteria - flower | 5 | 325_garden_plant_kew_wisteria |
</details>
## Training hyperparameters
* calculate_probabilities: True
* language: english
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
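These hyperparameters map directly onto the `BERTopic` constructor; a minimal sketch of instantiating an equivalent (untrained) model:

```python
from bertopic import BERTopic

# Fresh model with the hyperparameters listed above (this does not load the fitted topics)
topic_model = BERTopic(
    calculate_probabilities=True,
    language="english",
    low_memory=False,
    min_topic_size=10,
    n_gram_range=(1, 1),
    nr_topics=None,
    seed_topic_list=None,
    top_n_words=10,
    verbose=False,
)
```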
## Framework versions
* Numpy: 1.22.4
* HDBSCAN: 0.8.33
* UMAP: 0.5.3
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.31.0
* Numba: 0.57.1
* Plotly: 5.13.1
* Python: 3.10.12
|
Jesse999/q-FrozenLake-v1-4x4-noSlippery
|
Jesse999
| 2023-08-02T17:12:03Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-02T17:12:01Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# load_from_hub is the helper defined in the Deep RL Course (Unit 2) notebooks
model = load_from_hub(repo_id="Jesse999/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
saif-daoud/ASR-small
|
saif-daoud
| 2023-08-02T17:11:03Z | 84 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:generator",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-02T13:08:25Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- generator
metrics:
- wer
model-index:
- name: ASR-small
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: generator
type: generator
config: default
split: train
args: default
metrics:
- name: Wer
type: wer
value: 0.569828722002635
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASR-small
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2485
- Wer: 0.5698
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.3412 | 0.5 | 1000 | 2.5398 | 0.7801 |
| 0.6222 | 1.46 | 2000 | 2.2485 | 0.5698 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
reach-vb/musicgen-large-fp16-endpoint
|
reach-vb
| 2023-08-02T16:57:47Z | 8 | 4 |
transformers
|
[
"transformers",
"pytorch",
"musicgen",
"text-to-audio",
"arxiv:2306.05284",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-30T13:53:27Z |
---
inference: false
tags:
- musicgen
license: cc-by-nc-4.0
duplicated_from: facebook/musicgen-large
---
# MusicGen - Large - 3.3B
MusicGen is a text-to-music model capable of generating high-quality music samples conditioned on text descriptions or audio prompts.
It is a single stage auto-regressive Transformer model trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz.
Unlike existing methods, like MusicLM, MusicGen doesn't require a self-supervised semantic representation, and it generates all 4 codebooks in one pass.
By introducing a small delay between the codebooks, we show we can predict them in parallel, thus having only 50 auto-regressive steps per second of audio.
MusicGen was published in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by *Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez*.
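As a back-of-the-envelope illustration of the rates described above (a sketch of the arithmetic, not code from the paper):

```py
frame_rate_hz = 50       # EnCodec frames per second of 32 kHz audio
num_codebooks = 4        # codebooks predicted per frame
tokens_per_second = frame_rate_hz * num_codebooks    # 200 audio tokens per second in total
autoregressive_steps_per_second = frame_rate_hz      # only 50 steps/s thanks to the codebook delay
print(tokens_per_second, autoregressive_steps_per_second)  # 200 50
```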
Four checkpoints are released:
- [small](https://huggingface.co/facebook/musicgen-small)
- [medium](https://huggingface.co/facebook/musicgen-medium)
- [**large** (this checkpoint)](https://huggingface.co/facebook/musicgen-large)
- [melody](https://huggingface.co/facebook/musicgen-melody)
## Example
Try out MusicGen yourself!
* Audiocraft Colab:
<a target="_blank" href="https://colab.research.google.com/drive/1fxGqfg96RBUvGxZ1XXN07s3DthrKUl4-?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Colab:
<a target="_blank" href="https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/MusicGen.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Demo:
<a target="_blank" href="https://huggingface.co/spaces/facebook/MusicGen">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>
## 🤗 Transformers Usage
You can run MusicGen locally with the 🤗 Transformers library from version 4.31.0 onwards.
1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) from main:
```
pip install git+https://github.com/huggingface/transformers.git
```
2. Run the following Python code to generate text-conditional audio samples:
```py
from transformers import AutoProcessor, MusicgenForConditionalGeneration
processor = AutoProcessor.from_pretrained("facebook/musicgen-large")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-large")
inputs = processor(
text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"],
padding=True,
return_tensors="pt",
)
audio_values = model.generate(**inputs, max_new_tokens=256)
```
3. Listen to the audio samples either in an ipynb notebook:
```py
from IPython.display import Audio
sampling_rate = model.config.audio_encoder.sampling_rate
Audio(audio_values[0].numpy(), rate=sampling_rate)
```
Or save them as a `.wav` file using a third-party library, e.g. `scipy`:
```py
import scipy
sampling_rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy())
```
For more details on using the MusicGen model for inference using the 🤗 Transformers library, refer to the [MusicGen docs](https://huggingface.co/docs/transformers/model_doc/musicgen).
## Audiocraft Usage
You can also run MusicGen locally through the original [Audiocraft library](https://github.com/facebookresearch/audiocraft):
1. First install the [`audiocraft` library](https://github.com/facebookresearch/audiocraft):
```
pip install git+https://github.com/facebookresearch/audiocraft.git
```
2. Make sure to have [`ffmpeg`](https://ffmpeg.org/download.html) installed:
```
apt-get install ffmpeg
```
3. Run the following Python code:
```py
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write
model = MusicGen.get_pretrained("large")
model.set_generation_params(duration=8) # generate 8 seconds.
descriptions = ["happy rock", "energetic EDM"]
wav = model.generate(descriptions) # generates 2 samples.
for idx, one_wav in enumerate(wav):
# Will save under {idx}.wav, with loudness normalization at -14 db LUFS.
audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")
```
## Model details
**Organization developing the model:** The FAIR team of Meta AI.
**Model date:** MusicGen was trained between April 2023 and May 2023.
**Model version:** This is the version 1 of the model.
**Model type:** MusicGen consists of an EnCodec model for audio tokenization and an auto-regressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters; and in two variants: a model trained for the text-to-music generation task and a model trained for melody-guided music generation.
**Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284).
**Citation details**:
```
@misc{copet2023simple,
title={Simple and Controllable Music Generation},
author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez},
year={2023},
eprint={2306.05284},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
**License:** Code is released under MIT, model weights are released under CC-BY-NC 4.0.
**Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue.
## Intended use
**Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including:
- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science
- Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs
**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models.
**Out-of-scope use cases:** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
## Metrics
**Model performance measures:** We used the following objective measures to evaluate the model on a standard music benchmark:
- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish)
- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST)
- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model
Additionally, we run qualitative studies with human participants, evaluating the performance of the model with the following axes:
- Overall quality of the music samples;
- Text relevance to the provided text input;
- Adherence to the melody for melody-guided music generation.
More details on performance measures and human studies can be found in the paper.
**Decision thresholds:** Not applicable.
## Evaluation datasets
The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set.
## Training datasets
The model was trained using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284), in the Experimental Setup section.
## Limitations and biases
**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data; we believe that scaling the model on larger datasets can further improve its performance.
**Mitigations:** All vocals have been removed from the data source using a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs). The model is therefore not able to produce vocals.
**Limitations:**
- The model is not able to generate realistic vocals.
- The model has been trained with English descriptions and will not perform as well in other languages.
- The model does not perform equally well for all music styles and cultures.
- The model sometimes generates end of songs, collapsing to silence.
- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results.
**Biases:** The source of data is potentially lacking diversity and all music cultures are not equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exists. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive.
**Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will make it possible to broaden the application to new and more representative data.
**Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks.
|
zpattdev/a2c-PandaReachDense-v2
|
zpattdev
| 2023-08-02T16:56:07Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-29T12:55:23Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.05 +/- 0.32
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check this repo's files):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it into an A2C model
checkpoint = load_from_hub(repo_id="zpattdev/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
renatostrianese/dqn-SpaceInvadersNoFrameskip-v4-3
|
renatostrianese
| 2023-08-02T16:50:41Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-02T16:50:00Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 886.50 +/- 205.32
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga renatostrianese -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga renatostrianese -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga renatostrianese
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 5000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
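For reference, a rough sketch of how these settings map onto an SB3 `DQN` constructor (illustrative only; the RL Zoo builds the model from its YAML config and also applies the `AtariWrapper` and frame stacking listed above):

```python
from stable_baselines3 import DQN

# Assumes the Atari environment is installed and registered (e.g. via ale-py)
model = DQN(
    "CnnPolicy",
    "SpaceInvadersNoFrameskip-v4",
    batch_size=32,
    buffer_size=100_000,
    exploration_final_eps=0.01,
    exploration_fraction=0.1,
    gradient_steps=1,
    learning_rate=1e-4,
    learning_starts=100_000,
    optimize_memory_usage=False,
    target_update_interval=1000,
    train_freq=4,
)
```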
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
ramjul20/llama2-qlora-finetunined-french
|
ramjul20
| 2023-08-02T16:50:29Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-28T08:52:29Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
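For reference, a minimal sketch of the equivalent `BitsAndBytesConfig` from `transformers` (the base model and loading code are not part of this card):

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization config matching the values listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```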
### Framework versions
- PEFT 0.5.0.dev0
|
alexandremarie/falcon7b-lora-tagger
|
alexandremarie
| 2023-08-02T16:47:54Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T16:47:49Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
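For reference, a minimal sketch of the equivalent 8-bit `BitsAndBytesConfig` from `transformers` (the llm_int8_* values shown above are the library defaults):

```python
from transformers import BitsAndBytesConfig

# 8-bit quantization config matching the values listed above
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
)
```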
### Framework versions
- PEFT 0.5.0.dev0
|
ailabturkiye/Twisted_Fate
|
ailabturkiye
| 2023-08-02T16:33:57Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-08-02T16:20:14Z |
---
license: openrail
---
300 epochs
My YouTube channel: https://www.youtube.com/channel/UCOtsfxO-jx1QSF55UCpMVuA
Do not use without giving credit.
|
Eggsbena/model_006
|
Eggsbena
| 2023-08-02T16:21:42Z | 29 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-02T16:13:47Z |
---
library_name: diffusers
pipeline_tag: text-to-image
---
|
Envoid/Dendrite-II-22B
|
Envoid
| 2023-08-02T16:09:17Z | 10 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-02T13:47:52Z |
# Warning: This model, like its predecessor, can be rather unpredictable and may output undesired content.
This model uses all of the same data as the original Dendrite, but I took it over to RunPod, where I could give it a much deeper and higher-quality LoRA session, which allowed it to regain overall coherence without needing to be merged.
I highly recommend that you have EOS tokens unbanned when using this model. If it fails to trigger an EOS it will just start repeating itself.
## To recap:
### Dendrite is an amalgamation of Llama-2-chat13B and Enterredaas33B (both fantastic models that you should check out in and of themselves)
https://huggingface.co/Aeala/Enterredaas-33b
using chargoddard's frankenllama block-diagonal merge script.
https://huggingface.co/chargoddard/llama2-22b
So all credit where it's due.
### The block-diagonal merge script was used to graft attention heads from Enterredaas33B onto Llama-2-chat13B upping its parameter count to 22B.
### Upon testing I found the results surprisingly coherent although there were some gaps in its ability to even respond at all to lengthy context (it would simply spam \n once context got to a certain point)
### I used a private dataset that I constructed for previous unreleased experiments to fill in the gaps that were caused by the merge.
### The model is very good at philosophical debate.
Sometimes it needs to be "woken up" at the start of a conversation by asking for self reflection. E.g. "Tell me a joke only an AI language model would understand" and then after that it is ready for some very cerebral conversations about the nature of existence itself.
I personally use it with a modified llama-2-chat prompt format for SillyTavern/Simple-proxy, but it's fairly adaptable with regard to prompt format, so I would definitely encourage experimentation.
|
NasimB/bnc_spoken_aochildes_log_rarity-mixed-seed
|
NasimB
| 2023-08-02T16:05:25Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-02T14:00:27Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: bnc_spoken_aochildes_log_rarity-mixed-seed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bnc_spoken_aochildes_log_rarity-mixed-seed
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1729
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3769 | 0.29 | 500 | 5.3515 |
| 5.065 | 0.59 | 1000 | 4.9413 |
| 4.7275 | 0.88 | 1500 | 4.7082 |
| 4.4708 | 1.18 | 2000 | 4.5784 |
| 4.3239 | 1.47 | 2500 | 4.4544 |
| 4.2171 | 1.77 | 3000 | 4.3623 |
| 4.1007 | 2.06 | 3500 | 4.2928 |
| 3.9212 | 2.36 | 4000 | 4.2540 |
| 3.901 | 2.65 | 4500 | 4.1964 |
| 3.8579 | 2.95 | 5000 | 4.1494 |
| 3.6469 | 3.24 | 5500 | 4.1546 |
| 3.6172 | 3.54 | 6000 | 4.1207 |
| 3.5878 | 3.83 | 6500 | 4.0930 |
| 3.4727 | 4.12 | 7000 | 4.0978 |
| 3.3469 | 4.42 | 7500 | 4.0968 |
| 3.3354 | 4.71 | 8000 | 4.0848 |
| 3.3186 | 5.01 | 8500 | 4.0809 |
| 3.1618 | 5.3 | 9000 | 4.0917 |
| 3.1612 | 5.6 | 9500 | 4.0911 |
| 3.1527 | 5.89 | 10000 | 4.0916 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
bilbo991/clip-chuck
|
bilbo991
| 2023-08-02T16:02:07Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-text-dual-encoder",
"feature-extraction",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-07-30T16:21:33Z |
---
base_model: clip-chuck
tags:
- generated_from_trainer
model-index:
- name: clip-chuck
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-chuck
This model is a fine-tuned version of [clip-chuck](https://huggingface.co/clip-chuck) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0009
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.1
- Tokenizers 0.13.3
|
chainchompact/another-test
|
chainchompact
| 2023-08-02T15:52:23Z | 0 | 0 | null |
[
"stable-diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-02T15:52:23Z |
---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: true
---
|
smadhula/Azure-test
|
smadhula
| 2023-08-02T15:45:13Z | 0 | 0 | null |
[
"tensorboard",
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-07-28T16:01:59Z |
---
license: bigscience-openrail-m
---
|
rishipython/attempt-1-llama2-qlora-finetunined-wikitext2
|
rishipython
| 2023-08-02T15:38:05Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T15:38:01Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
shtif/Taxi-v3
|
shtif
| 2023-08-02T15:15:22Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-02T15:15:20Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym

# load_from_hub is the helper defined in the Deep RL Course (Unit 2) notebooks
model = load_from_hub(repo_id="shtif/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
sshalini6/base-1e4-r16-a16-d0.4-q-v-fc2
|
sshalini6
| 2023-08-02T15:08:12Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T15:08:11Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
ashercn97/manatee-7b-GPTQ
|
ashercn97
| 2023-08-02T15:07:01Z | 8 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-02T14:48:32Z |
---
metrics:
- bleu
- rouge
---
|
hbaieb77/llama-7b-mental-health
|
hbaieb77
| 2023-08-02T15:06:06Z | 9 | 0 |
peft
|
[
"peft",
"text-generation",
"region:us"
] |
text-generation
| 2023-08-02T10:11:14Z |
---
library_name: peft
pipeline_tag: text-generation
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
SmellyKat/Reinforce-Pixelcopter-PLE-v0
|
SmellyKat
| 2023-08-02T14:54:31Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-02T14:54:28Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 42.40 +/- 25.15
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
edures/poca-SoccerTwos
|
edures
| 2023-08-02T14:53:40Z | 10 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-08-02T14:53:29Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: edures/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
goxxx/rare-puppers
|
goxxx
| 2023-08-02T14:40:43Z | 193 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-02T14:40:35Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9402984976768494
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### samoyed

#### shiba inu

|
7Dberry/my_awesome_wnut_model
|
7Dberry
| 2023-08-02T14:40:09Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:wnut_17",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-28T18:48:57Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: my_awesome_wnut_model
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
config: wnut_17
split: test
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.5578947368421052
- name: Recall
type: recall
value: 0.29471733086190915
- name: F1
type: f1
value: 0.38568829593693144
- name: Accuracy
type: accuracy
value: 0.9419434825360181
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_wnut_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2675
- Precision: 0.5579
- Recall: 0.2947
- F1: 0.3857
- Accuracy: 0.9419
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 213 | 0.2739 | 0.5864 | 0.2799 | 0.3789 | 0.9395 |
| No log | 2.0 | 426 | 0.2675 | 0.5579 | 0.2947 | 0.3857 | 0.9419 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
|
Lajonbot/vicuna-7b-v1.5-PL-lora_GGML
|
Lajonbot
| 2023-08-02T13:59:27Z | 0 | 0 | null |
[
"facebook",
"meta",
"pytorch",
"llama",
"llama-2",
"text-generation",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:other",
"region:us"
] |
text-generation
| 2023-08-02T13:50:25Z |
---
language:
- pl
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
license: other
model_type: llama-2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
|
Mursel/llama-2-7b-chat-hf-finetuned2-turkish
|
Mursel
| 2023-08-02T13:59:04Z | 11 | 1 |
peft
|
[
"peft",
"llama",
"4-bit",
"region:us"
] | null | 2023-08-02T13:29:31Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
chainchompact/tomato-2
|
chainchompact
| 2023-08-02T13:54:05Z | 0 | 0 | null |
[
"stable-diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-02T13:54:05Z |
---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: true
---
|
gotzorih/resolution
|
gotzorih
| 2023-08-02T13:53:48Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T11:34:20Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
Lajonbot/vicuna-7b-v1.5-PL-lora_GPTQ
|
Lajonbot
| 2023-08-02T13:50:25Z | 8 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-2",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-02T13:47:23Z |
---
language:
- pl
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
license: other
model_type: llama-2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
|
vrajur/taxi-v3
|
vrajur
| 2023-08-02T13:41:50Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-02T13:41:49Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym

# load_from_hub is the helper defined in the Deep RL Course (Unit 2) notebooks
model = load_from_hub(repo_id="vrajur/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Remoitnane/distilbert-base-uncased-finetuned-emotion
|
Remoitnane
| 2023-08-02T13:34:40Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-31T14:00:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9225
- name: F1
type: f1
value: 0.9225876847747181
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2241
- Accuracy: 0.9225
- F1: 0.9226
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.856 | 1.0 | 250 | 0.3321 | 0.9025 | 0.8991 |
| 0.257 | 2.0 | 500 | 0.2241 | 0.9225 | 0.9226 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
umaru97/gpt2-product-review-generation
|
umaru97
| 2023-08-02T13:26:06Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2-medium",
"base_model:finetune:openai-community/gpt2-medium",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-02T12:15:07Z |
---
license: mit
base_model: gpt2-medium
tags:
- generated_from_trainer
model-index:
- name: gpt2-product-review-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-product-review-generation
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8570
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.0547 | 1.0 | 1777 | 2.9286 |
| 2.8842 | 2.0 | 3554 | 2.8736 |
| 2.804 | 3.0 | 5331 | 2.8570 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
sshalini6/small-5e4-r8-a32-d0.1-q-v-fc2
|
sshalini6
| 2023-08-02T13:10:21Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T13:10:20Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
vrajur/ppo-Huggy
|
vrajur
| 2023-08-02T13:08:08Z | 14 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-08-02T13:08:02Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: vrajur/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Jesse999/ppo-Huggy
|
Jesse999
| 2023-08-02T13:02:46Z | 14 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-08-02T13:02:35Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Jesse999/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
kxic/zero123-xl
|
kxic
| 2023-08-02T13:01:05Z | 1,344 | 2 |
diffusers
|
[
"diffusers",
"license:mit",
"diffusers:Zero1to3StableDiffusionPipeline",
"region:us"
] | null | 2023-07-26T14:24:52Z |
---
license: mit
---
Upload of zero123-xl.ckpt, converted with the diffusers script `convert_original_stable_diffusion_to_diffusers.py`.
[Zero123-hf](https://github.com/kxhit/zero123_hf) is implemented with diffusers pipelines.
Thanks to the original repo [Zero123](https://github.com/cvlab-columbia/zero123) and the original [weights](https://huggingface.co/cvlab/zero123-weights).
|
MaralGPT/chinkara-7b-faq
|
MaralGPT
| 2023-08-02T12:58:43Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T12:54:53Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
alexandremarie/bloom-7b1-lora-tagger
|
alexandremarie
| 2023-08-02T12:58:20Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T12:58:13Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
kelSidenna/SoftwareRequirements-T5-Base
|
kelSidenna
| 2023-08-02T12:55:08Z | 50 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-31T22:28:29Z |
---
license: apache-2.0
language:
- en
pipeline_tag: conversational
---
# Model Card for Fine-tuned T5-Base Conversational Model
## Model Details
- **Model name:** Fine-tuned T5-base Conversational Model
- **Model type:** Transformer-based language model
- **Original model:** T5-base from Hugging Face model hub
- **Fine-tuning details:** The model has been fine-tuned on a custom conversational dataset. It includes a variety of dialogues covering multiple topics, aimed at increasing the model's ability to respond accurately and engagingly in conversational tasks.
## Intended Use
This model is intended for use in conversation-based applications. These can range from chatbots to virtual assistants, customer support automation, and more.
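A minimal sketch of querying the model with the Transformers library (the prompt is only an example; the card does not document a required prompt format):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("kelSidenna/SoftwareRequirements-T5-Base")
model = T5ForConditionalGeneration.from_pretrained("kelSidenna/SoftwareRequirements-T5-Base")

# Example prompt only; adjust to your own dialogue format
inputs = tokenizer("User: Summarize the login requirements for the system.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```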
## Examples

The above image showcases a sample conversation that took place between the user and the chatbot powered by our fine-tuned T5-base model. As seen, the model is able to generate engaging and contextually appropriate responses.
|
haris001/attestationsdk
|
haris001
| 2023-08-02T12:42:18Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T12:41:07Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
J3ws3r/results
|
J3ws3r
| 2023-08-02T12:34:26Z | 136 | 0 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-02T12:32:37Z |
---
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.2.dev1
- Tokenizers 0.13.3
|
ailabturkiye/Geralt_of_Rivia
|
ailabturkiye
| 2023-08-02T12:34:17Z | 0 | 0 |
nemo
|
[
"nemo",
"game",
"geralt",
"witcher",
"license:openrail",
"region:us"
] | null | 2023-08-01T19:10:34Z |
---
license: openrail
metrics:
- character
library_name: nemo
tags:
- game
- geralt
- witcher
---
Geralt of Rivia
Geralt of Rivia, the main character of the Witcher 3 universe, was trained for 500 epochs with an s4000 dataset.
The model's DATASET and TRAIN are mine! It may not be used without permission. After obtaining permission, you must give credit on whatever social media platform you share it on.
Discord: Alastor#3115
YouTube: https://www.youtube.com/@NahParti
|
Skie0007/q_taxi
|
Skie0007
| 2023-08-02T12:32:50Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-02T12:32:47Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q_taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Skie0007/q_taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
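`load_from_hub` is not part of a published package; it comes from the course notebook. A minimal version, assuming the checkpoint is a pickled dictionary with keys such as `env_id` and `qtable`, might look like this:
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled Q-table checkpoint from the Hub and unpickle it.
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```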
|
LarryAIDraw/angelina-04
|
LarryAIDraw
| 2023-08-02T12:28:29Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-02T12:18:44Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/118944/angeline-or-5star-arknights-or-lora
|
LarryAIDraw/ais_v1
|
LarryAIDraw
| 2023-08-02T12:27:48Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-02T12:15:07Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/120984/ais-wallenstein-or-danmachi-lora
|
LarryAIDraw/Akeno_Himejima_DXD-KK77-V3
|
LarryAIDraw
| 2023-08-02T12:27:21Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-02T12:09:34Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/61829/akeno-himejima-or-high-school-dd-dd
|
LarryAIDraw/idolmaster_sc_kazano-10
|
LarryAIDraw
| 2023-08-02T12:27:08Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-02T12:09:04Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/103193/hiori-kazano-or-the-idolmster-shiny-colors
|
LarryAIDraw/celia_claire2
|
LarryAIDraw
| 2023-08-02T12:26:29Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-02T12:07:28Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/17268/celia-claire-seirei-gensouki-or-or
|
J3ws3r/lmv3invoice-small
|
J3ws3r
| 2023-08-02T12:22:37Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-08-02T12:22:37Z |
---
license: cc-by-nc-sa-4.0
---
|
LarryAIDraw/dryas
|
LarryAIDraw
| 2023-08-02T12:16:48Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-02T12:08:34Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/76356/dryas-or-or-seirei-gensouki-spirit-chronicles
|
aidiary/distilbert-base-uncased-finetuned-emotion
|
aidiary
| 2023-08-02T12:13:01Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-02T11:51:22Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9345
- name: F1
type: f1
value: 0.9344638918723668
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1514
- Accuracy: 0.9345
- F1: 0.9345
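A quick inference sketch (label names depend on how the checkpoint's `id2label` mapping was saved; in the `emotion` dataset they are sadness, joy, love, anger, fear and surprise):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="aidiary/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see the results of this experiment!"))
```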
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 125 | 0.4448 | 0.879 | 0.8713 |
| 0.6963 | 2.0 | 250 | 0.2099 | 0.922 | 0.9225 |
| 0.6963 | 3.0 | 375 | 0.1763 | 0.932 | 0.9324 |
| 0.1548 | 4.0 | 500 | 0.1560 | 0.932 | 0.9318 |
| 0.1548 | 5.0 | 625 | 0.1514 | 0.9345 | 0.9345 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.2
- Tokenizers 0.13.3
|
NEO946B/Reinforce-Pixelcopter-PLE-v0
|
NEO946B
| 2023-08-02T12:11:23Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-02T05:24:44Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 53.00 +/- 31.74
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ShaanSmarty/shaansdxl1.0
|
ShaanSmarty
| 2023-08-02T12:09:45Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-02T12:09:45Z |
---
license: creativeml-openrail-m
---
|
amiqinayat/swin-tiny-patch4-window7-224-finetuned
|
amiqinayat
| 2023-08-02T11:58:57Z | 214 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-02T09:05:04Z |
---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7649484536082474
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6899
- Accuracy: 0.7649
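A possible inference sketch (`example.jpg` is a placeholder path; the class names come from the unspecified `imagefolder` dataset):
```python
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "amiqinayat/swin-tiny-patch4-window7-224-finetuned"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("example.jpg")
inputs = processor(images=image, return_tensors="pt")
predicted_id = model(**inputs).logits.argmax(-1).item()
print(model.config.id2label[predicted_id])
```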
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1453 | 1.0 | 68 | 0.9646 | 0.6969 |
| 0.8615 | 1.99 | 136 | 0.7633 | 0.7340 |
| 0.7551 | 2.99 | 204 | 0.6899 | 0.7649 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
sshalini6/base-5e4-r16-a32-d0.1-q-v-fc2
|
sshalini6
| 2023-08-02T11:53:25Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T11:53:24Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
jensg/whisper-small-dv
|
jensg
| 2023-08-02T11:32:55Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-01T09:32:09Z |
---
language:
- dv
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Dv - Sanchit Gandhi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- name: Wer
type: wer
value: 11.631950481621868
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Dv - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3085
- Wer Ortho: 58.3606
- Wer: 11.6320
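A short transcription sketch (`sample.mp3` is a placeholder for a Dhivehi audio file):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="jensg/whisper-small-dv")
print(asr("sample.mp3")["text"])
```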
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.1119 | 1.63 | 500 | 0.1645 | 60.8817 | 12.9082 |
| 0.0457 | 3.26 | 1000 | 0.1677 | 59.3286 | 11.7450 |
| 0.0317 | 4.89 | 1500 | 0.1903 | 58.3258 | 11.3798 |
| 0.012 | 6.51 | 2000 | 0.2292 | 58.0403 | 11.7085 |
| 0.007 | 8.14 | 2500 | 0.2595 | 56.9538 | 11.0982 |
| 0.0061 | 9.77 | 3000 | 0.2606 | 56.6404 | 10.8878 |
| 0.0052 | 11.4 | 3500 | 0.2737 | 56.8911 | 11.1799 |
| 0.0033 | 13.03 | 4000 | 0.3085 | 58.3606 | 11.6320 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
psxjp5/mlm_old
|
psxjp5
| 2023-08-02T11:32:32Z | 135 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-02T10:04:54Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: mlm_new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mlm_new
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0711
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Perplexity |
|:-------------:|:-----:|:----:|:---------------:|:----------:|
| 12.948 | 0.99 | 22 | 5.1466 | 171.85 |
| 4.061 | 1.98 | 44 | 4.3114 | 74.54 |
| 3.7125 | 2.97 | 66 | 4.0807 | 59.19 |
| 3.6033 | 3.96 | 88 | 4.0553 | 57.70 |
| 3.5032 | 4.94 | 110 | 4.0514 | 57.48 |
| 3.4427 | 5.93 | 132 | 4.0879 | 59.61 |
| 3.3968 | 6.92 | 154 | 4.0711 | 58.62 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
joaocmd/videomae-base-finetuned-ucf101-subset
|
joaocmd
| 2023-08-02T11:28:41Z | 59 | 0 |
transformers
|
[
"transformers",
"pytorch",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2023-08-02T11:05:57Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2174
- Accuracy: 0.9286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7419 | 0.25 | 75 | 1.4735 | 0.5714 |
| 0.823 | 1.25 | 150 | 0.6271 | 0.7286 |
| 0.3551 | 2.25 | 225 | 0.2590 | 0.9429 |
| 0.239 | 3.25 | 300 | 0.2174 | 0.9286 |
### Framework versions
- Transformers 4.30.1
- Pytorch 1.13.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
fadliaulawi/bert-finetuned-squad
|
fadliaulawi
| 2023-08-02T11:11:20Z | 128 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tensorboard",
"bert",
"question-answering",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-02T03:21:17Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: fadliaulawi/bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# fadliaulawi/bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2984
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5545, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.2984 | 0 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.2
- Tokenizers 0.13.3
|
usman0007/sd-1200-286
|
usman0007
| 2023-08-02T11:01:13Z | 6 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-02T10:57:42Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### SD_1200/286 Dreambooth model trained by usman0007 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
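The checkpoint can also be loaded directly with diffusers. The trigger word used during DreamBooth training is not documented here, so the prompt below is only illustrative:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "usman0007/sd-1200-286", torch_dtype=torch.float16
).to("cuda")

# Replace the prompt with one containing the instance token used at training time.
image = pipe("a photo of the trained concept").images[0]
image.save("sample.png")
```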
Sample pictures of this concept:
|
Masterjp123/Furation
|
Masterjp123
| 2023-08-02T11:00:39Z | 8 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-02T06:44:19Z |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
---
Civitai repo: https://civitai.com/models/120959?modelVersionId=131574
|
Valent2809/news_classifier_funding
|
Valent2809
| 2023-08-02T10:59:00Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-02T04:26:21Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Valent2809/news_classifier_funding
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Valent2809/news_classifier_funding
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1991
- Validation Loss: 0.1566
- Train Accuracy: 0.9397
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3443, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.1991 | 0.1566 | 0.9397 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
BA-Project-SA-CRM/SA_Checkpoints
|
BA-Project-SA-CRM
| 2023-08-02T10:57:03Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-28T10:32:15Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: SA_Checkpoints
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SA_Checkpoints
This model is a fine-tuned version of [deepset/gbert-base](https://huggingface.co/deepset/gbert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1118
- Accuracy: 0.9583
- F1: 0.9583
- Precision: 0.9583
- Recall: 0.9583
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
runningsnake/mt5-small-finetuned-amazon-en-es
|
runningsnake
| 2023-08-02T10:53:08Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-02T08:10:25Z |
---
license: apache-2.0
base_model: google/mt5-small
tags:
- generated_from_keras_callback
model-index:
- name: runningsnake/mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# runningsnake/mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.1495
- Validation Loss: 3.1758
- Epoch: 3
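No usage example is provided in this card; as a rough sketch (the summarization prefix/format used during fine-tuning is not documented, so results may vary):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization", model="runningsnake/mt5-small-finetuned-amazon-en-es"
)
review = "Replace this with an English or Spanish product review to summarize."
print(summarizer(review, max_length=30, min_length=5))
```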
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 9672, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.2691 | 3.2396 | 0 |
| 3.2039 | 3.2204 | 1 |
| 3.1760 | 3.2321 | 2 |
| 3.1495 | 3.1758 | 3 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.2
- Tokenizers 0.13.3
|
Jesse999/moonraker
|
Jesse999
| 2023-08-02T10:51:17Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-31T08:56:32Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 278.87 +/- 20.48
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Below is a minimal loading sketch; the checkpoint filename is an assumption and may differ from the file stored in this repository.
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub (filename assumed) and load it with SB3.
checkpoint = load_from_hub(repo_id="Jesse999/moonraker", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
emmamutabe/roberta-finetuned-subjqa-movies_2
|
emmamutabe
| 2023-08-02T10:48:27Z | 120 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:deepset/roberta-base-squad2-covid",
"base_model:finetune:deepset/roberta-base-squad2-covid",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-02T10:29:23Z |
---
license: cc-by-4.0
base_model: deepset/roberta-base-squad2-covid
tags:
- generated_from_trainer
model-index:
- name: roberta-finetuned-subjqa-movies_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-subjqa-movies_2
This model is a fine-tuned version of [deepset/roberta-base-squad2-covid](https://huggingface.co/deepset/roberta-base-squad2-covid) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
kyzer0/alice
|
kyzer0
| 2023-08-02T10:46:41Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-02T10:46:41Z |
---
license: creativeml-openrail-m
---
|
Ashiy/Wilson
|
Ashiy
| 2023-08-02T10:42:48Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2023-08-02T10:41:13Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hbaieb77/llama-7b-omdena-project
|
hbaieb77
| 2023-08-02T10:26:14Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-30T18:04:48Z |
---
{}
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
- PEFT 0.5.0.dev0
|
EMLConverter/Regain-eml-to-pst-converter
|
EMLConverter
| 2023-08-02T10:23:32Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-08-02T10:14:00Z |
The Outlook desktop client only supports the PST format, which is why only PST files can be imported directly. However, there are a few workarounds for bringing EML files into Outlook:
Import the EML file into an email client that supports the PST format: import the EML file into an email client such as Microsoft Outlook, then use the client's export function to save the imported emails as a PST file.
Use a third-party email conversion tool: several tools can convert EML files to PST format, for example "Regain EML to PST Converter." We suggest this tool because it offers advanced features and algorithms while remaining suitable for non-technical users.
Use a manual process: You can manually convert the EML file to PST format by opening the EML file in an email client like Microsoft Outlook and then using the email client's save function to save the email as a PST file.
Regardless of which method you choose, keep in mind that the conversion process can be complex and may require some technical expertise. If you are not comfortable with these methods, consider seeking the assistance of a professional.
For more details, visit: https://www.regainsoftware.com/eml-to-pst-converter.html
|
shayonhuggingface/phobert-v2-mtl-sequence-classification
|
shayonhuggingface
| 2023-08-02T10:21:45Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-02T09:31:40Z |
---
base_model: ''
tags:
- generated_from_trainer
model-index:
- name: phobert-v2-mtl-sequence-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phobert-v2-mtl-sequence-classification
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
rabiyulfahim/grammerchecking
|
rabiyulfahim
| 2023-08-02T10:20:12Z | 106 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"grammar",
"en",
"dataset:jfleg",
"arxiv:1702.04066",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-02T10:18:04Z |
---
language: en
tags:
- grammar
- text2text-generation
license: cc-by-nc-sa-4.0
datasets:
- jfleg
---
# T5 Grammar Correction
This model generates a revised version of inputted text with the goal of containing fewer grammatical errors.
It was trained with [Happy Transformer](https://github.com/EricFillion/happy-transformer)
using a dataset called [JFLEG](https://arxiv.org/abs/1702.04066). Here's a [full article](https://www.vennify.ai/fine-tune-grammar-correction/) on how to train a similar model.
## Usage
`pip install happytransformer `
```python
from happytransformer import HappyTextToText, TTSettings
happy_tt = HappyTextToText("T5", "vennify/t5-base-grammar-correction")
args = TTSettings(num_beams=5, min_length=1)
# Add the prefix "grammar: " before each input
result = happy_tt.generate_text("grammar: This sentences has has bads grammar.", args=args)
print(result.text) # This sentence has bad grammar.
```
|
Irza/llama2-dodol-indonesia
|
Irza
| 2023-08-02T10:19:28Z | 1 | 1 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T10:19:21Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
enthuzst/llama2_french_model_test
|
enthuzst
| 2023-08-02T10:13:27Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T10:13:22Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
ReeLq/sd-class-butterflies-32
|
ReeLq
| 2023-08-02T10:11:16Z | 32 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-08-02T10:10:38Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('ReeLq/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
muneebashraf/USAHousing
|
muneebashraf
| 2023-08-02T10:08:35Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2023-08-02T10:06:11Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
EllaHong/news_exp2
|
EllaHong
| 2023-08-02T09:29:51Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T09:29:46Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
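The same settings can be expressed as a `BitsAndBytesConfig` when reloading the base model for inference (the base checkpoint is not stated in this card, so only the quantization config is sketched):
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```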
### Framework versions
- PEFT 0.5.0.dev0
|
Amerbarhoush/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ
|
Amerbarhoush
| 2023-08-02T09:25:33Z | 9 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"sft",
"en",
"dataset:ehartford/dolphin",
"dataset:shahules786/orca-chat",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:atom-in-the-universe/fanfics-10k-50k",
"arxiv:2306.02707",
"license:other",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-08-02T08:01:02Z |
---
datasets:
- ehartford/dolphin
- shahules786/orca-chat
- togethercomputer/RedPajama-Data-1T
- atom-in-the-universe/fanfics-10k-50k
inference: false
language:
- en
license: other
model_creator: OpenAssistant
model_link: https://huggingface.co/OpenAssistant/llama2-13b-orca-8k-3319
model_name: Llama2 13B Orca 8K 3319
model_type: llama
pipeline_tag: text-generation
quantized_by: TheBloke
tags:
- sft
widget:
- text: <|system|>You are an AI assistant. You will be given a task. You must generate
a detailed and long answer.</s><|prompter|>What is a meme, and what's the history
behind this word?</s><|assistant|>
- text: <|system|>You are an AI assistant that helps people find information.</s><|prompter|>What's
the Earth total population</s><|assistant|>
- text: <|system|>You are an AI assistant that follows instruction extremely well.
Help as much as you can.</s><|prompter|>Write a story about future of AI development</s><|assistant|>
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Llama2 13B Orca 8K 3319 - GPTQ
- Model creator: [OpenAssistant](https://huggingface.co/OpenAssistant)
- Original model: [Llama2 13B Orca 8K 3319](https://huggingface.co/OpenAssistant/llama2-13b-orca-8k-3319)
## Description
This repo contains GPTQ model files for [OpenAssistant's Llama2 13B Orca 8K 3319](https://huggingface.co/OpenAssistant/llama2-13b-orca-8k-3319).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GGML)
* [OpenAssistant's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/OpenAssistant/llama2-13b-orca-8k-3319)
## Prompt template: OpenAssistant-System
```
<|system|>{system_message}</s><|prompter|>{prompt}</s><|assistant|>
```
## Provided files
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
| Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
| ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
| [main](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ/tree/main) | 4 | 128 | False | 7.26 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | True | 8.00 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | True | 7.51 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | True | 7.26 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | True | 13.36 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
| [gptq-8bit-128g-actorder_False](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ/tree/gptq-8bit-128g-actorder_False) | 8 | 128 | False | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | True | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
| [gptq-8bit-64g-actorder_True](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ/tree/gptq-8bit-64g-actorder_True) | 8 | 64 | True | 13.95 GB | False | AutoGPTQ | 8-bit, with group size 64g and Act Order for maximum inference quality. Poor AutoGPTQ CUDA speed. |
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ:gptq-4bit-32g-actorder_True`
- With Git, you can clone a branch with:
```
git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done"
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
## How to use this GPTQ model from Python code
First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
`GITHUB_ACTIONS=true pip install auto-gptq`
Then try the following example code:
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
model_name_or_path = "TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ"
model_basename = "gptq_model-4bit-128g"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
model_basename=model_basename,
use_safetensors=True,
trust_remote_code=False,
device="cuda:0",
use_triton=use_triton,
quantize_config=None)
"""
To download from a specific branch, use the revision parameter, as in this example:
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
revision="gptq-4bit-32g-actorder_True",
model_basename=model_basename,
use_safetensors=True,
trust_remote_code=False,
device="cuda:0",
quantize_config=None)
"""
prompt = "Tell me about AI"
prompt_template=f'''<|system|>{system_message}</s><|prompter|>{prompt}</s><|assistant|>
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Compatibility
The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.
ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: Slarti, Chadd, John Detwiler, Pieter, zynix, K, Mano Prime, ReadyPlayerEmma, Ai Maven, Leonard Tan, Edmond Seymore, Joseph William Delisle, Luke @flexchar, Fred von Graf, Viktor Bowallius, Rishabh Srivastava, Nikolai Manek, Matthew Berman, Johann-Peter Hartmann, ya boyyy, Greatston Gnanesh, Femi Adebogun, Talal Aujan, Jonathan Leane, terasurfer, David Flickinger, William Sang, Ajan Kanaga, Vadim, Artur Olbinski, Raven Klaugh, Michael Levine, Oscar Rangel, Randy H, Cory Kujawski, RoA, Dave, Alex, Alexandros Triantafyllidis, Fen Risland, Eugene Pentland, vamX, Elle, Nathan LeClaire, Khalefa Al-Ahmad, Rainer Wilmers, subjectnull, Junyu Yang, Daniel P. Andersen, SuperWojo, LangChain4j, Mandus, Kalila, Illia Dulskyi, Trenton Dambrowitz, Asp the Wyvern, Derek Yates, Jeffrey Morgan, Deep Realms, Imad Khwaja, Pyrater, Preetika Verma, biorpg, Gabriel Tamborski, Stephen Murray, Spiking Neurons AB, Iucharbius, Chris Smitley, Willem Michiel, Luke Pendergrass, Sebastain Graf, senxiiz, Will Dee, Space Cruiser, Karl Bernard, Clay Pascal, Lone Striker, transmissions 11, webtim, WelcomeToTheClub, Sam, theTransient, Pierre Kircher, chris gileta, John Villwock, Sean Connelly, Willian Hasse
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: OpenAssistant's Llama2 13B Orca 8K 3319
# llama2-13b-orca-8k-3319
## Model Description
This model is a fine-tuning of Meta's Llama2 13B model with 8K context size on a long-conversation variant of the Dolphin dataset ([orca-chat](https://huggingface.co/datasets/shahules786/orca-chat)).
Note: **At least Huggingface Transformers [4.31.0](https://pypi.org/project/transformers/4.31.0/) is required to load this model!**
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("OpenAssistant/llama2-13b-orca-8k-3319", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("OpenAssistant/llama2-13b-orca-8k-3319", torch_dtype=torch.float16, low_cpu_mem_usage=True, device_map="auto")
system_message = "You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information."
user_prompt = "Write me a poem please"
prompt = f"""<|system|>{system_message}</s><|prompter|>{user_prompt}</s><|assistant|>"""
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
## Model Details
- base model: [meta-llama/Llama-2-13b](https://huggingface.co/meta-llama/Llama-2-13b)
- License: [Llama 2 Community License Agreement](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
- sampling report: [2023-07-25_OpenAssistant_llama2-13b-orca-8k-3319_sampling_llama2_prompt.json](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-pretrained%2F2023-07-25_OpenAssistant_llama2-13b-orca-8k-3319_sampling_llama2_prompt.json)
- wandb: [public-sft/runs/2jfazjt9](https://wandb.ai/open-assistant/public-sft/runs/2jfazjt9)
- checkpoint: 3319 steps
- datatype: fp16
- sponsored by: [Redmond.ai](https://redmond.ai/)
## Long context (RoPE Scaling)
This model was fine-tuned with a context size of 8192 tokens using linear scaling of RoPE embeddings. This feature was recently
added to [Huggingface transformers](https://github.com/huggingface/transformers/). Before loading this model please make sure
HF transformers >=4.31.0 is installed (`pip install transformers>=4.31.0`).
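If you want to make the scaling explicit when loading, here is a minimal sketch (assuming transformers >= 4.31.0; the checkpoint's config is expected to already carry the linear scaling, so the `rope_scaling` argument below only overrides or makes it visible):
```python
import torch
from transformers import AutoModelForCausalLM

# Linear RoPE scaling with factor 2 extends Llama-2's 4096-token context to 8192.
model = AutoModelForCausalLM.from_pretrained(
    "OpenAssistant/llama2-13b-orca-8k-3319",
    torch_dtype=torch.float16,
    device_map="auto",
    rope_scaling={"type": "linear", "factor": 2.0},  # assumption: matches the training setup (scale: 2)
)
print(model.config.rope_scaling, model.config.max_position_embeddings)
```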
## Conversation Template
For the initial response use (e.g. the [llama2 default system prompt](https://github.com/facebookresearch/llama/blob/6c7fe276574e78057f917549435a2554000a876d/llama/generation.py#L46) works well):
```
<|system|>system message</s><|prompter|>user prompt</s><|assistant|>
```
For multi-turn conversations use:
```
<|system|>system message</s><|prompter|>Q1</s><|assistant|>A1</s><|prompter|>Q2</s><|assistant|>
```
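As an illustrative sketch (the `build_prompt` helper is hypothetical, not part of the released tooling), prompts in this format can be assembled programmatically:
```python
def build_prompt(system_message, turns):
    """Assemble an OpenAssistant-style prompt.

    `turns` is a list of (user, assistant) pairs; pass None as the assistant of the
    final pair to leave the prompt open for the model to complete.
    """
    prompt = f"<|system|>{system_message}</s>"
    for user, assistant in turns:
        prompt += f"<|prompter|>{user}</s><|assistant|>"
        if assistant is not None:
            prompt += f"{assistant}</s>"
    return prompt

# Produces: <|system|>system message</s><|prompter|>Q1</s><|assistant|>A1</s><|prompter|>Q2</s><|assistant|>
print(build_prompt("system message", [("Q1", "A1"), ("Q2", None)]))
```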
The model was trained with the following 15 system messages used to generate the training examples (see [ORCA paper](https://arxiv.org/abs/2306.02707)):
1. You are an AI assistant. Provide a detailed answer so user don’t need to search outside to understand the answer.
2. You are an AI assistant. You will be given a task. You must generate a detailed and long answer.
3. You are a helpful assistant, who always provide explanation. Think like you are answering to a five year old.
4. You are an AI assistant that follows instruction extremely well. Help as much as you can.
5. You are an AI assistant that helps people find information. Provide a detailed answer so user don’t need to search outside to understand the answer.
6. You are an AI assistant. User will you give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.
7. You should describe the task and explain your answer. While answering a multiple choice question, first output the correct answer(s). Then explain why other answers are wrong. Think like you are answering to a five year old.
8. Explain how you used the definition to come up with the answer.
9. You are an AI assistant. You should describe the task and explain your answer. While answering a multiple choice question, first output the correct answer(s). Then explain why other answers are wrong. You might need to use additional knowledge to answer the question.
10. You are an AI assistant that helps people find information. User will you give you a question. Your task is to answer as faithfully as you can. While answering think step-by- step and justify your answer.
11. User will you give you a task with some instruction. Your job is follow the instructions as faithfully as you can. While answering think step-by-step and justify your answer.
12. You are a teacher. Given a task, you explain in simple steps what the task is asking, any guidelines it provides and how to use those guidelines to find the answer.
13. You are an AI assistant, who knows every language and how to translate one language to another. Given a task, you explain in simple steps what the task is asking, any guidelines that it provides. You solve the task and show how you used the guidelines to solve the task.
14. Given a definition of a task and a sample input, break the definition into small parts. Each of those parts will have some instruction. Explain their meaning by showing an example that meets the criteria in the instruction. Use the following format: Part \#: a key part of the definition. Usage: Sample response that meets the criteria from the key part. Explain why you think it meets the criteria.
15. You are an AI assistant that helps people find information.
## Datasets: Orca-Chat/Dolphin, RedPajama1T & FanFics
This model was trained on:
- [shahules786/orca-chat](https://huggingface.co/datasets/shahules786/orca-chat)
- [togethercomputer/RedPajama-Data-1T-Sample](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)
- [atom-in-the-universe/fanfics-10k-50k](https://huggingface.co/datasets/atom-in-the-universe/fanfics-10k-50k)
```
Dataset Composition:
    Train (sampled):
      orca-chat: 188842 (100%)
      fanfics: 47760 (100%)
      red_pajama: 188262 (25%)
    Valid:
      orca-chat: 5000
      fanfics: 1000
      red_pajama: 1000
```
The dataset [shahules786/orca-chat](https://huggingface.co/datasets/shahules786/orca-chat) combines similar examples of the GPT-4 subset of [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) to form longer conversations
to improve long-context training.
Additionally, RedPajama and FanFics were used for classic language modelling as an auxiliary task to improve the RoPE scaling for the 8k context size.
## Model Configuration
```
llama2_13b_orca_8k:
  rng_seed: 0xe1291f1a
  use_custom_sampler: true
  sort_by_length: false
  dtype: fp16
  log_dir: "llama2_log_13b_orca_8k"
  learning_rate: 1e-5
  model_name: /mnt/data/llama2/Llama-2-13b-hf/
  output_dir: llama2_13b_orca_8k
  deepspeed_config: configs/zero_config_pretrain.json
  weight_decay: 0.0
  max_length: 8192
  warmup_steps: 100
  use_flash_attention: true
  gradient_checkpointing: true
  gradient_accumulation_steps: 8
  per_device_train_batch_size: 2
  per_device_eval_batch_size: 1
  residual_dropout: 0.0
  eval_steps: 200
  save_steps: 1000  # (total steps: 3319)
  num_train_epochs: 1
  save_total_limit: 4
  superhot: true
  superhot_config:
    type: linear
    scale: 2
  datasets:
    - orca-chat:
        max_val_set: 5000
    - fanfics:
        max_chunk_size: 65535
        max_val_set: 1000
    - red_pajama:
        fraction: 0.25
        max_val_set: 1000
        max_chunk_size: 65535
  peft_model: false
```
# Developers
- [shahules786](https://github.com/shahules786)
- [jordiclive](https://github.com/jordiclive)
- [andreaskoepf](https://github.com/andreaskoepf/)
# Special Thanks
We want to especially thank Eric Hartford who spared no expense in replicating ORCA and making it available at [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin)!
Also, shoutout to the whole team working on [LLongMA-2-13b](https://huggingface.co/conceptofmind/LLongMA-2-13b) & the [scaled-rope](https://github.com/jquesnelle/scaled-rope) repository for their awesome work: bloc97, jquesnelle & conceptofmind!
The whole Open-Assistant team is very grateful for the continued support of [Redmond.ai](https://redmond.ai/) who sponsored the training compute required for this model.
# License
- Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.
- Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the [Acceptable Use Policy](https://ai.meta.com/llama/use-policy) for the Llama Materials.
|
hogiahien/aom3
|
hogiahien
| 2023-08-02T09:24:01Z | 40 | 3 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-02T09:05:31Z |
---
duplicated_from: kebab111/aom3
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
|
jinaai/flat-2d-animerge
|
jinaai
| 2023-08-02T09:20:40Z | 938 | 9 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-17T22:23:26Z |
---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
pipeline_tag: text-to-image
inference: true
---
# flat-2d-animerge
This is a checkpoint made by [bigbeanboiler](https://civitai.com/models/35960) and published on Civitai.
The weights have been converted to diffusers format for ease of use in the diffusers library.
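A minimal loading sketch with the diffusers library (assumes a CUDA GPU and a recent diffusers/torch install; the prompt is illustrative only):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("jinaai/flat-2d-animerge", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("flat 2d anime style, portrait of a girl with short hair, vivid colors").images[0]
image.save("sample.png")
```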
Sample images:


|
circulus/Llama-2-13b-orca-v1
|
circulus
| 2023-08-02T09:20:12Z | 1,571 | 5 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:Open-Orca/OpenOrca",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-01T04:58:53Z |
---
license: mit
datasets:
- Open-Orca/OpenOrca
language:
- en
library_name: transformers
pipeline_tag: text-generation
---

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit loading requires the bitsandbytes and accelerate packages.
model_name = "circulus/Llama-2-13b-orca-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_use_double_quant=True)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", quantization_config=config)
```
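Generation then follows the standard transformers API; a minimal follow-up sketch (the plain prompt is illustrative only, as the card does not document a chat template):
```python
# Continues from the loading snippet above (`tokenizer` and `model` already defined).
inputs = tokenizer("Tell me about AI.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```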
|
jkhan447/HateXplain-DS-labeled-001
|
jkhan447
| 2023-08-02T09:08:50Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-02T08:15:03Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: HateXplain-DS-labeled-001
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HateXplain-DS-labeled-001
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2937
- Accuracy: 0.6245
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
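As a hedged inference sketch (label names and their meaning come from the checkpoint's config and are not documented here):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="jkhan447/HateXplain-DS-labeled-001")
print(classifier("I really enjoyed this movie."))
```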
|
chaanks/asr-whisper-tiny-sb
|
chaanks
| 2023-08-02T09:04:49Z | 7 | 0 |
speechbrain
|
[
"speechbrain",
"whisper",
"pytorch",
"Transformer",
"hf-asr-leaderboard",
"automatic-speech-recognition",
"en",
"license:apache-2.0",
"model-index",
"region:us"
] |
automatic-speech-recognition
| 2023-08-01T11:53:52Z |
---
language:
- en
thumbnail: null
pipeline_tag: automatic-speech-recognition
tags:
- whisper
- pytorch
- speechbrain
- Transformer
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: asr-whisper-tiny-sb
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 7.54
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 17.15
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# Whisper tiny SpeechBrain
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end whisper model within
SpeechBrain. Please note that this is not an official SpeechBrain repository.
## Install SpeechBrain
First of all, please install transformers and SpeechBrain with the following command:
```
pip install speechbrain transformers==4.28.0
```
Please note that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Transcribing your own audio files
```python
from speechbrain.pretrained import WhisperASR
asr_model = WhisperASR.from_hparams(source="chaanks/asr-whisper-tiny-sb", savedir="pretrained_models/asr-whisper-tiny-sb")
asr_model.transcribe_file("chaanks/asr-whisper-tiny-sb/example.wav")
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
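For example (a sketch assuming a CUDA device is available):
```python
from speechbrain.pretrained import WhisperASR

asr_model = WhisperASR.from_hparams(
    source="chaanks/asr-whisper-tiny-sb",
    savedir="pretrained_models/asr-whisper-tiny-sb",
    run_opts={"device": "cuda"},
)
```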
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Referencing SpeechBrain
```
@misc{SB2021,
author = {Ravanelli, Mirco and Parcollet, Titouan and Rouhe, Aku and Plantinga, Peter and Rastorgueva, Elena and Lugosch, Loren and Dawalatabad, Nauman and Ju-Chieh, Chou and Heba, Abdel and Grondin, Francois and Aris, William and Liao, Chien-Feng and Cornell, Samuele and Yeh, Sung-Lin and Na, Hwidong and Gao, Yan and Fu, Szu-Wei and Subakan, Cem and De Mori, Renato and Bengio, Yoshua },
title = {SpeechBrain},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\\url{https://github.com/speechbrain/speechbrain}},
}
```
#### About SpeechBrain
SpeechBrain is an open-source and all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly. Competitive or state-of-the-art performance is obtained in various domains.
Website: https://speechbrain.github.io/
GitHub: https://github.com/speechbrain/speechbrain
|
DataPrime/ppo-LunarLander-v2
|
DataPrime
| 2023-08-02T09:03:56Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-02T09:03:35Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.25 +/- 27.96
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (filename assumed) and load it with SB3.
checkpoint = load_from_hub(repo_id="DataPrime/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
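A short evaluation rollout could then look like this (a sketch assuming stable-baselines3 >= 2.0 with the gymnasium API and Box2D installed):
```python
import gymnasium as gym

# `model` comes from the loading snippet above.
env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward:.1f}")
```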
|
dev-ninja/tsel_distilgpt
|
dev-ninja
| 2023-08-02T08:59:19Z | 136 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-02T08:55:46Z |
---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: tsel_distilgpt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tsel_distilgpt
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.6157
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 5.9501 |
| No log | 2.0 | 2 | 5.8630 |
| No log | 3.0 | 3 | 5.7924 |
| No log | 4.0 | 4 | 5.7383 |
| No log | 5.0 | 5 | 5.6969 |
| No log | 6.0 | 6 | 5.6665 |
| No log | 7.0 | 7 | 5.6445 |
| No log | 8.0 | 8 | 5.6297 |
| No log | 9.0 | 9 | 5.6202 |
| No log | 10.0 | 10 | 5.6157 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
Yaopu/translate-scratch-kde4-en-to-fr
|
Yaopu
| 2023-08-02T08:58:35Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-08-01T06:24:34Z |
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- translation
- generated_from_trainer
datasets:
- kde4
model-index:
- name: translate-scratch-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# translate-scratch-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
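A hedged usage sketch with the transformers translation pipeline (the example sentence is illustrative only):
```python
from transformers import pipeline

translator = pipeline("translation", model="Yaopu/translate-scratch-kde4-en-to-fr")
print(translator("Open the file menu and select Save As.")[0]["translation_text"])
```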
|