| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-20 18:29:57) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 566 classes) | tags (list, 1 – 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-20 18:29:15) | card (string, 11 – 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
ashercn97/code-llama-slay
|
ashercn97
| 2023-07-25T17:45:06Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"text-generation",
"en",
"region:us"
] |
text-generation
| 2023-07-22T21:37:24Z |
---
pipeline_tag: text-generation
library_name: peft
language:
- en
---
This model is a test model that will hopefully generate code. It was built with Axolotl (I don't know how to configure the widget) and trained for about an hour on two GPUs.
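A minimal loading sketch with `peft` (the base model is not stated in this card, so it is read from the adapter config rather than hard-coded):
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "ashercn97/code-llama-slay"
config = PeftConfig.from_pretrained(adapter_id)

# Load the base model named in the adapter config, then attach the adapter.
base = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)
```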
|
Adi0010/ppo-LunarLander-v2
|
Adi0010
| 2023-07-25T17:43:50Z | 2 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2023-04-24T16:42:05Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -201.54 +/- 85.03
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': 'None',
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'Adi0010/ppo-LunarLander-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
happyone/bert-finetuned-ner
|
happyone
| 2023-07-25T17:29:22Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-25T16:53:51Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9304821664464994
- name: Recall
type: recall
value: 0.9483338943116796
- name: F1
type: f1
value: 0.9393232205367562
- name: Accuracy
type: accuracy
value: 0.9853858833225407
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0628
- Precision: 0.9305
- Recall: 0.9483
- F1: 0.9393
- Accuracy: 0.9854
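A minimal usage sketch with the `transformers` pipeline API (the example sentence is arbitrary and not from the card):
```python
from transformers import pipeline

# Token-classification (NER) pipeline; aggregation_strategy groups word pieces
# into whole entities.
ner = pipeline(
    "token-classification",
    model="happyone/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face Inc. is based in New York City."))
```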
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0776 | 1.0 | 1756 | 0.0753 | 0.9097 | 0.9322 | 0.9208 | 0.9802 |
| 0.0405 | 2.0 | 3512 | 0.0588 | 0.9236 | 0.9465 | 0.9349 | 0.9857 |
| 0.0239 | 3.0 | 5268 | 0.0628 | 0.9305 | 0.9483 | 0.9393 | 0.9854 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Vhey/childrens-stories-v1-toon-anime
|
Vhey
| 2023-07-25T17:18:01Z | 0 | 2 | null |
[
"stable-diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-25T17:16:43Z |
---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: true
---
Making children's books, comics, and graphic novels is rewarding, but very heavy on the artwork. This model will help you tell any of your stories visually. Let your imagination go wild. If you need custom characters that are consistent throughout a story, you can hire me to make one for you. Contact me through one of the links under my profile.
Recommended:
Clip Skip: 1
VAE: vae-ft-mse-840000-ema-pruned
|
ailabturkiye/serdartuncer
|
ailabturkiye
| 2023-07-25T16:59:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-25T16:57:34Z |
[](discord.gg/ailab)


# Serdar Tuncer - RVC V2 250 Epoch
**This is a voice model of Serdar Tuncer.**
**Trained with RVC V2 | 15-minute dataset | 250 epochs.**
_The dataset and training were done by me._
__Sharing this model outside the [Ai Lab Discord](discord.gg/ailab) server without permission is strictly forbidden; the model is licensed under OpenRAIL.__
## Credits
**Please give credit when sharing a cover made with this model on any platform.**
- Discord: hydragee
- YouTube: CoverLai (https://www.youtube.com/@coverlai)

[](discord.gg/ailab)

|
liuyt75/t5-small_prefix_tuning_sentences_66agree_10
|
liuyt75
| 2023-07-25T16:56:11Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-25T16:56:07Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
EricPeter/bert-finetuned-squad-v22
|
EricPeter
| 2023-07-25T16:53:00Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"base_model:badokorach/bert-finetuned-squad",
"base_model:finetune:badokorach/bert-finetuned-squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-25T16:10:15Z |
---
license: apache-2.0
base_model: badokorach/bert-finetuned-squad
tags:
- generated_from_keras_callback
model-index:
- name: EricPeter/bert-finetuned-squad-v22
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# EricPeter/bert-finetuned-squad-v22
This model is a fine-tuned version of [badokorach/bert-finetuned-squad](https://huggingface.co/badokorach/bert-finetuned-squad) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0121
- Epoch: 29
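A minimal usage sketch with the `transformers` pipeline API (assumption: TensorFlow is installed, since this checkpoint is stored in TensorFlow format; the question and context are arbitrary):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="EricPeter/bert-finetuned-squad-v22")
result = qa(
    question="What was the model fine-tuned from?",
    context="This model is a fine-tuned version of badokorach/bert-finetuned-squad.",
)
print(result["answer"])
```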
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1950, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 2.6826 | 0 |
| 1.9953 | 1 |
| 1.5543 | 2 |
| 1.2287 | 3 |
| 0.8953 | 4 |
| 0.6043 | 5 |
| 0.3745 | 6 |
| 0.2298 | 7 |
| 0.1536 | 8 |
| 0.1098 | 9 |
| 0.0987 | 10 |
| 0.0683 | 11 |
| 0.0609 | 12 |
| 0.0473 | 13 |
| 0.0345 | 14 |
| 0.0353 | 15 |
| 0.0294 | 16 |
| 0.0232 | 17 |
| 0.0243 | 18 |
| 0.0170 | 19 |
| 0.0190 | 20 |
| 0.0111 | 21 |
| 0.0138 | 22 |
| 0.0078 | 23 |
| 0.0143 | 24 |
| 0.0095 | 25 |
| 0.0112 | 26 |
| 0.0092 | 27 |
| 0.0116 | 28 |
| 0.0121 | 29 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.0
- Tokenizers 0.13.3
|
gombalmukiyo/oryviapolindasari
|
gombalmukiyo
| 2023-07-25T16:51:14Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-25T16:48:42Z |
---
license: creativeml-openrail-m
---
|
traintogpb/pko-t5-large-kor-for-colloquial-summarization-finetuned
|
traintogpb
| 2023-07-25T16:41:59Z | 8 | 4 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"summarization",
"ko",
"dataset:AI-Hub_broadcast_contents_script_summarization_dataset",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-07-25T16:33:11Z |
---
datasets:
- AI-Hub_broadcast_contents_script_summarization_dataset
language:
- ko
metrics:
- rouge
pipeline_tag: summarization
---
|
Adi0010/a2c-PandaReachDense-v2
|
Adi0010
| 2023-07-25T16:28:24Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-25T15:49:38Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.84 +/- 0.41
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list for the actual name):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Hypothetical checkpoint filename; verify against the files in this repository.
checkpoint = load_from_hub("Adi0010/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
Esmask/my-test
|
Esmask
| 2023-07-25T16:06:13Z | 0 | 0 | null |
[
"dataset:openchat/openchat_sharegpt4_dataset",
"region:us"
] | null | 2023-07-25T16:05:12Z |
---
datasets:
- openchat/openchat_sharegpt4_dataset
---
|
liuyt75/t5-small_prefix_tuning_sentences_50agree_15
|
liuyt75
| 2023-07-25T15:55:53Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-25T15:55:49Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
CordwainerSmith/whisper-small-dv
|
CordwainerSmith
| 2023-07-25T15:54:06Z | 80 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-25T11:41:44Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- common_voice_13_0
metrics:
- wer
model-index:
- name: whisper-small-dv
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_13_0
type: common_voice_13_0
config: dv
split: test
args: dv
metrics:
- name: Wer
type: wer
value: 11.072086796258302
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-dv
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the common_voice_13_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2738
- Wer Ortho: 56.8842
- Wer: 11.0721
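A minimal usage sketch with the `transformers` pipeline API (`sample.wav` is a placeholder path to a local audio file):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="CordwainerSmith/whisper-small-dv")
# Transcribe a local audio file (placeholder path).
print(asr("sample.wav")["text"])
```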
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 3000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.1237 | 1.63 | 500 | 0.1702 | 63.0963 | 13.1429 |
| 0.0494 | 3.26 | 1000 | 0.1662 | 57.8592 | 11.6285 |
| 0.0315 | 4.89 | 1500 | 0.1894 | 58.3397 | 11.3937 |
| 0.0121 | 6.51 | 2000 | 0.2257 | 57.7756 | 11.5502 |
| 0.005 | 8.14 | 2500 | 0.2643 | 56.9747 | 11.1573 |
| 0.0056 | 9.77 | 3000 | 0.2738 | 56.8842 | 11.0721 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
Vhey/a-zovya-photoreal-v2
|
Vhey
| 2023-07-25T15:25:38Z | 38 | 8 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-24T22:55:53Z |
---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: true
---
A photorealistic model designed for texture. I hate smooth, airbrushed skin, so I refined this model to be very realistic, with great skin texture and details. Additional training was added to supplement some things I feel are missing in current models. There is a lot of new training for skin textures, lighting, and non-Asian faces to balance out the Asian dominance in models. If you write a generic prompt, you'll now get a greater variety of races and faces. Skin textures are increased by a large amount; if that's not your thing, you can put "detailed skin" in the negative prompt to get back that airbrushed look.
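A minimal inference sketch with `diffusers` (assumptions: a CUDA GPU, float16 weights, and an arbitrary prompt):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Vhey/a-zovya-photoreal-v2", torch_dtype=torch.float16
).to("cuda")
image = pipe("portrait photo of an elderly fisherman, natural light, detailed skin").images[0]
image.save("portrait.png")
```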
|
Vhey/childrens-stories-v1-semi-real
|
Vhey
| 2023-07-25T15:25:02Z | 9 | 4 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-24T23:45:12Z |
---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: true
---
Making children's books, comics, and graphic novels is rewarding, but very heavy on the artwork. This model will help you tell any of your stories visually. Let your imagination go wild. If you need custom characters that are consistent throughout a story, you can hire me to make one for you. Contact me through one of the links under my profile.
Recommended:
Clip Skip: 1
VAE: vae-ft-mse-840000-ema-pruned
|
ale-bay/starcoderplus-q4f16_0
|
ale-bay
| 2023-07-25T15:06:04Z | 0 | 0 | null |
[
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-07-25T14:22:37Z |
---
license: bigcode-openrail-m
---
|
Za88yes/Riciss
|
Za88yes
| 2023-07-25T15:04:21Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-25T12:36:50Z |
---
license: creativeml-openrail-m
---
|
spacemanidol/msmarco_colbert_summary_enhanced
|
spacemanidol
| 2023-07-25T14:55:06Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"msmarco",
"ir",
"text-classification",
"en",
"dataset:Tevatron/msmarco-passage",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-25T14:20:17Z |
---
license: openrail
datasets:
- Tevatron/msmarco-passage
language:
- en
pipeline_tag: text-classification
tags:
- msmarco
- ir
---
|
sarwarbeing/ppo-Huggy
|
sarwarbeing
| 2023-07-25T14:53:46Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-25T14:53:40Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: sarwarbeing/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Za88yes/Riaricis
|
Za88yes
| 2023-07-25T14:53:15Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-25T14:53:15Z |
---
license: creativeml-openrail-m
---
|
Guilherme34/Jennifer-7b-betatest
|
Guilherme34
| 2023-07-25T14:28:48Z | 3 | 1 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-23T02:21:32Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
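For reference, a sketch of an equivalent `transformers` `BitsAndBytesConfig` for the settings above (an illustration, not the original training script):
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the quantization settings listed above: 4-bit NF4 with double
# quantization and bfloat16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```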
### Framework versions
- PEFT 0.4.0
|
pankajgharai/swin-tiny-patch4-window7-224-finetuned-eurosat
|
pankajgharai
| 2023-07-25T14:13:07Z | 218 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-25T13:53:29Z |
---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9714814814814815
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0798
- Accuracy: 0.9715
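A minimal usage sketch with the `transformers` pipeline API (`satellite_image.png` is a placeholder path; any local image or URL works):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="pankajgharai/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
print(classifier("satellite_image.png"))
```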
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.22 | 1.0 | 190 | 0.1221 | 0.9570 |
| 0.1705 | 2.0 | 380 | 0.0891 | 0.97 |
| 0.1084 | 3.0 | 570 | 0.0798 | 0.9715 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
Emperor-WS/Reinforce
|
Emperor-WS
| 2023-07-25T14:12:25Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-25T14:12:14Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
robertpassmann/dqn-SpaceInvadersNoFrameskip-v4
|
robertpassmann
| 2023-07-25T14:04:36Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-25T13:52:41Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 606.50 +/- 148.26
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga robertpassmann -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga robertpassmann -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga robertpassmann
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
davolu/stacco-ikea
|
davolu
| 2023-07-25T13:58:26Z | 4 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-25T13:54:59Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### stacco_ikea Dreambooth model trained by davolu with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
davanstrien/demo
|
davanstrien
| 2023-07-25T13:47:27Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-24T13:57:55Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# demo
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1142
- Accuracy: 0.9703
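A minimal usage sketch with the `transformers` pipeline API (the label set is not documented in this card, so the output shows whatever labels the model was trained with):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="davanstrien/demo")
print(clf("This is an example sentence."))
```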
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1484 | 1.0 | 2206 | 0.1221 | 0.9629 |
| 0.0962 | 2.0 | 4412 | 0.1142 | 0.9703 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
liuyt75/t5-small_prefix_tuning_sentences_50agree_3
|
liuyt75
| 2023-07-25T13:36:47Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-25T13:09:28Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
RohaanKhanCentric/llama2-qlora-finetunined-french
|
RohaanKhanCentric
| 2023-07-25T13:35:33Z | 0 | 1 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-25T13:35:29Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
paust/pko-chat-t5-large
|
paust
| 2023-07-25T13:34:13Z | 248 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"ko",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-28T16:46:40Z |
---
license: mit
language:
- ko
library_name: transformers
pipeline_tag: text2text-generation
widget:
- text: |
사용자가 한 말을 읽고 그에 질문에 답하거나 명령에 응답하는 비서입니다.
사용자:
한국의 수도는 어디인가요?
비서:
---
# Chat T5
[SourceCode](https://github.com/paust-team/pko-t5/tree/main/pkot5/chat)
Chat T5 was built on top of [pko-flan-t5-large](https://huggingface.co/paust/pko-flan-t5-large).
It was trained on the datasets provided by [KoAlpaca](https://github.com/beomi/koalpaca) and by [evolve-instruct](https://github.com/lcw99/evolve-instruct).
Thank you to the authors for releasing such good data.
### Model
- [Huggingface](https://huggingface.co/paust/pko-chat-t5-large)
### Example
```python
from transformers import T5TokenizerFast, T5ForConditionalGeneration
tokenizer = T5TokenizerFast.from_pretrained("paust/pko-chat-t5-large")
model = T5ForConditionalGeneration.from_pretrained("paust/pko-chat-t5-large", device_map='cuda')
prompt_tpl = "사용자가 한 말을 읽고 그에 질문에 답하거나 명령에 응답하는 비서입니다.\n\n사용자:\n{text}\n\n비서:\n"
prompt = prompt_tpl.format(text="한국의 수도는 어디인가요?")
input_ids = tokenizer(prompt, return_tensors='pt').input_ids
logits = model.generate(
input_ids,
max_new_tokens=1024,
temperature=0.5,
no_repeat_ngram_size=6,
do_sample=True,
num_return_sequences=1,
)
text = tokenizer.batch_decode(logits, skip_special_tokens=True)[0]
print(text) # 한국의 수도는 서울입니다.
```
## License
pko-t5, created by [PAUST](https://paust.io), is released under the [MIT license](https://github.com/paust-team/pko-t5/blob/main/LICENSE).
|
PrateekJ17/CRAFTLm
|
PrateekJ17
| 2023-07-25T13:32:34Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-19T11:28:44Z |
# nanoGPT

The simplest, fastest repository for training/finetuning medium-sized GPTs. It is a rewrite of [minGPT](https://github.com/karpathy/minGPT) that prioritizes teeth over education. Still under active development, but currently the file `train.py` reproduces GPT-2 (124M) on OpenWebText, running on a single 8XA100 40GB node in about 4 days of training. The code itself is plain and readable: `train.py` is a ~300-line boilerplate training loop and `model.py` a ~300-line GPT model definition, which can optionally load the GPT-2 weights from OpenAI. That's it.

Because the code is so simple, it is very easy to hack to your needs, train new models from scratch, or finetune pretrained checkpoints (e.g. biggest one currently available as a starting point would be the GPT-2 1.3B model from OpenAI).
## install
Dependencies:
- [pytorch](https://pytorch.org) <3
- [numpy](https://numpy.org/install/) <3
- `pip install transformers` for huggingface transformers <3 (to load GPT-2 checkpoints)
- `pip install datasets` for huggingface datasets <3 (if you want to download + preprocess OpenWebText)
- `pip install tiktoken` for OpenAI's fast BPE code <3
- `pip install wandb` for optional logging <3
- `pip install tqdm` <3
## quick start
If you are not a deep learning professional and you just want to feel the magic and get your feet wet, the fastest way to get started is to train a character-level GPT on the works of Shakespeare. First, we download it as a single (1MB) file and turn it from raw text into one large stream of integers:
```
$ python data/shakespeare_char/prepare.py
```
This creates a `train.bin` and `val.bin` in that data directory. Now it is time to train your GPT. The size of it very much depends on the computational resources of your system:
**I have a GPU**. Great, we can quickly train a baby GPT with the settings provided in the [config/train_shakespeare_char.py](config/train_shakespeare_char.py) config file:
```
$ python train.py config/train_shakespeare_char.py
```
If you peek inside it, you'll see that we're training a GPT with a context size of up to 256 characters, 384 feature channels, and it is a 6-layer Transformer with 6 heads in each layer. On one A100 GPU this training run takes about 3 minutes and the best validation loss is 1.4697. Based on the configuration, the model checkpoints are being written into the `--out_dir` directory `out-shakespeare-char`. So once the training finishes we can sample from the best model by pointing the sampling script at this directory:
```
$ python sample.py --out_dir=out-shakespeare-char
```
This generates a few samples, for example:
```
ANGELO:
And cowards it be strawn to my bed,
And thrust the gates of my threats,
Because he that ale away, and hang'd
An one with him.
DUKE VINCENTIO:
I thank your eyes against it.
DUKE VINCENTIO:
Then will answer him to save the malm:
And what have you tyrannous shall do this?
DUKE VINCENTIO:
If you have done evils of all disposition
To end his power, the day of thrust for a common men
That I leave, to fight with over-liking
Hasting in a roseman.
```
lol `¯\_(ツ)_/¯`. Not bad for a character-level model after 3 minutes of training on a GPU. Better results are quite likely obtainable by instead finetuning a pretrained GPT-2 model on this dataset (see finetuning section later).
**I only have a macbook** (or other cheap computer). No worries, we can still train a GPT but we want to dial things down a notch. I recommend getting the bleeding edge PyTorch nightly ([select it here](https://pytorch.org/get-started/locally/) when installing) as it is currently quite likely to make your code more efficient. But even without it, a simple train run could look as follows:
```
$ python train.py config/train_shakespeare_char.py --device=cpu --compile=False --eval_iters=20 --log_interval=1 --block_size=64 --batch_size=12 --n_layer=4 --n_head=4 --n_embd=128 --max_iters=2000 --lr_decay_iters=2000 --dropout=0.0
```
Here, since we are running on CPU instead of GPU we must set both `--device=cpu` and also turn off PyTorch 2.0 compile with `--compile=False`. Then when we evaluate we get a bit more noisy but faster estimate (`--eval_iters=20`, down from 200), our context size is only 64 characters instead of 256, and the batch size only 12 examples per iteration, not 64. We'll also use a much smaller Transformer (4 layers, 4 heads, 128 embedding size), and decrease the number of iterations to 2000 (and correspondingly usually decay the learning rate to around max_iters with `--lr_decay_iters`). Because our network is so small we also ease down on regularization (`--dropout=0.0`). This still runs in about ~3 minutes, but gets us a loss of only 1.88 and therefore also worse samples, but it's still good fun:
```
$ python sample.py --out_dir=out-shakespeare-char --device=cpu
```
Generates samples like this:
```
GLEORKEN VINGHARD III:
Whell's the couse, the came light gacks,
And the for mought you in Aut fries the not high shee
bot thou the sought bechive in that to doth groan you,
No relving thee post mose the wear
```
Not bad for ~3 minutes on a CPU, for a hint of the right character gestalt. If you're willing to wait longer, feel free to tune the hyperparameters, increase the size of the network, the context length (`--block_size`), the length of training, etc.
Finally, on Apple Silicon Macbooks and with a recent PyTorch version make sure to add `--device=mps` (short for "Metal Performance Shaders"); PyTorch then uses the on-chip GPU that can *significantly* accelerate training (2-3X) and allow you to use larger networks. See [Issue 28](https://github.com/karpathy/nanoGPT/issues/28) for more.
## reproducing GPT-2
A more serious deep learning professional may be more interested in reproducing GPT-2 results. So here we go - we first tokenize the dataset, in this case the [OpenWebText](https://openwebtext2.readthedocs.io/en/latest/), an open reproduction of OpenAI's (private) WebText:
```
$ python data/openwebtext/prepare.py
```
This downloads and tokenizes the [OpenWebText](https://huggingface.co/datasets/openwebtext) dataset. It will create a `train.bin` and `val.bin` which holds the GPT2 BPE token ids in one sequence, stored as raw uint16 bytes. Then we're ready to kick off training. To reproduce GPT-2 (124M) you'll want at least an 8X A100 40GB node and run:
```
$ torchrun --standalone --nproc_per_node=8 train.py config/train_gpt2.py
```
This will run for about 4 days using PyTorch Distributed Data Parallel (DDP) and go down to loss of ~2.85. Now, a GPT-2 model just evaluated on OWT gets a val loss of about 3.11, but if you finetune it it will come down to ~2.85 territory (due to an apparent domain gap), making the two models ~match.
If you're in a cluster environment and you are blessed with multiple GPU nodes you can make GPU go brrrr e.g. across 2 nodes like:
```
Run on the first (master) node with example IP 123.456.123.456:
$ torchrun --nproc_per_node=8 --nnodes=2 --node_rank=0 --master_addr=123.456.123.456 --master_port=1234 train.py
Run on the worker node:
$ torchrun --nproc_per_node=8 --nnodes=2 --node_rank=1 --master_addr=123.456.123.456 --master_port=1234 train.py
```
It is a good idea to benchmark your interconnect (e.g. iperf3). In particular, if you don't have Infiniband then also prepend `NCCL_IB_DISABLE=1` to the above launches. Your multinode training will work, but most likely _crawl_. By default checkpoints are periodically written to the `--out_dir`. We can sample from the model by simply `$ python sample.py`.
Finally, to train on a single GPU simply run the `$ python train.py` script. Have a look at all of its args, the script tries to be very readable, hackable and transparent. You'll most likely want to tune a number of those variables depending on your needs.
## baselines
OpenAI GPT-2 checkpoints allow us to get some baselines in place for openwebtext. We can get the numbers as follows:
```
$ python train.py eval_gpt2
$ python train.py eval_gpt2_medium
$ python train.py eval_gpt2_large
$ python train.py eval_gpt2_xl
```
and observe the following losses on train and val:
| model | params | train loss | val loss |
| ------| ------ | ---------- | -------- |
| gpt2 | 124M | 3.11 | 3.12 |
| gpt2-medium | 350M | 2.85 | 2.84 |
| gpt2-large | 774M | 2.66 | 2.67 |
| gpt2-xl | 1558M | 2.56 | 2.54 |
However, we have to note that GPT-2 was trained on (closed, never released) WebText, while OpenWebText is just a best-effort open reproduction of this dataset. This means there is a dataset domain gap. Indeed, taking the GPT-2 (124M) checkpoint and finetuning on OWT directly for a while reaches loss down to ~2.85. This then becomes the more appropriate baseline w.r.t. reproduction.
## finetuning
Finetuning is no different than training, we just make sure to initialize from a pretrained model and train with a smaller learning rate. For an example of how to finetune a GPT on new text go to `data/shakespeare` and run `prepare.py` to download the tiny shakespeare dataset and render it into a `train.bin` and `val.bin`, using the OpenAI BPE tokenizer from GPT-2. Unlike OpenWebText this will run in seconds. Finetuning can take very little time, e.g. on a single GPU just a few minutes. Run an example finetuning like:
```
$ python train.py config/finetune_shakespeare.py
```
This will load the config parameter overrides in `config/finetune_shakespeare.py` (I didn't tune them much though). Basically, we initialize from a GPT2 checkpoint with `init_from` and train as normal, except shorter and with a small learning rate. If you're running out of memory try decreasing the model size (they are `{'gpt2', 'gpt2-medium', 'gpt2-large', 'gpt2-xl'}`) or possibly decreasing the `block_size` (context length). The best checkpoint (lowest validation loss) will be in the `out_dir` directory, e.g. in `out-shakespeare` by default, per the config file. You can then run the code in `sample.py --out_dir=out-shakespeare`:
```
THEODORE:
Thou shalt sell me to the highest bidder: if I die,
I sell thee to the first; if I go mad,
I sell thee to the second; if I
lie, I sell thee to the third; if I slay,
I sell thee to the fourth: so buy or sell,
I tell thee again, thou shalt not sell my
possession.
JULIET:
And if thou steal, thou shalt not sell thyself.
THEODORE:
I do not steal; I sell the stolen goods.
THEODORE:
Thou know'st not what thou sell'st; thou, a woman,
Thou art ever a victim, a thing of no worth:
Thou hast no right, no right, but to be sold.
```
Whoa there, GPT, entering some dark place over there. I didn't really tune the hyperparameters in the config too much, feel free to try!
## sampling / inference
Use the script `sample.py` to sample either from pre-trained GPT-2 models released by OpenAI, or from a model you trained yourself. For example, here is a way to sample from the largest available `gpt2-xl` model:
```
$ python sample.py \
--init_from=gpt2-xl \
--start="What is the answer to life, the universe, and everything?" \
--num_samples=5 --max_new_tokens=100
```
If you'd like to sample from a model you trained, use the `--out_dir` to point the code appropriately. You can also prompt the model with some text from a file, e.g. `$ python sample.py --start=FILE:prompt.txt`.
## efficiency notes
For simple model benchmarking and profiling, `bench.py` might be useful. It's identical to what happens in the meat of the training loop of `train.py`, but omits much of the other complexities.
Note that the code by default uses [PyTorch 2.0](https://pytorch.org/get-started/pytorch-2.0/). At the time of writing (Dec 29, 2022) this makes `torch.compile()` available in the nightly release. The improvement from the one line of code is noticeable, e.g. cutting down iteration time from ~250ms / iter to 135ms / iter. Nice work PyTorch team!
## todos
- Investigate and add FSDP instead of DDP
- Eval zero-shot perplexities on standard evals (e.g. LAMBADA? HELM? etc.)
- Finetune the finetuning script, I think the hyperparams are not great
- Schedule for linear batch size increase during training
- Incorporate other embeddings (rotary, alibi)
- Separate out the optim buffers from model params in checkpoints I think
- Additional logging around network health (e.g. gradient clip events, magnitudes)
- Few more investigations around better init etc.
## troubleshooting
Note that by default this repo uses PyTorch 2.0 (i.e. `torch.compile`). This is fairly new and experimental, and not yet available on all platforms (e.g. Windows). If you're running into related error messages try to disable this by adding `--compile=False` flag. This will slow down the code but at least it will run.
For some context on this repository, GPT, and language modeling it might be helpful to watch my [Zero To Hero series](https://karpathy.ai/zero-to-hero.html). Specifically, the [GPT video](https://www.youtube.com/watch?v=kCc8FmEb1nY) is popular if you have some prior language modeling context.
For more questions/discussions feel free to stop by **#nanoGPT** on Discord:
[](https://discord.gg/3zy8kqD9Cp)
## acknowledgements
All nanoGPT experiments are powered by GPUs on [Lambda labs](https://lambdalabs.com), my favorite Cloud GPU provider. Thank you Lambda labs for sponsoring nanoGPT!
|
llm-book/bert-base-japanese-v3-ner-wikipedia-dataset
|
llm-book
| 2023-07-25T13:32:15Z | 72,337 | 9 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"ja",
"dataset:llm-book/ner-wikipedia-dataset",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-28T08:06:41Z |
---
language:
- ja
license: apache-2.0
library_name: transformers
datasets:
- llm-book/ner-wikipedia-dataset
pipeline_tag: token-classification
metrics:
- seqeval
- precision
- recall
- f1
---
# llm-book/bert-base-japanese-v3-ner-wikipedia-dataset
This is the named entity recognition model introduced in Chapter 6 of "[大規模言語モデル入門](https://www.amazon.co.jp/dp/4297136333)" (Introduction to Large Language Models).
It was built by fine-tuning [cl-tohoku/bert-base-japanese-v3](https://huggingface.co/cl-tohoku/bert-base-japanese-v3) on [llm-book/ner-wikipedia-dataset](https://huggingface.co/datasets/llm-book/ner-wikipedia-dataset).
## Related links
* [GitHub repository](https://github.com/ghmagazine/llm-book)
* [Colab notebook](https://colab.research.google.com/github/ghmagazine/llm-book/blob/main/chapter6/6-named-entity-recognition.ipynb)
* [Dataset](https://huggingface.co/datasets/llm-book/ner-wikipedia-dataset)
* [大規模言語モデル入門(Amazon.co.jp)](https://www.amazon.co.jp/dp/4297136333/)
* [大規模言語モデル入門(gihyo.jp)](https://gihyo.jp/book/2023/978-4-297-13633-8)
## Usage
```python
from transformers import pipeline
from pprint import pprint
ner_pipeline = pipeline(
model="llm-book/bert-base-japanese-v3-ner-wikipedia-dataset",
aggregation_strategy="simple",
)
text = "大谷翔平は岩手県水沢市出身のプロ野球選手"
# Extract the named entities in text
pprint(ner_pipeline(text))
# [{'end': None,
# 'entity_group': '人名',
# 'score': 0.99823624,
# 'start': None,
# 'word': '大谷 翔平'},
# {'end': None,
# 'entity_group': '地名',
# 'score': 0.9986874,
# 'start': None,
# 'word': '岩手 県 水沢 市'}]
```
## License
[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0)
|
AlVrde/bloomz-360_PROMPT_TUNING_CAUSAL_LM_0.5_0.4_10batch
|
AlVrde
| 2023-07-25T13:25:21Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-20T14:39:08Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
Den4ikAI/FRED-T5-XL_instructor
|
Den4ikAI
| 2023-07-25T13:23:52Z | 10 | 4 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"ru",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-23T15:51:17Z |
---
license: mit
language:
- ru
pipeline_tag: text2text-generation
widget:
- text: '<SC6>Человек: Ответь на вопрос. Почему трава зеленая?\nБот: <extra_id_0>'
---
# Den4ikAI/FRED-T5-XL_instructor
An instruction-following model based on FRED-T5-XL.
# Usage example
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
tokenizer = AutoTokenizer.from_pretrained("Den4ikAI/FRED-T5-XL_instructor")
model = AutoModelForSeq2SeqLM.from_pretrained("Den4ikAI/FRED-T5-XL_instructor", torch_dtype=torch.float16).to(device)
model.eval()
from transformers import GenerationConfig
generation_config = GenerationConfig.from_pretrained("Den4ikAI/FRED-T5-XL_instructor")
def generate(prompt):
data = tokenizer(f"<SC6>Человек: {prompt}\nБот: <extra_id_0>", return_tensors="pt").to(model.device)
output_ids = model.generate(
**data,
generation_config=generation_config
)[0]
out = tokenizer.decode(output_ids.tolist())
return out
while 1:
print(generate(input(":> ")))
```
# Citation
```
@MISC{Den4ikAI/FRED-T5-XL_instructor,
author = {Denis Petrov},
title = {Russian Instructor Model},
url = {https://huggingface.co/Den4ikAI/FRED-T5-XL_instructor/},
year = 2023
}
```
|
opm29/angelin
|
opm29
| 2023-07-25T13:21:42Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-25T13:18:52Z |
---
license: creativeml-openrail-m
---
|
llm-book/t5-base-long-livedoor-news-corpus
|
llm-book
| 2023-07-25T13:10:36Z | 704 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"ja",
"dataset:llm-book/livedoor-news-corpus",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-27T13:32:54Z |
---
language:
- ja
license: apache-2.0
library_name: transformers
datasets:
- llm-book/livedoor-news-corpus
pipeline_tag: text2text-generation
metrics:
- rouge
- bleu
- bertscore
---
# llm-book/t5-base-long-livedoor-news-corpus
This is the summarization model introduced in Chapter 7 of "[大規模言語モデル入門](https://www.amazon.co.jp/dp/4297136333)" (Introduction to Large Language Models).
It was built by fine-tuning [retrieva-jp/t5-base-long](https://huggingface.co/retrieva-jp/t5-base-long) on [llm-book/livedoor-news-corpus](https://huggingface.co/datasets/llm-book/livedoor-news-corpus).
## Related links
* [GitHub repository](https://github.com/ghmagazine/llm-book)
* [Colab notebook](https://colab.research.google.com/github/ghmagazine/llm-book/blob/main/chapter7/7-summarization-generation.ipynb)
* [Dataset](https://huggingface.co/datasets/llm-book/livedoor-news-corpus)
* [大規模言語モデル入門(Amazon.co.jp)](https://www.amazon.co.jp/dp/4297136333/)
* [大規模言語モデル入門(gihyo.jp)](https://gihyo.jp/book/2023/978-4-297-13633-8)
## Usage
```python
from transformers import pipeline
text2text_pipeline = pipeline(
model="llm-book/t5-base-long-livedoor-news-corpus"
)
article = "ついに始まった3連休。テレビを見ながら過ごしている人も多いのではないだろうか? 今夜オススメなのは何と言っても、NHKスペシャル「世界を変えた男 スティーブ・ジョブズ」だ。実は知らない人も多いジョブズ氏の養子に出された生い立ちや、アップル社から一時追放されるなどの経験。そして、彼が追い求めた理想の未来とはなんだったのか、ファンならずとも気になる内容になっている。 今年、亡くなったジョブズ氏の伝記は日本でもベストセラーになっている。今後もアップル製品だけでなく、世界でのジョブズ氏の影響は大きいだろうと想像される。ジョブズ氏のことをあまり知らないという人もこの機会にぜひチェックしてみよう。 世界を変えた男 スティーブ・ジョブズ(NHKスペシャル)"
print(text2text_pipeline(article)[0]["generated_text"])
# 今夜はNHKスペシャル「世界を変えた男 スティーブ・ジョブズ」をチェック!
```
## License
[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0)
|
EhsanElahi/avatar-creator
|
EhsanElahi
| 2023-07-25T13:03:54Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-21T18:16:29Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - EhsanElahi/avatar-creator
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the EhsanElahi/rao dataset. You can find some example images below.




|
Jungwonchang/whisper_finetune_ksponspeech_partial_40epoch
|
Jungwonchang
| 2023-07-25T12:55:55Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"kr",
"dataset:Jungwonchang/ksponspeech_partial",
"base_model:openai/whisper-large-v2",
"base_model:finetune:openai/whisper-large-v2",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-24T18:43:57Z |
---
language:
- kr
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- Jungwonchang/ksponspeech_partial
metrics:
- wer
base_model: openai/whisper-large-v2
model-index:
- name: Whisper large-v2, KsponSpeech Partial 10 epochs
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: KsponSpeech
type: Jungwonchang/ksponspeech_partial
config: eval
split: test
args: eval
metrics:
- type: wer
value: 25.714073744343054
name: Wer
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Jungwonchang/ksponspeech
type: Jungwonchang/ksponspeech
config: eval
split: test
metrics:
- type: wer
value: 25.54
name: WER
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper large-v2, KsponSpeech Partial 10 epochs
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the KsponSpeech dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0194
- Wer: 25.7141
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2225 | 1.15 | 100 | 0.1394 | 27.9769 |
| 0.0507 | 3.11 | 200 | 0.0449 | 14.9640 |
| 0.0114 | 5.07 | 300 | 0.0194 | 25.7141 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.12.1+cu116
- Datasets 2.14.0
- Tokenizers 0.12.1
|
haris001/jsoncodev2
|
haris001
| 2023-07-25T12:49:07Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-25T12:28:10Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
darren236/protgpt2_PROMPT_TUNING_CAUSAL_LM
|
darren236
| 2023-07-25T12:46:13Z | 5 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-25T12:02:32Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
raicrits/OpenLLama13b_Loquace_ITA
|
raicrits
| 2023-07-25T12:39:47Z | 7 | 4 |
transformers
|
[
"transformers",
"pytorch",
"text-generation",
"it",
"dataset:cosimoiaia/Loquace-102k",
"arxiv:2106.09685",
"arxiv:1910.09700",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-24T08:17:13Z |
---
license: other
pipeline_tag: text-generation
datasets:
- cosimoiaia/Loquace-102k
language:
- it
---
# Model Card for Model raicrits/OpenLLama13b_Loquace_ITA
<!-- Provide a quick summary of what the model is/does. -->
An open-source LLaMA language model with 13B parameters, fine-tuned to follow instructions in Italian.
### Model Description
This model is an open-source LLM of 13b parameters based on [OpenLLaMA](https://github.com/openlm-research/open_llama), an open-source replica of Meta AI's LLaMA.
The model was fine-tuned in order to follow instructions, as proposed in [Alpaca](https://github.com/tatsu-lab/stanford_alpaca),
but using the [LoRA](https://arxiv.org/pdf/2106.09685.pdf) technique and a larger dataset of instruction/answer pairs in Italian, [cosimoiaia/Loquace-102k](https://huggingface.co/datasets/cosimoiaia/Loquace-102k/viewer/cosimoiaia--Loquace-102k).
This repository contains the model merged with the LoRA adapters obtained in the fine-tuning procedure.
- **Developed by:** Stefano Scotta ([email protected])
- **Model type:** LLM fine-tuned to follow instructions
- **Language(s) (NLP):** Italian
- **License:** Other
- **Finetuned from model:** [openlm-research/open_llama_13b](https://huggingface.co/openlm-research/open_llama_13b)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
The model can be used as is to respond to simple instructions in Italian or can be further fine-tuned to perform specific tasks.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
As with any other LLM, it is possible that the model generates content that does not correspond to reality, as well as wrong, biased, offensive, and inappropriate answers.
## How to Get Started with the Model
**Prompt template:**
``` python
"Di seguito è riportata un'istruzione che descrive un compito, abbinata a un input che fornisce un ulteriore contesto. Scrivete una risposta che completi in modo appropriato la richiesta.
### Istruzione:
{instruction}
### Input:
{input}
### Risposta:"
```
**Usage:**
Use the code below to get started with the model.
``` python
import os
import torch
import sys
from transformers import LlamaTokenizer, LlamaForCausalLM
if torch.cuda.is_available():
device = "cuda"
else:
device = "cpu"
def generate_prompt(instruction, input=None):
if input:
return f"""Di seguito è riportata un'istruzione che descrive un compito, abbinata a un input che fornisce un ulteriore contesto. Scrivete una risposta che completi in modo appropriato la richiesta.
### Istruzione:
{instruction}
### Input:
{input}
### Risposta:"""
else:
return f"""Di seguito è riportata un'istruzione che descrive un compito. Scrivete una risposta che completi in modo appropriato la richiesta..
### Istruzione:
{instruction}
### Risposta:"""
model_name = "raicrits/OpenLLama13b_Loquace_ITA"
model = LlamaForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto"
)
tokenizer = LlamaTokenizer.from_pretrained(model_name)
instruction = "qual'è la relazione tra i seguenti oggetti"
input = "sedia, tavolo, divano"
prompt = generate_prompt(instruction, input)
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"].to(device)
generation_output = model.generate(
input_ids=input_ids,
max_new_tokens=256,
)
output = tokenizer.decode(generation_output[0])
output = output.split("### Risposta:")[1].strip().replace("</s>","")
print(output)
```
``` python
"Sedia, tavolo e divano sono tutti oggetti che possono essere utilizzati per creare un'atmosfera rilassante in una stanza."
```
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The model was fine-tuned on [cosimoiaia/Loquace-102k](https://huggingface.co/datasets/cosimoiaia/Loquace-102k/viewer/cosimoiaia--Loquace-102k), a dataset of 102k question/answer pairs in Italian.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
The fine-tuning procedure was done using the [LoRA](https://arxiv.org/pdf/2106.09685.pdf) approach, closely following what was done for models like [Alpaca-LoRA](https://github.com/tloen/alpaca-lora).
#### Training Hyperparameters
**Training setting:**
- train epochs=3,
- learning_rate=3e-4,
- optimizer="adamw_hf"
- mixed precision training: float16
**LoRA configuration** (expressed as a `peft` config in the sketch after this list):
- r= 8
- lora_alpha=16
- target_modules=["q_proj","v_proj"]
- lora_dropout=0.05
- bias="none"
- task_type=TaskType.CAUSAL_LM
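For reference, here is a minimal sketch of how the configuration above maps to a `peft` `LoraConfig`; the actual training script (data collator, optimizer, training loop) is not reproduced here and the snippet is only illustrative.
``` python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import LlamaForCausalLM

# Base model as stated in this card; loaded in full precision for simplicity.
base_model = LlamaForCausalLM.from_pretrained("openlm-research/open_llama_13b")

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type=TaskType.CAUSAL_LM,
)

peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()  # only the LoRA adapter weights are trainable
```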
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** 1 NVIDIA A100/40Gb
- **Hours used:** 68
- **Cloud Provider:** Private Infrastructure
- **Carbon Emitted:** 7.34 kg eq. CO2
## Model Card Authors
Stefano Scotta ([email protected])
## Model Card Contact
[email protected]
|
Technotech/MagicPrompt-tinystories-33M-epoch10-lora
|
Technotech
| 2023-07-25T12:39:30Z | 1 | 0 |
peft
|
[
"peft",
"tensorboard",
"completion",
"en",
"dataset:Gustavosta/Stable-Diffusion-Prompts",
"license:apache-2.0",
"region:us"
] | null | 2023-07-25T03:50:53Z |
---
library_name: peft
license: apache-2.0
datasets:
- Gustavosta/Stable-Diffusion-Prompts
language:
- en
tags:
- completion
---
# MagicPrompt TinyStories-33M (LoRA)
## Info
Magic prompt completion model trained on a dataset of 80k Stable Diffusion prompts. Base model: TinyStories-33M. Inspired by [MagicPrompt-Stable-Diffusion](https://huggingface.co/Gustavosta/MagicPrompt-Stable-Diffusion).
The model seems pretty decent for 33M params thanks to the TinyStories base, but it clearly lacks much of an understanding of pretty much anything. Still, considering the size, it holds up. Whether you would use this over a small GPT-2-based model is up to you.
## Examples
Best generation settings I found: `max_new_tokens=40, do_sample=True, temperature=1.2, num_beams=10, no_repeat_ngram_size=2, early_stopping=True, repetition_penalty=1.35, top_k=50, top_p=0.55, eos_token_id=tokenizer.eos_token_id, pad_token_id=0` (there may be better settings).
`no_repeat_ngram_size` is important for making sure the model doesn't repeat phrases (as it is quite small).
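A minimal sketch of applying these settings with `transformers` and `peft`. The base checkpoint id (`roneneldan/TinyStories-33M`) and the prompt are assumptions, not something stated in this card:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "roneneldan/TinyStories-33M"  # assumed TinyStories-33M base checkpoint
adapter_id = "Technotech/MagicPrompt-tinystories-33M-epoch10-lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter

prompt = "found footage of a ufo"
inputs = tokenizer(prompt, return_tensors="pt")

# The generation settings quoted above
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    temperature=1.2,
    num_beams=10,
    no_repeat_ngram_size=2,
    early_stopping=True,
    repetition_penalty=1.35,
    top_k=50,
    top_p=0.55,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=0,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```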
(Bold text is generated by the model)
"found footage of a ufo **in the forest, by lusax, wlop, greg rutkowski, stanley artgerm, highly detailed, intricate, digital painting, artstation, concept art, smooth**"
"A close shot of a bird in a jungle, **with two legs, with long hair on a tall, long brown body, long white skin, sharp teeth, high bones, digital painting, artstation, concept art, illustration by wlop,**"
"Camera shot of **a strange young girl wearing a cloak, wearing a mask in clothes, with long curly hair, long hair, black eyes, dark skin, white teeth, long brown eyes eyes, big eyes, sharp**"
"An illustration of a house, stormy weather, **sun, moonlight, night, concept art, 4 k, wlop, by wlop, by jose stanley, ilya kuvshinov, sprig**"
"A field of flowers, camera shot, 70mm lens, **fantasy, intricate, highly detailed, artstation, concept art, sharp focus, illustration, illustration, artgerm jake daggaws, artgerm and jaggodieie brad**"
## Next steps
- Larger dataset ie [neuralworm/stable-diffusion-discord-prompts](https://huggingface.co/datasets/neuralworm/stable-diffusion-discord-prompts) or [daspartho/stable-diffusion-prompts](https://huggingface.co/datasets/daspartho/stable-diffusion-prompts)
- More epochs
- Instead of going smaller than GPT-2 137M, fine tune a 1-7B param model
## Training config
- Rank 16 LoRA
- Trained on Gustavosta/Stable-Diffusion-Prompts for 10 epochs
- Batch size of 64
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
dimonyara/redpj3B-lora-int8
|
dimonyara
| 2023-07-25T12:31:20Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-25T12:31:17Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
liuyt75/t5-small_prefix_tuning_sentences_allagree_3
|
liuyt75
| 2023-07-25T12:11:23Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-24T14:08:39Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
darren236/roberta-large-peft-p-tuning
|
darren236
| 2023-07-25T12:07:28Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-12T17:11:46Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
dwarfplanet/llama2-qlora-finetunined-french
|
dwarfplanet
| 2023-07-25T11:58:34Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-25T11:58:17Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
mertseker/Reinforce-Cartpole-v1
|
mertseker
| 2023-07-25T11:56:47Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-25T11:56:38Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
philschmid/llama-2-7b-instruction-generator
|
philschmid
| 2023-07-25T11:56:20Z | 16 | 18 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-2",
"en",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-25T11:43:08Z |
---
license: openrail
language:
- en
tags:
- llama-2
---
# **Llama 2 7B Instruction Generator**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
`philschmid/llama-2-7b-instruction-generator` is a fine-tuned version of Llama 2 7B that generates an instruction for a given input. The model was fine-tuned using the Alpaca format and a modified version of `dolly`. Below you can find an example.
```bash
### Instruction:
Use the Input below to create an instruction, which could have been used to generate the input using an LLM.
### Input:
Dear [boss name],
I'm writing to request next week, August 1st through August 4th,
off as paid time off.
I have some personal matters to attend to that week that require
me to be out of the office. I wanted to give you as much advance
notice as possible so you can plan accordingly while I am away.
Please let me know if you need any additional information from me
or have any concerns with me taking next week off. I appreciate you
considering this request.
Thank you, [Your name]
### Response:
Write an email to my boss that I need next week 08/01 - 08/04 off.
```
_Everything after `### Response` will be generated by the model._
The idea behind the model is to synthetically generate instruction data from unsupervised data, such as emails, in order to personalize LLMs.
## Model Date
July 25, 2023
## How to use the model
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
# load base LLM model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
"philschmid/llama-2-7b-instruction-generator",
low_cpu_mem_usage=True,
torch_dtype=torch.float16,
load_in_4bit=True,
)
tokenizer = AutoTokenizer.from_pretrained("philschmid/llama-2-7b-instruction-generator")
prompt = f"""### Instruction:
Use the Input below to create an instruction, which could have been used to generate the input using an LLM.
### Input:
Dear [boss name],
I'm writing to request next week, August 1st through August 4th,
off as paid time off.
I have some personal matters to attend to that week that require
me to be out of the office. I wanted to give you as much advance
notice as possible so you can plan accordingly while I am away.
Please let me know if you need any additional information from me
or have any concerns with me taking next week off. I appreciate you
considering this request.
Thank you, [Your name]
### Response:
"""
input_ids = tokenizer(prompt, return_tensors="pt", truncation=True).input_ids.cuda()
outputs = model.generate(input_ids=input_ids, max_new_tokens=100, do_sample=True, top_p=0.9,temperature=0.9)
print(f"Generated instruction:\n{tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0][len(prompt):]}")
```
## Evaluated example
```bash
Prompt:
Plastic is made from oil, natural gas and even plant oils during refining of these oils into other products like gasoline. Ethane and propane are created when treated with heat during a refinery process called cracking. This turns the Ethane and propane into ethylene and propylene which are used with other chemical ingredients to create polymers that are the base of what plastic is made out of.
Generated instruction:
Given this paragraph, where does plastic come from?
Ground truth:
How is plastic made?
```
Planning to experiment with bigger sizes and starting from the chat models.
|
heegyu/KoLIMA-5.8b
|
heegyu
| 2023-07-25T11:38:55Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-25T11:22:54Z |
---
license: apache-2.0
---
polyglot-ko-5.8b trained with LoRA and then merged with the adapter weights. Trained on the ko-lima data for 10 epochs with a 1e-4 -> 1e-5 cosine decay learning-rate schedule, batch size 128, and a maximum sequence length of 2048.
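Since the LoRA weights are already merged, the checkpoint should load like any other causal LM. A minimal sketch follows; the prompt format expected after ko-lima fine-tuning is not documented in this card, so the plain-text prompt is only an assumption.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "heegyu/KoLIMA-5.8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Assumed plain-text prompt; the training-time prompt template is not documented.
prompt = "한국의 수도는 어디인가요?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```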
|
mansee/vit-base-patch16-224-blur_vs_clean
|
mansee
| 2023-07-25T11:34:30Z | 246 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-25T10:55:15Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-blur_vs_clean
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9753602975360297
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-blur_vs_clean
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0714
- Accuracy: 0.9754
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0539 | 1.0 | 151 | 0.1078 | 0.9596 |
| 0.0611 | 2.0 | 302 | 0.0846 | 0.9698 |
| 0.049 | 3.0 | 453 | 0.0714 | 0.9754 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
RohanKilledar/distilbert-base-uncased-finetuned-music
|
RohanKilledar
| 2023-07-25T11:30:49Z | 78 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-24T13:50:29Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: RohanKilledar/distilbert-base-uncased-finetuned-music
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# RohanKilledar/distilbert-base-uncased-finetuned-music
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.9392
- Validation Loss: 2.2673
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -828, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.9392 | 2.2673 | 0 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.13.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
mugaga/KazuyaRVC2
|
mugaga
| 2023-07-25T11:28:05Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-07-25T11:26:13Z |
---
license: openrail
made by: mugaga
---
|
facebook/encodec_24khz
|
facebook
| 2023-07-25T11:28:04Z | 1,129,361 | 43 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"encodec",
"feature-extraction",
"arxiv:2210.13438",
"region:us"
] |
feature-extraction
| 2023-06-12T16:10:36Z |
---
inference: false
---

# Model Card for EnCodec
This model card provides details and information about EnCodec, a state-of-the-art real-time audio codec developed by Meta AI.
## Model Details
### Model Description
EnCodec is a high-fidelity audio codec leveraging neural networks. It introduces a streaming encoder-decoder architecture with quantized latent space, trained in an end-to-end fashion.
The model simplifies and speeds up training using a single multiscale spectrogram adversary that efficiently reduces artifacts and produces high-quality samples.
It also includes a novel loss balancer mechanism that stabilizes training by decoupling the choice of hyperparameters from the typical scale of the loss.
Additionally, lightweight Transformer models are used to further compress the obtained representation while maintaining real-time performance.
- **Developed by:** Meta AI
- **Model type:** Audio Codec
### Model Sources
- **Repository:** [GitHub Repository](https://github.com/facebookresearch/encodec)
- **Paper:** [EnCodec: End-to-End Neural Audio Codec](https://arxiv.org/abs/2210.13438)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
EnCodec can be used directly as an audio codec for real-time compression and decompression of audio signals.
It provides high-quality audio compression and efficient decoding. The model was trained on various bandwidths, which can be specified when encoding (compressing) and decoding (decompressing).
Two different setups exist for EnCodec:
- Non-streamable: the input audio is split into chunks of 1 second, with an overlap of 10 ms, which are then encoded.
- Streamable: weight normalization is used on the convolution layers, and the input is not split into chunks but rather padded on the left.
### Downstream Use
EnCodec can be fine-tuned for specific audio tasks or integrated into larger audio processing pipelines for applications such as speech generation,
music generation, or text to speech tasks.
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## How to Get Started with the Model
Use the following code to get started with the EnCodec model using a dummy example from the LibriSpeech dataset (~9MB). First, install the required Python packages:
```
pip install --upgrade pip
pip install --upgrade datasets[audio]
pip install git+https://github.com/huggingface/transformers.git@main
```
Then load an audio sample, and run a forward pass of the model:
```python
from datasets import load_dataset, Audio
from transformers import EncodecModel, AutoProcessor
# load a demonstration datasets
librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
# load the model + processor (for pre-processing the audio)
model = EncodecModel.from_pretrained("facebook/encodec_24khz")
processor = AutoProcessor.from_pretrained("facebook/encodec_24khz")
# cast the audio data to the correct sampling rate for the model
librispeech_dummy = librispeech_dummy.cast_column("audio", Audio(sampling_rate=processor.sampling_rate))
audio_sample = librispeech_dummy[0]["audio"]["array"]
# pre-process the inputs
inputs = processor(raw_audio=audio_sample, sampling_rate=processor.sampling_rate, return_tensors="pt")
# explicitly encode then decode the audio inputs
encoder_outputs = model.encode(inputs["input_values"], inputs["padding_mask"])
audio_values = model.decode(encoder_outputs.audio_codes, encoder_outputs.audio_scales, inputs["padding_mask"])[0]
# or the equivalent with a forward pass
audio_values = model(inputs["input_values"], inputs["padding_mask"]).audio_values
```
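Since the checkpoint was trained on several bandwidths (see "Direct Use" above), a target bandwidth can also be passed at encode time. A minimal sketch continuing from the snippet above; treat the exact list of supported values as an assumption to verify against the model config.
```python
# Continues from the snippet above (model, processor and inputs are already defined).
# The bandwidth is given in kbps; the 24 kHz checkpoint advertises several target
# bandwidths (e.g. 1.5, 3, 6, 12, 24) — check model.config.target_bandwidths to be sure.
encoder_outputs = model.encode(
    inputs["input_values"], inputs["padding_mask"], bandwidth=6.0
)
audio_values = model.decode(
    encoder_outputs.audio_codes, encoder_outputs.audio_scales, inputs["padding_mask"]
)[0]
```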
## Training Details
The model was trained for 300 epochs, with one epoch being 2,000 updates of the Adam optimizer, a batch size of 64 one-second examples, a learning rate of 3·10⁻⁴, β1 = 0.5, and β2 = 0.9. All models were trained using 8 A100 GPUs.
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
- For speech:
- DNS Challenge 4
- [Common Voice](https://huggingface.co/datasets/common_voice)
- For general audio:
- [AudioSet](https://huggingface.co/datasets/Fhrozen/AudioSet2K22)
- [FSD50K](https://huggingface.co/datasets/Fhrozen/FSD50k)
- For music:
- [Jamendo dataset](https://huggingface.co/datasets/rkstgr/mtg-jamendo)
They used four different training strategies to sample for these datasets:
- (s1) sample a single source from Jamendo with probability 0.32;
- (s2) sample a single source from the other datasets with the same probability;
- (s3) mix two sources from all datasets with a probability of 0.24;
- (s4) mix three sources from all datasets except music with a probability of 0.12.
The audio is normalized per file, and a random gain between -10 and 6 dB is applied.
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Subjective metric for restoration:
This model was evaluated using the MUSHRA protocol (Series, 2014), with both a hidden reference and a low anchor. Annotators were recruited through a
crowd-sourcing platform and asked to rate the perceptual quality of the provided samples on a scale from 1 to 100. The authors randomly selected 50 samples of 5 seconds from each category of the test set
and required at least 10 annotations per sample. To filter out noisy annotations and outliers, annotators who rated the reference recordings below 90 in at least 20% of cases, or rated the low-anchor recording
above 80 more than 50% of the time, were removed.
### Objective metric for restoration:
The ViSQOL metric was used together with the Scale-Invariant Signal-to-Noise Ratio (SI-SNR) (Luo & Mesgarani, 2019;
Nachmani et al., 2020; Chazan et al., 2021).
### Results
The results of the evaluation demonstrate the superiority of EnCodec compared to the baselines across different bandwidths (1.5, 3, 6, and 12 kbps).
When comparing EnCodec with the baselines at the same bandwidth, EnCodec consistently outperforms them in terms of MUSHRA score.
Notably, EnCodec achieves better performance, on average, at 3 kbps compared to Lyra-v2 at 6 kbps and Opus at 12 kbps.
Additionally, by incorporating the language model over the codes, it is possible to achieve a bandwidth reduction of approximately 25-40%.
For example, the bandwidth of the 3 kbps model can be reduced to 1.9 kbps.
#### Summary
EnCodec is a state-of-the-art real-time neural audio compression model that excels in producing high-fidelity audio samples at various sample rates and bandwidths.
The model's performance was evaluated across different settings, ranging from 24kHz monophonic at 1.5 kbps to 48kHz stereophonic, showcasing both subjective and
objective results. Notably, EnCodec incorporates a novel spectrogram-only adversarial loss, effectively reducing artifacts and enhancing sample quality.
Training stability and interpretability were further enhanced through the introduction of a gradient balancer for the loss weights.
Additionally, the study demonstrated that a compact Transformer model can be employed to achieve an additional bandwidth reduction of up to 40% without compromising
quality, particularly in applications where low latency is not critical (e.g., music streaming).
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@misc{défossez2022high,
title={High Fidelity Neural Audio Compression},
author={Alexandre Défossez and Jade Copet and Gabriel Synnaeve and Yossi Adi},
year={2022},
eprint={2210.13438},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
```
|
layoric/llama-2-13b-code-alpaca
|
layoric
| 2023-07-25T11:14:00Z | 0 | 8 |
peft
|
[
"peft",
"pytorch",
"llama",
"text2text-generation",
"en",
"dataset:theblackcat102/evol-codealpaca-v1",
"license:cc-by-nc-4.0",
"region:us"
] |
text2text-generation
| 2023-07-24T01:15:10Z |
---
library_name: peft
license: cc-by-nc-4.0
datasets:
- theblackcat102/evol-codealpaca-v1
language:
- en
pipeline_tag: text2text-generation
---
## llama-2-13b-code-alpaca
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
Trained for 3 epochs on the `theblackcat102/evol-codealpaca-v1` dataset; scored a decent locally measured perplexity of 4.36.
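The card does not say how that perplexity was measured; a minimal sketch of a quick local check with `transformers` might look like the following (the evaluation text is an arbitrary assumption, and it assumes the merged weights in this repo load via `AutoModelForCausalLM`):
```python
# Minimal perplexity sketch; the evaluation text, batch handling and the exact
# procedure behind the 4.36 figure are not documented in this card.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "layoric/llama-2-13b-code-alpaca"  # assumes merged weights loadable this way
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

text = "### Instruction:\nWrite a Python function that reverses a string.\n\n### Response:\n"
inputs = tokenizer(text, return_tensors="pt").to(model.device)

with torch.no_grad():
    # labels=input_ids makes the model return the mean cross-entropy loss
    loss = model(**inputs, labels=inputs["input_ids"]).loss

print(f"perplexity: {math.exp(loss.item()):.2f}")
```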
## Axolotl config used
```yaml
base_model: NousResearch/Llama-2-13b-hf
base_model_config: NousResearch/Llama-2-13b-hf
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
push_dataset_to_hub:
hub_model_id:
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: theblackcat102/evol-codealpaca-v1
type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.01
output_dir: ./checkpoints/llama-2-13b-qlora
adapter: qlora
lora_model_dir:
sequence_len: 4096
max_packed_sequence_len: 4096
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project:
wandb_watch:
wandb_run_id:
wandb_log_model:
gradient_accumulation_steps: 2
micro_batch_size: 2
num_epochs: 3
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0001
train_on_inputs: false
group_by_length: true
bf16: true
fp16: false
tf32: true
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention: true
flash_attention:
warmup_steps: 10
eval_steps: 50
save_steps:
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
And then merged with Axolotl via:
```
accelerate launch scripts/finetune.py configs/your_config.yml --merge_lora --lora_model_dir="./completed-model" --load_in_8bit=False --load_in_4bit=False
```
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
ddoc/coco
|
ddoc
| 2023-07-25T11:07:06Z | 0 | 0 | null |
[
"arxiv:2211.06679",
"region:us"
] | null | 2023-07-25T11:06:44Z |
# Stable Diffusion web UI
A browser interface based on Gradio library for Stable Diffusion.

## Features
[Detailed feature showcase with images](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features):
- Original txt2img and img2img modes
- One click install and run script (but you still must install python and git)
- Outpainting
- Inpainting
- Color Sketch
- Prompt Matrix
- Stable Diffusion Upscale
- Attention, specify parts of text that the model should pay more attention to
- a man in a `((tuxedo))` - will pay more attention to tuxedo
- a man in a `(tuxedo:1.21)` - alternative syntax
- select text and press `Ctrl+Up` or `Ctrl+Down` (or `Command+Up` or `Command+Down` if you're on macOS) to automatically adjust attention to selected text (code contributed by anonymous user)
- Loopback, run img2img processing multiple times
- X/Y/Z plot, a way to draw a 3 dimensional plot of images with different parameters
- Textual Inversion
- have as many embeddings as you want and use any names you like for them
- use multiple embeddings with different numbers of vectors per token
- works with half precision floating point numbers
- train embeddings on 8GB (also reports of 6GB working)
- Extras tab with:
- GFPGAN, neural network that fixes faces
- CodeFormer, face restoration tool as an alternative to GFPGAN
- RealESRGAN, neural network upscaler
- ESRGAN, neural network upscaler with a lot of third party models
- SwinIR and Swin2SR ([see here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2092)), neural network upscalers
- LDSR, Latent diffusion super resolution upscaling
- Resizing aspect ratio options
- Sampling method selection
- Adjust sampler eta values (noise multiplier)
- More advanced noise setting options
- Interrupt processing at any time
- 4GB video card support (also reports of 2GB working)
- Correct seeds for batches
- Live prompt token length validation
- Generation parameters
- parameters you used to generate images are saved with that image
- in PNG chunks for PNG, in EXIF for JPEG
- can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI
- can be disabled in settings
- drag and drop an image/text-parameters to promptbox
- Read Generation Parameters Button, loads parameters in promptbox to UI
- Settings page
- Running arbitrary python code from UI (must run with `--allow-code` to enable)
- Mouseover hints for most UI elements
- Possible to change default/min/max/step values for UI elements via text config
- Tiling support, a checkbox to create images that can be tiled like textures
- Progress bar and live image generation preview
- Can use a separate neural network to produce previews with almost no VRAM or compute requirement
- Negative prompt, an extra text field that allows you to list what you don't want to see in generated image
- Styles, a way to save part of prompt and easily apply them via dropdown later
- Variations, a way to generate same image but with tiny differences
- Seed resizing, a way to generate same image but at slightly different resolution
- CLIP interrogator, a button that tries to guess prompt from an image
- Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway
- Batch Processing, process a group of files using img2img
- Img2img Alternative, reverse Euler method of cross attention control
- Highres Fix, a convenience option to produce high resolution pictures in one click without usual distortions
- Reloading checkpoints on the fly
- Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one
- [Custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) with many extensions from community
- [Composable-Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/), a way to use multiple prompts at once
- separate prompts using uppercase `AND`
- also supports weights for prompts: `a cat :1.2 AND a dog AND a penguin :2.2`
- No token limit for prompts (original stable diffusion lets you use up to 75 tokens)
- DeepDanbooru integration, creates danbooru style tags for anime prompts
- [xformers](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers), major speed increase for select cards: (add `--xformers` to commandline args)
- via extension: [History tab](https://github.com/yfszzx/stable-diffusion-webui-images-browser): view, direct and delete images conveniently within the UI
- Generate forever option
- Training tab
- hypernetworks and embeddings options
- Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime)
- Clip skip
- Hypernetworks
- Loras (same as Hypernetworks but more pretty)
- A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt
- Can select to load a different VAE from settings screen
- Estimated completion time in progress bar
- API
- Support for dedicated [inpainting model](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion) by RunwayML
- via extension: [Aesthetic Gradients](https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients), a way to generate images with a specific aesthetic by using clip images embeds (implementation of [https://github.com/vicgalle/stable-diffusion-aesthetic-gradients](https://github.com/vicgalle/stable-diffusion-aesthetic-gradients))
- [Stable Diffusion 2.0](https://github.com/Stability-AI/stablediffusion) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20) for instructions
- [Alt-Diffusion](https://arxiv.org/abs/2211.06679) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#alt-diffusion) for instructions
- Now without any bad letters!
- Load checkpoints in safetensors format
- Eased resolution restriction: generated image dimensions must be a multiple of 8 rather than 64
- Now with a license!
- Reorder elements in the UI from settings screen
## Installation and Running
Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for both [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended) and [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs.
Alternatively, use online services (like Google Colab):
- [List of Online Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services)
### Installation on Windows 10/11 with NVidia-GPUs using release package
1. Download `sd.webui.zip` from [v1.0.0-pre](https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.0.0-pre) and extract its contents.
2. Run `update.bat`.
3. Run `run.bat`.
> For more details see [Install-and-Run-on-NVidia-GPUs](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs)
### Automatic Installation on Windows
1. Install [Python 3.10.6](https://www.python.org/downloads/release/python-3106/) (newer versions of Python do not support torch), checking "Add Python to PATH".
2. Install [git](https://git-scm.com/download/win).
3. Download the stable-diffusion-webui repository, for example by running `git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git`.
4. Run `webui-user.bat` from Windows Explorer as normal, non-administrator, user.
### Automatic Installation on Linux
1. Install the dependencies:
```bash
# Debian-based:
sudo apt install wget git python3 python3-venv
# Red Hat-based:
sudo dnf install wget git python3
# Arch-based:
sudo pacman -S wget git python3
```
2. Navigate to the directory you would like the webui to be installed and execute the following command:
```bash
bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh)
```
3. Run `webui.sh`.
4. Check `webui-user.sh` for options.
### Installation on Apple Silicon
Find the instructions [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon).
## Contributing
Here's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
## Documentation
The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki).
For the purposes of getting Google and other search engines to crawl the wiki, here's a link to the (not for humans) [crawlable wiki](https://github-wiki-see.page/m/AUTOMATIC1111/stable-diffusion-webui/wiki).
## Credits
Licenses for borrowed code can be found in `Settings -> Licenses` screen, and also in `html/licenses.html` file.
- Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers
- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
- GFPGAN - https://github.com/TencentARC/GFPGAN.git
- CodeFormer - https://github.com/sczhou/CodeFormer
- ESRGAN - https://github.com/xinntao/ESRGAN
- SwinIR - https://github.com/JingyunLiang/SwinIR
- Swin2SR - https://github.com/mv-lab/swin2sr
- LDSR - https://github.com/Hafiidz/latent-diffusion
- MiDaS - https://github.com/isl-org/MiDaS
- Ideas for optimizations - https://github.com/basujindal/stable-diffusion
- Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.
- Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
- Sub-quadratic Cross Attention layer optimization - Alex Birch (https://github.com/Birch-san/diffusers/pull/1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention)
- Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).
- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd
- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot
- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator
- Idea for Composable Diffusion - https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch
- xformers - https://github.com/facebookresearch/xformers
- DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru
- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6)
- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - https://github.com/timothybrooks/instruct-pix2pix
- Security advice - RyotaK
- UniPC sampler - Wenliang Zhao - https://github.com/wl-zhao/UniPC
- TAESD - Ollin Boer Bohan - https://github.com/madebyollin/taesd
- LyCORIS - KohakuBlueleaf
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
- (You)
|
s3nh/llama2-22b-GGML
|
s3nh
| 2023-07-25T10:56:50Z | 0 | 1 | null |
[
"text-generation-inference",
"text-generation",
"en",
"license:cc-by-sa-4.0",
"region:us"
] |
text-generation
| 2023-07-25T10:25:40Z |
---
license: cc-by-sa-4.0
language:
- en
tags:
- text-generation-inference
pipeline_tag: text-generation
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML Format model files for [This project](https://huggingface.co/chargoddard/llama2-22b).
### inference
```python
import ctransformers
from ctransformers import AutoModelForCausalLM
# output_dir and ggml_file should point to the local directory and GGML file from this repo
llm = AutoModelForCausalLM.from_pretrained(output_dir, ggml_file,
                                           gpu_layers=32, model_type="llama")

manual_input: str = "Tell me about your last dream, please."

llm(manual_input,
    max_new_tokens=256,
    temperature=0.9,
    top_p=0.7)
```
# Original model card
|
YarramsettiNaresh/rl_course_vizdoom_health_gathering_supreme
|
YarramsettiNaresh
| 2023-07-25T10:51:05Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-25T10:25:21Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.30 +/- 4.91
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r YarramsettiNaresh/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
Bunjuk/distilbert-base-uncased-finetuned-squad
|
Bunjuk
| 2023-07-25T10:38:47Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-19T08:39:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.6657
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 4 | 5.7924 |
| No log | 2.0 | 8 | 5.7008 |
| No log | 3.0 | 12 | 5.6657 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
NasimB/cbt-rarity
|
NasimB
| 2023-07-25T10:38:19Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-25T06:44:59Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: cbt-rarity
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cbt-rarity
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3439 | 0.29 | 500 | 5.3171 |
| 5.0356 | 0.58 | 1000 | 4.8973 |
| 4.7113 | 0.87 | 1500 | 4.6631 |
| 4.4527 | 1.16 | 2000 | 4.5185 |
| 4.3042 | 1.45 | 2500 | 4.3965 |
| 4.1936 | 1.74 | 3000 | 4.2874 |
| 4.0885 | 2.03 | 3500 | 4.2062 |
| 3.8938 | 2.32 | 4000 | 4.1672 |
| 3.865 | 2.61 | 4500 | 4.1098 |
| 3.8203 | 2.9 | 5000 | 4.0591 |
| 3.6517 | 3.18 | 5500 | 4.0511 |
| 3.5823 | 3.47 | 6000 | 4.0221 |
| 3.5678 | 3.76 | 6500 | 3.9929 |
| 3.5014 | 4.05 | 7000 | 3.9837 |
| 3.3153 | 4.34 | 7500 | 3.9803 |
| 3.3079 | 4.63 | 8000 | 3.9677 |
| 3.2975 | 4.92 | 8500 | 3.9545 |
| 3.1763 | 5.21 | 9000 | 3.9634 |
| 3.1301 | 5.5 | 9500 | 3.9632 |
| 3.1272 | 5.79 | 10000 | 3.9618 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Uzair05u/StudentsGPT
|
Uzair05u
| 2023-07-25T10:22:34Z | 0 | 1 |
adapter-transformers
|
[
"adapter-transformers",
"legal",
"en",
"ur",
"dataset:openchat/openchat_sharegpt4_dataset",
"dataset:Open-Orca/OpenOrca",
"license:afl-3.0",
"region:us"
] | null | 2023-07-24T15:51:02Z |
---
license: afl-3.0
datasets:
- openchat/openchat_sharegpt4_dataset
- Open-Orca/OpenOrca
language:
- en
- ur
metrics:
- accuracy
- character
library_name: adapter-transformers
tags:
- legal
---
|
saharad/Texi-v3
|
saharad
| 2023-07-25T10:07:07Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-25T10:07:05Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Texi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.38 +/- 2.79
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="saharad/Texi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
NasimB/cbt-log-rarity
|
NasimB
| 2023-07-25T10:05:16Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-25T06:13:26Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: cbt-log-rarity
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cbt-log-rarity
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0415
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3632 | 0.29 | 500 | 5.3094 |
| 5.037 | 0.58 | 1000 | 4.8980 |
| 4.7094 | 0.87 | 1500 | 4.6590 |
| 4.443 | 1.16 | 2000 | 4.5132 |
| 4.2978 | 1.45 | 2500 | 4.3942 |
| 4.1967 | 1.74 | 3000 | 4.2951 |
| 4.0875 | 2.03 | 3500 | 4.2135 |
| 3.8882 | 2.32 | 4000 | 4.1726 |
| 3.8599 | 2.61 | 4500 | 4.1200 |
| 3.8335 | 2.9 | 5000 | 4.0700 |
| 3.6522 | 3.18 | 5500 | 4.0578 |
| 3.5797 | 3.47 | 6000 | 4.0298 |
| 3.5695 | 3.76 | 6500 | 3.9947 |
| 3.4993 | 4.05 | 7000 | 3.9883 |
| 3.314 | 4.34 | 7500 | 3.9829 |
| 3.3076 | 4.63 | 8000 | 3.9711 |
| 3.2962 | 4.92 | 8500 | 3.9570 |
| 3.1725 | 5.21 | 9000 | 3.9672 |
| 3.1316 | 5.5 | 9500 | 3.9663 |
| 3.1271 | 5.79 | 10000 | 3.9649 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
giuseppemassafra/ppo-SnowballTarget
|
giuseppemassafra
| 2023-07-25T09:59:35Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-07-25T09:59:33Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: giuseppemassafra/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
osmancanyuca/dqn-SpaceInvadersNoFrameskip-v4
|
osmancanyuca
| 2023-07-25T09:48:08Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-25T09:47:30Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 725.00 +/- 319.72
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga osmancanyuca -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga osmancanyuca -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga osmancanyuca
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Fynd/arien-starchat
|
Fynd
| 2023-07-25T09:44:18Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-25T09:44:01Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
YarramsettiNaresh/ppo-LunarLander-v2-1
|
YarramsettiNaresh
| 2023-07-25T09:39:54Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-25T09:39:36Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 233.33 +/- 14.60
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
BigAbdul/first-model
|
BigAbdul
| 2023-07-25T09:38:17Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-25T09:37:05Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: first-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# first-model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.0
- Tokenizers 0.13.3
|
ishwarbb23/t5depression
|
ishwarbb23
| 2023-07-25T09:37:26Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:ThomasSimonini/t5-end2end-question-generation",
"base_model:finetune:ThomasSimonini/t5-end2end-question-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-24T16:45:47Z |
---
license: apache-2.0
base_model: ThomasSimonini/t5-end2end-question-generation
tags:
- generated_from_trainer
model-index:
- name: t5depression
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5depression
This model is a fine-tuned version of [ThomasSimonini/t5-end2end-question-generation](https://huggingface.co/ThomasSimonini/t5-end2end-question-generation) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
mcamara/distilhubert-finetuned-gtzan-xP
|
mcamara
| 2023-07-25T09:33:07Z | 168 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-25T08:47:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0379
- Accuracy: 0.81
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0307 | 1.0 | 113 | 2.0561 | 0.41 |
| 1.4208 | 2.0 | 226 | 1.4850 | 0.63 |
| 1.1959 | 3.0 | 339 | 1.0617 | 0.66 |
| 0.6929 | 4.0 | 452 | 0.8228 | 0.74 |
| 0.5104 | 5.0 | 565 | 0.6969 | 0.77 |
| 0.4735 | 6.0 | 678 | 0.7412 | 0.79 |
| 0.2185 | 7.0 | 791 | 0.6586 | 0.76 |
| 0.3087 | 8.0 | 904 | 0.8234 | 0.78 |
| 0.1066 | 9.0 | 1017 | 0.8210 | 0.8 |
| 0.0841 | 10.0 | 1130 | 1.0040 | 0.8 |
| 0.0387 | 11.0 | 1243 | 0.9195 | 0.81 |
| 0.0091 | 12.0 | 1356 | 0.9208 | 0.82 |
| 0.006 | 13.0 | 1469 | 0.9190 | 0.81 |
| 0.0051 | 14.0 | 1582 | 0.9796 | 0.8 |
| 0.0038 | 15.0 | 1695 | 0.9823 | 0.8 |
| 0.0035 | 16.0 | 1808 | 1.0252 | 0.8 |
| 0.0032 | 17.0 | 1921 | 1.0172 | 0.8 |
| 0.0032 | 18.0 | 2034 | 1.0433 | 0.81 |
| 0.0029 | 19.0 | 2147 | 1.0577 | 0.81 |
| 0.0029 | 20.0 | 2260 | 1.0379 | 0.81 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
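A minimal inference sketch (assuming the checkpoint works with the standard `audio-classification` pipeline; the audio path is a placeholder):
```python
from transformers import pipeline

# The repo id comes from this model page; replace the clip path with a real audio file
classifier = pipeline("audio-classification", model="mcamara/distilhubert-finetuned-gtzan-xP")
print(classifier("path/to/a_30_second_clip.wav"))
```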
|
Aharneish/Taxi-v3
|
Aharneish
| 2023-07-25T09:27:14Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-25T09:27:12Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Aharneish/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Aharneish/q-FrozenLake-v1-4x4-noSlippery
|
Aharneish
| 2023-07-25T09:26:26Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-25T09:26:23Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Aharneish/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Baiheng/HandWritingDiffusion_v1.0
|
Baiheng
| 2023-07-25T09:20:37Z | 0 | 0 | null |
[
"license:cc-by-nc-nd-3.0",
"region:us"
] | null | 2023-07-25T08:59:07Z |
---
license: cc-by-nc-nd-3.0
---
|
kajalwiz/Electrostatic-bart-cnn-science
|
kajalwiz
| 2023-07-25T09:15:17Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-25T09:15:12Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
oleksandrfluxon/mpt-7b-instruct-evaluate
|
oleksandrfluxon
| 2023-07-25T09:07:14Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mpt",
"text-generation",
"Composer",
"MosaicML",
"llm-foundry",
"custom_code",
"dataset:mosaicml/dolly_hhrlhf",
"arxiv:2205.14135",
"arxiv:2108.12409",
"arxiv:2010.04245",
"license:cc-by-sa-3.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-21T13:37:15Z |
---
license: cc-by-sa-3.0
datasets:
- mosaicml/dolly_hhrlhf
tags:
- Composer
- MosaicML
- llm-foundry
inference: false
duplicated_from: mosaicml/mpt-7b-instruct
---
# MPT-7B-Instruct
MPT-7B-Instruct is a model for short-form instruction following.
It is built by finetuning [MPT-7B](https://huggingface.co/mosaicml/mpt-7b) on a [dataset](https://huggingface.co/datasets/sam-mosaic/dolly_hhrlhf) derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets.
* License: _CC-By-SA-3.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct)
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
May 5, 2023
## Model License
CC-By-SA-3.0
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
### Example Question/Instruction
**Longboi24**:
> What is a quoll?
**MPT-7B-Instruct**:
>A Quoll (pronounced “cool”) is one of Australia’s native carnivorous marsupial mammals, which are also known as macropods or wallabies in other parts around Asia and South America
## How to Use
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom model architecture that is not yet part of the `transformers` package.
It includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-instruct',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-7b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-7b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 4096 # (input + output) tokens can now be up to 4096
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
from transformers import pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
### Formatting
This model was trained on data formatted in the dolly-15k format:
```python
INSTRUCTION_KEY = "### Instruction:"
RESPONSE_KEY = "### Response:"
INTRO_BLURB = "Below is an instruction that describes a task. Write a response that appropriately completes the request."
PROMPT_FOR_GENERATION_FORMAT = """{intro}
{instruction_key}
{instruction}
{response_key}
""".format(
intro=INTRO_BLURB,
instruction_key=INSTRUCTION_KEY,
instruction="{instruction}",
response_key=RESPONSE_KEY,
)
example = "James decides to run 3 sprints 3 times a week. He runs 60 meters each sprint. How many total meters does he run a week? Explain before answering."
fmt_ex = PROMPT_FOR_GENERATION_FORMAT.format(instruction=example)
```
In the above example, `fmt_ex` is ready to be tokenized and sent through the model.
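For example, the formatted prompt can be passed straight to the text-generation pipeline defined earlier (a minimal sketch reusing the `pipe` and `torch` objects from the snippets above):
```python
# Generate a response for the dolly-formatted prompt built above
with torch.autocast('cuda', dtype=torch.bfloat16):
    print(
        pipe(fmt_ex,
             max_new_tokens=100,
             do_sample=True,
             use_cache=True))
```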
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter  | Value |
|-----------------|-------|
| n_parameters    | 6.7B  |
| n_layers        | 32    |
| n_heads         | 32    |
| d_model         | 4096  |
| vocab size      | 50432 |
| sequence length | 2048  |
## PreTraining Data
For more details on the pretraining process, see [MPT-7B](https://huggingface.co/mosaicml/mpt-7b).
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
### Training Configuration
This model was trained on 8 A100-40GBs for about 2.3 hours using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B-Instruct can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-Instruct was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Sam Havens and the MosaicML NLP team
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
```
|
HaziqRazali/taxi
|
HaziqRazali
| 2023-07-25T09:01:24Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-25T09:01:22Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="HaziqRazali/taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Ryan-sjtu/lora-trained-xl
|
Ryan-sjtu
| 2023-07-25T08:58:37Z | 3 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-25T07:51:55Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks face
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Ryan-sjtu/lora-trained-xl
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks face using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
|
CornerINCorner/distilhubert-finetuned-gtzan
|
CornerINCorner
| 2023-07-25T08:58:34Z | 166 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-21T15:09:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0726
- Accuracy: 0.85
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3797 | 1.0 | 57 | 1.7501 | 0.41 |
| 1.1585 | 2.0 | 114 | 1.3004 | 0.55 |
| 1.1663 | 3.0 | 171 | 1.1380 | 0.64 |
| 0.8421 | 4.0 | 228 | 1.0330 | 0.7 |
| 0.5175 | 5.0 | 285 | 0.7122 | 0.82 |
| 0.492 | 6.0 | 342 | 0.6735 | 0.79 |
| 0.2152 | 7.0 | 399 | 0.9674 | 0.79 |
| 0.1405 | 8.0 | 456 | 0.7406 | 0.84 |
| 0.0698 | 9.0 | 513 | 0.9159 | 0.83 |
| 0.0116 | 10.0 | 570 | 1.0726 | 0.85 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
ZGL2022/yolov8x-rice-panicle-detection
|
ZGL2022
| 2023-07-25T08:55:00Z | 0 | 0 | null |
[
"tensorboard",
"region:us"
] | null | 2023-07-25T07:13:00Z |
Usage:
```
from ultralytics import YOLO
# Load our custom model
model = YOLO("weights/best.pt")
model.predict(source="your picture path", save=True, show=True)
```
|
FFusion/FFusionXL-09-SDXL
|
FFusion
| 2023-07-25T08:36:33Z | 106 | 4 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"stable-diffusion",
"text-to-image",
"di.ffusion.ai",
"arxiv:2108.01073",
"arxiv:2112.10752",
"arxiv:2307.01952",
"base_model:diffusers/stable-diffusion-xl-base-0.9",
"base_model:finetune:diffusers/stable-diffusion-xl-base-0.9",
"doi:10.57967/hf/0925",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2023-07-23T10:56:57Z |
---
license: other
base_model: diffusers/stable-diffusion-xl-base-0.9
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- stable-diffusion
- text-to-image
- diffusers
- di.ffusion.ai
inference: true
extra_gated_prompt: >-
Copyright (c) Stability AI Ltd. and Source Code Bulgaria Ltd.
This License Agreement (as may be amended in accordance with this License
Agreement, “License”), between you, or your employer or other entity (if you are
entering into this agreement on behalf of your employer or other entity)
(“Licensee” or “you”) and Stability AI Ltd. (“Stability AI” or “we”) and Source
Code Bulgaria Ltd. ("Source Code Bulgaria" or "we") applies to your use of any
computer program, algorithm, source code, object code, software, models, or
model weights that is made available by Stability AI and Source Code Bulgaria
under this License (“Software”) and any specifications, manuals, documentation,
and other written information provided by Stability AI and Source Code Bulgaria
related to the Software (“Documentation”).
By using the Software, you agree to the terms of this License. If you do not
agree to this License, then you do not have any rights to use the Software or
Documentation (collectively, the “Software Products”), and you must immediately
cease using the Software Products. If you are agreeing to be bound by the terms
of this License on behalf of your employer or other entity, you represent and
warrant to Stability AI and Source Code Bulgaria that you have full legal
authority to bind your employer or such entity to this License. If you do not
have the requisite authority, you may not accept the License or access the
Software Products on behalf of your employer or other entity.
This Model is a derivative work based on the SDXL 0.9 software provided by Stability AI Ltd under the SDXL Research License,
Copyright (c) Stability AI Ltd. All Rights Reserved.
Use of the Model must adhere to the terms outlined in the original SDXL Research License, which can be found at https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9/blob/main/LICENSE.md
If you have any questions or seek permissions beyond those granted in this License, please reach out to [email protected]
1. LICENSE GRANT
a. Subject to your compliance with the Documentation and Sections 2, 3, and 5,
Stability AI grants you a non-exclusive, worldwide, non-transferable,
non-sublicensable, revocable, royalty free and limited license under Stability
AI’s copyright interests to use, reproduce, and create derivative works of the
Software solely for your non-commercial research purposes. The foregoing
license is personal to you, and you may not assign, sublicense, distribute,
publish, host, or otherwise make available this Software, derivative works of
the Software, models or model weights associated with the Software, this
License, or any other rights or obligations under this License without
Stability AI’s prior written consent; any such assignment or sublicense
without Stability AI’s prior written consent will be void and will
automatically and immediately terminate this License. For sake of clarity,
this License does not grant to you the right or ability to extend any license
to the Software, derivative works of the Software, or associated models or
model weights to a non-Licensee, nor does this License permit you to create a
new Licensee, such as by making available a copy of this License. If you
would like rights not granted by this License, you may seek permission by
sending an email to [email protected].
b. You may make a reasonable number of copies of the Documentation solely for
your use in connection with the license to the Software granted above.
c. The grant of rights expressly set forth in this Section 1 (License Grant)
are the complete grant of rights to you in the Software Products, and no other
licenses are granted, whether by waiver, estoppel, implication, equity or
otherwise. Stability AI and its licensors reserve all rights not expressly
granted by this License.
2. RESTRICTIONS
You will not, and will not permit, assist or cause any third party to:
a. use, modify, copy, reproduce, create derivative works of, or distribute the
Software Products (or any derivative works thereof, works incorporating the
Software Products, or any data produced by the Software), in whole or in part,
for (i) any commercial or production purposes, (ii) military purposes or in
the service of nuclear technology, (iii) purposes of surveillance, including
any research or development relating to surveillance, (iv) biometric
processing, (v) in any manner that infringes, misappropriates, or otherwise
violates any third-party rights, or (vi) in any manner that violates any
applicable law and violating any privacy or security laws, rules, regulations,
directives, or governmental requirements (including the General Data Privacy
Regulation (Regulation (EU) 2016/679), the California Consumer Privacy Act,
and any and all laws governing the processing of biometric information), as
well as all amendments and successor laws to any of the foregoing;
b. alter or remove copyright and other proprietary notices which appear on or
in the Software Products;
c. utilize any equipment, device, software, or other means to circumvent or
remove any security or protection used by Stability AI in connection with the
Software, or to circumvent or remove any usage restrictions, or to enable
functionality disabled by Stability AI; or
d. offer or impose any terms on the Software Products that alter, restrict, or
are inconsistent with the terms of this License.
e. 1) violate any applicable U.S. and non-U.S. export control and trade
sanctions laws (“Export Laws”); 2) directly or indirectly export, re-export,
provide, or otherwise transfer Software Products: (a) to any individual,
entity, or country prohibited by Export Laws; (b) to anyone on U.S. or
non-U.S. government restricted parties lists; or (c) for any purpose
prohibited by Export Laws, including nuclear, chemical or biological weapons,
or missile technology applications; 3) use or download Software Products if
you or they are: (a) located in a comprehensively sanctioned jurisdiction, (b)
currently listed on any U.S. or non-U.S. restricted parties list, or (c) for
any purpose prohibited by Export Laws; and (4) will not disguise your location
through IP proxying or other methods.
3. ATTRIBUTION
Together with any copies of the Software Products (as well as derivative works
thereof or works incorporating the Software Products) that you distribute, you
must provide (i) a copy of this License, and (ii) the following attribution
notice: “SDXL 0.9 is licensed under the SDXL Research License, Copyright (c)
Stability AI Ltd. All Rights Reserved. FFusionXL-09-SDXL is developed and
maintained by FFusion.AI, a division of Source Code Bulgaria Ltd. All Rights
Reserved.”
4. DISCLAIMERS
THE SOFTWARE PRODUCTS ARE PROVIDED “AS IS” AND “WITH ALL FAULTS” WITH NO
WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. STABILITY AI EXPRESSLY DISCLAIMS ALL
REPRESENTATIONS AND WARRANTIES, EXPRESS OR IMPLIED, WHETHER BY STATUTE,
CUSTOM, USAGE OR OTHERWISE AS TO ANY MATTERS RELATED TO THE SOFTWARE PRODUCTS,
INCLUDING BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE, TITLE, SATISFACTORY QUALITY, OR
NON-INFRINGEMENT. STABILITY AI MAKES NO WARRANTIES OR REPRESENTATIONS THAT THE
SOFTWARE PRODUCTS WILL BE ERROR FREE OR FREE OF VIRUSES OR OTHER HARMFUL
COMPONENTS, OR PRODUCE ANY PARTICULAR RESULTS.
5. LIMITATION OF LIABILITY
TO THE FULLEST EXTENT PERMITTED BY LAW, IN NO EVENT WILL STABILITY AI BE
LIABLE TO YOU (A) UNDER ANY THEORY OF LIABILITY, WHETHER BASED IN CONTRACT,
TORT, NEGLIGENCE, STRICT LIABILITY, WARRANTY, OR OTHERWISE UNDER THIS LICENSE,
OR (B) FOR ANY INDIRECT, CONSEQUENTIAL, EXEMPLARY, INCIDENTAL, PUNITIVE OR
SPECIAL DAMAGES OR LOST PROFITS, EVEN IF STABILITY AI HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES. THE SOFTWARE PRODUCTS, THEIR CONSTITUENT
COMPONENTS, AND ANY OUTPUT (COLLECTIVELY, “SOFTWARE MATERIALS”) ARE NOT
DESIGNED OR INTENDED FOR USE IN ANY APPLICATION OR SITUATION WHERE FAILURE OR
FAULT OF THE SOFTWARE MATERIALS COULD REASONABLY BE ANTICIPATED TO LEAD TO
SERIOUS INJURY OF ANY PERSON, INCLUDING POTENTIAL DISCRIMINATION OR VIOLATION
OF AN INDIVIDUAL’S PRIVACY RIGHTS, OR TO SEVERE PHYSICAL, PROPERTY, OR
ENVIRONMENTAL DAMAGE (EACH, A “HIGH-RISK USE”). IF YOU ELECT TO USE ANY OF THE
SOFTWARE MATERIALS FOR A HIGH-RISK USE, YOU DO SO AT YOUR OWN RISK. YOU AGREE
TO DESIGN AND IMPLEMENT APPROPRIATE DECISION-MAKING AND RISK-MITIGATION
PROCEDURES AND POLICIES IN CONNECTION WITH A HIGH-RISK USE SUCH THAT EVEN IF
THERE IS A FAILURE OR FAULT IN ANY OF THE SOFTWARE MATERIALS, THE SAFETY OF
PERSONS OR PROPERTY AFFECTED BY THE ACTIVITY STAYS AT A LEVEL THAT IS
REASONABLE, APPROPRIATE, AND LAWFUL FOR THE FIELD OF THE HIGH-RISK USE.
6. INDEMNIFICATION
You will indemnify, defend and hold harmless Stability AI and our subsidiaries
and affiliates, and each of our respective shareholders, directors, officers,
employees, agents, successors, and assigns (collectively, the “Stability AI
Parties”) from and against any losses, liabilities, damages, fines, penalties,
and expenses (including reasonable attorneys’ fees) incurred by any Stability
AI Party in connection with any claim, demand, allegation, lawsuit,
proceeding, or investigation (collectively, “Claims”) arising out of or
related to: (a) your access to or use of the Software Products (as well as any
results or data generated from such access or use), including any High-Risk
Use (defined below); (b) your violation of this License; or (c) your
violation, misappropriation or infringement of any rights of another
(including intellectual property or other proprietary rights and privacy
rights). You will promptly notify the Stability AI Parties of any such Claims,
and cooperate with Stability AI Parties in defending such Claims. You will
also grant the Stability AI Parties sole control of the defense or settlement,
at Stability AI’s sole option, of any Claims. This indemnity is in addition
to, and not in lieu of, any other indemnities or remedies set forth in a
written agreement between you and Stability AI or the other Stability AI
Parties.
7. TERMINATION; SURVIVAL
a. This License will automatically terminate upon any breach by you of the
terms of this License.
b. We may terminate this License, in whole or in part, at any time upon notice
(including electronic) to you.
c. The following sections survive termination of this License: 2
(Restrictions), 3 (Attribution), 4 (Disclaimers), 5 (Limitation on Liability),
6 (Indemnification) 7 (Termination; Survival), 8 (Third Party Materials), 9
(Trademarks), 10 (Applicable Law; Dispute Resolution), and 11 (Miscellaneous).
8. THIRD PARTY MATERIALS
The Software Products may contain third-party software or other components
(including free and open source software) (all of the foregoing, “Third Party
Materials”), which are subject to the license terms of the respective
third-party licensors. Your dealings or correspondence with third parties and
your use of or interaction with any Third Party Materials are solely between
you and the third party. Stability AI does not control or endorse, and makes
no representations or warranties regarding, any Third Party Materials, and
your access to and use of such Third Party Materials are at your own risk.
9. TRADEMARKS
Licensee has not been granted any trademark license as part of this License
and may not use any name or mark associated with Stability AI without the
prior written permission of Stability AI, except to the extent necessary to
make the reference required by the “ATTRIBUTION” section of this Agreement.
10. APPLICABLE LAW; DISPUTE RESOLUTION
This License will be governed and construed under the laws of the State of
California without regard to conflicts of law provisions. Any suit or
proceeding arising out of or relating to this License will be brought in the
federal or state courts, as applicable, in San Mateo County, California, and
each party irrevocably submits to the jurisdiction and venue of such courts.
11. MISCELLANEOUS
If any provision or part of a provision of this License is unlawful, void or
unenforceable, that provision or part of the provision is deemed severed from
this License, and will not affect the validity and enforceability of any
remaining provisions. The failure of Stability AI to exercise or enforce any
right or provision of this License will not operate as a waiver of such right
or provision. This License does not confer any third-party beneficiary rights
upon any other person or entity. This License, together with the
Documentation, contains the entire understanding between you and Stability AI
regarding the subject matter of this License, and supersedes all other written
or oral agreements and understandings between you and Stability AI regarding
such subject matter. No change or addition to any provision of this License
will be binding unless it is in writing and signed by an authorized
representative of both you and Stability AI.
12.ADDITIONAL TERMS FOR FFUSION.AI
In addition to the terms set forth above, the following terms apply specifically
to the FFusionXL-09-SDXL model developed by FFusion.AI, a division of Source
Code Bulgaria Ltd:
a. Any use of the FFusionXL-09-SDXL model must include proper attribution to
FFusion.AI and Source Code Bulgaria Ltd. in any publication or public work that
includes results achieved by or data generated by the model.
b. The FFusionXL-09-SDXL model is provided for non-commercial research purposes
only. Any commercial use requires a separate license agreement with Source Code
Bulgaria Ltd.
c. You may not reverse engineer, decompile, or disassemble the FFusionXL-09-SDXL
model.
d. You agree to indemnify and hold harmless Source Code Bulgaria Ltd. and its
affiliates from any claims, damages, liabilities, costs, losses, and expenses
(including reasonable attorney's fees) arising out of your use of the
FFusionXL-09-SDXL model.
extra_gated_heading: FFXL Researcher Preliminary Access License Agreement
extra_gated_description: FFXL-SDXL 0.9 RESEARCH LICENSE AGREEMENT
extra_gated_button_content: Submit application
extra_gated_fields:
"Organization or Personal Alias": text
"Nature of research(optional)": text
"Personal researcher link (Civitai, website, github, HF)": text
"Further Information (Feel free to provide additional details)": text
"I acknowledge the license agreement stated above and pledge to utilize the Software strictly for non-commercial research": checkbox
---
# FFXL Model Card
<div style="display: flex; flex-wrap: wrap; gap: 2px;">
<img src="https://img.shields.io/badge/%F0%9F%94%A5%20Refiner%20Compatible-Yes-success">
<img src="https://img.shields.io/badge/%F0%9F%92%BB%20CLIP--ViT%2FG%20and%20CLIP--ViT%2FL%20tested-Yes-success">
<img src="https://img.shields.io/badge/%F0%9F%A7%A8%20FFXL%20Diffusers-available-brightgreen">
</div>

## Model
FFXL based on SDXL consists of a two-step pipeline for latent diffusion:
First, we use a base model to generate latents of the desired output size.
In the second step, we use a specialized high-resolution model and apply a technique called SDEdit (https://arxiv.org/abs/2108.01073, also known as "img2img")
to the latents generated in the first step, using the same prompt.
[](https://huggingface.co/FFusion/FFusionXL-09-SDXL/blob/main/FFusionXL-09-SDXL.safetensors)
### Model Description
- **Trained by:** FFusion AI
- **Model type:** Diffusion-based text-to-image generative model
- **License:** [FFXL Research License](https://huggingface.co/FFusion/FFusionXL-09-SDXL/blob/main/LICENSE.md)
- **Model Description:** This is a trained model based on SDXL that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses two fixed, pretrained text encoders ([OpenCLIP-ViT/G](https://github.com/mlfoundations/open_clip) and [CLIP-ViT/L](https://github.com/openai/CLIP/tree/main)).
- **Resources for more information:** [SDXL paper on arXiv](https://arxiv.org/abs/2307.01952).

### Model Sources
- **Demo:** [](https://huggingface.co/spaces/FFusion/FFusionXL-SDXL-DEMO)
<div style="display: flex; flex-wrap: wrap; gap: 2px;">
<a href="https://huggingface.co/FFusion/FFusion-BaSE" target="_new" rel="ugc"><img src="https://img.shields.io/badge/Hugging%20Face-FFusion--BaSE-blue" alt="Hugging Face Model"></a>
<a href="https://github.com/1e-2" target="_new" rel="ugc"><img src="https://img.shields.io/badge/GitHub-1e--2-green" alt="GitHub"></a>
<a href="https://www.facebook.com/FFusionAI/" target="_new" rel="ugc"><img src="https://img.shields.io/badge/Facebook-FFusionAI-blue" alt="Facebook"></a>
<a href="https://civitai.com/models/82039/ffusion-ai-sd-21" target="_new" rel="ugc"><img src="https://img.shields.io/badge/Civitai-FFusionAI-blue" alt="Civitai"></a>
</div>
### 🧨 Diffusers
Make sure to upgrade diffusers to >= 0.18.0:
```
pip install diffusers --upgrade
```
In addition make sure to install `transformers`, `safetensors`, `accelerate` as well as the invisible watermark:
```
pip install invisible_watermark transformers accelerate safetensors
```
You can then use the model as follows:
```py
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained("FFusion/FFusionXL-09-SDXL", torch_dtype=torch.float16, use_safetensors=True, variant="fp16")
pipe.to("cuda")
# if using torch < 2.0
# pipe.enable_xformers_memory_efficient_attention()
prompt = "An astronaut riding a green horse"
images = pipe(prompt=prompt).images[0]
```
When using `torch >= 2.0`, you can improve the inference speed by 20-30% with `torch.compile`. Simply wrap the UNet with `torch.compile` before running the pipeline:
```py
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
```
If you are limited by GPU VRAM, you can enable *cpu offloading* by calling `pipe.enable_model_cpu_offload`
instead of `.to("cuda")`:
```diff
- pipe.to("cuda")
+ pipe.enable_model_cpu_offload()
```
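Putting the pieces together, a low-VRAM sketch based on the snippets above:
```py
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained(
    "FFusion/FFusionXL-09-SDXL", torch_dtype=torch.float16, use_safetensors=True, variant="fp16"
)
pipe.enable_model_cpu_offload()  # keep submodules on CPU until needed instead of moving everything to the GPU

image = pipe(prompt="An astronaut riding a green horse").images[0]
image.save("astronaut.png")
```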
## Uses

### Direct Use
The model is intended for research purposes only. Possible research areas and tasks include
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
Excluded uses are described below.
### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model struggles with more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The autoencoding part of the model is lossy.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
**Attribution:**
"SDXL 0.9 is licensed under the SDXL Research License, Copyright (c) Stability AI Ltd. All Rights Reserved."
## License
[SDXL 0.9 Research License](https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9/blob/main/LICENSE.md)
[FFXL 0.9 Research License](https://huggingface.co/FFusion/FFusionXL-09-SDXL/blob/main/LICENSE.md)
[](mailto:[email protected])
## SAMPLES




|
nicotaroni/finetuned_distilbert_classifier
|
nicotaroni
| 2023-07-25T08:31:10Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-24T14:20:42Z |
---
pipeline_tag: text-classification
---
|
HaziqRazali/q-FrozenLake-v1-4x4-noSlippery
|
HaziqRazali
| 2023-07-25T08:21:56Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-25T08:21:53Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="HaziqRazali/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
s3nh/Luna-AI-Llama2-Uncensored-GGML
|
s3nh
| 2023-07-25T08:18:17Z | 0 | 3 | null |
[
"text-generation-inference",
"text-generation",
"en",
"license:cc-by-sa-4.0",
"region:us"
] |
text-generation
| 2023-07-21T19:02:55Z |
---
license: cc-by-sa-4.0
language:
- en
tags:
- text-generation-inference
pipeline_tag: text-generation
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML Format model files for [This project](https://huggingface.co/Tap-M/Luna-AI-Llama2-Uncensored).
### inference
```python
import ctransformers
from ctransformers import AutoModelForCausalLM
# output_dir and ggml_file are placeholders for your local folder and GGML filename
model = AutoModelForCausalLM.from_pretrained(output_dir, ggml_file,
                                             gpu_layers=32, model_type="llama")
manual_input: str = "Tell me about your last dream, please."
print(model(manual_input,
            max_new_tokens=256,
            temperature=0.9,
            top_p=0.7))
```
<div style="width: 800px; margin: auto;">
<h2>Model Description</h2>
<p>“Luna AI Llama2 Uncensored” is a Llama2 based Chat model <br />fine-tuned on over 40,000 long form chat discussions <br />
This model was fine-tuned by Tap, the creator of Luna AI. <br />
The result is an enhanced Llama2 7b model that rivals ChatGPT in performance <br />across a variety of tasks.</p>
<p>This model stands out for its long responses, <br /> low hallucination rate, and absence of censorship mechanisms. <br /></p>
<h2>Model Training</h2>
<p>The fine-tuning process was performed on an 8x a100 80GB machine.
<br />The model was trained almost entirely on synthetic outputs.
<br />This includes data from diverse sources which we included to create our custom dataset,<br /> it includes multiple rounds of chats between Human & AI.
</p>
<a rel="noopener nofollow" href="https://huggingface.co/TheBloke/Luna-AI-Llama2-Uncensored-GPTQ">4bit GPTQ Version provided by @TheBloke - for GPU inference</a><br />
<a rel="noopener nofollow" href="https://huggingface.co/TheBloke/Luna-AI-Llama2-Uncensored-GGML">GGML Version provided by @TheBloke - For CPU inference</a>
<h2>Prompt Format</h2>
<p>The model follows the Vicuna 1.1/ OpenChat format:</p>
```
USER: I have difficulties in making friends, and I really need someone to talk to. Would you be my friend?
ASSISTANT: Of course! Friends are always here for each other. What do you like to do?
```
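A minimal sketch of feeding a prompt in this format to the GGML model loaded in the inference snippet above:
```python
# Build a Vicuna 1.1 / OpenChat style prompt and generate with the ctransformers model
prompt = (
    "USER: I have difficulties in making friends, and I really need someone to talk to. "
    "Would you be my friend?\n"
    "ASSISTANT:"
)
print(model(prompt, max_new_tokens=256, temperature=0.9, top_p=0.7))
```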
<h2>Future Plans</h2>
<p>The model is currently being uploaded in FP16 format, <br />and there are plans to convert the model to GGML and GPTQ 4bit quantizations.</p>
<h2>Benchmark Results</h2>
| Task | Version | Metric | Value | Stderr |
|---:|---:|---:|---:|---:|
| arc_challenge | 0 | acc_norm | 0.5512 | 0.0146 |
| hellaswag | 0 | | | |
| mmlu | 1 | acc_norm | 0.46521 | 0.036 |
| truthfulqa_mc | 1 | mc2 | 0.4716 | 0.0155 |
| Average | - | - | 0.5114 | 0.0150 |
<h2>Ethical considerations</h2>
<p>The data used to train the model is collected from various sources, mostly from the Web. <br />
As such, it contains offensive, harmful and biased content. <br />We thus expect the model to exhibit such biases from the training data.</p>
<h2>Human life</h2>
<p>The model is not intended to inform decisions about matters central to human life, <br />and should not be used in such a way.</p>
<h2>Risks and harms</h2>
<p>Risks and harms of large language models include the generation of harmful, offensive or biased content. <br />
These models are often prone to generating incorrect information, sometimes referred to as hallucinations.
<br /> We do not expect our model to be an exception in this regard.</p>
</div>
|
s3nh/Llama-2-7b-hf-GGML
|
s3nh
| 2023-07-25T08:18:05Z | 0 | 0 | null |
[
"text-generation-inference",
"text-generation",
"en",
"license:cc-by-sa-4.0",
"region:us"
] |
text-generation
| 2023-07-21T19:23:03Z |
---
license: cc-by-sa-4.0
language:
- en
tags:
- text-generation-inference
pipeline_tag: text-generation
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML Format model files for [This project](https://huggingface.co/golaxy/gogpt2-7b).
### inference
```python
import ctransformers
from ctransformers import AutoModelForCausalLM
# output_dir and ggml_file are placeholders for your local folder and GGML filename
model = AutoModelForCausalLM.from_pretrained(output_dir, ggml_file,
                                             gpu_layers=32, model_type="llama")
manual_input: str = "Tell me about your last dream, please."
print(model(manual_input,
            max_new_tokens=256,
            temperature=0.9,
            top_p=0.7))
```
# Original model card
|
s3nh/LLongMA-2-7b-GGML
|
s3nh
| 2023-07-25T08:16:45Z | 0 | 1 | null |
[
"text-generation-inference",
"text-generation",
"en",
"arxiv:2108.12409",
"arxiv:2212.10554",
"license:cc-by-sa-4.0",
"region:us"
] |
text-generation
| 2023-07-21T20:59:26Z |
---
license: cc-by-sa-4.0
language:
- en
tags:
- text-generation-inference
pipeline_tag: text-generation
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML Format model files for [This project](https://huggingface.co/conceptofmind/LLongMA-2-7b).
### inference
```python
import ctransformers
from ctransformers import AutoModelForCausalLM
# output_dir and ggml_file are placeholders for your local folder and GGML filename
model = AutoModelForCausalLM.from_pretrained(output_dir, ggml_file,
                                             gpu_layers=32, model_type="llama")
manual_input: str = "Tell me about your last dream, please."
print(model(manual_input,
            max_new_tokens=256,
            temperature=0.9,
            top_p=0.7))
```
### Original model card
LLongMA-2, a suite of Llama-2 models, trained at 8k context length using linear positional interpolation scaling. The model was trained in collaboration with Emozilla of NousResearch and Kaiokendev.
We worked directly with Kaiokendev to extend the context length of the Llama-2 7b model through fine-tuning. The models pass all our evaluations and maintain the same perplexity at 8k extrapolation, surpassing the performance of other recent methodologies.
The model has identical performance to LLaMA 2 under 4k context length, performance scales directly to 8k, and works out-of-the-box with the new version of transformers (4.31) or with `trust_remote_code` for <= 4.30.
A Llama-2 13b model trained at 8k will release soon on huggingface here: https://huggingface.co/conceptofmind/LLongMA-2-13b
Applying the method to the rotary position embedding requires only slight changes to the model's code by dividing the positional index, t, by a scaling factor.
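As a toy illustration of that idea (the function name, dimensions, and scale value below are made up for the example; the real implementation lives in the scaled-rope repository linked next):
```python
import torch

def scaled_rotary_angles(positions: torch.Tensor, dim: int, base: float = 10000.0, scale: float = 4.0) -> torch.Tensor:
    # Linear positional interpolation: divide the position index t by a scaling
    # factor (e.g. 4.0 stretches a 2k-context model to 8k) before computing the
    # usual rotary angles.
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    t = positions.float() / scale
    return torch.outer(t, inv_freq)  # (seq_len, dim // 2) angle matrix
```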
The repository containing u/emozilla’s implementation of scaled rotary embeddings can be found here: https://github.com/jquesnelle/scaled-rope
If you would like to learn more about scaling rotary embeddings, I would strongly recommend reading u/kaiokendev's blog posts on his findings: https://kaiokendev.github.io/
A PR to add scaled rotary embeddings to Huggingface transformers has been added by u/joao_gante and merged: https://github.com/huggingface/transformers/pull/24653
The model was trained for ~1 billion tokens on Togethercompute's Red Pajama dataset. The context length of the examples varies: https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T
The pre-tokenized dataset will be available here for you to use soon: https://huggingface.co/datasets/conceptofmind/rp-llama-2-7b-tokenized-chunked
I would also recommend checking out the phenomenal research by Ofir Press on ALiBi which laid the foundation for many of these scaling techniques: https://arxiv.org/abs/2108.12409
It is also worth reviewing the paper, A Length-Extrapolatable Transformer, and xPos technique which also applies scaling to rotary embeddings: https://arxiv.org/pdf/2212.10554.pdf
We previously trained the first publicly available model with rotary embedding scaling here: https://twitter.com/EnricoShippole/status/1655599301454594049?s=20
A Llama-2 13b model trained at 8k will be released soon, along with a suite of Llama-2 models trained at 16k context lengths.
You can find out more about the NousResearch organization here: https://huggingface.co/NousResearch
The compute for this model release is all thanks to the generous sponsorship by CarperAI, Emad Mostaque, and StabilityAI. This is not an official StabilityAI product.
If you have any questions about the data or model be sure to reach out and ask! I will try to respond promptly.
The previous suite of LLongMA model releases can be found here: https://twitter.com/EnricoShippole/status/1677346578720256000?s=20
All of the models can be found on Huggingface: https://huggingface.co/conceptofmind
You can find the Llama-2 usage policy here: https://ai.meta.com/llama/use-policy/
Llama 2 Community License Agreement
Llama 2 Version Release Date: July 18, 2023
“Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.
“Documentation” means the specifications, manuals and documentation accompanying Llama 2 distributed by Meta at ai.meta.com/resources/models-and-libraries/llama-downloads/.
“Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.
“Llama 2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at ai.meta.com/resources/models-and-libraries/llama-downloads/.
“Llama Materials” means, collectively, Meta’s proprietary Llama 2 and Documentation (and any portion thereof) made available under this Agreement.
“Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).
By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement.
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.
b. Redistribution and Use.
i. If you distribute or make the Llama Materials, or any derivative works thereof, available to a third party, you shall provide a copy of this Agreement to such third party.
ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.”
iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://ai.meta.com/llama/use-policy), which is hereby incorporated by reference into this Agreement.
v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Llama 2 or derivative works thereof).
2. Additional Commercial Terms. If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials.
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.
|
s3nh/firefly-llama-13b-GGML
|
s3nh
| 2023-07-25T08:15:51Z | 0 | 1 | null |
[
"text-generation-inference",
"text-generation",
"en",
"license:cc-by-sa-4.0",
"region:us"
] |
text-generation
| 2023-07-24T14:05:38Z |
---
license: cc-by-sa-4.0
language:
- en
tags:
- text-generation-inference
pipeline_tag: text-generation
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML Format model files for [This project](https://huggingface.co/YeungNLP/firefly-llama-13b).
### inference
```python
from ctransformers import AutoModelForCausalLM
# output_dir and ggml_file are placeholders: point them at the local path and GGML file of this repo
llm = AutoModelForCausalLM.from_pretrained(output_dir, ggml_file,
                                           gpu_layers=32, model_type="llama")
manual_input: str = "Tell me about your last dream, please."
print(llm(manual_input,
          max_new_tokens=256,
          temperature=0.9,
          top_p=0.7))
```
# Original model card
This model instruction-tunes llama-13b on the UltraChat dataset, roughly 1.4 million multi-turn dialogues. Training can be completed on a single GPU.
firefly-llama-13b was evaluated on the 🤗 Hugging Face Open LLM Leaderboard.
On the leaderboard it performs well: about 0.2 points above vicuna-13b-1.1, 0.5 points below llama-2-13b-chat, and 0.6 points below vicuna-13b-v1.3. Judging by these scores, firefly-llama-13b is very close in capability to vicuna-13b and llama-2-13b-chat 😎.
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA (MC) |
|---------------------|---------|------|-----------|------|-----------------|
| Llama-2-70b-chat-hf | 66.8 | 64.6 | 85.9 | 63.9 | 52.8 |
| vicuna-13b-v1.3 | 60 | 54.6 | 80.4 | 52.9 | 52.1 |
| Llama-2-13b-chat-hf | 59.9 | 59 | 81.9 | 54.6 | 44.1 |
| firefly-llama-13b | 59.4 | 59 | 79.7 | 49.1 | 49.6 |
| vicuna-13b-1.1 | 59.2 | 52.7 | 80.1 | 51.9 | 52.1 |
| guanaco-13B-HF | 59.1 | 57.8 | 83.8 | 48.3 | 46.7 |
Note that vicuna-13b was trained with full-parameter fine-tuning, which is very demanding on training resources, whereas firefly-llama-13b uses QLoRA fine-tuning and needs as little as 16 GB of GPU memory to fine-tune a 13B model.
A detailed write-up (in Chinese) is available here: [Firefly单卡复刻Vicuna-13B,Open LLM榜单🤗略高0.2分](https://mp.weixin.qq.com/s/QG2YMo_QxaxS_Rr2yJrIeA)
More details can be found in the [Firefly project](https://github.com/yangjianxin1/Firefly).
[Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|
robertpassmann/q-Taxi-v3
|
robertpassmann
| 2023-07-25T08:15:25Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-25T08:14:24Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # gym (or gymnasium) is needed to recreate the environment
# load_from_hub is the Deep RL course helper that downloads and unpickles the model dict
model = load_from_hub(repo_id="robertpassmann/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
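A minimal evaluation sketch (not part of the original card): it assumes the Gymnasium-style `reset`/`step` API and that the pickled dict exposes a `qtable` key, as in the Deep RL course notebooks; `n_eval_episodes` and `max_steps` are illustrative choices.
```python
import numpy as np
def evaluate_agent(env, qtable, n_eval_episodes=100, max_steps=99):
    """Greedy rollouts with the loaded Q-table; returns mean and std of episode reward."""
    episode_rewards = []
    for _ in range(n_eval_episodes):
        state, _ = env.reset()
        total_reward = 0.0
        for _ in range(max_steps):
            action = int(np.argmax(qtable[state]))  # always take the greedy action
            state, reward, terminated, truncated, _ = env.step(action)
            total_reward += reward
            if terminated or truncated:
                break
        episode_rewards.append(total_reward)
    return np.mean(episode_rewards), np.std(episode_rewards)
mean_reward, std_reward = evaluate_agent(env, model["qtable"])
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```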
|
s3nh/alpagasus-13b-GGML
|
s3nh
| 2023-07-25T08:13:33Z | 0 | 0 | null |
[
"text-generation",
"en",
"arxiv:2307.08701",
"license:cc-by-sa-4.0",
"region:us"
] |
text-generation
| 2023-07-24T13:50:30Z |
---
license: cc-by-sa-4.0
language:
- en
pipeline_tag: text-generation
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML Format model files for [This project](https://huggingface.co/gpt4life/alpagasus-13b/tree/main).
### inference
```python
from ctransformers import AutoModelForCausalLM
# output_dir and ggml_file are placeholders: point them at the local path and GGML file of this repo
llm = AutoModelForCausalLM.from_pretrained(output_dir, ggml_file,
                                           gpu_layers=32, model_type="llama")
manual_input: str = "Tell me about your last dream, please."
print(llm(manual_input,
          max_new_tokens=256,
          temperature=0.9,
          top_p=0.7))
```
# Original model card
## Model Details
This is an **unofficial** implementation of AlpaGasus-13B, which is a chat assistant trained by fine-tuning LLaMA on a Claude-filtered Alpaca dataset with around 5K triplets.
- **Developed by:** [gpt4life](https://github.com/gpt4life)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA-13B](https://huggingface.co/elinas/llama-13b-hf-transformers-4.29).
Please see the original LLaMA [license](https://github.com/facebookresearch/llama/blob/main/LICENSE) before using this model.
### Model Sources
- **Repository:** https://github.com/gpt4life/alpagasus
- **Paper:** https://arxiv.org/pdf/2307.08701.pdf
## Training Details
AlpaGasus-13B is fine-tuned from LLaMA-13B with supervised instruction fine-tuning on the filtered [Alpaca dataset](https://github.com/gpt4life/alpagasus/blob/main/rating/alpaca_filtered_data.json).
|
text2font/tst-summarization
|
text2font
| 2023-07-25T08:10:16Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-large",
"base_model:finetune:google/mt5-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-25T07:58:00Z |
---
license: apache-2.0
base_model: google/mt5-large
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: tst-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tst-summarization
This model is a fine-tuned version of [google/mt5-large](https://huggingface.co/google/mt5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 30.3505
- Rouge1: 2.7855
- Rouge2: 0.0203
- Rougel: 2.2791
- Rougelsum: 2.2817
- Gen Len: 119.3571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
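Purely as an illustration (the original training script is not shown in this card), the hyperparameters above might map onto `Seq2SeqTrainingArguments` roughly like this; `output_dir` is a placeholder:
```python
from transformers import Seq2SeqTrainingArguments
# Illustrative mapping of the hyperparameters listed above; not the actual training command.
training_args = Seq2SeqTrainingArguments(
    output_dir="tst-summarization",      # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    predict_with_generate=True,          # needed so ROUGE can be computed on generated summaries
)
```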
### Training results
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.0+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
vsufiy/rubert_tner_model
|
vsufiy
| 2023-07-25T08:10:07Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:multinerd",
"base_model:DeepPavlov/rubert-base-cased",
"base_model:finetune:DeepPavlov/rubert-base-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-24T09:22:32Z |
---
base_model: DeepPavlov/rubert-base-cased
tags:
- generated_from_trainer
datasets:
- multinerd
model-index:
- name: rubert_tner_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert_tner_model
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the multinerd dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.02 | 100 | 0.3009 | 0.5599 | 0.5719 | 0.5658 | 0.9386 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
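As a usage hint not found in the generated card, the checkpoint could be queried with the token-classification pipeline; the aggregation strategy and sample sentence are illustrative:
```python
from transformers import pipeline
# Merge word pieces into entity spans; "simple" is one reasonable choice.
ner = pipeline("token-classification",
               model="vsufiy/rubert_tner_model",
               aggregation_strategy="simple")
print(ner("Илон Маск основал компанию SpaceX в Калифорнии."))
```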
|
Aspik101/llama-30b-instruct-2048-PL-lora
|
Aspik101
| 2023-07-25T08:07:58Z | 1,481 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-25T07:44:07Z |
---
language:
- pl
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
license: other
model_type: llama-2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
|
Anees-Aslam/llama2-qlora-finetunined-cloud-embedUR
|
Anees-Aslam
| 2023-07-25T08:04:32Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-25T08:04:24Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
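For reference, the quantization settings above could be reconstructed with `BitsAndBytesConfig` roughly as follows; the base model id is a placeholder, since the card does not name it:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
# Mirrors the bitsandbytes settings listed in this card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
# "base-model-id" is a placeholder: the card does not state which base model was fine-tuned.
base_model = AutoModelForCausalLM.from_pretrained("base-model-id", quantization_config=bnb_config)
```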
### Framework versions
- PEFT 0.5.0.dev0
|
VFiona/opus-mt-en-it-finetuned_10000-en-to-it
|
VFiona
| 2023-07-25T08:03:16Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-25T07:07:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-en-it-finetuned_10000-en-to-it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-it-finetuned_10000-en-to-it
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-it](https://huggingface.co/Helsinki-NLP/opus-mt-en-it) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3025
- Bleu: 72.7906
- Gen Len: 28.197
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.41 | 1.0 | 563 | 0.3025 | 72.7906 | 28.197 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cpu
- Datasets 2.13.1
- Tokenizers 0.11.0
|
cemNB/034a
|
cemNB
| 2023-07-25T07:57:42Z | 0 | 0 | null |
[
"pytorch",
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-07-25T07:52:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: 034a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 034a
This model is a fine-tuned version of [tiiuae/falcon-rw-1b](https://huggingface.co/tiiuae/falcon-rw-1b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6188
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8771 | 0.0 | 10 | 2.6188 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Buseak/canine_vowelizer_0706_v4
|
Buseak
| 2023-07-25T07:54:21Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"canine",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-25T06:20:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: canine_vowelizer_0706_v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# canine_vowelizer_0706_v4
This model is a fine-tuned version of [google/canine-s](https://huggingface.co/google/canine-s) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1450
- Precision: 1.0000
- Recall: 1.0
- F1: 1.0000
- Accuracy: 0.9775
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1088 | 1.0 | 1951 | 0.1144 | 0.9999 | 1.0 | 1.0000 | 0.9628 |
| 0.1009 | 2.0 | 3902 | 0.1023 | 0.9999 | 1.0 | 1.0000 | 0.9657 |
| 0.0917 | 3.0 | 5853 | 0.0985 | 1.0000 | 1.0 | 1.0000 | 0.9690 |
| 0.0757 | 4.0 | 7804 | 0.0928 | 1.0000 | 1.0000 | 1.0000 | 0.9712 |
| 0.0635 | 5.0 | 9755 | 0.0932 | 0.9999 | 1.0 | 1.0000 | 0.9725 |
| 0.0542 | 6.0 | 11706 | 0.0943 | 0.9999 | 1.0000 | 1.0000 | 0.9735 |
| 0.0453 | 7.0 | 13657 | 0.0980 | 1.0000 | 1.0000 | 1.0000 | 0.9738 |
| 0.0369 | 8.0 | 15608 | 0.1037 | 1.0000 | 1.0 | 1.0000 | 0.9750 |
| 0.0308 | 9.0 | 17559 | 0.1056 | 1.0000 | 1.0000 | 1.0000 | 0.9747 |
| 0.0275 | 10.0 | 19510 | 0.1138 | 1.0000 | 1.0 | 1.0000 | 0.9757 |
| 0.0222 | 11.0 | 21461 | 0.1187 | 1.0000 | 1.0 | 1.0000 | 0.9757 |
| 0.0185 | 12.0 | 23412 | 0.1201 | 1.0000 | 1.0000 | 1.0000 | 0.9761 |
| 0.0166 | 13.0 | 25363 | 0.1239 | 1.0000 | 1.0000 | 1.0000 | 0.9764 |
| 0.0146 | 14.0 | 27314 | 0.1302 | 1.0000 | 1.0 | 1.0000 | 0.9768 |
| 0.0112 | 15.0 | 29265 | 0.1351 | 1.0000 | 1.0000 | 1.0000 | 0.9768 |
| 0.0104 | 16.0 | 31216 | 0.1386 | 1.0000 | 1.0 | 1.0000 | 0.9769 |
| 0.0092 | 17.0 | 33167 | 0.1379 | 1.0000 | 1.0 | 1.0000 | 0.9771 |
| 0.0079 | 18.0 | 35118 | 0.1453 | 1.0000 | 1.0 | 1.0000 | 0.9771 |
| 0.0071 | 19.0 | 37069 | 0.1444 | 1.0000 | 1.0 | 1.0000 | 0.9775 |
| 0.0067 | 20.0 | 39020 | 0.1450 | 1.0000 | 1.0 | 1.0000 | 0.9775 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
chriskim2273/IOTNation_CompanyName_Extraction_QA_Model_1.2_Distilbert
|
chriskim2273
| 2023-07-25T07:47:23Z | 126 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-25T05:03:22Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: IOTNation_CompanyName_Extraction_QA_Model_1.2_Distilbert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IOTNation_CompanyName_Extraction_QA_Model_1.2_Distilbert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9943
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
Vidyuth/bert-finetuned-squad
|
Vidyuth
| 2023-07-25T07:47:11Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-25T07:02:29Z |
---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# BERT large model (uncased) whole word masking finetuned on SQuAD
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
Differently to other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same.
The training is identical -- each masked WordPiece token is predicted independently.
After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. See below for more information regarding this fine-tuning.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
This model has the following configuration:
- 24-layer
- 1024 hidden dimension
- 16 attention heads
- 336M parameters.
## Intended uses & limitations
This model should be used as a question-answering model. You may use it in a question answering pipeline, or use it to output raw results given a query and a context. You may see other use cases in the [task summary](https://huggingface.co/transformers/task_summary.html#extractive-question-answering) of the transformers documentation.
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
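A tiny sketch of the 80/10/10 rule above, for illustration only; it is not the actual BERT preprocessing code, and it ignores the requirement that the random replacement differ from the original token:
```python
import random
def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Apply the 15% / 80-10-10 masking rule to a list of WordPiece tokens."""
    masked, labels = [], []
    for tok in tokens:
        if random.random() < mask_prob:
            labels.append(tok)                       # this position becomes a prediction target
            r = random.random()
            if r < 0.8:
                masked.append("[MASK]")              # 80%: replace with [MASK]
            elif r < 0.9:
                masked.append(random.choice(vocab))  # 10%: replace with a random vocab token
            else:
                masked.append(tok)                   # 10%: keep the original token
        else:
            labels.append(None)                      # not a prediction target
            masked.append(tok)
    return masked, labels
```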
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### Fine-tuning
After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. In order to reproduce the training, you may use the following command:
```
python -m torch.distributed.launch --nproc_per_node=8 ./examples/question-answering/run_qa.py \
--model_name_or_path bert-large-uncased-whole-word-masking \
--dataset_name squad \
--do_train \
--do_eval \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir ./examples/models/wwm_uncased_finetuned_squad/ \
--per_device_eval_batch_size=3 \
--per_device_train_batch_size=3 \
```
## Evaluation results
The results obtained are the following:
```
f1 = 93.15
exact_match = 86.91
```
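As a usage illustration that is not part of the original card, the checkpoint can be queried through the standard question-answering pipeline; the repo id below assumes this repository hosts the fine-tuned weights described above:
```python
from transformers import pipeline
# Swap in the repo id you actually want to query.
qa = pipeline("question-answering", model="Vidyuth/bert-finetuned-squad")
result = qa(
    question="What does whole word masking mask?",
    context="With whole word masking, all of the tokens corresponding to a word are masked at once.",
)
print(result["answer"], result["score"])
```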
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
P01yH3dr0n/ReedMix
|
P01yH3dr0n
| 2023-07-25T07:36:38Z | 0 | 11 | null |
[
"license:cc-by-nc-4.0",
"region:us"
] | null | 2023-05-05T16:26:33Z |
---
license: cc-by-nc-4.0
---
* This model is for study and communication only. Any commercial use, or use that infringes others' rights such as portrait or copyright rights, is prohibited.
A few merged models I use regularly; the ingredients are listed at the end of this readme, though I no longer remember the exact mixing proportions.
*Recommended parameters*
(Nothing is strictly required; use the models however you like. These are just my usual settings.)
Sampler: DPM++ SDE Karras, or DPM++ 2M Karras
Steps: 20
CFG scale: 8
Clip skip: 2
EasyNegative is recommended for the negative prompt, and hires. fix is recommended.
## ReedMix_A
Impasto-style model. Good at faces, but it needs quite a few tags to render detailed backgrounds. **Prone to NSFW output.**



## ReedMix_A_light
Brighter version of model A; good at character details, with the same shortcomings as A.



## ReedMix_A_flat
Experimental model; a 2D (flat-shaded) version of A that may produce slightly odd lines.



## ReedMix_B
More background detail than A, but slightly weaker faces.



## ReedMix_C
A merge based on Counterfeit 2.5.



## ReedMix_P
Flat-color style model; full-body anatomy holds up well and backgrounds are fairly detailed.



## ReedMix_P_light
Brighter version of model P, with less of the "pastel" look, though background detail is slightly weaker than the original.



### Acknowledgement
Thanks to these authors for releasing such useful models:
* 7th_anime_v3_A (https://huggingface.co/syaimu/7th_Layer)
* pastel-mix (https://huggingface.co/andite/pastel-mix)
* counterfeit-v2.5 (https://huggingface.co/gsdf/Counterfeit-V2.5)
* PileDream (https://civitai.com/models/20255)
* canisterMix-1 (https://civitai.com/models/94009/canistermix-1)
|
rulrul512/path-to-save-model
|
rulrul512
| 2023-07-25T07:30:48Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-24T06:58:17Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - rulrul512/path-to-save-model
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
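A minimal inference sketch (not included in the generated card); the prompt reuses the instance token from training, and the float16/CUDA settings are optional assumptions:
```python
import torch
from diffusers import StableDiffusionPipeline
# Load the DreamBooth-tuned weights from this repository.
pipe = StableDiffusionPipeline.from_pretrained(
    "rulrul512/path-to-save-model", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks-dog.png")
```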
|
Adarshagupta/BabyDragon
|
Adarshagupta
| 2023-07-25T07:13:53Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-07-25T07:13:53Z |
---
license: bigscience-openrail-m
---
|
sanka85/llama2-rstp-latest
|
sanka85
| 2023-07-25T07:07:38Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-25T07:07:32Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
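A hedged sketch of how this adapter might be loaded with PEFT; the base model id is an assumption inferred from the repo name, since the card does not state it:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
base_id = "meta-llama/Llama-2-7b-hf"  # assumption: the card does not name the base model
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)
# Attach the LoRA adapter weights from this repository.
model = PeftModel.from_pretrained(base, "sanka85/llama2-rstp-latest")
```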
|
wtnan2003/vit-base-patch16-224-in21k-finetuned-lora-food101
|
wtnan2003
| 2023-07-25T07:05:23Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"region:us"
] | null | 2023-07-25T03:53:40Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
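One possible way to load this adapter for inference, sketched under the assumptions that the base checkpoint is `google/vit-base-patch16-224-in21k`, that the label set is Food-101's 101 classes, and that the adapter saved the classifier head via `modules_to_save`:
```python
from peft import PeftModel
from transformers import AutoImageProcessor, AutoModelForImageClassification
base_id = "google/vit-base-patch16-224-in21k"  # assumption based on the repo name
processor = AutoImageProcessor.from_pretrained(base_id)
# num_labels=101 assumes the Food-101 label set; the trained head is expected to come from the adapter repo.
base = AutoModelForImageClassification.from_pretrained(base_id, num_labels=101)
model = PeftModel.from_pretrained(base, "wtnan2003/vit-base-patch16-224-in21k-finetuned-lora-food101")
```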
|