Column types: modelId (string, length 5–139), author (string, length 2–42), last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-03 18:30:32), downloads (int64, 0–223M), likes (int64, 0–11.7k), library_name (string, 537 classes), tags (list, length 1–4.05k), pipeline_tag (string, 55 classes), createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-03 18:30:19), card (string, length 11–1.01M)
| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
Saideva/title_generation
|
Saideva
| 2023-07-16T14:38:55Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-16T14:10:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: title_generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# title_generation
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 41.5236
- Rouge2: 17.5894
- Rougel: 37.2852
- Rougelsum: 37.2749
- Gen Len: 13.3542
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.0 | 1.0 | 3748 | nan | 41.5236 | 17.5894 | 37.2852 | 37.2749 | 13.3542 |
| 0.0 | 2.0 | 7496 | nan | 41.5236 | 17.5894 | 37.2852 | 37.2749 | 13.3542 |
| 0.0 | 3.0 | 11244 | nan | 41.5236 | 17.5894 | 37.2852 | 37.2749 | 13.3542 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
SwampMan/a2c-PandaReachDense-v2
|
SwampMan
| 2023-07-16T14:22:23Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-16T14:19:26Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.61 +/- 0.69
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
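A minimal loading sketch, assuming the checkpoint follows the usual `a2c-PandaReachDense-v2.zip` naming convention (an assumption; check the repo's file list for the actual filename):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub (filename is an assumption, not taken from this card)
checkpoint = load_from_hub(
    repo_id="SwampMan/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",
)
model = A2C.load(checkpoint)
```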
|
atiiisham988/speecht5_finetuned_voxpopuli_nl
|
atiiisham988
| 2023-07-16T14:16:09Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:common_voice_13_0",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-16T05:30:30Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- common_voice_13_0
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the common_voice_13_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4763
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5583 | 8.61 | 1000 | 0.4978 |
| 0.5238 | 17.22 | 2000 | 0.4833 |
| 0.5075 | 25.83 | 3000 | 0.4763 |
| 0.5026 | 34.45 | 4000 | 0.4763 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
huggingFacing/ddpm-butterflies-128
|
huggingFacing
| 2023-07-16T14:11:21Z | 0 | 0 |
diffusers
|
[
"diffusers",
"en",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2023-07-16T14:09:03Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: /content/drive/MyDrive/image_and_text
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `/content/drive/MyDrive/image_and_text` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
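A minimal sketch of what such a snippet could look like, assuming the weights in this repo load with the standard `DDPMPipeline` API:
```python
from diffusers import DDPMPipeline

# Repo id taken from this card's title; add .to("cuda") if a GPU is available
pipeline = DDPMPipeline.from_pretrained("huggingFacing/ddpm-butterflies-128")
image = pipeline().images[0]  # runs the full DDPM sampling loop and returns a PIL image
image.save("butterfly.png")
```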
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/Tian7/ddpm-butterflies-128/tensorboard?#scalars)
|
arham061/speecht5_finetuned_voxpopuli_nl
|
arham061
| 2023-07-16T14:04:23Z | 82 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:common_voice_13_0",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-14T07:08:15Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- common_voice_13_0
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [arham061/speecht5_finetuned_voxpopuli_nl](https://huggingface.co/arham061/speecht5_finetuned_voxpopuli_nl) on the common_voice_13_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5508
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 3000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5058 | 7.74 | 1000 | 0.5431 |
| 0.4938 | 15.49 | 2000 | 0.5487 |
| 0.4909 | 23.23 | 3000 | 0.5508 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
SwampMan/a2c-AntBulletEnv-v0
|
SwampMan
| 2023-07-16T13:44:52Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-16T13:43:45Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1250.47 +/- 141.94
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
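A minimal sketch of loading and evaluating the agent; the filename and the `pybullet_envs` import are assumptions (they follow the usual course conventions) and are not taken from this card:
```python
import gym
import pybullet_envs  # noqa: F401  (registers AntBulletEnv-v0; requires pybullet and a compatible gym version)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(repo_id="SwampMan/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

mean_reward, std_reward = evaluate_policy(model, gym.make("AntBulletEnv-v0"), n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```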
|
AnandSingh/Wizard-Vicuna-13B-Uncensored-HF_QnA
|
AnandSingh
| 2023-07-16T13:29:01Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T13:28:53Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
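As a minimal sketch of how this adapter could be loaded with the 4-bit config listed above (both model ids are assumptions: the base model is guessed from the repo name, and the adapter id is this repo):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "TheBloke/Wizard-Vicuna-13B-Uncensored-HF"           # assumed base model
adapter_id = "AnandSingh/Wizard-Vicuna-13B-Uncensored-HF_QnA"  # this repo

# Mirror the bitsandbytes settings above: 4-bit NF4, double quantization, bfloat16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)
```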
|
zfz/Cuteyukimix
|
zfz
| 2023-07-16T13:27:33Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-29T12:17:48Z |
My personal space on civitai.com: https://civitai.com/user/newlifezfztty761/models
|
NasimB/all-base-rarity-all-guten-rarity-all-2p5k-iorder-est-5p5k-mostf
|
NasimB
| 2023-07-16T13:17:43Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-16T11:29:42Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: all-base-rarity-all-guten-rarity-all-2p5k-iorder-est-5p5k-mostf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-base-rarity-all-guten-rarity-all-2p5k-iorder-est-5p5k-mostf
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3428
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.7586 | 0.32 | 500 | 5.6557 |
| 5.4132 | 0.63 | 1000 | 5.2227 |
| 5.0611 | 0.95 | 1500 | 4.9660 |
| 4.7718 | 1.26 | 2000 | 4.8202 |
| 4.64 | 1.58 | 2500 | 4.6997 |
| 4.5178 | 1.89 | 3000 | 4.5866 |
| 4.308 | 2.21 | 3500 | 4.5285 |
| 4.223 | 2.52 | 4000 | 4.4602 |
| 4.1757 | 2.84 | 4500 | 4.3982 |
| 4.0214 | 3.15 | 5000 | 4.3825 |
| 3.8976 | 3.47 | 5500 | 4.3455 |
| 3.8816 | 3.78 | 6000 | 4.3106 |
| 3.7798 | 4.1 | 6500 | 4.3020 |
| 3.6074 | 4.41 | 7000 | 4.2988 |
| 3.5991 | 4.73 | 7500 | 4.2789 |
| 3.56 | 5.04 | 8000 | 4.2749 |
| 3.408 | 5.36 | 8500 | 4.2785 |
| 3.407 | 5.67 | 9000 | 4.2775 |
| 3.401 | 5.99 | 9500 | 4.2772 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
madoe001/a2c-AntBulletEnv-v0
|
madoe001
| 2023-07-16T12:58:37Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-16T12:56:52Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1411.03 +/- 55.48
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
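A minimal loading sketch; the filename is an assumption based on the usual naming convention, so check the repo's file list:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="madoe001/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",  # assumed name
)
model = A2C.load(checkpoint)
```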
|
asedmammad/Vicuna-7B-vanilla-1.1-GGML
|
asedmammad
| 2023-07-16T12:50:47Z | 0 | 1 | null |
[
"llama",
"vicuna",
"text-generation-inference",
"region:us"
] | null | 2023-07-16T09:47:34Z |
---
inference: false
tags:
- llama
- vicuna
- text-generation-inference
---
# Ejafa's Vicuna Vanilla 1.1 7B GGML
These files are GGML format model files for [Ejafa's Vicuna Vanilla 1.1 7B](https://huggingface.co/Ejafa/vicuna_7B_vanilla_1.1).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 8 -ngl 32 -m vicuna_7B_vanilla_1.1.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "prompt goes here"
```
Change `-t 8` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
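For example, the interactive-instruct equivalent of the command above would be:
```
./main -t 8 -ngl 32 -m vicuna_7B_vanilla_1.1.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins
```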
## Compatibility
I have uploaded both the original llama.cpp quant methods (`q4_0, q4_1, q5_0, q5_1, q8_0`) and the new k-quant methods (`q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`).
Please refer to [llama.cpp](https://github.com/ggerganov/llama.cpp) and [TheBloke](https://huggingface.co/TheBloke)'s GGML models for further explanation.
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
## Thanks
Thanks to [TheBloke](https://huggingface.co/TheBloke) for inspiration and providing almost all of the readme here!
Thanks to [Ejafa](https://huggingface.co/Ejafa) for providing checkpoints of the model.
Thanks to [Georgi Gerganov](https://github.com/ggerganov) and all of the awesome people in the AI community.
|
ByteExplorer/q-ToyTaxi-v1
|
ByteExplorer
| 2023-07-16T12:49:02Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-16T12:48:42Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-ToyTaxi-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="ByteExplorer/q-ToyTaxi-v1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
larry-jiang/RL
|
larry-jiang
| 2023-07-16T12:48:55Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-16T12:47:54Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.32 +/- 20.65
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
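A minimal sketch of loading the agent and rolling out one episode; the checkpoint filename is an assumption (check the repo's files), and the rollout uses the classic Gym API (adjust for gymnasium's 5-tuple `step` if needed):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="larry-jiang/RL", filename="ppo-LunarLander-v2.zip")  # assumed name
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
env.close()
```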
|
headflame02/AchaxV5
|
headflame02
| 2023-07-16T12:37:19Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-16T12:37:16Z |
---
license: creativeml-openrail-m
---
|
vuvuongvi/vivu_marketingAI_fourthbrain
|
vuvuongvi
| 2023-07-16T12:29:23Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T12:28:39Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
ALM-AHME/swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-BreastCancer-BreakHis-AH-60-20-20
|
ALM-AHME
| 2023-07-16T12:15:04Z | 199 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swinv2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-16T09:38:10Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-BreastCancer-BreakHis-AH-60-20-20
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: Splitted-Resized
split: train
args: Splitted-Resized
metrics:
- name: Accuracy
type: accuracy
value: 0.9943422913719944
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-BreastCancer-BreakHis-AH-60-20-20
This model is a fine-tuned version of [microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft](https://huggingface.co/microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0229
- Accuracy: 0.9943
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.5
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2053 | 1.0 | 199 | 0.1227 | 0.9496 |
| 0.1302 | 2.0 | 398 | 0.0665 | 0.9736 |
| 0.0784 | 3.0 | 597 | 0.0600 | 0.9778 |
| 0.1181 | 4.0 | 796 | 0.0449 | 0.9849 |
| 0.208 | 5.0 | 995 | 0.0393 | 0.9887 |
| 0.0057 | 6.0 | 1194 | 0.0229 | 0.9943 |
| 0.0017 | 7.0 | 1393 | 0.0263 | 0.9939 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
joserodr68/Reinforce-cartpole
|
joserodr68
| 2023-07-16T12:12:28Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-16T12:11:19Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
sjdata/speecht5_finetuned_single_speaker_en_test_librivox
|
sjdata
| 2023-07-16T12:09:19Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"en",
"dataset:speecht5_finetuned_single_speaker_en_test_librivox",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-13T12:31:39Z |
---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- speecht5_finetuned_single_speaker_en_test_librivox
model-index:
- name: SpeechT5 Single Speaker test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 Single Speaker test
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the single_speaker_en_test_librivox dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4215
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4809 | 1.78 | 1000 | 0.4215 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
indiaLLMs/dolly-llama-3b
|
indiaLLMs
| 2023-07-16T11:42:56Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T11:42:19Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
NasimB/all-base-rarity-all-children-rarity-all-iorder-est-5p5k-mostf
|
NasimB
| 2023-07-16T11:21:16Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-16T09:32:37Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: all-base-rarity-all-children-rarity-all-iorder-est-5p5k-mostf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-base-rarity-all-children-rarity-all-iorder-est-5p5k-mostf
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3326
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.7525 | 0.31 | 500 | 5.6457 |
| 5.4141 | 0.63 | 1000 | 5.2112 |
| 5.0523 | 0.94 | 1500 | 4.9580 |
| 4.7674 | 1.25 | 2000 | 4.8174 |
| 4.6213 | 1.56 | 2500 | 4.6915 |
| 4.5132 | 1.88 | 3000 | 4.5796 |
| 4.3109 | 2.19 | 3500 | 4.5205 |
| 4.2115 | 2.5 | 4000 | 4.4590 |
| 4.1668 | 2.82 | 4500 | 4.3952 |
| 4.0277 | 3.13 | 5000 | 4.3712 |
| 3.8841 | 3.44 | 5500 | 4.3431 |
| 3.8738 | 3.75 | 6000 | 4.3064 |
| 3.7942 | 4.07 | 6500 | 4.2923 |
| 3.5972 | 4.38 | 7000 | 4.2869 |
| 3.5903 | 4.69 | 7500 | 4.2730 |
| 3.5681 | 5.01 | 8000 | 4.2585 |
| 3.3989 | 5.32 | 8500 | 4.2700 |
| 3.3939 | 5.63 | 9000 | 4.2694 |
| 3.3913 | 5.94 | 9500 | 4.2686 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
chrishoertnagl/dolly-v2-3b-chris
|
chrishoertnagl
| 2023-07-16T11:20:19Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-15T10:45:38Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-lr-v1
|
hafidikhsan
| 2023-07-16T11:14:40Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-16T11:12:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: wav2vec2-large-xlsr-53-english-pronunciation-evaluation-lr-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-english-pronunciation-evaluation-lr-v1
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4017
- Accuracy: 0.25
- F1: 0.1
- Precision: 0.0625
- Recall: 0.25
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|:---------:|:------:|
| 1.3826 | 1.0 | 500 | 1.4017 | 0.25 | 0.1 | 0.0625 | 0.25 |
| 1.4074 | 2.0 | 1000 | 1.3922 | 0.25 | 0.1 | 0.0625 | 0.25 |
| 1.3984 | 3.0 | 1500 | 1.3868 | 0.25 | 0.1 | 0.0625 | 0.25 |
| 1.387 | 4.0 | 2000 | 1.3863 | 0.25 | 0.1 | 0.0625 | 0.25 |
| 1.3861 | 5.0 | 2500 | 1.3863 | 0.25 | 0.1 | 0.0625 | 0.25 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Beams24/nzyi
|
Beams24
| 2023-07-16T11:08:38Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-16T11:06:38Z |
---
license: creativeml-openrail-m
---
|
hyunussarioglu/ppo-Huggy
|
hyunussarioglu
| 2023-07-16T11:04:45Z | 42 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-16T11:04:39Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: hyunussarioglu/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
linkanjarad/Doctor-OPT-350M
|
linkanjarad
| 2023-07-16T10:52:53Z | 145 | 1 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"dataset:sid6i7/patient-doctor",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-16T06:49:05Z |
---
license: other
model-index:
- name: Doctor-OPT-350M
results: []
widget:
- text: |-
Patient: I've been experiencing a persistent cough for the past week. What could be causing it?
Doctor: <start>
datasets:
- sid6i7/patient-doctor
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Doctor-OPT-350M
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the [sid6i7/patient-doctor](https://huggingface.co/datasets/sid6i7/patient-doctor) dataset.
## Model description
This is an LLM trained on patient-doctor dialogues, mostly comprised of doctors giving medical advice to patients.
## Intended uses & limitations
This model is intended for research on the capabilities of LLMs. Any output from this LLM is not to be treated as legitimate medical advice.
The model works best when given input in the format of the dataset it was trained on. Here is an example of the format:
```
Patient: Please can you help me with thoughts about its. Where my gut gives out spasms and. Uncontrol movements like pulsating 24,7. It seems. Worst when stressed I would like something that regards relaxing the bowel to get some more pleasures in life. Many thanks. XXXX
Doctor: <start> Hello, Continuous abdominal spasms can be due to different reasons. Is there any intense pain present? Are you on any medication?It might be related to food habits. Try consuming food that is rich in fibers.
```
## Sample Use
```python
from transformers import pipeline
generator = pipeline('text-generation', model='linkanjarad/Doctor-OPT-350M')
# Format the prompt to match the dataset formatting
prompt = '''Patient: Hi, I'm 30 years old. Recently, I've been having abdominal pain and discomfort after meals. It feels like a burning sensation. What could be the possible causes?
Doctor: <start>'''
output = generator(prompt, do_sample=True, min_length=10, max_length=50, temperature=0.5)
print(output[0]['generated_text'])
```
## Sample Outputs
```
Patient: "I'm a 40-year-old male with a family history of heart disease. Lately, I've been experiencing chest discomfort and shortness of breath during physical activity. Are these symptoms indicative of a heart problem, and what tests should I undergo to evaluate my heart health?
Doctor: <start> Hello,The symptoms you have mentioned are suggestive of cardiac problems. These symptoms are also possible due to high blood pressure and cholesterol. I would recommend performing physical activity such as walking, rollerblading, swimming, etc. to increase your activity levels.I would also recommend doing an ultrasound scan of the heart to determine if there is a risk of a heart problem. I would also recommend doing a blood test to
```
```
Patient: I'm feeling this pain in my chest when I see this particular person. My breath feels shortened and I feel nervous, and I get a tingly sensation. My eyes seem to follow the person. What could this be?
Doctor: <start> Hi,
Thanks for writing in.
The symptoms you have described suggest to be related to some sort of pulmonary infections. I would advise you to undergo a check up by a physician of the chest and abdomen. In addition, you should go for a chest X-ray and a chest and abdomen MRI.
```
```
Patient: Hello, I am a 20 year old female and have been getting stabbing pains on the lower right side of my abdomen everyday for about 4 weeks now. I know it cant be my appendix for it was removed in 2010. I have noticed that above the scar of my appendix removal the area seems risen, and beneath the scar the area is flat. Should i be worried and seek further help?
Doctor: <start> Hi dear,Thanks for writing in.The pain in the upper right side of the abdomen may be because of some sort of appendicitis. Some times it can be due to a condition called anemia. If there is anemia and the pain is not related to anemia, then chances of appendicitis are low. I would suggest you to consult an emic specialist and get a urine and blood test done. Anemia can be treated in a few ways. For example, erythrocyte sedimentation rate (ESR), blood protein level, and a few more tests like CT scan of the abdomen and kidney are important but the actual diagnosis is not always confirmed. I hope my answer will help you.Wish you all the best.
```
|
vlkn/falcon_instruct_5
|
vlkn
| 2023-07-16T10:39:15Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"region:us"
] | null | 2023-07-16T10:17:20Z |
---
tags:
- generated_from_trainer
model-index:
- name: falcon_instruct_5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon_instruct_5
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 75
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Kapiche/msmarco-MiniLM-L6-cos-v5
|
Kapiche
| 2023-07-16T10:31:33Z | 7 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:1908.10084",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-02-02T22:27:10Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# msmarco-MiniLM-L6-cos-v5
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and was designed for **semantic search**. It has been trained on 500k (query, answer) pairs from the [MS MARCO Passages dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking). For an introduction to semantic search, have a look at: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer, util
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
#Load the model
model = SentenceTransformer('sentence-transformers/msmarco-MiniLM-L6-cos-v5')
#Encode query and documents
query_emb = model.encode(query)
doc_emb = model.encode(docs)
#Compute dot score between query and all document embeddings
scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the correct pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take average of all tokens
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output.last_hidden_state #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
#Encode text
def encode(texts):
# Tokenize sentences
encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input, return_dict=True)
# Perform pooling
embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
return embeddings
# Sentences we want sentence embeddings for
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/msmarco-MiniLM-L6-cos-v5")
model = AutoModel.from_pretrained("sentence-transformers/msmarco-MiniLM-L6-cos-v5")
#Encode query and docs
query_emb = encode(query)
doc_emb = encode(docs)
#Compute dot score between query and all document embeddings
scores = torch.mm(query_emb, doc_emb.transpose(0, 1))[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## Technical Details
The following are some technical details on how this model should be used:
| Setting | Value |
| --- | :---: |
| Dimensions | 384 |
| Produces normalized embeddings | Yes |
| Pooling-Method | Mean pooling |
| Suitable score functions | dot-product (`util.dot_score`), cosine-similarity (`util.cos_sim`), or euclidean distance |
Note: When loaded with `sentence-transformers`, this model produces normalized embeddings with length 1. In that case, dot-product and cosine-similarity are equivalent. Dot-product is preferred as it is faster. Euclidean distance is monotonically related to dot-product for normalized embeddings and can also be used.
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
```
|
esculapeso/biogpt-finetuned-twspookyfromfile
|
esculapeso
| 2023-07-16T10:04:43Z | 129 | 0 |
transformers
|
[
"transformers",
"pytorch",
"biogpt",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-16T08:01:12Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: biogpt-finetuned-twspookyfromfile
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biogpt-finetuned-twspookyfromfile
This model is a fine-tuned version of [microsoft/biogpt](https://huggingface.co/microsoft/biogpt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9607
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 3 | 3.1962 |
| No log | 2.0 | 6 | 3.0132 |
| No log | 3.0 | 9 | 2.9607 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cpu
- Datasets 2.13.1
- Tokenizers 0.13.3
|
caqlayan/falcon-7b-prompt
|
caqlayan
| 2023-07-16T09:51:16Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T09:31:48Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
neurae/electra-dnd-intents
|
neurae
| 2023-07-16T09:39:47Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"text-classification",
"en",
"dataset:neurae/dnd_style_intents",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-16T12:44:45Z |
---
datasets:
- neurae/dnd_style_intents
language:
- en
pipeline_tag: text-classification
license: apache-2.0
metrics:
- accuracy
- f1
---
This is ELECTRA base fine-tuned with an optimized learning rate, learning-rate scheduler, and weight decay on the dnd-style-intents dataset.
| parameters | value |
|---------------|----------------------|
| learning rate | 6.6e-5 |
| lr scheduler | cosine with restarts |
| weight decay | 0 |
The model achieves the following metrics on the test split of the dataset:
| metric | value |
|----------|-------|
| accuracy | 0.978 |
| Macro F1 | 0.976 |
| Micro F1 | 0.978 |
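As a minimal usage sketch (the example utterance below is illustrative and not taken from the dataset):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="neurae/electra-dnd-intents")
print(classifier("I sneak past the guards and pick the lock on the vault door."))
```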
|
neurae/albert-dnd-intents
|
neurae
| 2023-07-16T09:38:16Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"en",
"dataset:neurae/dnd_style_intents",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-16T09:58:57Z |
---
datasets:
- neurae/dnd_style_intents
language:
- en
pipeline_tag: text-classification
license: apache-2.0
metrics:
- accuracy
- f1
---
This is ALBERT base fine-tuned with an optimized learning rate, learning-rate scheduler, and weight decay on the dnd-style-intents dataset.
| parameters | value |
|---------------|----------|
| learning rate | 5e-5 |
| lr scheduler | linear |
| weight decay | 0 |
The model achieves the following metrics on the test split of the dataset:
| metric | value |
|----------|-------|
| accuracy | 0.981 |
| Macro F1 | 0.979 |
| Micro F1 | 0.985 |
|
Xxmlala/q-Taxi-v3
|
Xxmlala
| 2023-07-16T09:33:44Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-16T09:33:42Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Xxmlala/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Xxmlala/q-FrozenLake-v1-4x4-noSlippery
|
Xxmlala
| 2023-07-16T09:29:27Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-16T09:29:19Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Xxmlala/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
manmyung/a2c-PandaReachDense-v2
|
manmyung
| 2023-07-16T08:43:12Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-16T08:40:02Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.88 +/- 0.21
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
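A minimal loading sketch; the filename is an assumption, and recreating the environment additionally requires `panda_gym`:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="manmyung/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",  # assumed name
)
model = A2C.load(checkpoint)
```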
|
openlm-research/open_llama_3b_v2_easylm
|
openlm-research
| 2023-07-16T08:32:50Z | 0 | 4 | null |
[
"dataset:tiiuae/falcon-refinedweb",
"dataset:bigcode/starcoderdata",
"dataset:togethercomputer/RedPajama-Data-1T",
"arxiv:2302.13971",
"license:apache-2.0",
"region:us"
] | null | 2023-07-16T00:40:05Z |
---
license: apache-2.0
datasets:
- tiiuae/falcon-refinedweb
- bigcode/starcoderdata
- togethercomputer/RedPajama-Data-1T
---
# OpenLLaMA: An Open Reproduction of LLaMA
**TL;DR**: we are releasing our public preview of OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA. We are releasing a series of 3B, 7B and 13B models trained on different data mixtures. Our model weights can serve as a drop-in replacement for LLaMA in existing implementations.
In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing a series of 3B, 7B and 13B models trained on 1T tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. The v2 model is better than the old v1 model trained on a different data mixture. Please see the [project homepage of OpenLLaMA](https://github.com/openlm-research/open_llama) for more details.
## Weights Release, License and Usage
We release the weights in two formats: an EasyLM format to be used with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license.
### Loading the Weights with Hugging Face Transformers
Preview checkpoints can be directly loaded from Hugging Face Hub. **Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we’ve observed that** [**the auto-converted fast tokenizer sometimes gives incorrect tokenizations**](https://github.com/huggingface/transformers/issues/24233)**.** This can be achieved by directly using the `LlamaTokenizer` class, or passing in the `use_fast=False` option for the `AutoTokenizer` class. See the following example for usage.
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
## v2 models
model_path = 'openlm-research/open_llama_3b_v2'
# model_path = 'openlm-research/open_llama_7b_v2'
## v1 models
# model_path = 'openlm-research/open_llama_3b'
# model_path = 'openlm-research/open_llama_7b'
# model_path = 'openlm-research/open_llama_13b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
prompt = 'Q: What is the largest animal?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=32
)
print(tokenizer.decode(generation_output[0]))
```
For more advanced usage, please follow the [transformers LLaMA documentation](https://huggingface.co/docs/transformers/main/model_doc/llama).
### Evaluating with LM-Eval-Harness
The model can be evaluated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. This can be achieved by passing in `use_fast=False` to [this part of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/4b701e228768052cfae9043dca13e82052ca5eea/lm_eval/models/huggingface.py#LL313C9-L316C10), as shown in the example below:
```python
tokenizer = self.AUTO_TOKENIZER_CLASS.from_pretrained(
pretrained if tokenizer is None else tokenizer,
revision=revision + ("/" + subfolder if subfolder is not None else ""),
use_fast=False
)
```
### Loading the Weights with EasyLM
For using the weights in our EasyLM framework, please refer to the [LLaMA documentation of EasyLM](https://github.com/young-geng/EasyLM/blob/main/docs/llama.md). Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch, so the original LLaMA tokenizer and weights are no longer needed.
## Dataset and Training
The v1 models are trained on the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). The v2 models are trained on a mixture of the [Falcon refined-web dataset](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), the [StarCoder dataset](https://huggingface.co/datasets/bigcode/starcoderdata) and the wikipedia, arxiv, book and stackexchange parts of the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). We follow exactly the same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs open datasets rather than the one utilized by the original LLaMA.
We train the models on cloud TPU-v4s using [EasyLM](https://github.com/young-geng/EasyLM), a JAX-based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and [fully sharded data parallelism](https://engineering.fb.com/2021/07/15/open-source/fsdp/) (also known as ZeRO stage 3) to balance the training throughput and memory usage. Overall we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model.
## Evaluation
We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443). Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/).
The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.
| **Task/Metric** | GPT-J 6B | LLaMA 7B | LLaMA 13B | OpenLLaMA 3Bv2 | OpenLLaMA 7Bv2 | OpenLLaMA 3B | OpenLLaMA 7B | OpenLLaMA 13B |
| ---------------------- | -------- | -------- | --------- | -------------- | -------------- | ------------ | ------------ | ------------- |
| anli_r1/acc | 0.32 | 0.35 | 0.35 | 0.33 | 0.34 | 0.33 | 0.33 | 0.33 |
| anli_r2/acc | 0.34 | 0.34 | 0.36 | 0.36 | 0.35 | 0.32 | 0.36 | 0.33 |
| anli_r3/acc | 0.35 | 0.37 | 0.39 | 0.38 | 0.39 | 0.35 | 0.38 | 0.40 |
| arc_challenge/acc | 0.34 | 0.39 | 0.44 | 0.34 | 0.39 | 0.34 | 0.37 | 0.41 |
| arc_challenge/acc_norm | 0.37 | 0.41 | 0.44 | 0.36 | 0.41 | 0.37 | 0.38 | 0.44 |
| arc_easy/acc | 0.67 | 0.68 | 0.75 | 0.68 | 0.73 | 0.69 | 0.72 | 0.75 |
| arc_easy/acc_norm | 0.62 | 0.52 | 0.59 | 0.63 | 0.70 | 0.65 | 0.68 | 0.70 |
| boolq/acc | 0.66 | 0.75 | 0.71 | 0.66 | 0.72 | 0.68 | 0.71 | 0.75 |
| hellaswag/acc | 0.50 | 0.56 | 0.59 | 0.52 | 0.56 | 0.49 | 0.53 | 0.56 |
| hellaswag/acc_norm | 0.66 | 0.73 | 0.76 | 0.70 | 0.75 | 0.67 | 0.72 | 0.76 |
| openbookqa/acc | 0.29 | 0.29 | 0.31 | 0.26 | 0.30 | 0.27 | 0.30 | 0.31 |
| openbookqa/acc_norm | 0.38 | 0.41 | 0.42 | 0.38 | 0.41 | 0.40 | 0.40 | 0.43 |
| piqa/acc | 0.75 | 0.78 | 0.79 | 0.77 | 0.79 | 0.75 | 0.76 | 0.77 |
| piqa/acc_norm | 0.76 | 0.78 | 0.79 | 0.78 | 0.80 | 0.76 | 0.77 | 0.79 |
| record/em | 0.88 | 0.91 | 0.92 | 0.87 | 0.89 | 0.88 | 0.89 | 0.91 |
| record/f1 | 0.89 | 0.91 | 0.92 | 0.88 | 0.89 | 0.89 | 0.90 | 0.91 |
| rte/acc | 0.54 | 0.56 | 0.69 | 0.55 | 0.57 | 0.58 | 0.60 | 0.64 |
| truthfulqa_mc/mc1 | 0.20 | 0.21 | 0.25 | 0.22 | 0.23 | 0.22 | 0.23 | 0.25 |
| truthfulqa_mc/mc2 | 0.36 | 0.34 | 0.40 | 0.35 | 0.35 | 0.35 | 0.35 | 0.38 |
| wic/acc | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.48 | 0.51 | 0.47 |
| winogrande/acc | 0.64 | 0.68 | 0.70 | 0.63 | 0.66 | 0.62 | 0.67 | 0.70 |
| Average | 0.52 | 0.55 | 0.57 | 0.53 | 0.56 | 0.53 | 0.55 | 0.57 |
We removed the tasks CB and WSC from our benchmark, as our model performs suspiciously well on these two tasks. We hypothesize that there could be benchmark data contamination in the training set.
## Contact
We would love to get feedback from the community. If you have any questions, please open an issue or contact us.
OpenLLaMA is developed by:
[Xinyang Geng](https://young-geng.xyz/)* and [Hao Liu](https://www.haoliu.site/)* from Berkeley AI Research.
*Equal Contribution
## Acknowledgment
We thank the [Google TPU Research Cloud](https://sites.research.google/trc/about/) program for providing part of the computation resources. We’d like to especially thank Jonathan Caton from TPU Research Cloud for helping us organize compute resources, and Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimize our training throughput. We’d also like to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback.
The OpenLLaMA 13B v1 model is trained in collaboration with [Stability AI](https://stability.ai/), and we thank Stability AI for providing the computation resources. We’d like to especially thank David Ha and Shivanshu Purohit for coordinating the logistics and providing engineering support.
## Reference
If you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX:
```
@software{openlm2023openllama,
author = {Geng, Xinyang and Liu, Hao},
title = {OpenLLaMA: An Open Reproduction of LLaMA},
month = May,
year = 2023,
url = {https://github.com/openlm-research/open_llama}
}
```
```
@software{together2023redpajama,
author = {Together Computer},
title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset},
month = April,
year = 2023,
url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
```
@article{touvron2023llama,
title={Llama: Open and efficient foundation language models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
|
openlm-research/open_llama_3b_v2
|
openlm-research
| 2023-07-16T08:32:00Z | 25,360 | 149 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:tiiuae/falcon-refinedweb",
"dataset:bigcode/starcoderdata",
"dataset:togethercomputer/RedPajama-Data-1T",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-16T00:39:43Z |
---
license: apache-2.0
datasets:
- tiiuae/falcon-refinedweb
- bigcode/starcoderdata
- togethercomputer/RedPajama-Data-1T
---
# OpenLLaMA: An Open Reproduction of LLaMA
**TL;DR**: we are releasing our public preview of OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA. We are releasing a series of 3B, 7B and 13B models trained on different data mixtures. Our model weights can serve as a drop-in replacement for LLaMA in existing implementations.
In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing a series of 3B, 7B and 13B models trained on 1T tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. The v2 models are trained on a different data mixture and perform better than the older v1 models. Please see the [project homepage of OpenLLaMA](https://github.com/openlm-research/open_llama) for more details.
## Weights Release, License and Usage
We release the weights in two formats: an EasyLM format to be used with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license.
### Loading the Weights with Hugging Face Transformers
Preview checkpoints can be directly loaded from Hugging Face Hub. **Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we’ve observed that** [**the auto-converted fast tokenizer sometimes gives incorrect tokenizations**](https://github.com/huggingface/transformers/issues/24233)**.** This can be achieved by directly using the `LlamaTokenizer` class, or passing in the `use_fast=False` option for the `AutoTokenizer` class. See the following example for usage.
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
## v2 models
model_path = 'openlm-research/open_llama_3b_v2'
# model_path = 'openlm-research/open_llama_7b_v2'
## v1 models
# model_path = 'openlm-research/open_llama_3b'
# model_path = 'openlm-research/open_llama_7b'
# model_path = 'openlm-research/open_llama_13b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
prompt = 'Q: What is the largest animal?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=32
)
print(tokenizer.decode(generation_output[0]))
```
For more advanced usage, please follow the [transformers LLaMA documentation](https://huggingface.co/docs/transformers/main/model_doc/llama).
### Evaluating with LM-Eval-Harness
The model can be evaluated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. This can be achieved by passing in `use_fast=False` to [this part of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/4b701e228768052cfae9043dca13e82052ca5eea/lm_eval/models/huggingface.py#LL313C9-L316C10), as shown in the example below:
```python
tokenizer = self.AUTO_TOKENIZER_CLASS.from_pretrained(
pretrained if tokenizer is None else tokenizer,
revision=revision + ("/" + subfolder if subfolder is not None else ""),
use_fast=False
)
```
### Loading the Weights with EasyLM
To use the weights in our EasyLM framework, please refer to the [LLaMA documentation of EasyLM](https://github.com/young-geng/EasyLM/blob/main/docs/llama.md). Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch, so it is no longer necessary to obtain the original LLaMA tokenizer and weights.
## Dataset and Training
The v1 models are trained on the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). The v2 models are trained on a mixture of the [Falcon refined-web dataset](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), the [StarCoder dataset](https://huggingface.co/datasets/bigcode/starcoderdata) and the wikipedia, arxiv, book and stackexchange parts of the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). We follow exactly the same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs open datasets rather than the one utilized by the original LLaMA.
We train the models on cloud TPU-v4s using [EasyLM](https://github.com/young-geng/EasyLM), a JAX-based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and [fully sharded data parallelism](https://engineering.fb.com/2021/07/15/open-source/fsdp/) (also known as ZeRO stage 3) to balance training throughput and memory usage. Overall we reach a throughput of over 2200 tokens/second/TPU-v4 chip for our 7B model.
## Evaluation
We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443). Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/).
The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.
| **Task/Metric** | GPT-J 6B | LLaMA 7B | LLaMA 13B | OpenLLaMA 3Bv2 | OpenLLaMA 7Bv2 | OpenLLaMA 3B | OpenLLaMA 7B | OpenLLaMA 13B |
| ---------------------- | -------- | -------- | --------- | -------------- | -------------- | ------------ | ------------ | ------------- |
| anli_r1/acc | 0.32 | 0.35 | 0.35 | 0.33 | 0.34 | 0.33 | 0.33 | 0.33 |
| anli_r2/acc | 0.34 | 0.34 | 0.36 | 0.36 | 0.35 | 0.32 | 0.36 | 0.33 |
| anli_r3/acc | 0.35 | 0.37 | 0.39 | 0.38 | 0.39 | 0.35 | 0.38 | 0.40 |
| arc_challenge/acc | 0.34 | 0.39 | 0.44 | 0.34 | 0.39 | 0.34 | 0.37 | 0.41 |
| arc_challenge/acc_norm | 0.37 | 0.41 | 0.44 | 0.36 | 0.41 | 0.37 | 0.38 | 0.44 |
| arc_easy/acc | 0.67 | 0.68 | 0.75 | 0.68 | 0.73 | 0.69 | 0.72 | 0.75 |
| arc_easy/acc_norm | 0.62 | 0.52 | 0.59 | 0.63 | 0.70 | 0.65 | 0.68 | 0.70 |
| boolq/acc | 0.66 | 0.75 | 0.71 | 0.66 | 0.72 | 0.68 | 0.71 | 0.75 |
| hellaswag/acc | 0.50 | 0.56 | 0.59 | 0.52 | 0.56 | 0.49 | 0.53 | 0.56 |
| hellaswag/acc_norm | 0.66 | 0.73 | 0.76 | 0.70 | 0.75 | 0.67 | 0.72 | 0.76 |
| openbookqa/acc | 0.29 | 0.29 | 0.31 | 0.26 | 0.30 | 0.27 | 0.30 | 0.31 |
| openbookqa/acc_norm | 0.38 | 0.41 | 0.42 | 0.38 | 0.41 | 0.40 | 0.40 | 0.43 |
| piqa/acc | 0.75 | 0.78 | 0.79 | 0.77 | 0.79 | 0.75 | 0.76 | 0.77 |
| piqa/acc_norm | 0.76 | 0.78 | 0.79 | 0.78 | 0.80 | 0.76 | 0.77 | 0.79 |
| record/em | 0.88 | 0.91 | 0.92 | 0.87 | 0.89 | 0.88 | 0.89 | 0.91 |
| record/f1 | 0.89 | 0.91 | 0.92 | 0.88 | 0.89 | 0.89 | 0.90 | 0.91 |
| rte/acc | 0.54 | 0.56 | 0.69 | 0.55 | 0.57 | 0.58 | 0.60 | 0.64 |
| truthfulqa_mc/mc1 | 0.20 | 0.21 | 0.25 | 0.22 | 0.23 | 0.22 | 0.23 | 0.25 |
| truthfulqa_mc/mc2 | 0.36 | 0.34 | 0.40 | 0.35 | 0.35 | 0.35 | 0.35 | 0.38 |
| wic/acc | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.48 | 0.51 | 0.47 |
| winogrande/acc | 0.64 | 0.68 | 0.70 | 0.63 | 0.66 | 0.62 | 0.67 | 0.70 |
| Average | 0.52 | 0.55 | 0.57 | 0.53 | 0.56 | 0.53 | 0.55 | 0.57 |
We removed the tasks CB and WSC from our benchmark, as our model performs suspiciously well on these two tasks. We hypothesize that there could be benchmark data contamination in the training set.
## Contact
We would love to get feedback from the community. If you have any questions, please open an issue or contact us.
OpenLLaMA is developed by:
[Xinyang Geng](https://young-geng.xyz/)* and [Hao Liu](https://www.haoliu.site/)* from Berkeley AI Research.
*Equal Contribution
## Acknowledgment
We thank the [Google TPU Research Cloud](https://sites.research.google/trc/about/) program for providing part of the computation resources. We’d like to especially thank Jonathan Caton from TPU Research Cloud for helping us organize compute resources, and Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimize our training throughput. We’d also like to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback.
The OpenLLaMA 13B v1 model is trained in collaboration with [Stability AI](https://stability.ai/), and we thank Stability AI for providing the computation resources. We’d like to especially thank David Ha and Shivanshu Purohit for coordinating the logistics and providing engineering support.
## Reference
If you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX:
```
@software{openlm2023openllama,
author = {Geng, Xinyang and Liu, Hao},
title = {OpenLLaMA: An Open Reproduction of LLaMA},
month = May,
year = 2023,
url = {https://github.com/openlm-research/open_llama}
}
```
```
@software{together2023redpajama,
author = {Together Computer},
title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset},
month = April,
year = 2023,
url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
```
@article{touvron2023llama,
title={Llama: Open and efficient foundation language models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
|
peterwilli/photon
|
peterwilli
| 2023-07-16T08:26:36Z | 48 | 0 |
diffusers
|
[
"diffusers",
"art",
"en",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-16T08:24:07Z |
---
license: openrail
language:
- en
tags:
- art
---
Realistic SD1.5 model, ported from SafeTensors to Diffusers. Original is here: https://civitai.com/models/84728/photon
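A minimal Diffusers loading sketch for this port (the prompt and generation settings below are illustrative assumptions, not recommendations from the original author):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Diffusers port of Photon (SD 1.5 architecture)
pipe = StableDiffusionPipeline.from_pretrained("peterwilli/photon", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("photo of a lighthouse at sunset, 35mm, detailed", num_inference_steps=30).images[0]
image.save("photon_sample.png")
```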
|
digiplay/polla_mix_2.4D
|
digiplay
| 2023-07-16T08:23:45Z | 334 | 4 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-16T06:56:58Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info :
https://civitai.com/models/110130?modelVersionId=118734
Sample image I made through Hugging Face's API:

prompt :
> pink spider with pink heart symbol
***Original Author's DEMO images :***
,%20blonde_hair,%20commentary_request,%20fate_prototype,%20fate_(series),%20green_eyes,%20hood,%20male_foc.jpeg)


|
Ricky1981/Lgmx
|
Ricky1981
| 2023-07-16T08:19:20Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-16T07:33:57Z |
---
license: creativeml-openrail-m
---
|
gpcarl123/resnet18_mnist
|
gpcarl123
| 2023-07-16T08:16:35Z | 0 | 0 |
timm
|
[
"timm",
"en",
"dataset:mnist",
"model-index",
"region:us"
] | null | 2023-07-16T07:48:41Z |
---
language:
- en
library_name: timm
datasets:
- mnist
metrics:
- accuracy
model-index:
- name: resnet18_mnist
results:
- task:
type: image-classification
dataset:
name: MNIST
type: mnist
metrics:
- type: accuracy
value: 0.9936
---
# Usage
```python
import timm
import torch
import torchvision
from torch.utils import data
from torchvision import datasets, transforms

MNIST_PATH = './datasets/mnist'

# ResNet-18 with a single-channel first conv layer for 28x28 grayscale MNIST inputs
net = timm.create_model("resnet18", pretrained=False, num_classes=10)
net.conv1 = torch.nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
net.load_state_dict(
    torch.hub.load_state_dict_from_url(
        "https://huggingface.co/gpcarl123/resnet18_mnist/resolve/main/resnet18_mnist.pth",
        map_location="cpu",
        file_name="resnet18_mnist.pth",
    )
)
net.eval()

# Standard MNIST normalization, applied after converting images to tensors
preprocessor = torchvision.transforms.Normalize((0.1307,), (0.3081,))
transform = transforms.Compose([transforms.ToTensor()])
test_set = datasets.MNIST(root=MNIST_PATH, train=False, download=True, transform=transform)
test_loader = data.DataLoader(test_set, batch_size=5, shuffle=False, num_workers=2)

for images, target in test_loader:
    print(net(preprocessor(images)))
    print(target)
    break
```
|
imgeaslikok/flan-t5-definition-en-large-taboo-for-llms-deft
|
imgeaslikok
| 2023-07-16T08:04:58Z | 160 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-15T11:31:14Z |
---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan-t5-definition-en-large-taboo-for-llms-deft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-definition-en-large-taboo-for-llms-deft
This model is a fine-tuned version of [ltg/flan-t5-definition-en-large](https://huggingface.co/ltg/flan-t5-definition-en-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0332
- Rouge1: 33.5241
- Rouge2: 16.8064
- Rougel: 30.2969
- Rougelsum: 30.2909
- Gen Len: 16.5819
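Below is a minimal inference sketch with the 🤗 `text2text-generation` pipeline; the prompt template is an assumption borrowed from the usual definition-modeling setup of the base model, so check the base model card for the exact format:
```python
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="imgeaslikok/flan-t5-definition-en-large-taboo-for-llms-deft",
)

# Assumed prompt style: an example sentence followed by a definition question.
prompt = "The cat perched on the narrow windowsill. What is the definition of perched?"
print(generator(prompt, max_new_tokens=32)[0]["generated_text"])
```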
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.6185 | 0.62 | 100 | 2.1816 | 33.3077 | 15.1203 | 28.9167 | 28.8557 | 17.7666 |
| 2.3457 | 1.24 | 200 | 2.0990 | 33.2477 | 16.1885 | 29.5227 | 29.4474 | 16.7143 |
| 2.1751 | 1.85 | 300 | 2.0604 | 33.5161 | 16.4732 | 30.0261 | 30.0036 | 16.3031 |
| 2.0749 | 2.47 | 400 | 2.0392 | 33.1594 | 16.8128 | 30.0222 | 30.0057 | 16.5401 |
| 2.035 | 3.09 | 500 | 2.0332 | 33.5241 | 16.8064 | 30.2969 | 30.2909 | 16.5819 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
digiplay/polla_mix_2.5D
|
digiplay
| 2023-07-16T07:56:07Z | 50 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-16T06:57:17Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info :
https://civitai.com/models/110130?modelVersionId=118741
Sample image I made through Hugging Face's API:

Original Author's DEMO images :


|
shihab17/bengali-bn-to-en
|
shihab17
| 2023-07-16T07:51:36Z | 25 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"bn",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-05-23T10:43:30Z |
---
library_name: transformers
pipeline_tag: translation
language:
- bn
---
### How to use
You can use this model directly with a pipeline:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
tokenizer = AutoTokenizer.from_pretrained("shihab17/bengali-bn-to-en")
model = AutoModelForSeq2SeqLM.from_pretrained("shihab17/bengali-bn-to-en")
sentence = 'ম্যাচ শেষে পুরস্কার বিতরণের মঞ্চে তামিমের মুখে মোস্তাফিজের প্রশংসা শোনা গেল'
translator = pipeline("translation_bn_to_en", model=model, tokenizer=tokenizer)
output = translator(sentence)
print(output)
```
|
digiplay/zodiac_eclipse_DAY1
|
digiplay
| 2023-07-16T07:40:46Z | 285 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-14T08:32:01Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info :
https://civitai.com/models/108417/zodiac-eclipse-day1
Sample image I made through Hugging Face's API:
```
dog eat mango icecream
```

Original Author's DEMO images :
),%20((masterpiece)),%20(detailed),%20alluring%20succubus,%20ethereal%20beauty,%20perched%20on%20a%20cloud,%20(fantasy%20illustration_1.3.jpeg)
)),.jpeg)
|
DanGalt/openai-finetuned-minds14
|
DanGalt
| 2023-07-16T07:39:51Z | 85 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-16T07:39:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: openai-finetuned-minds14
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.31463990554899646
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai-finetuned-minds14
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6316
- Wer Ortho: 0.3122
- Wer: 0.3146
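Below is a minimal transcription sketch with the ASR pipeline; the audio path is a placeholder (any short mono recording works, it is resampled automatically):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="DanGalt/openai-finetuned-minds14")

# Placeholder path; assumed to be a short English banking query like those in MINDS-14.
print(asr("sample_banking_call.wav")["text"])
```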
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0003 | 17.86 | 500 | 0.6316 | 0.3122 | 0.3146 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
HemrajS/LORA
|
HemrajS
| 2023-07-16T07:34:19Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T07:34:18Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
tuanhnh/Reinforce-0
|
tuanhnh
| 2023-07-16T07:30:47Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-16T07:30:40Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 453.60 +/- 109.04
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
li-ping/falcon_0csv_to_sheng
|
li-ping
| 2023-07-16T07:12:27Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T06:56:45Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
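A minimal loading sketch that mirrors the quantization config above when attaching this adapter (the base checkpoint is read from the adapter's own config; `trust_remote_code=True` is an assumption for Falcon-style bases):
```python
import torch
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

adapter_id = "li-ping/falcon_0csv_to_sheng"
peft_config = PeftConfig.from_pretrained(adapter_id)

# Re-create the 4-bit settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float32,
)

base = AutoModelForCausalLM.from_pretrained(
    peft_config.base_model_name_or_path,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,  # assumption: Falcon-style checkpoints ship custom code
)
tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)
```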
### Framework versions
- PEFT 0.4.0.dev0
|
digiplay/hellofantasytime_v1.22
|
digiplay
| 2023-07-16T07:00:33Z | 391 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-13T09:19:29Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/108289?modelVersionId=116540
Sample image I made through Hugging Face's API:

Original Author's DEMO images :
),%20((masterpiece)),%20(detailed_1.4),%203D,%20an%20image%20of%20a%20beautiful%20cyberpunk%20female%20with%20all%20black%20armour,HDR%20(High.jpeg)


,%20green%20dinosaur,%20(two%20hands_1.2),(two%20legs_1.4),(one%20tail_1.2),standing,solo,%20sharp%20teeth,.jpeg)
,%20black%20girl,%20curly%20hair,%20barista.jpeg)
,Fairy%20skin,(Fidelity%20_1.2),Standing,Super%20Detailed,realistic,High%20quality,Mov.jpeg)
|
laserchalk/kangaroo-training-part-10
|
laserchalk
| 2023-07-16T06:53:40Z | 6 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-16T06:39:24Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### kangaroo-training-part-10 Dreambooth model trained by laserchalk with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
nolanaatama/crsfnhllvnrvcv250pchszmbllth
|
nolanaatama
| 2023-07-16T06:48:04Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-16T06:41:26Z |
---
license: creativeml-openrail-m
---
|
bochen0909/ppo-LunarLander-v2
|
bochen0909
| 2023-07-16T06:41:23Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-16T06:41:08Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.97 +/- 20.22
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
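A minimal loading/evaluation sketch (the checkpoint filename `ppo-LunarLander-v2.zip` is an assumption; adjust it to the actual file in this repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (filename is assumed)
checkpoint = load_from_hub("bochen0909/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate on a fresh environment
env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```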
|
NasimB/bnc-rarity-no-cut-shuffled
|
NasimB
| 2023-07-16T06:24:06Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-16T04:27:02Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: bnc-rarity-no-cut-shuffled
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bnc-rarity-no-cut-shuffled
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3207
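A minimal generation sketch with the 🤗 pipeline, assuming the repository ships a tokenizer alongside the GPT-2 weights; the prompt is illustrative:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/bnc-rarity-no-cut-shuffled")
print(generator("Once upon a time", max_new_tokens=30, do_sample=True)[0]["generated_text"])
```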
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7157 | 0.29 | 500 | 5.6437 |
| 5.3513 | 0.58 | 1000 | 5.2021 |
| 5.0016 | 0.88 | 1500 | 4.9595 |
| 4.7286 | 1.17 | 2000 | 4.8122 |
| 4.5693 | 1.46 | 2500 | 4.6857 |
| 4.4647 | 1.75 | 3000 | 4.5770 |
| 4.3308 | 2.05 | 3500 | 4.5068 |
| 4.1402 | 2.34 | 4000 | 4.4574 |
| 4.1123 | 2.63 | 4500 | 4.3983 |
| 4.0711 | 2.92 | 5000 | 4.3468 |
| 3.8657 | 3.22 | 5500 | 4.3414 |
| 3.8086 | 3.51 | 6000 | 4.3099 |
| 3.7977 | 3.8 | 6500 | 4.2728 |
| 3.6947 | 4.09 | 7000 | 4.2729 |
| 3.5188 | 4.39 | 7500 | 4.2684 |
| 3.5211 | 4.68 | 8000 | 4.2523 |
| 3.5159 | 4.97 | 8500 | 4.2387 |
| 3.3414 | 5.26 | 9000 | 4.2532 |
| 3.3357 | 5.56 | 9500 | 4.2520 |
| 3.328 | 5.85 | 10000 | 4.2517 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Sucial/so-vits-svc4.1-sanwu
|
Sucial
| 2023-07-16T05:59:47Z | 4 | 3 |
transformers
|
[
"transformers",
"so-vits-svc",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2023-07-16T05:57:25Z |
---
license: cc-by-sa-4.0
tags:
- so-vits-svc
---
# so-vits-svc4.1-sanwu
## Official project: https://github.com/svc-develop-team/so-vits-svc
## How to use?
1. install requirements
2. download pretrain model [checkpoint_best_legacy_500.pt](https://ibm.box.com/s/z1wgl1stco8ffooyatzdwsqn2psd9lrr) and put it into `./pretrain`
3. put `sanwu_100800.pth`, `feature_and_index.pkl`, `kmeans_10000.pt` into `./logs/44k`
4. put `config.json` into `./config`
5. enjoy!
## The following is quoted from the official documentation
## Inference
Use [inference_main.py](inference_main.py)
```shell
# Example
python inference_main.py -m "logs/44k/G_30400.pth" -c "configs/config.json" -n "君の知らない物語-src.wav" -t 0 -s "nen"
```
Required arguments:
+ `-m` | `--model_path`: path to the model
+ `-c` | `--config_path`: path to the config file
+ `-n` | `--clean_names`: list of wav file names, placed under the raw folder
+ `-t` | `--trans`: pitch shift, positive or negative (in semitones)
+ `-s` | `--spk_list`: name of the target speaker to synthesize
+ `-cl` | `--clip`: forced audio slicing; the default 0 means automatic slicing; unit: seconds
Optional arguments (some are detailed in the next section):
+ `-lg` | `--linear_gradient`: cross-fade length between two audio slices; adjust this value if the vocals become discontinuous after forced slicing, otherwise keep the default 0; unit: seconds
+ `-f0p` | `--f0_predictor`: F0 predictor, one of crepe, pm, dio, harvest; default is pm (note: crepe applies a mean filter to the raw F0)
+ `-a` | `--auto_predict_f0`: automatically predict pitch for speech conversion; do not enable this when converting singing, or it will go badly off-key
+ `-cm` | `--cluster_model_path`: path to the clustering model or feature-retrieval index; if neither was trained, any value will do
+ `-cr` | `--cluster_infer_ratio`: proportion of the clustering or feature-retrieval scheme, range 0-1; leave at the default 0 if no clustering model or feature retrieval was trained
+ `-eh` | `--enhance`: whether to use the NSF_HIFIGAN enhancer; it can improve audio quality for some models trained on little data, but has a negative effect on well-trained models; off by default
+ `-shd` | `--shallow_diffusion`: whether to use shallow diffusion, which can fix some electronic artifacts; off by default; when enabled, the NSF_HIFIGAN enhancer is disabled
+ `-usm` | `--use_spk_mix`: whether to use speaker fusion / dynamic voice mixing
+ `-lea` | `--loudness_envelope_adjustment`: mixing ratio for replacing the output loudness envelope with the input-source loudness envelope; the closer to 1, the more the output loudness envelope is used
+ `-fr` | `--feature_retrieval`: whether to use feature retrieval; if enabled, the clustering model is disabled, and the cm and cr arguments become the feature-retrieval index path and mixing ratio
Shallow diffusion settings:
+ `-dm` | `--diffusion_model_path`: path to the diffusion model
+ `-dc` | `--diffusion_config_path`: path to the diffusion model config file
+ `-ks` | `--k_step`: number of diffusion steps; larger values are closer to the diffusion model's own result; default 100
+ `-od` | `--only_diffusion`: pure diffusion mode; the sovits model is not loaded and inference uses only the diffusion model
+ `-se` | `--second_encoding`: second encoding; the raw audio is re-encoded before shallow diffusion; a hit-or-miss option that sometimes helps and sometimes hurts
### Note
If you use the `whisper-ppg` voice encoder for inference, set `--clip` to 25 and `-lg` to 1; otherwise inference will not work correctly.
## 🤔 Optional features
If you are already satisfied with the results so far, or have no idea what the following is about, you can ignore everything below; it does not affect model usage (these options have a fairly small impact; they may help a bit on some specific data, but in most cases the difference is barely noticeable).
### Automatic F0 prediction
Training a 4.0 model also trains an F0 predictor. For speech conversion you can enable automatic pitch prediction; if the result is poor you can still set the pitch manually. But do not enable this when converting singing!!! It will go badly off-key!!
+ Simply set auto_predict_f0 to true in inference_main
### Cluster-based timbre-leakage control
Overview: the clustering scheme reduces timbre leakage, making the model sound more like the target voice (though the effect is not especially obvious), but clustering alone degrades articulation (speech becomes slurred, which is very noticeable). This model therefore uses a fusion approach that linearly controls the ratio between the clustering and non-clustering schemes, i.e. you can manually trade off "sounds like the target voice" against "clear articulation" to find a suitable compromise.
None of the earlier steps need to change to use clustering; you only need to train an additional clustering model. The benefit is fairly limited, but so is the training cost.
+ Training:
+ Train on a machine with a reasonably good CPU; in my experience, training each speaker takes about 4 minutes on a 6-core Tencent Cloud CPU
+ Run `python cluster/train_cluster.py`; the model output will be at `logs/44k/kmeans_10000.pt`
+ The clustering model can now also be trained on GPU: run `python cluster/train_cluster.py --gpu`
+ Inference:
+ Set `cluster_model_path` in `inference_main.py`
+ Set `cluster_infer_ratio` in `inference_main.py`: `0` means no clustering at all, `1` means clustering only; `0.5` is usually fine
### Feature retrieval
Overview: like clustering, it reduces timbre leakage; articulation is slightly better than with clustering, but it slows down inference. It uses the same fusion approach, so the ratio between feature retrieval and non-feature-retrieval can be controlled linearly.
+ Training:
After generating the hubert features and F0, first run:
```shell
python train_index.py -c configs/config.json
```
The model output will be at `logs/44k/feature_and_index.pkl`
+ Inference:
+ First specify `--feature_retrieval`; the clustering scheme then automatically switches to feature retrieval
+ Set `cluster_model_path` in `inference_main.py` to the model output file
+ Set `cluster_infer_ratio` in `inference_main.py`: `0` means no feature retrieval at all, `1` means feature retrieval only; `0.5` is usually fine
### Static voice mixing
**See the static voice fusion feature under Tools / Experimental features in the `webUI.py` file.**
Overview: this feature can merge multiple voice models into a single voice model (a convex or linear combination of the models' parameters), producing a voice that does not exist in reality.
**Note:**
1. This feature only supports single-speaker models
2. If you insist on using multi-speaker models, make sure they all have the same number of speakers, so that voices under the same SpeakerID can be mixed
3. Make sure the model field in config.json is the same for all models to be mixed
4. The output mixed model can use the config.json of any of the source models, but the clustering model can no longer be used
5. When batch-uploading models, it is best to put them in one folder and select and upload them together
6. The recommended range for the mixing ratios is 0-100; other values are possible, but linear-combination mode may then produce unpredictable results
7. After mixing, the file is saved in the project root directory under the name output.pth
8. Convex-combination mode applies a Softmax so that the mixing ratios sum to 1, while linear-combination mode does not
### Dynamic voice mixing
**See the description of dynamic voice mixing in the `spkmix.py` file**
Rules for writing a speaker-mixing track:
Speaker ID : \[\[start time 1, end time 1, start value 1, end value 1], [start time 2, end time 2, start value 2, end value 2]]
The start time must be the same as the previous segment's end time; the first start time must be 0 and the last end time must be 1 (times range from 0 to 1)
All speakers must be filled in; for unused speakers just use \[\[0., 1., 0., 0.]]
The fusion values can be set freely; within the specified time segment they change linearly from the start value to the end value. Internally the linear combination is automatically normalized to 1 (the convex-combination condition), so it is safe to use.
Use the `--use_spk_mix` flag at inference time to enable dynamic voice mixing
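A small illustrative track following these rules (the `spk_mix_map` variable name and the two speaker IDs are assumptions; adapt them to the actual `spkmix.py` in your checkout):
```python
# Cross-fade from speaker 0 to speaker 1 over the whole clip.
# Each segment is [start_time, end_time, start_value, end_value], with times in 0-1.
spk_mix_map = {
    0: [[0.0, 0.5, 1.0, 0.5], [0.5, 1.0, 0.5, 0.0]],  # fades out
    1: [[0.0, 0.5, 0.0, 0.5], [0.5, 1.0, 0.5, 1.0]],  # fades in
}
```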
## 📚 Some legal references
#### Any country, region, organization, or individual using this project must comply with the following laws
#### Civil Code of the People's Republic of China
##### Article 1019
No organization or individual may infringe upon another person's portrait rights by defaming or defacing their image, or by forging it through information technology or other means. Unless otherwise provided by law, no one may produce, use, or publish the image of a portrait-right holder without that person's consent. Without the consent of the portrait-right holder, the holder of rights in a portrait work may not use or publish the portrait by means of publication, reproduction, distribution, rental, exhibition, or the like. The protection of a natural person's voice is governed by reference to the relevant provisions on the protection of portrait rights.
##### Article 1024
[Right to reputation] Parties to civil legal relations enjoy the right to reputation. No organization or individual may infringe upon another person's right to reputation by insult, defamation, or other such means.
##### Article 1027
[Works infringing the right to reputation] Where a literary or artistic work published by a person describes real people and real events, or a specific person, and contains insulting or defamatory content that infringes upon another person's right to reputation, the injured party has the right to request, in accordance with law, that the person bear civil liability. Where the literary or artistic work does not describe a specific person and merely contains plot elements similar to that person's circumstances, the person bears no civil liability.
#### [Constitution of the People's Republic of China](http://www.gov.cn/guoqing/2018-03/22/content_5276318.htm)
#### [Criminal Law of the People's Republic of China](http://gongbao.court.gov.cn/Details/f8e30d0689b23f57bfc782d21035c3.html?sw=中华人民共和国刑法)
#### [Civil Code of the People's Republic of China](http://gongbao.court.gov.cn/Details/51eb6750b8361f79be8f90d09bc202.html)
|
ardhies/vira
|
ardhies
| 2023-07-16T05:48:55Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-23T00:46:30Z |
---
license: creativeml-openrail-m
---
|
Vasanth/distilbert-stock-tweet-sentiment-analysis
|
Vasanth
| 2023-07-16T05:26:06Z | 185 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-16T05:15:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-stock-tweet-sentiment-analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-stock-tweet-sentiment-analysis
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6075
- Accuracy: 0.782
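Below is a minimal inference sketch with the 🤗 pipeline; the label names returned depend on the training data's label mapping, which is not documented here:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Vasanth/distilbert-stock-tweet-sentiment-analysis",
)
print(classifier("$TSLA beats delivery estimates, shares jump in pre-market trading"))
```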
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.686 | 1.0 | 1000 | 0.5916 | 0.7745 |
| 0.4804 | 2.0 | 2000 | 0.5635 | 0.7812 |
| 0.3644 | 3.0 | 3000 | 0.6075 | 0.782 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ltmai/morgan-embed-bio-clinical-bert-ddi
|
ltmai
| 2023-07-16T05:24:59Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-07-15T18:38:02Z |
---
tags:
- generated_from_trainer
model-index:
- name: morgan-embed-bio-clinical-bert-ddi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# morgan-embed-bio-clinical-bert-ddi
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000628
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
diogopaes10/007-microsoft-deberta-v3-base-finetuned-yahoo-80_20k
|
diogopaes10
| 2023-07-16T05:23:43Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-16T04:56:59Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
- precision
- recall
model-index:
- name: 007-microsoft-deberta-v3-base-finetuned-yahoo-80_20k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 007-microsoft-deberta-v3-base-finetuned-yahoo-80_20k
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8060
- F1: 0.7514
- Accuracy: 0.7552
- Precision: 0.7512
- Recall: 0.7552
- System Ram Used: 4.1778
- System Ram Total: 83.4807
- Gpu Ram Allocated: 2.0903
- Gpu Ram Cached: 34.3125
- Gpu Ram Total: 39.5640
- Gpu Utilization: 44
- Disk Space Used: 36.0258
- Disk Space Total: 78.1898
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy | Precision | Recall | System Ram Used | System Ram Total | Gpu Ram Allocated | Gpu Ram Cached | Gpu Ram Total | Gpu Utilization | Disk Space Used | Disk Space Total |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|:---------:|:------:|:---------------:|:----------------:|:-----------------:|:--------------:|:-------------:|:---------------:|:---------------:|:----------------:|
| 1.3512 | 0.15 | 375 | 0.9418 | 0.7160 | 0.7189 | 0.7210 | 0.7189 | 3.9586 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 42 | 24.9904 | 78.1898 |
| 0.9581 | 0.3 | 750 | 0.8981 | 0.7232 | 0.7298 | 0.7301 | 0.7298 | 3.9108 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 46 | 24.9906 | 78.1898 |
| 0.9184 | 0.45 | 1125 | 0.8941 | 0.7248 | 0.7316 | 0.7301 | 0.7316 | 3.8717 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 46 | 24.9910 | 78.1898 |
| 0.8716 | 0.6 | 1500 | 0.8481 | 0.7368 | 0.7391 | 0.7414 | 0.7391 | 3.9030 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 46 | 24.9913 | 78.1898 |
| 0.8564 | 0.75 | 1875 | 0.8394 | 0.7379 | 0.7440 | 0.7423 | 0.7440 | 3.8964 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 44 | 24.9915 | 78.1898 |
| 0.8359 | 0.9 | 2250 | 0.8371 | 0.7347 | 0.7403 | 0.7417 | 0.7403 | 3.8917 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 48 | 24.9917 | 78.1898 |
| 0.7896 | 1.05 | 2625 | 0.8277 | 0.7369 | 0.7435 | 0.7461 | 0.7435 | 4.1488 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 44 | 29.8274 | 78.1898 |
| 0.7368 | 1.2 | 3000 | 0.8204 | 0.7426 | 0.7473 | 0.7468 | 0.7473 | 4.1447 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 45 | 29.8276 | 78.1898 |
| 0.72 | 1.35 | 3375 | 0.8199 | 0.7455 | 0.7486 | 0.7467 | 0.7486 | 3.9562 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 43 | 29.8279 | 78.1898 |
| 0.7333 | 1.5 | 3750 | 0.7991 | 0.7488 | 0.7524 | 0.7496 | 0.7524 | 3.9475 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 45 | 29.8282 | 78.1898 |
| 0.7116 | 1.65 | 4125 | 0.8149 | 0.7470 | 0.7499 | 0.7497 | 0.7499 | 3.9456 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 43 | 29.8285 | 78.1898 |
| 0.7177 | 1.8 | 4500 | 0.7880 | 0.7523 | 0.7558 | 0.7529 | 0.7558 | 3.9296 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 44 | 29.8287 | 78.1898 |
| 0.7151 | 1.95 | 4875 | 0.7949 | 0.7509 | 0.7540 | 0.7507 | 0.7540 | 3.9427 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 41 | 29.8294 | 78.1898 |
| 0.657 | 2.1 | 5250 | 0.8097 | 0.7500 | 0.7537 | 0.7506 | 0.7537 | 4.1520 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 43 | 33.9634 | 78.1898 |
| 0.6218 | 2.25 | 5625 | 0.8049 | 0.7485 | 0.7528 | 0.7484 | 0.7528 | 4.1390 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 44 | 33.9635 | 78.1898 |
| 0.6185 | 2.4 | 6000 | 0.8093 | 0.7511 | 0.7543 | 0.7513 | 0.7543 | 3.9715 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 42 | 33.9637 | 78.1898 |
| 0.6271 | 2.55 | 6375 | 0.8019 | 0.7517 | 0.7550 | 0.7521 | 0.7550 | 3.9697 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 46 | 33.9638 | 78.1898 |
| 0.6103 | 2.7 | 6750 | 0.8026 | 0.7519 | 0.7554 | 0.7523 | 0.7554 | 3.9622 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 46 | 33.9639 | 78.1898 |
| 0.6111 | 2.85 | 7125 | 0.8056 | 0.7507 | 0.7546 | 0.7511 | 0.7546 | 3.9783 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 41 | 33.9640 | 78.1898 |
| 0.6015 | 3.0 | 7500 | 0.8060 | 0.7514 | 0.7552 | 0.7512 | 0.7552 | 3.9702 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 42 | 33.9642 | 78.1898 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
kojitakahiro/webui
|
kojitakahiro
| 2023-07-16T05:21:17Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-12T07:09:31Z |
---
license: creativeml-openrail-m
---
|
Denilah/distilbert-base-uncased-finetuned-emotion
|
Denilah
| 2023-07-16T05:15:46Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-16T03:24:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.937
- name: F1
type: f1
value: 0.9373121473490384
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1565
- Accuracy: 0.937
- F1: 0.9373
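Below is a minimal inference sketch that returns a score for every emotion class; the class names come from the `emotion` dataset label mapping and should be verified against the model config:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Denilah/distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return all class scores instead of only the top one
)
print(classifier("I can't wait to see you this weekend!"))
```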
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4774 | 1.0 | 1000 | 0.1971 | 0.923 | 0.9226 |
| 0.147 | 2.0 | 2000 | 0.1565 | 0.937 | 0.9373 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
BaleChen/dqn-SpaceInvadersNoFrameskip-v4-test
|
BaleChen
| 2023-07-16T05:13:06Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-16T05:12:23Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 545.00 +/- 104.33
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga BaleChen -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga BaleChen -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga BaleChen
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
NobodyExistsOnTheInternet/nous7badaptor
|
NobodyExistsOnTheInternet
| 2023-07-16T05:06:00Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T04:54:15Z |
---
library_name: peft
---
Use teknium's 7B model.
I accidentally trained on Vicuna 1.1 and not Alpaca (the original model).
|
weekcircle/wav2vec2-large-mms-1b-korean-colab
|
weekcircle
| 2023-07-16T04:57:38Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_13_0",
"base_model:weekcircle/wav2vec2-large-mms-1b-korean-colab",
"base_model:finetune:weekcircle/wav2vec2-large-mms-1b-korean-colab",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-11T12:39:54Z |
---
license: cc-by-nc-4.0
base_model: weekcircle/wav2vec2-large-mms-1b-korean-colab
tags:
- generated_from_trainer
datasets:
- common_voice_13_0
metrics:
- wer
model-index:
- name: wav2vec2-large-mms-1b-korean-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_13_0
type: common_voice_13_0
config: ko
split: test
args: ko
metrics:
- name: Wer
type: wer
value: 0.9959718026183283
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-mms-1b-korean-colab
This model is a fine-tuned version of [weekcircle/wav2vec2-large-mms-1b-korean-colab](https://huggingface.co/weekcircle/wav2vec2-large-mms-1b-korean-colab) on the common_voice_13_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 8.8258
- Wer: 0.9960
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.4313 | 2.63 | 100 | 7.9123 | 0.9839 |
| 2.3616 | 5.26 | 200 | 7.9118 | 0.9930 |
| 1.859 | 7.89 | 300 | 7.9977 | 0.9909 |
| 1.4135 | 10.53 | 400 | 8.3395 | 1.0040 |
| 1.1407 | 13.16 | 500 | 8.5900 | 0.9940 |
| 0.9639 | 15.79 | 600 | 8.6300 | 0.9950 |
| 0.7991 | 18.42 | 700 | 8.8258 | 0.9960 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
LeoLyu/finetuning-sentiment-model-3000-samples
|
LeoLyu
| 2023-07-16T04:39:09Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-04T01:18:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.88
- name: F1
type: f1
value: 0.880794701986755
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2903
- Accuracy: 0.88
- F1: 0.8808
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
diogopaes10/006-microsoft-deberta-v3-base-finetuned-yahoo-80_20k
|
diogopaes10
| 2023-07-16T04:38:02Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-16T03:12:19Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
- precision
- recall
model-index:
- name: 006-microsoft-deberta-v3-base-finetuned-yahoo-80_20k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 006-microsoft-deberta-v3-base-finetuned-yahoo-80_20k
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5718
- F1: 0.7212
- Accuracy: 0.7212
- Precision: 0.7217
- Recall: 0.7212
- System Ram Used: 4.3510
- System Ram Total: 83.4807
- Gpu Ram Allocated: 2.0903
- Gpu Ram Cached: 34.3125
- Gpu Ram Total: 39.5640
- Gpu Utilization: 26
- Disk Space Used: 42.2252
- Disk Space Total: 78.1898
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy | Precision | Recall | System Ram Used | System Ram Total | Gpu Ram Allocated | Gpu Ram Cached | Gpu Ram Total | Gpu Utilization | Disk Space Used | Disk Space Total |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:--------:|:---------:|:------:|:---------------:|:----------------:|:-----------------:|:--------------:|:-------------:|:---------------:|:---------------:|:----------------:|
| 1.0576 | 0.5 | 1250 | 0.8850 | 0.7207 | 0.727 | 0.7338 | 0.727 | 4.0147 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 26 | 24.9911 | 78.1898 |
| 0.853 | 1.0 | 2500 | 0.8239 | 0.7432 | 0.7461 | 0.7462 | 0.7461 | 4.0251 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 26 | 24.9919 | 78.1898 |
| 0.7364 | 1.5 | 3750 | 0.8151 | 0.7429 | 0.7478 | 0.7431 | 0.7478 | 4.0743 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 25 | 29.8278 | 78.1898 |
| 0.7345 | 2.0 | 5000 | 0.8102 | 0.7433 | 0.7470 | 0.7460 | 0.7470 | 4.0763 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 27 | 29.8285 | 78.1898 |
| 0.6184 | 2.5 | 6250 | 0.8222 | 0.7498 | 0.7518 | 0.7505 | 0.7518 | 4.0937 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 27 | 33.9630 | 78.1898 |
| 0.6174 | 3.0 | 7500 | 0.8322 | 0.7516 | 0.7545 | 0.7530 | 0.7545 | 4.0717 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 25 | 33.9634 | 78.1898 |
| 0.5036 | 3.5 | 8750 | 0.8948 | 0.7435 | 0.7476 | 0.7428 | 0.7476 | 4.1335 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 26 | 38.0971 | 78.1898 |
| 0.5149 | 4.0 | 10000 | 0.8892 | 0.7416 | 0.7451 | 0.7433 | 0.7451 | 4.1251 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 26 | 38.0980 | 78.1898 |
| 0.4106 | 4.5 | 11250 | 0.9957 | 0.7336 | 0.7345 | 0.7348 | 0.7345 | 4.1319 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 24 | 40.1601 | 78.1898 |
| 0.407 | 5.0 | 12500 | 0.9997 | 0.7318 | 0.7344 | 0.7332 | 0.7344 | 4.1372 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 25 | 40.1605 | 78.1898 |
| 0.3195 | 5.5 | 13750 | 1.0747 | 0.7320 | 0.736 | 0.7318 | 0.736 | 4.1359 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 25 | 42.2225 | 78.1898 |
| 0.3281 | 6.0 | 15000 | 1.1199 | 0.7300 | 0.7323 | 0.7313 | 0.7323 | 4.1438 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 26 | 42.2234 | 78.1898 |
| 0.2519 | 6.5 | 16250 | 1.2232 | 0.7299 | 0.7321 | 0.7295 | 0.7321 | 4.1640 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 26 | 42.2237 | 78.1898 |
| 0.2482 | 7.0 | 17500 | 1.2532 | 0.7274 | 0.7272 | 0.7282 | 0.7272 | 4.1578 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 26 | 42.2238 | 78.1898 |
| 0.1939 | 7.5 | 18750 | 1.3487 | 0.7222 | 0.7248 | 0.7215 | 0.7248 | 4.1702 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 26 | 42.2241 | 78.1898 |
| 0.1992 | 8.0 | 20000 | 1.3886 | 0.7197 | 0.7186 | 0.7218 | 0.7186 | 4.1572 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 27 | 42.2247 | 78.1898 |
| 0.1511 | 8.5 | 21250 | 1.4716 | 0.7197 | 0.7195 | 0.7204 | 0.7195 | 4.1566 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 25 | 42.2249 | 78.1898 |
| 0.1563 | 9.0 | 22500 | 1.4829 | 0.7223 | 0.7237 | 0.7221 | 0.7237 | 4.1650 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 27 | 42.2249 | 78.1898 |
| 0.1286 | 9.5 | 23750 | 1.5533 | 0.7210 | 0.7217 | 0.7210 | 0.7217 | 4.1784 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 26 | 42.2251 | 78.1898 |
| 0.1223 | 10.0 | 25000 | 1.5718 | 0.7212 | 0.7212 | 0.7217 | 0.7212 | 4.1618 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 26 | 42.2251 | 78.1898 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
NasimB/children-rarity-all-guten-log-rarity-all
|
NasimB
| 2023-07-16T04:21:14Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-16T02:19:49Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: children-rarity-all-guten-log-rarity-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# children-rarity-all-guten-log-rarity-all
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3116
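For reference, a minimal text-generation sketch (not part of the original card; the prompt and sampling settings are illustrative only):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/children-rarity-all-guten-log-rarity-all")
# Illustrative prompt with arbitrary sampling settings.
print(generator("Once upon a time", max_new_tokens=40, do_sample=True, top_k=50)[0]["generated_text"])
```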
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7036 | 0.29 | 500 | 5.6365 |
| 5.348 | 0.58 | 1000 | 5.2064 |
| 4.99 | 0.87 | 1500 | 4.9589 |
| 4.7208 | 1.16 | 2000 | 4.8071 |
| 4.5602 | 1.46 | 2500 | 4.6761 |
| 4.4513 | 1.75 | 3000 | 4.5690 |
| 4.3332 | 2.04 | 3500 | 4.4907 |
| 4.1308 | 2.33 | 4000 | 4.4479 |
| 4.1002 | 2.62 | 4500 | 4.3912 |
| 4.0711 | 2.91 | 5000 | 4.3370 |
| 3.8621 | 3.2 | 5500 | 4.3334 |
| 3.803 | 3.49 | 6000 | 4.3002 |
| 3.7865 | 3.79 | 6500 | 4.2683 |
| 3.6992 | 4.08 | 7000 | 4.2633 |
| 3.5158 | 4.37 | 7500 | 4.2591 |
| 3.5163 | 4.66 | 8000 | 4.2433 |
| 3.501 | 4.95 | 8500 | 4.2300 |
| 3.3525 | 5.24 | 9000 | 4.2437 |
| 3.3213 | 5.53 | 9500 | 4.2424 |
| 3.3235 | 5.82 | 10000 | 4.2416 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
NasimB/guten-rarity-all-no-cut-shuffled
|
NasimB
| 2023-07-16T04:02:15Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-16T02:00:34Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: guten-rarity-all-no-cut-shuffled
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# guten-rarity-all-no-cut-shuffled
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3381
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7098 | 0.29 | 500 | 5.6383 |
| 5.3461 | 0.59 | 1000 | 5.1998 |
| 5.0069 | 0.88 | 1500 | 4.9558 |
| 4.7285 | 1.17 | 2000 | 4.8116 |
| 4.5719 | 1.46 | 2500 | 4.6858 |
| 4.4638 | 1.76 | 3000 | 4.5832 |
| 4.3437 | 2.05 | 3500 | 4.5081 |
| 4.145 | 2.34 | 4000 | 4.4640 |
| 4.1225 | 2.63 | 4500 | 4.4066 |
| 4.0778 | 2.93 | 5000 | 4.3542 |
| 3.8706 | 3.22 | 5500 | 4.3487 |
| 3.8204 | 3.51 | 6000 | 4.3185 |
| 3.8077 | 3.8 | 6500 | 4.2826 |
| 3.7002 | 4.1 | 7000 | 4.2849 |
| 3.5345 | 4.39 | 7500 | 4.2807 |
| 3.5332 | 4.68 | 8000 | 4.2650 |
| 3.5096 | 4.97 | 8500 | 4.2535 |
| 3.3568 | 5.27 | 9000 | 4.2678 |
| 3.3403 | 5.56 | 9500 | 4.2672 |
| 3.3398 | 5.85 | 10000 | 4.2659 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
1112lee/setfit-model
|
1112lee
| 2023-07-16T03:42:36Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-07-16T03:28:00Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# 1112lee/setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("1112lee/setfit-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
blackmount8/falcon-7b-instruct-ct2-int8_float16
|
blackmount8
| 2023-07-16T03:36:52Z | 1 | 0 |
transformers
|
[
"transformers",
"en",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2205.14135",
"arxiv:1911.02150",
"arxiv:2005.14165",
"arxiv:2104.09864",
"arxiv:2306.01116",
"license:apache-2.0",
"region:us"
] | null | 2023-07-15T16:58:47Z |
---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
inference: false
license: apache-2.0
---
# blackmount8/falcon-7b-instruct-ct2-int8_float16
Int8_float16 version of [tiiuae/falcon-7b-instruct](https://huggingface.co/tiiuae/falcon-7b-instruct), quantized using CTranslate2.
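A minimal generation sketch with CTranslate2 is given below. It is not part of the original card; the local directory name is an assumption and should point at wherever this repository's converted files are downloaded.

```python
import ctranslate2
import transformers

# Assumed local path to the converted files from this repository.
generator = ctranslate2.Generator(
    "falcon-7b-instruct-ct2-int8_float16", device="cuda", compute_type="int8_float16"
)
tokenizer = transformers.AutoTokenizer.from_pretrained("tiiuae/falcon-7b-instruct")

prompt = "Write a haiku about the desert."
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))
results = generator.generate_batch([tokens], max_length=128, sampling_topk=10)
print(tokenizer.decode(results[0].sequences_ids[0]))
```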
# ✨ Falcon-7B-Instruct
**Falcon-7B-Instruct is a 7B parameters causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and finetuned on a mixture of chat/instruct datasets. It is made available under the Apache 2.0 license.**
*Paper coming soon 😊.*
🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost from HF](https://huggingface.co/blog/falcon)!
## Why use Falcon-7B-Instruct?
* **You are looking for a ready-to-use chat/instruct model based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).**
* **Falcon-7B is a strong base model, outperforming comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
💬 **This is an instruct model, which may not be ideal for further finetuning.** If you are interested in building your own instruct/chat model, we recommend starting from [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
🔥 **Looking for an even more powerful model?** [Falcon-40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct) is Falcon-7B-Instruct's big brother!
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**
For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blogpost](https://huggingface.co/blog/falcon).
You will need **at least 16GB of memory** to swiftly run inference with Falcon-7B-Instruct.
# Model Card for Falcon-7B-Instruct
## Model Details
### Model Description
- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English and French;
- **License:** Apache 2.0;
- **Finetuned from model:** [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
### Model Source
- **Paper:** *coming soon*.
## Uses
### Direct Use
Falcon-7B-Instruct has been finetuned on a mixture of instruct and chat datasets.
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Falcon-7B-Instruct is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend users of Falcon-7B-Instruct to develop guardrails and to take appropriate precautions for any production use.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Training Details
### Training Data
Falcon-7B-Instruct was finetuned on a 250M tokens mixture of instruct/chat datasets.
| **Data source** | **Fraction** | **Tokens** | **Description** |
|--------------------|--------------|------------|-----------------------------------|
| [Bai ze](https://github.com/project-baize/baize-chatbot) | 65% | 164M | chat |
| [GPT4All](https://github.com/nomic-ai/gpt4all) | 25% | 62M | instruct |
| [GPTeacher](https://github.com/teknium1/GPTeacher) | 5% | 11M | instruct |
| [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 5% | 13M | massive web crawl |
The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.
## Evaluation
*Paper coming soon.*
See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.
Note that this model variant is not optimized for NLP benchmarks.
## Technical Specifications
For more information about pretraining, see [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
### Model Architecture and Objective
Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:
* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with a single layer norm.
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 32 | |
| `d_model` | 4544 | Increased to compensate for multiquery |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |
### Compute Infrastructure
#### Hardware
Falcon-7B-Instruct was trained on AWS SageMaker, on 32 A100 40GB GPUs in P4d instances.
#### Software
Falcon-7B-Instruct was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).
## Citation
*Paper coming soon* 😊. In the meantime, you can use the following information to cite:
```
@article{falcon40b,
title={{Falcon-40B}: an open large language model with state-of-the-art performance},
author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
year={2023}
}
```
To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).
```
@article{refinedweb,
title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
journal={arXiv preprint arXiv:2306.01116},
eprint={2306.01116},
eprinttype = {arXiv},
url={https://arxiv.org/abs/2306.01116},
year={2023}
}
```
## License
Falcon-7B-Instruct is made available under the Apache 2.0 license.
## Contact
[email protected]
|
KonekoSushi/Ado
|
KonekoSushi
| 2023-07-16T03:36:21Z | 0 | 2 | null |
[
"rvc",
"rvc2",
"japanese artist",
"artist ",
"ja",
"en",
"region:us"
] | null | 2023-07-15T23:01:30Z |
---
language:
- ja
- en
tags:
- rvc
- rvc2
- japanese artist
- 'artist '
---
|
OptimalScale/robin-33b-v2-delta
|
OptimalScale
| 2023-07-16T03:14:37Z | 1,548 | 8 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2302.13971",
"arxiv:2306.12420",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-05-28T06:02:53Z |
---
inference: false
---
# Robin Model Card
## Model Details
Robin is a series of models finetuned from LLaMA on several high-quality data.
- **Developed by:** [LMFlow](https://github.com/OptimalScale/LMFlow/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).
### Model Sources
- **Repository:** https://github.com/OptimalScale/LMFlow/
- **Blog:** https://medium.com/@hkust.ml/robin-v2-launches-achieves-unparalleled-performance-on-openllm-4f6886e822c1
- **Paper:** https://arxiv.org/abs/2306.12420
- **Demo:** https://lmflow.com/
## Uses
Robin is primarily utilized for conducting research on extensive language models and chatbots, catering to users specializing in natural language processing, machine learning, and artificial intelligence research.
## How to Get Started with the Model
We provide four kinds of demos including:
- Online Service: If you don't want to run any code and just want to try our models, we deploy our instruction-tuned LLaMA for you to try.
- Colab Chatbot (shell): An interactive shell-based chatbot for you to easily deploy a chatbot on colab.
- Colab Chatbot (web): An interactive web-based chatbot for you to easily deploy your own chatbot on colab.
- Local Deploy: We also provide a way for you to deploy your model/chatbot locally, which means you can deploy a much larger model than with the previous three methods if you have enough resources.
Please refer to https://github.com/OptimalScale/LMFlow#demos
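For local use with 🤗 Transformers, a minimal loading sketch is given below. Note that this repository ships *delta* weights: they must first be recovered against the original LLaMA weights following the LMFlow repository instructions, and the merged path below is an assumption.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed path to the recovered (delta-merged) Robin-33B weights.
model_path = "./robin-33b-v2-merged"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")

inputs = tokenizer("What is the capital of France?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```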
## Training Details
Expanding upon the initial idea of self-instruct techniques, we incorporated several different data sources and built a new dataset called [LMFlow Dataset](http://lmflow.org:5000/lmflow_data.tar.gz).
The new training split is created by merging the following datasets:
- ShareGPT: 50K English and 10K Chinese examples randomly sampled from ShareGPT.
- GPT-4-LLM: 52K English examples from GPT-4-LLM.
- BELLE: 80K Chinese examples randomly sampled from BELLE.
See more details in the "Instruction Tuning" section in our [paper](https://arxiv.org/pdf/2306.12420.pdf).
## Evaluation
Robin is evaluated with [LMFlow Benchmark](https://blog.gopenai.com/lmflow-benchmark-an-automatic-evaluation-framework-for-open-source-llms-ef5c6f142418).
See more details in this [paper](https://arxiv.org/pdf/2306.12420.pdf).
## Citation
If you find this repository useful, please consider giving ⭐ and citing our [paper](https://arxiv.org/abs/2306.12420):
```
@misc{lmflow,
author = {Shizhe Diao and Rui Pan and Hanze Dong and KaShun Shum and Jipeng Zhang and Wei Xiong and Tong Zhang},
title = {LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://optimalscale.github.io/LMFlow/}},
}
```
|
ALM-AHME/convnextv2-large-1k-224-finetuned-BreastCancer-Classification-BreakHis-AH-60-20-20
|
ALM-AHME
| 2023-07-16T03:13:16Z | 12 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"convnextv2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-15T00:35:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: convnextv2-large-1k-224-finetuned-BreastCancer-Classification-BreakHis-AH-60-20-20
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: Splitted-Resized
split: train
args: Splitted-Resized
metrics:
- name: Accuracy
type: accuracy
value: 0.9900990099009901
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnextv2-large-1k-224-finetuned-BreastCancer-Classification-BreakHis-AH-60-20-20
This model is a fine-tuned version of [facebook/convnextv2-large-1k-224](https://huggingface.co/facebook/convnextv2-large-1k-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0353
- Accuracy: 0.9901
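A minimal inference sketch (not part of the original card); the image path is a placeholder for a histopathology image preprocessed like the training data.

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="ALM-AHME/convnextv2-large-1k-224-finetuned-BreastCancer-Classification-BreakHis-AH-60-20-20",
)
# Placeholder path to an RGB BreakHis-style histopathology image.
print(classifier("sample_slide.png"))
```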
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.9
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5207 | 1.0 | 199 | 0.4745 | 0.8887 |
| 0.2029 | 2.0 | 398 | 0.2072 | 0.9401 |
| 0.1615 | 3.0 | 597 | 0.1489 | 0.9547 |
| 0.1662 | 4.0 | 796 | 0.1312 | 0.9562 |
| 0.1986 | 5.0 | 995 | 0.1026 | 0.9698 |
| 0.0854 | 6.0 | 1194 | 0.0583 | 0.9802 |
| 0.0538 | 7.0 | 1393 | 0.0568 | 0.9835 |
| 0.0977 | 8.0 | 1592 | 0.0654 | 0.9793 |
| 0.6971 | 9.0 | 1791 | 0.6821 | 0.5450 |
| 0.211 | 10.0 | 1990 | 0.1654 | 0.9326 |
| 0.1775 | 11.0 | 2189 | 0.0859 | 0.9665 |
| 0.0042 | 12.0 | 2388 | 0.0353 | 0.9901 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Pamela153/ppo-LunarLander-v2
|
Pamela153
| 2023-07-16T02:47:00Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-16T02:44:30Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 251.70 +/- 12.72
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
PeterBrendan/pbjsGPT2v2
|
PeterBrendan
| 2023-07-16T02:32:02Z | 144 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-12T15:07:20Z |
---
license: mit
widget:
- text: bidderTimeout
- text: Usebidcache
- text: bidderSequence
- text: customPriceBucket
---
## Model: GPT-2
### Model name: pbjsGPT2v2
### Model description:
This fine-tuned version of the GPT-2 model was trained on a subset of 1100+ publisher domains' Prebid config files. Its focus is on sophisticated Prebid publishers. The model provides insights into how these publishers configure their Prebid settings. By inputting a Prebid config setting, such as ***bidderTimeout***, the model generates sample Prebid configuration settings based on the collected data. It aims to assist publishers in understanding different configurations used by sophisticated publishers.
### Intended uses:
This model is intended to assist publishers in understanding and exploring how other publishers configure their Prebid settings. It serves as a reference for gaining insights into common configurations, best practices, and different approaches used by top publishers across various domains.
### Limitations:
The generated Prebid configuration settings are based on the data from the training set and may not cover all possible configurations or reflect the specific requirements of a particular domain. Publishers should carefully review and adapt the generated configurations to their specific needs and business rules.
### How to use:
To use this model, provide a Prebid config setting, such as ***bidderSequence***. The model will generate a sample Prebid configuration related to that input based on the collected data.
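A minimal sketch of that workflow with the 🤗 `pipeline` API is shown below; the prompt and sampling settings are illustrative assumptions, not a prescribed configuration.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="PeterBrendan/pbjsGPT2v2")
# Prompt the model with a Prebid config setting, e.g. bidderTimeout.
print(generator("bidderTimeout", max_new_tokens=60, do_sample=True, top_k=50)[0]["generated_text"])
```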
### Training data:
This model was trained on a subset of 1100+ publisher domains Prebid config files. The dataset was collected from a variety of publishers and represents a wide range of Prebid settings used in the industry.
### Training procedure:
The model was fine-tuned using the GPT-2 base model with the aforementioned dataset.
### Evaluation results:
The evaluation of this model focuses on its ability to generate coherent and valid Prebid configuration settings based on the provided Prebid config setting. Human evaluators reviewed the generated configurations for relevance and accuracy.
### Safety and bias considerations:
The model is trained on data from actual Prebid config files and aims to provide accurate insights into publishers' configurations. However, it's important to note that biases may exist in the original data itself, as the training data is based on real-world configurations. Users should review and validate the generated configurations to ensure they align with their specific requirements and guidelines.
Users are encouraged to exercise caution and use their expertise in interpreting and adapting the generated Prebid configurations for their own use. The model should be seen as a helpful tool to gain inspiration and understanding of common Prebid settings but not as a substitute for thorough testing and manual review of the final configurations.
|
manmyung/ppo-SnowballTarget
|
manmyung
| 2023-07-16T02:08:22Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-07-16T02:08:19Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: manmyung/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
LarryAIDraw/anyloracleanlinearmix_v10
|
LarryAIDraw
| 2023-07-16T02:02:42Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-16T01:41:22Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/107677/anyloracleanlinearmix-clearvae
|
mitra-mir/setfit_model_Calgary_epochs2_Jul_15_2023
|
mitra-mir
| 2023-07-16T02:00:04Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-07-16T01:59:53Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# mitra-mir/setfit_model_Calgary_epochs2_Jul_15_2023
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('mitra-mir/setfit_model_Calgary_epochs2_Jul_15_2023')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 115 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 230,
"warmup_steps": 23,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Hex820000/anime_v10
|
Hex820000
| 2023-07-16T01:57:47Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-16T01:46:09Z |
---
license: creativeml-openrail-m
---
|
NasimB/aochildes-log-rarity-all-no-cut
|
NasimB
| 2023-07-16T01:36:11Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-15T21:18:08Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: aochildes-log-rarity-all-no-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aochildes-log-rarity-all-no-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3494
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
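For readers reproducing a comparable run, a hedged sketch of how these values might map onto 🤗 `TrainingArguments` follows; this is an assumption about the setup, not the original training script.

```python
from transformers import TrainingArguments

# Assumed mapping of the hyperparameters listed above; the original script may differ.
training_args = TrainingArguments(
    output_dir="aochildes-log-rarity-all-no-cut",
    learning_rate=5e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=6,
    fp16=True,  # Native AMP mixed precision
)
```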
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7012 | 0.29 | 500 | 5.6318 |
| 5.3412 | 0.59 | 1000 | 5.2043 |
| 4.9942 | 0.88 | 1500 | 4.9555 |
| 4.7162 | 1.17 | 2000 | 4.8127 |
| 4.5637 | 1.47 | 2500 | 4.6882 |
| 4.4549 | 1.76 | 3000 | 4.5863 |
| 4.3244 | 2.05 | 3500 | 4.5156 |
| 4.1353 | 2.35 | 4000 | 4.4659 |
| 4.0987 | 2.64 | 4500 | 4.4137 |
| 4.0704 | 2.93 | 5000 | 4.3586 |
| 3.8584 | 3.23 | 5500 | 4.3513 |
| 3.8035 | 3.52 | 6000 | 4.3279 |
| 3.7832 | 3.81 | 6500 | 4.2999 |
| 3.6738 | 4.11 | 7000 | 4.3075 |
| 3.5158 | 4.4 | 7500 | 4.2989 |
| 3.5108 | 4.69 | 8000 | 4.2880 |
| 3.4999 | 4.99 | 8500 | 4.2787 |
| 3.3285 | 5.28 | 9000 | 4.2964 |
| 3.3229 | 5.57 | 9500 | 4.2965 |
| 3.3189 | 5.87 | 10000 | 4.2951 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
ebkjsh/galaxy
|
ebkjsh
| 2023-07-16T01:22:29Z | 0 | 0 |
asteroid
|
[
"asteroid",
"aa",
"dataset:Open-Orca/OpenOrca",
"license:openrail",
"region:us"
] | null | 2023-07-16T01:21:33Z |
---
license: openrail
datasets:
- Open-Orca/OpenOrca
language:
- aa
metrics:
- accuracy
library_name: asteroid
---
|
yzzhong/RL_q_tax_v3
|
yzzhong
| 2023-07-16T01:19:33Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-16T01:06:21Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: RL_q_tax_v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="yzzhong/RL_q_tax_v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
yzzhong/RL_q_frozenlake
|
yzzhong
| 2023-07-16T01:18:53Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-15T05:46:54Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: RL_q_frozenlake
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="yzzhong/RL_q_frozenlake", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
akraieski/taxi-v3
|
akraieski
| 2023-07-16T01:06:47Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-16T01:06:42Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.36 +/- 2.88
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="akraieski/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
akraieski/q-FrozenLake-v1-4x4-noSlippery
|
akraieski
| 2023-07-16T01:03:22Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-16T01:03:19Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="akraieski/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AbdelSiam/nart-100k-7b-GPTQ
|
AbdelSiam
| 2023-07-16T00:41:25Z | 7 | 4 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-16T00:29:43Z |
---
license: cc-by-nc-nd-4.0
---
|
tyavika/lr1e5_bs16_layer1_Bert_CNN128LSTM128NoBid
|
tyavika
| 2023-07-16T00:31:17Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-12T18:38:30Z |
---
tags:
- generated_from_trainer
model-index:
- name: lr1e5_bs16_layer1_Bert_CNN128LSTM128NoBid
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lr1e5_bs16_layer1_Bert_CNN128LSTM128NoBid
This model is a fine-tuned version of [tyavika/lr1e5_bs16_layer1_Bert_CNN128LSTM128NoBid](https://huggingface.co/tyavika/lr1e5_bs16_layer1_Bert_CNN128LSTM128NoBid) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Nsjssj/Ai
|
Nsjssj
| 2023-07-16T00:24:40Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2023-07-16T00:21:57Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
KingKazma/xsum_t5-small_prefix_tuning_500_10_3000_8_e-1_s55555_v3_prefix200_manual
|
KingKazma
| 2023-07-15T23:50:46Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-15T23:50:45Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
GliderMixesYT/RichardAshcroft1997
|
GliderMixesYT
| 2023-07-15T23:38:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-15T23:28:18Z |
Voice model for Verve frontman Richard Ashcroft, for his vocal range from 1996-2000.
|
NasimB/cbt-guten-log-rarity-all-no-cut
|
NasimB
| 2023-07-15T23:32:37Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-15T21:37:01Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: cbt-guten-log-rarity-all-no-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cbt-guten-log-rarity-all-no-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3166
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6947 | 0.29 | 500 | 5.6397 |
| 5.3475 | 0.58 | 1000 | 5.2031 |
| 4.991 | 0.87 | 1500 | 4.9524 |
| 4.7228 | 1.17 | 2000 | 4.8034 |
| 4.563 | 1.46 | 2500 | 4.6832 |
| 4.446 | 1.75 | 3000 | 4.5709 |
| 4.3323 | 2.04 | 3500 | 4.4920 |
| 4.1314 | 2.33 | 4000 | 4.4447 |
| 4.1022 | 2.62 | 4500 | 4.3948 |
| 4.059 | 2.91 | 5000 | 4.3383 |
| 3.8712 | 3.21 | 5500 | 4.3368 |
| 3.8024 | 3.5 | 6000 | 4.3008 |
| 3.7855 | 3.79 | 6500 | 4.2702 |
| 3.6976 | 4.08 | 7000 | 4.2655 |
| 3.5207 | 4.37 | 7500 | 4.2612 |
| 3.5156 | 4.66 | 8000 | 4.2501 |
| 3.5001 | 4.95 | 8500 | 4.2351 |
| 3.357 | 5.24 | 9000 | 4.2478 |
| 3.3255 | 5.54 | 9500 | 4.2467 |
| 3.3217 | 5.83 | 10000 | 4.2455 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
MohamedExperio/layoutxlm-finetuned-xfund-fr
|
MohamedExperio
| 2023-07-15T23:14:01Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"dataset:xfun",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-15T22:52:20Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- xfun
model-index:
- name: layoutxlm-finetuned-xfund-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutxlm-finetuned-xfund-fr
This model is a fine-tuned version of [microsoft/layoutxlm-base](https://huggingface.co/microsoft/layoutxlm-base) on the xfun dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
seny1004/wav2vec2-large-mms-1b-korean-colab
|
seny1004
| 2023-07-15T22:55:48Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_13_0",
"base_model:facebook/mms-1b-l1107",
"base_model:finetune:facebook/mms-1b-l1107",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-14T06:47:50Z |
---
license: cc-by-nc-4.0
base_model: facebook/mms-1b-l1107
tags:
- generated_from_trainer
datasets:
- common_voice_13_0
metrics:
- wer
model-index:
- name: wav2vec2-large-mms-1b-korean-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_13_0
type: common_voice_13_0
config: ko
split: test
args: ko
metrics:
- name: Wer
type: wer
value: 0.9929506545820745
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-mms-1b-korean-colab
This model is a fine-tuned version of [facebook/mms-1b-l1107](https://huggingface.co/facebook/mms-1b-l1107) on the common_voice_13_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 7.8135
- Wer: 0.9930
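A minimal ASR inference sketch (not part of the original card); the audio path is a placeholder, and the high WER above should be kept in mind when interpreting transcriptions.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="seny1004/wav2vec2-large-mms-1b-korean-colab",
)
# Placeholder path to a 16 kHz mono Korean speech clip.
print(asr("korean_sample.wav"))
```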
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 10.9747 | 2.63 | 100 | 7.8812 | 0.9990 |
| 5.9431 | 5.26 | 200 | 8.2212 | 0.9960 |
| 5.7372 | 7.89 | 300 | 8.1054 | 0.9930 |
| 5.2582 | 10.53 | 400 | 8.2347 | 0.9940 |
| 3.8725 | 13.16 | 500 | 7.7536 | 0.9940 |
| 3.4454 | 15.79 | 600 | 7.7220 | 0.9930 |
| 2.5989 | 18.42 | 700 | 7.8135 | 0.9930 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
LarryAIDraw/Arima_Kana_V1-000003
|
LarryAIDraw
| 2023-07-15T22:16:49Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-15T22:11:09Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/55346/arima-kanaoshi-no-ko
|
KingKazma/xsum_t5-small_p_tuning_500_10_3000_8_e-1_s55555_v3_manual
|
KingKazma
| 2023-07-15T21:43:50Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-15T21:43:49Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
merthacioglu/roberta-finetuned-subjqa-movies_2
|
merthacioglu
| 2023-07-15T21:39:57Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-15T14:30:17Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: roberta-finetuned-subjqa-movies_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-subjqa-movies_2
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
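A minimal question-answering sketch (not part of the original card; the movie-review example is purely illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="merthacioglu/roberta-finetuned-subjqa-movies_2")
# Illustrative movie-review example in the spirit of SubjQA.
print(qa(
    question="How was the acting?",
    context="The film dragged in places, but the acting was superb throughout.",
))
```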
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ArthurBaia/albertina-squad-v1.1-pt.br
|
ArthurBaia
| 2023-07-15T21:32:25Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"question-answering",
"generated_from_trainer",
"dataset:ArthurBaia/squad_v1_pt_br",
"base_model:PORTULAN/albertina-900m-portuguese-ptbr-encoder-brwac",
"base_model:finetune:PORTULAN/albertina-900m-portuguese-ptbr-encoder-brwac",
"license:other",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-15T02:00:06Z |
---
license: other
base_model: PORTULAN/albertina-ptbr
tags:
- generated_from_trainer
datasets:
- ArthurBaia/squad_v1_pt_br
model-index:
- name: albertina
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albertina
This model is a fine-tuned version of [PORTULAN/albertina-ptbr](https://huggingface.co/PORTULAN/albertina-ptbr) on the ArthurBaia/squad_v1_pt_br dataset.
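A minimal question-answering sketch (not part of the original card; the Portuguese example is purely illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="ArthurBaia/albertina-squad-v1.1-pt.br")
# Illustrative SQuAD-style example in Brazilian Portuguese.
print(qa(
    question="Onde fica a Torre Eiffel?",
    context="A Torre Eiffel é uma torre de ferro localizada em Paris, na França.",
))
```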
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
- Epoch: 3.0
- Exact match: 76.96310312204352
- F1: 87.82372321450285
- Eval runtime: 189.7132 s
- Eval samples: 10977
- Eval samples per second: 57.861
- Eval steps per second: 7.237
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
lovelyxs/ppo-LunarLander-v2-2
|
lovelyxs
| 2023-07-15T21:23:37Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-15T20:27:07Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 133.96 +/- 135.43
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 2000000
'learning_rate': 0.0003
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.25
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'lovelyxs/ppo-LunarLander-v2-2'
'batch_size': 512
'minibatch_size': 128}
```
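For reference, the derived sizes at the bottom follow from the settings above: `batch_size = num_envs * num_steps = 4 * 128 = 512`, and `minibatch_size = batch_size / num_minibatches = 512 / 4 = 128`.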
|
KingKazma/xsum_t5-small_p_tuning_500_10_3000_16_e-1_s55555_v3_manual
|
KingKazma
| 2023-07-15T21:12:40Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-15T21:12:39Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
NasimB/gpt2-concat-wiki-rarity-no-cut
|
NasimB
| 2023-07-15T21:10:22Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-15T19:08:48Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-wiki-rarity-no-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-wiki-rarity-no-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3201
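Assuming this is the usual per-token cross-entropy, that loss corresponds to a validation perplexity of roughly exp(4.3201) ≈ 75.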
## Model description
More information needed
## Intended uses & limitations
More information needed
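No usage example is provided; a minimal sampling sketch with the `transformers` text-generation pipeline might look like this (the prompt is arbitrary):
```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint from the Hub and sample a continuation.
generator = pipeline("text-generation", model="NasimB/gpt2-concat-wiki-rarity-no-cut")
print(generator("Once upon a time", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```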
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7051 | 0.29 | 500 | 5.6378 |
| 5.3367 | 0.58 | 1000 | 5.1972 |
| 4.9867 | 0.87 | 1500 | 4.9538 |
| 4.7104 | 1.16 | 2000 | 4.8093 |
| 4.5621 | 1.46 | 2500 | 4.6885 |
| 4.4544 | 1.75 | 3000 | 4.5808 |
| 4.3353 | 2.04 | 3500 | 4.5031 |
| 4.1291 | 2.33 | 4000 | 4.4542 |
| 4.1138 | 2.62 | 4500 | 4.3959 |
| 4.0612 | 2.91 | 5000 | 4.3429 |
| 3.8709 | 3.2 | 5500 | 4.3403 |
| 3.8046 | 3.49 | 6000 | 4.3115 |
| 3.7892 | 3.78 | 6500 | 4.2732 |
| 3.7056 | 4.07 | 7000 | 4.2679 |
| 3.5187 | 4.37 | 7500 | 4.2666 |
| 3.5135 | 4.66 | 8000 | 4.2503 |
| 3.5039 | 4.95 | 8500 | 4.2386 |
| 3.3508 | 5.24 | 9000 | 4.2509 |
| 3.324 | 5.53 | 9500 | 4.2505 |
| 3.3217 | 5.82 | 10000 | 4.2496 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
0sunfire0/Pixelcopter_train_01
|
0sunfire0
| 2023-07-15T21:01:20Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-15T21:01:01Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter_train_01
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 38.00 +/- 26.76
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|