| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-02 12:32:32 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 534 classes |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-02 12:31:20 |
| card | string | length 11 – 1.01M |
manuu01/dqn-SpaceInvadersNoFrameskip-v4
|
manuu01
| 2023-07-17T15:44:13Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-17T13:23:57Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 679.50 +/- 120.92
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga manuu01 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run these commands from anywhere:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga manuu01 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga manuu01
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
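For quick experimentation outside the RL Zoo scripts, the downloaded checkpoint can also be loaded directly with Stable Baselines3. This is a minimal sketch, not part of the original card; the checkpoint path is only a guess at the `logs/` layout produced by the download command above.
```python
# Minimal sketch (not from the card): load the downloaded checkpoint with SB3 directly.
# The checkpoint path below is a placeholder for wherever load_from_hub saved the model.
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)
env = VecFrameStack(env, n_stack=4)  # matches the frame_stack hyperparameter above

model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")

obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```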
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e4_s108_v3
|
KingKazma
| 2023-07-17T15:43:17Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T01:10:21Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
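The card itself contains no usage instructions. As a rough sketch (the base model `gpt2` is inferred from the repository name, not stated in the card), a PEFT adapter like this is typically loaded on top of its base model:
```python
# Rough sketch, not from the card: load the adapter on top of its (assumed) gpt2 base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = PeftModel.from_pretrained(base, "KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e4_s108_v3")

inputs = tokenizer("summarize: The quick brown fox jumped over the lazy dog.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```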
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e5_s6789_v3
|
KingKazma
| 2023-07-17T15:42:39Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T15:42:36Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e3_s108_v3
|
KingKazma
| 2023-07-17T15:36:17Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T01:02:48Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
ParallelnoMinded/distilbert-base-uncased-finetuned-squad
|
ParallelnoMinded
| 2023-07-17T15:36:00Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-16T14:22:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1562
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2273 | 1.0 | 5533 | 1.1657 |
| 0.9589 | 2.0 | 11066 | 1.1226 |
| 0.7485 | 3.0 | 16599 | 1.1562 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1+cu116
- Datasets 2.13.1
- Tokenizers 0.13.3
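The card gives no inference example. A minimal sketch using the `transformers` question-answering pipeline (the question and context strings are placeholders) could look like this:
```python
# Minimal inference sketch (not part of the original card).
from transformers import pipeline

qa = pipeline("question-answering", model="ParallelnoMinded/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```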
|
roa7n/gpt2-human_nontata_promoters-last_layer_1
|
roa7n
| 2023-07-17T15:35:28Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T15:35:26Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e4_s6789_v3
|
KingKazma
| 2023-07-17T15:35:18Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T15:35:15Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e2_s108_v3
|
KingKazma
| 2023-07-17T15:29:17Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:55:15Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e3_s6789_v3
|
KingKazma
| 2023-07-17T15:27:56Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T15:27:54Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e1_s108_v3
|
KingKazma
| 2023-07-17T15:22:19Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:47:41Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
dereklvlv/ILM_400
|
dereklvlv
| 2023-07-17T15:20:40Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T15:13:54Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
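As a reproduction sketch (not taken from the card), the settings above roughly correspond to the following `transformers` quantization config when loading the base model; the base model id is a placeholder, since the card does not name it:
```python
# Sketch of the quantization config listed above; "path/to/base-model" is a placeholder.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float32,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "path/to/base-model",
    quantization_config=bnb_config,
    device_map="auto",
)
```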
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e0_s108_v3
|
KingKazma
| 2023-07-17T15:15:20Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:40:07Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e-1_s108_v3
|
KingKazma
| 2023-07-17T15:08:17Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:32:34Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
roa7n/gpt2-human_nontata_promoters-last_layer
|
roa7n
| 2023-07-17T15:01:45Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T15:01:43Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
dereklvlv/ILM
|
dereklvlv
| 2023-07-17T14:59:39Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T14:48:22Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e-1_s6789_v3
|
KingKazma
| 2023-07-17T14:58:40Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T01:37:50Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
peterdamn/whisper-tiny-en
|
peterdamn
| 2023-07-17T14:46:28Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-17T14:22:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-en
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train[450:]
args: en-US
metrics:
- name: Wer
type: wer
value: 0.34415584415584416
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-en
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6314
- Wer Ortho: 0.3473
- Wer: 0.3442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.001 | 17.86 | 500 | 0.6314 | 0.3473 | 0.3442 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
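The card does not include an inference example. A minimal sketch with the `transformers` ASR pipeline (the audio path is a placeholder for a 16 kHz recording) might look like this:
```python
# Minimal inference sketch (not part of the original card); "sample.wav" is a placeholder.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="peterdamn/whisper-tiny-en")
print(asr("sample.wav")["text"])
```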
|
ahmadaarif/urdu_tts_finetuned_voxpopuli_nl
|
ahmadaarif
| 2023-07-17T14:16:28Z | 79 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:common_voice_13_0",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-17T12:23:21Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- common_voice_13_0
model-index:
- name: urdu_tts_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# urdu_tts_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the common_voice_13_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5518 | 8.61 | 1000 | 0.5001 |
| 0.5213 | 17.22 | 2000 | 0.4917 |
| 0.5091 | 25.83 | 3000 | 0.4825 |
| 0.5053 | 34.45 | 4000 | 0.4837 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
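No usage example is provided. The following is a rough sketch of SpeechT5 inference; it assumes the repository contains the processor files and uses a zero speaker embedding as a placeholder, so a real x-vector embedding should be substituted:
```python
# Rough sketch (not from the card): SpeechT5 inference needs a vocoder and a speaker embedding.
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("ahmadaarif/urdu_tts_finetuned_voxpopuli_nl")
model = SpeechT5ForTextToSpeech.from_pretrained("ahmadaarif/urdu_tts_finetuned_voxpopuli_nl")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="a short test sentence", return_tensors="pt")
speaker_embedding = torch.zeros((1, 512))  # placeholder; use a real x-vector embedding

speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```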
|
digiplay/PhotoSomnia_vFinal
|
digiplay
| 2023-07-17T14:12:27Z | 435 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-17T13:45:52Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/18637/photosomnia
Original author's DEMO image:

Sample image through the Hugging Face API:


|
SHENMU007/neunit_BASE_V14.1
|
SHENMU007
| 2023-07-17T14:10:56Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-17T10:49:11Z |
---
language:
- zh
license: mit
base_model: microsoft/speecht5_tts
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hseokool/vicuna-7b-v1.3-230717-01
|
hseokool
| 2023-07-17T14:09:26Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T14:09:24Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
Mursel/turkishReviews-ds-mini
|
Mursel
| 2023-07-17T14:00:23Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T12:44:47Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: Mursel/turkishReviews-ds-mini
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Mursel/turkishReviews-ds-mini
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 6.9491
- Validation Loss: 8.2449
- Epoch: 17
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -896, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.3116 | 9.9951 | 0 |
| 9.6734 | 9.6486 | 1 |
| 9.1872 | 9.2715 | 2 |
| 8.6895 | 8.9614 | 3 |
| 8.3149 | 8.7701 | 4 |
| 8.0652 | 8.6205 | 5 |
| 7.8347 | 8.5185 | 6 |
| 7.5944 | 8.3560 | 7 |
| 7.3397 | 8.2770 | 8 |
| 7.0877 | 8.2449 | 9 |
| 6.9492 | 8.2449 | 10 |
| 6.9488 | 8.2449 | 11 |
| 6.9486 | 8.2449 | 12 |
| 6.9483 | 8.2449 | 13 |
| 6.9483 | 8.2449 | 14 |
| 6.9473 | 8.2449 | 15 |
| 6.9495 | 8.2449 | 16 |
| 6.9491 | 8.2449 | 17 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
GreenBitAI/LLaMA-7B-2bit-alpaca
|
GreenBitAI
| 2023-07-17T13:54:51Z | 0 | 2 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-07-17T13:51:47Z |
---
license: apache-2.0
---
# GreenBit LLaMA
These are GreenBitAI's instruction-tuned LoRA parameters for our [*2-bit 7B LLaMA model*](https://huggingface.co/GreenBitAI/LLaMA-7B-2bit), trained on the Alpaca-clean 50k dataset.
Please refer to our [Github page](https://github.com/GreenBitAI/low_bit_llama) for the code to run the model and more information.
|
Jonathaniu/alpaca-breast-cancer-13b-mix_data_3
|
Jonathaniu
| 2023-07-17T13:42:42Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T13:42:22Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
### Framework versions
- PEFT 0.4.0.dev0
|
NasimB/all-base-rarity-all-cbt-rarity-all-p8k-iorder-est-5p5k
|
NasimB
| 2023-07-17T13:31:00Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-17T11:32:22Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: all-base-rarity-all-cbt-rarity-all-p8k-iorder-est-5p5k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-base-rarity-all-cbt-rarity-all-p8k-iorder-est-5p5k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.7559 | 0.31 | 500 | 5.6511 |
| 5.4062 | 0.63 | 1000 | 5.2172 |
| 5.0687 | 0.94 | 1500 | 4.9678 |
| 4.7662 | 1.25 | 2000 | 4.8187 |
| 4.628 | 1.57 | 2500 | 4.6878 |
| 4.5225 | 1.88 | 3000 | 4.5768 |
| 4.3098 | 2.19 | 3500 | 4.5210 |
| 4.2125 | 2.51 | 4000 | 4.4508 |
| 4.1764 | 2.82 | 4500 | 4.3910 |
| 4.0275 | 3.13 | 5000 | 4.3703 |
| 3.8912 | 3.45 | 5500 | 4.3383 |
| 3.8735 | 3.76 | 6000 | 4.3003 |
| 3.7925 | 4.07 | 6500 | 4.2941 |
| 3.5917 | 4.39 | 7000 | 4.2879 |
| 3.5908 | 4.7 | 7500 | 4.2713 |
| 3.577 | 5.01 | 8000 | 4.2617 |
| 3.4004 | 5.33 | 8500 | 4.2710 |
| 3.3993 | 5.64 | 9000 | 4.2699 |
| 3.3898 | 5.95 | 9500 | 4.2692 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
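No inference example is given. A minimal generation sketch with the `transformers` pipeline (the prompt is a placeholder):
```python
# Minimal inference sketch (not part of the original card).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="NasimB/all-base-rarity-all-cbt-rarity-all-p8k-iorder-est-5p5k",
)
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```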
|
quangnguyennn/pokemon-lora-sophia
|
quangnguyennn
| 2023-07-17T13:28:19Z | 4 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-17T06:53:24Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - quangnguyennn/pokemon-lora-sophia
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. Some example images are shown below.




|
Camih/distilbert-base-uncased-finetuned-cola
|
Camih
| 2023-07-17T13:27:44Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-17T11:57:30Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Camih/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Camih/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1923
- Validation Loss: 0.5619
- Train Matthews Correlation: 0.5219
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5102 | 0.4731 | 0.4311 | 0 |
| 0.3212 | 0.5034 | 0.5079 | 1 |
| 0.1923 | 0.5619 | 0.5219 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Serjssv/whisper-tiny-v1
|
Serjssv
| 2023-07-17T13:24:04Z | 79 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-17T12:59:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-v1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.32762691853600945
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-v1
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6409
- Wer Ortho: 33.1277
- Wer: 0.3276
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0009 | 17.86 | 500 | 0.6409 | 33.1277 | 0.3276 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
B0b91/AILearnsToMultiply2
|
B0b91
| 2023-07-17T13:24:04Z | 0 | 0 |
mlconsole
|
[
"mlconsole",
"tabular-regression",
"dataset:house_price_prediction",
"license:unknown",
"model-index",
"region:us"
] |
tabular-regression
| 2023-07-17T13:23:58Z |
---
license: unknown
inference: false
tags:
- mlconsole
- tabular-regression
library_name: mlconsole
metrics:
- mae
- loss
datasets:
- house_price_prediction
model-index:
- name: AILearnsToMultiply2
results:
- task:
type: tabular-regression
name: tabular-regression
dataset:
type: house_price_prediction
name: house_price_prediction
metrics:
- type: mae
name: Mean absolute error
value: 4.996237277984619
- type: loss
name: Model loss
value: 45.071861267089844
---
# Regression model trained on "house_price_prediction"
🤖 [Load and use this model](https://mlconsole.com/model/hf/B0b91/AILearnsToMultiply2) in one click.
🧑💻 [Train your own model](https://mlconsole.com) on ML Console.
|
Oslaw/ppo-Huggy
|
Oslaw
| 2023-07-17T13:23:22Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-17T13:23:16Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial that walks you through training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Oslaw/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
bharadwajkg/finetune-stable-diffusion-v1-4-planogram-lora-data3
|
bharadwajkg
| 2023-07-17T13:17:21Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-17T11:47:22Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - bharadwajkg/finetune-stable-diffusion-v1-4-planogram-lora-data3
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were fine-tuned on the bharadwajkg/planogram-sample-sd-data3 dataset. Some example images are shown below.




|
huarddk/finetuning-sentiment-model-350-samples
|
huarddk
| 2023-07-17T13:00:02Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-12T14:50:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-350-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-350-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1608
- Accuracy: 0.9619
- F1: 0.9806
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
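No inference example is provided. A minimal sketch with the `transformers` text-classification pipeline (the input sentence is a placeholder):
```python
# Minimal inference sketch (not part of the original card).
from transformers import pipeline

classifier = pipeline("text-classification", model="huarddk/finetuning-sentiment-model-350-samples")
print(classifier("The product arrived quickly and works great."))
```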
|
pygospa/distilbert-base-uncased-finetuned-squad
|
pygospa
| 2023-07-17T12:59:16Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-17T09:40:00Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: pygospa/distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# pygospa/distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.9783
- Train End Logits Accuracy: 0.7290
- Train Start Logits Accuracy: 0.6897
- Validation Loss: 1.1334
- Validation End Logits Accuracy: 0.6997
- Validation Start Logits Accuracy: 0.6622
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11064, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.5304 | 0.6023 | 0.5658 | 1.1695 | 0.6831 | 0.6468 | 0 |
| 0.9783 | 0.7290 | 0.6897 | 1.1334 | 0.6997 | 0.6622 | 1 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
roa7n/gpt2-human_enhancers_ensembl-rng
|
roa7n
| 2023-07-17T12:56:37Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T12:56:33Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
google/flan-t5-large
|
google
| 2023-07-17T12:49:05Z | 2,292,533 | 680 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"en",
"fr",
"ro",
"de",
"multilingual",
"dataset:svakulenk0/qrecc",
"dataset:taskmaster2",
"dataset:djaym7/wiki_dialog",
"dataset:deepmind/code_contests",
"dataset:lambada",
"dataset:gsm8k",
"dataset:aqua_rat",
"dataset:esnli",
"dataset:quasc",
"dataset:qed",
"arxiv:2210.11416",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-21T10:07:08Z |
---
language:
- en
- fr
- ro
- de
- multilingual
widget:
- text: "Translate to German: My name is Arthur"
example_title: "Translation"
- text: "Please answer to the following question. Who is going to be the next Ballon d'or?"
example_title: "Question Answering"
- text: "Q: Can Geoffrey Hinton have a conversation with George Washington? Give the rationale before answering."
example_title: "Logical reasoning"
- text: "Please answer the following question. What is the boiling point of Nitrogen?"
example_title: "Scientific knowledge"
- text: "Answer the following yes/no question. Can you write a whole Haiku in a single tweet?"
example_title: "Yes/no question"
- text: "Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?"
example_title: "Reasoning task"
- text: "Q: ( False or not False or False ) is? A: Let's think step by step"
example_title: "Boolean Expressions"
- text: "The square root of x is the cube root of y. What is y to the power of 2, if x = 4?"
example_title: "Math reasoning"
- text: "Premise: At my age you will probably have learnt one lesson. Hypothesis: It's not certain how many lessons you'll learn by your thirties. Does the premise entail the hypothesis?"
example_title: "Premise and hypothesis"
tags:
- text2text-generation
datasets:
- svakulenk0/qrecc
- taskmaster2
- djaym7/wiki_dialog
- deepmind/code_contests
- lambada
- gsm8k
- aqua_rat
- esnli
- quasc
- qed
license: apache-2.0
---
# Model Card for FLAN-T5 large
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/flan2_architecture.jpg"
alt="drawing" width="600"/>
# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
5. [Training Details](#training-details)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
# TL;DR
If you already know T5, FLAN-T5 is just better at everything. For the same number of parameters, these models have been fine-tuned on more than 1000 additional tasks, covering more languages as well.
As mentioned in the first few lines of the abstract:
> Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints,1 which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
**Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copied from the [T5 model card](https://huggingface.co/t5-large).
# Model Details
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English, Spanish, Japanese, Persian, Hindi, French, Chinese, Bengali, Gujarati, German, Telugu, Italian, Arabic, Polish, Tamil, Marathi, Malayalam, Oriya, Panjabi, Portuguese, Urdu, Galician, Hebrew, Korean, Catalan, Thai, Dutch, Indonesian, Vietnamese, Bulgarian, Filipino, Central Khmer, Lao, Turkish, Russian, Croatian, Swedish, Yoruba, Kurdish, Burmese, Malay, Czech, Finnish, Somali, Tagalog, Swahili, Sinhala, Kannada, Zhuang, Igbo, Xhosa, Romanian, Haitian, Estonian, Slovak, Lithuanian, Greek, Nepali, Assamese, Norwegian
- **License:** Apache 2.0
- **Related Models:** [All FLAN-T5 Checkpoints](https://huggingface.co/models?search=flan-t5)
- **Original Checkpoints:** [All Original FLAN-T5 Checkpoints](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints)
- **Resources for more information:**
- [Research paper](https://arxiv.org/pdf/2210.11416.pdf)
- [GitHub Repo](https://github.com/google-research/t5x)
- [Hugging Face FLAN-T5 Docs (Similar to T5) ](https://huggingface.co/docs/transformers/model_doc/t5)
# Usage
Below are some example scripts showing how to use the model in `transformers`:
## Using the Pytorch model
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-large")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large")
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-large")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", device_map="auto")
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU using different precisions
#### FP16
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-large")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", device_map="auto", torch_dtype=torch.float16)
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
#### INT8
<details>
<summary> Click to expand </summary>
```python
# pip install bitsandbytes accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-large")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", device_map="auto", load_in_8bit=True)
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
# Uses
## Direct Use and Downstream Use
The authors write in [the original paper's model card](https://arxiv.org/pdf/2210.11416.pdf) that:
> The primary use is research on language models, including: research on zero-shot NLP tasks and in-context few-shot learning NLP tasks, such as reasoning, and question answering; advancing fairness and safety research, and understanding limitations of current large language models
See the [research paper](https://arxiv.org/pdf/2210.11416.pdf) for further details.
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
The information in this section is copied from the model's [official model card](https://arxiv.org/pdf/2210.11416.pdf):
> Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application.
## Ethical considerations and risks
> Flan-T5 is fine-tuned on a large corpus of text data that was not filtered for explicit content or assessed for existing biases. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.
## Known Limitations
> Flan-T5 has not been tested in real world applications.
## Sensitive Use:
> Flan-T5 should not be applied for any unacceptable use cases, e.g., generation of abusive speech.
# Training Details
## Training Data
The model was trained on a mixture of tasks that includes those described in the table below (from the original paper, Figure 2):

## Training Procedure
According to the model card from the [original paper](https://arxiv.org/pdf/2210.11416.pdf):
> These models are based on pretrained T5 (Raffel et al., 2020) and fine-tuned with instructions for better zero-shot and few-shot performance. There is one fine-tuned Flan model per T5 model size.
The model has been trained on TPU v3 or TPU v4 pods, using [`t5x`](https://github.com/google-research/t5x) codebase together with [`jax`](https://github.com/google/jax).
# Evaluation
## Testing Data, Factors & Metrics
The authors evaluated the model on various tasks covering several languages (1,836 tasks in total). See the table below for some quantitative evaluation:

For full details, please check the [research paper](https://arxiv.org/pdf/2210.11416.pdf).
## Results
For full results for FLAN-T5-Large, see the [research paper](https://arxiv.org/pdf/2210.11416.pdf), Table 3.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Google Cloud TPU Pods - TPU v3 or TPU v4 | Number of chips ≥ 4.
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Citation
**BibTeX:**
```bibtex
@misc{https://doi.org/10.48550/arxiv.2210.11416,
doi = {10.48550/ARXIV.2210.11416},
url = {https://arxiv.org/abs/2210.11416},
author = {Chung, Hyung Won and Hou, Le and Longpre, Shayne and Zoph, Barret and Tay, Yi and Fedus, William and Li, Eric and Wang, Xuezhi and Dehghani, Mostafa and Brahma, Siddhartha and Webson, Albert and Gu, Shixiang Shane and Dai, Zhuyun and Suzgun, Mirac and Chen, Xinyun and Chowdhery, Aakanksha and Narang, Sharan and Mishra, Gaurav and Yu, Adams and Zhao, Vincent and Huang, Yanping and Dai, Andrew and Yu, Hongkun and Petrov, Slav and Chi, Ed H. and Dean, Jeff and Devlin, Jacob and Roberts, Adam and Zhou, Denny and Le, Quoc V. and Wei, Jason},
keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Scaling Instruction-Finetuned Language Models},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
google/flan-t5-base
|
google
| 2023-07-17T12:48:39Z | 804,134 | 836 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"en",
"fr",
"ro",
"de",
"multilingual",
"dataset:svakulenk0/qrecc",
"dataset:taskmaster2",
"dataset:djaym7/wiki_dialog",
"dataset:deepmind/code_contests",
"dataset:lambada",
"dataset:gsm8k",
"dataset:aqua_rat",
"dataset:esnli",
"dataset:quasc",
"dataset:qed",
"arxiv:2210.11416",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-21T10:02:31Z |
---
language:
- en
- fr
- ro
- de
- multilingual
tags:
- text2text-generation
widget:
- text: "Translate to German: My name is Arthur"
example_title: "Translation"
- text: "Please answer to the following question. Who is going to be the next Ballon d'or?"
example_title: "Question Answering"
- text: "Q: Can Geoffrey Hinton have a conversation with George Washington? Give the rationale before answering."
example_title: "Logical reasoning"
- text: "Please answer the following question. What is the boiling point of Nitrogen?"
example_title: "Scientific knowledge"
- text: "Answer the following yes/no question. Can you write a whole Haiku in a single tweet?"
example_title: "Yes/no question"
- text: "Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?"
example_title: "Reasoning task"
- text: "Q: ( False or not False or False ) is? A: Let's think step by step"
example_title: "Boolean Expressions"
- text: "The square root of x is the cube root of y. What is y to the power of 2, if x = 4?"
example_title: "Math reasoning"
- text: "Premise: At my age you will probably have learnt one lesson. Hypothesis: It's not certain how many lessons you'll learn by your thirties. Does the premise entail the hypothesis?"
example_title: "Premise and hypothesis"
datasets:
- svakulenk0/qrecc
- taskmaster2
- djaym7/wiki_dialog
- deepmind/code_contests
- lambada
- gsm8k
- aqua_rat
- esnli
- quasc
- qed
license: apache-2.0
---
# Model Card for FLAN-T5 base
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/flan2_architecture.jpg"
alt="drawing" width="600"/>
# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
5. [Training Details](#training-details)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
# TL;DR
If you already know T5, FLAN-T5 is just better at everything. For the same number of parameters, these models have been fine-tuned on more than 1000 additional tasks, covering more languages as well.
As mentioned in the first few lines of the abstract:
> Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints,1 which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
**Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copied from the [T5 model card](https://huggingface.co/t5-large).
# Model Details
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English, Spanish, Japanese, Persian, Hindi, French, Chinese, Bengali, Gujarati, German, Telugu, Italian, Arabic, Polish, Tamil, Marathi, Malayalam, Oriya, Panjabi, Portuguese, Urdu, Galician, Hebrew, Korean, Catalan, Thai, Dutch, Indonesian, Vietnamese, Bulgarian, Filipino, Central Khmer, Lao, Turkish, Russian, Croatian, Swedish, Yoruba, Kurdish, Burmese, Malay, Czech, Finnish, Somali, Tagalog, Swahili, Sinhala, Kannada, Zhuang, Igbo, Xhosa, Romanian, Haitian, Estonian, Slovak, Lithuanian, Greek, Nepali, Assamese, Norwegian
- **License:** Apache 2.0
- **Related Models:** [All FLAN-T5 Checkpoints](https://huggingface.co/models?search=flan-t5)
- **Original Checkpoints:** [All Original FLAN-T5 Checkpoints](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints)
- **Resources for more information:**
- [Research paper](https://arxiv.org/pdf/2210.11416.pdf)
- [GitHub Repo](https://github.com/google-research/t5x)
- [Hugging Face FLAN-T5 Docs (Similar to T5) ](https://huggingface.co/docs/transformers/model_doc/t5)
# Usage
Below are some example scripts showing how to use the model in `transformers`:
## Using the Pytorch model
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-base")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-base")
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-base")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-base", device_map="auto")
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU using different precisions
#### FP16
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-base")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-base", device_map="auto", torch_dtype=torch.float16)
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
#### INT8
<details>
<summary> Click to expand </summary>
```python
# pip install bitsandbytes accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-base")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-base", device_map="auto", load_in_8bit=True)
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
# Uses
## Direct Use and Downstream Use
The authors write in [the original paper's model card](https://arxiv.org/pdf/2210.11416.pdf) that:
> The primary use is research on language models, including: research on zero-shot NLP tasks and in-context few-shot learning NLP tasks, such as reasoning, and question answering; advancing fairness and safety research, and understanding limitations of current large language models
See the [research paper](https://arxiv.org/pdf/2210.11416.pdf) for further details.
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
The information in this section is copied from the model's [official model card](https://arxiv.org/pdf/2210.11416.pdf):
> Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application.
## Ethical considerations and risks
> Flan-T5 is fine-tuned on a large corpus of text data that was not filtered for explicit content or assessed for existing biases. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.
## Known Limitations
> Flan-T5 has not been tested in real world applications.
## Sensitive Use:
> Flan-T5 should not be applied for any unacceptable use cases, e.g., generation of abusive speech.
# Training Details
## Training Data
The model was trained on a mixture of tasks that includes those described in the table below (from the original paper, Figure 2):

## Training Procedure
According to the model card from the [original paper](https://arxiv.org/pdf/2210.11416.pdf):
> These models are based on pretrained T5 (Raffel et al., 2020) and fine-tuned with instructions for better zero-shot and few-shot performance. There is one fine-tuned Flan model per T5 model size.
The model has been trained on TPU v3 or TPU v4 pods, using [`t5x`](https://github.com/google-research/t5x) codebase together with [`jax`](https://github.com/google/jax).
# Evaluation
## Testing Data, Factors & Metrics
The authors evaluated the model on various tasks covering several languages (1,836 tasks in total). See the table below for some quantitative evaluation:

For full details, please check the [research paper](https://arxiv.org/pdf/2210.11416.pdf).
## Results
For full results for FLAN-T5-Base, see the [research paper](https://arxiv.org/pdf/2210.11416.pdf), Table 3.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Google Cloud TPU Pods - TPU v3 or TPU v4 | Number of chips ≥ 4.
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Citation
**BibTeX:**
```bibtex
@misc{https://doi.org/10.48550/arxiv.2210.11416,
doi = {10.48550/ARXIV.2210.11416},
url = {https://arxiv.org/abs/2210.11416},
author = {Chung, Hyung Won and Hou, Le and Longpre, Shayne and Zoph, Barret and Tay, Yi and Fedus, William and Li, Eric and Wang, Xuezhi and Dehghani, Mostafa and Brahma, Siddhartha and Webson, Albert and Gu, Shixiang Shane and Dai, Zhuyun and Suzgun, Mirac and Chen, Xinyun and Chowdhery, Aakanksha and Narang, Sharan and Mishra, Gaurav and Yu, Adams and Zhao, Vincent and Huang, Yanping and Dai, Andrew and Yu, Hongkun and Petrov, Slav and Chi, Ed H. and Dean, Jeff and Devlin, Jacob and Roberts, Adam and Zhou, Denny and Le, Quoc V. and Wei, Jason},
keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Scaling Instruction-Finetuned Language Models},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
## Model Recycling
[Evaluation on 36 datasets](https://ibm.github.io/model-recycling/model_gain_chart?avg=9.16&mnli_lp=nan&20_newsgroup=3.34&ag_news=1.49&amazon_reviews_multi=0.21&anli=13.91&boolq=16.75&cb=23.12&cola=9.97&copa=34.50&dbpedia=6.90&esnli=5.37&financial_phrasebank=18.66&imdb=0.33&isear=1.37&mnli=11.74&mrpc=16.63&multirc=6.24&poem_sentiment=14.62&qnli=3.41&qqp=6.18&rotten_tomatoes=2.98&rte=24.26&sst2=0.67&sst_5bins=5.44&stsb=20.68&trec_coarse=3.95&trec_fine=10.73&tweet_ev_emoji=13.39&tweet_ev_emotion=4.62&tweet_ev_hate=3.46&tweet_ev_irony=9.04&tweet_ev_offensive=1.69&tweet_ev_sentiment=0.75&wic=14.22&wnli=9.44&wsc=5.53&yahoo_answers=4.14&model_name=google%2Fflan-t5-base&base_name=google%2Ft5-v1_1-base) using google/flan-t5-base as a base model yields average score of 77.98 in comparison to 68.82 by google/t5-v1_1-base.
The model is ranked 1st among all tested models for the google/t5-v1_1-base architecture as of 06/02/2023.
Results:
| 20_newsgroup | ag_news | amazon_reviews_multi | anli | boolq | cb | cola | copa | dbpedia | esnli | financial_phrasebank | imdb | isear | mnli | mrpc | multirc | poem_sentiment | qnli | qqp | rotten_tomatoes | rte | sst2 | sst_5bins | stsb | trec_coarse | trec_fine | tweet_ev_emoji | tweet_ev_emotion | tweet_ev_hate | tweet_ev_irony | tweet_ev_offensive | tweet_ev_sentiment | wic | wnli | wsc | yahoo_answers |
|---------------:|----------:|-----------------------:|--------:|--------:|--------:|--------:|-------:|----------:|--------:|-----------------------:|-------:|--------:|--------:|--------:|----------:|-----------------:|--------:|--------:|------------------:|--------:|--------:|------------:|--------:|--------------:|------------:|-----------------:|-------------------:|----------------:|-----------------:|---------------------:|---------------------:|--------:|-------:|--------:|----------------:|
| 86.2188 | 89.6667 | 67.12 | 51.9688 | 82.3242 | 78.5714 | 80.1534 | 75 | 77.6667 | 90.9507 | 85.4 | 93.324 | 72.425 | 87.2457 | 89.4608 | 62.3762 | 82.6923 | 92.7878 | 89.7724 | 89.0244 | 84.8375 | 94.3807 | 57.2851 | 89.4759 | 97.2 | 92.8 | 46.848 | 80.2252 | 54.9832 | 76.6582 | 84.3023 | 70.6366 | 70.0627 | 56.338 | 53.8462 | 73.4 |
For more information, see: [Model Recycling](https://ibm.github.io/model-recycling/)
|
KoRiF/whisper-tiny-en
|
KoRiF
| 2023-07-17T12:26:37Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-17T11:52:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-en
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train[450:]
args: en-US
metrics:
- name: Wer
type: wer
value: 0.3252656434474616
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-en
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8008
- Wer Ortho: 0.3523
- Wer: 0.3253
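
The card does not include an inference snippet, so here is a minimal sketch using the `transformers` ASR pipeline; the audio file path is a placeholder, not part of the original card.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint and transcribe a local audio file ("sample.wav" is a placeholder).
asr = pipeline("automatic-speech-recognition", model="KoRiF/whisper-tiny-en")
print(asr("sample.wav")["text"])
```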
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 1.593 | 1.79 | 50 | 1.0054 | 0.5003 | 0.4185 |
| 0.3982 | 3.57 | 100 | 0.7250 | 0.4121 | 0.3554 |
| 0.2075 | 5.36 | 150 | 0.6898 | 0.4226 | 0.3518 |
| 0.0957 | 7.14 | 200 | 0.6909 | 0.4028 | 0.3371 |
| 0.0412 | 8.93 | 250 | 0.7296 | 0.3695 | 0.3300 |
| 0.0186 | 10.71 | 300 | 0.7522 | 0.3627 | 0.3270 |
| 0.008 | 12.5 | 350 | 0.7703 | 0.3584 | 0.3288 |
| 0.0049 | 14.29 | 400 | 0.7756 | 0.3553 | 0.3294 |
| 0.0032 | 16.07 | 450 | 0.7889 | 0.3516 | 0.3235 |
| 0.0023 | 17.86 | 500 | 0.8008 | 0.3523 | 0.3253 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Oslaw/ppo-LunarLander-v2
|
Oslaw
| 2023-07-17T12:21:48Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-17T12:21:26Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 260.52 +/- 15.66
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename is assumed from the repo name, so check the repository files if it differs:
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the trained checkpoint from the Hub and load it with SB3.
checkpoint = load_from_hub(repo_id="Oslaw/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
ShekDass/donut-base-sroie-cord
|
ShekDass
| 2023-07-17T12:16:05Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-07-17T12:11:18Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie-cord
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie-cord
This model is a fine-tuned version of [naver-clova-ix/donut-base-finetuned-cord-v2](https://huggingface.co/naver-clova-ix/donut-base-finetuned-cord-v2) on the imagefolder dataset.
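Since the card gives no usage snippet, here is a hedged inference sketch with the standard Donut API; the `<s_cord-v2>` task prompt is an assumption inherited from the CORD-v2 base checkpoint, and the image path is a placeholder.

```python
import re
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("ShekDass/donut-base-sroie-cord")
model = VisionEncoderDecoderModel.from_pretrained("ShekDass/donut-base-sroie-cord")

image = Image.open("receipt.png").convert("RGB")  # placeholder input image
pixel_values = processor(image, return_tensors="pt").pixel_values
task_prompt = "<s_cord-v2>"  # assumed start token, inherited from the CORD-v2 base model
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
sequence = processor.batch_decode(outputs)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()  # drop the task start token
print(processor.token2json(sequence))  # structured fields as a Python dict
```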
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
moritzwilke/distilbert-base-uncased-finetuned-squad
|
moritzwilke
| 2023-07-17T11:50:41Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-17T09:13:23Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: moritzwilke/distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# moritzwilke/distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6756
- Train End Logits Accuracy: 0.5691
- Train Start Logits Accuracy: 0.5327
- Validation Loss: 1.2714
- Validation End Logits Accuracy: 0.6582
- Validation Start Logits Accuracy: 0.6184
- Epoch: 0
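
The card does not show how to query the model, so below is a hedged sketch with the question-answering pipeline; `framework="tf"` is passed because the checkpoint is a TensorFlow model, and the question/context pair is illustrative only.

```python
from transformers import pipeline

# Extractive QA with the fine-tuned TF checkpoint.
qa = pipeline(
    "question-answering",
    model="moritzwilke/distilbert-base-uncased-finetuned-squad",
    framework="tf",
)
result = qa(
    question="What is DistilBERT a distilled version of?",
    context="DistilBERT is a smaller, faster distilled version of BERT.",
)
print(result["answer"], result["score"])
```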
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2766, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.6756 | 0.5691 | 0.5327 | 1.2714 | 0.6582 | 0.6184 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
weekcircle/wav2vec2-large-mms-1b-korean-colab_v3
|
weekcircle
| 2023-07-17T11:49:30Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:weekcircle/wav2vec2-large-mms-1b-korean-colab_v2",
"base_model:finetune:weekcircle/wav2vec2-large-mms-1b-korean-colab_v2",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-17T09:08:44Z |
---
license: cc-by-nc-4.0
base_model: weekcircle/wav2vec2-large-mms-1b-korean-colab_v2
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-mms-1b-korean-colab_v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-mms-1b-korean-colab_v3
This model is a fine-tuned version of [weekcircle/wav2vec2-large-mms-1b-korean-colab_v2](https://huggingface.co/weekcircle/wav2vec2-large-mms-1b-korean-colab_v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1476
- Wer: 0.3443
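
As a usage illustration (not part of the original card), the fine-tuned checkpoint can be run through the ASR pipeline; the Korean audio file path below is a placeholder.

```python
from transformers import pipeline

# Transcribe a local Korean audio clip ("korean_sample.wav" is a placeholder).
asr = pipeline("automatic-speech-recognition", model="weekcircle/wav2vec2-large-mms-1b-korean-colab_v3")
print(asr("korean_sample.wav")["text"])
```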
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2374 | 0.18 | 100 | 0.1654 | 0.3761 |
| 0.2231 | 0.36 | 200 | 0.1648 | 0.3752 |
| 0.2263 | 0.53 | 300 | 0.1647 | 0.3859 |
| 0.2197 | 0.71 | 400 | 0.1618 | 0.3628 |
| 0.223 | 0.89 | 500 | 0.1642 | 0.3792 |
| 0.2143 | 1.07 | 600 | 0.1585 | 0.3684 |
| 0.2082 | 1.24 | 700 | 0.1589 | 0.3711 |
| 0.2166 | 1.42 | 800 | 0.1567 | 0.3647 |
| 0.2087 | 1.6 | 900 | 0.1561 | 0.3567 |
| 0.2109 | 1.78 | 1000 | 0.1551 | 0.3570 |
| 0.2036 | 1.95 | 1100 | 0.1553 | 0.3644 |
| 0.1926 | 2.13 | 1200 | 0.1545 | 0.3579 |
| 0.1972 | 2.31 | 1300 | 0.1539 | 0.3508 |
| 0.2086 | 2.49 | 1400 | 0.1526 | 0.3523 |
| 0.2179 | 2.66 | 1500 | 0.1524 | 0.3502 |
| 0.2036 | 2.84 | 1600 | 0.1515 | 0.3502 |
| 0.2196 | 3.02 | 1700 | 0.1510 | 0.3459 |
| 0.2149 | 3.2 | 1800 | 0.1498 | 0.3462 |
| 0.2111 | 3.37 | 1900 | 0.1485 | 0.3477 |
| 0.2043 | 3.55 | 2000 | 0.1481 | 0.3443 |
| 0.2043 | 3.73 | 2100 | 0.1475 | 0.3480 |
| 0.2018 | 3.91 | 2200 | 0.1476 | 0.3443 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
DaniloGMatto/distilbert-base-uncased-finetuned-cola
|
DaniloGMatto
| 2023-07-17T11:43:06Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-17T11:32:33Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: DaniloGMatto/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# DaniloGMatto/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3235
- Validation Loss: 0.4519
- Train Matthews Correlation: 0.5089
- Epoch: 1
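
A hedged usage sketch follows; the checkpoint is a TensorFlow model, its label names are not documented in this card, and the example sentence is a typical CoLA-style acceptability probe.

```python
from transformers import pipeline

# Acceptability classification with the fine-tuned TF checkpoint.
classifier = pipeline(
    "text-classification",
    model="DaniloGMatto/distilbert-base-uncased-finetuned-cola",
    framework="tf",
)
print(classifier("The boys is playing football."))  # grammatically unacceptable example
```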
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5136 | 0.4726 | 0.4337 | 0 |
| 0.3235 | 0.4519 | 0.5089 | 1 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
samarthum/model
|
samarthum
| 2023-07-17T11:40:49Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-17T10:57:31Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - samarthum/model
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
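A minimal loading sketch (not from the original card), assuming the repo stores the LoRA weights in a format that `load_lora_weights` can read directly; the prompt reuses the instance prompt from the metadata above.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model and attach the LoRA adaptation weights from this repo.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("samarthum/model")

image = pipe("a photo of sks dog in a bucket", num_inference_steps=25).images[0]
image.save("sks_dog.png")
```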
|
nadle/xlm-roberta-base-finetuned-panx-de
|
nadle
| 2023-07-17T11:40:06Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-17T11:27:00Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.7478932584269663
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2258
- F1: 0.7479
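
As a usage illustration (not included in the autogenerated card), the model can be called through the token-classification pipeline; the German example sentence is illustrative only.

```python
from transformers import pipeline

# Named-entity recognition on German text, grouping sub-word predictions into entity spans.
ner = pipeline(
    "token-classification",
    model="nadle/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Zürich."))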
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4393 | 1.0 | 125 | 0.2258 | 0.7479 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ignatius/igbo_model
|
ignatius
| 2023-07-17T11:37:03Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"token-classification",
"ig",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-26T10:16:27Z |
---
license: cc-by-nc-4.0
language:
- ig
---
|
ShekDass/donut-base-sroie
|
ShekDass
| 2023-07-17T11:36:10Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-07-16T17:10:51Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Arindamdas70/llora7B-finetuned
|
Arindamdas70
| 2023-07-17T11:36:03Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T11:35:59Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
NasimB/cbt-rarity-all-end-p8k-guten-rarity-all-mixed
|
NasimB
| 2023-07-17T11:13:04Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-17T09:15:48Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: cbt-rarity-all-end-p8k-guten-rarity-all-mixed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cbt-rarity-all-end-p8k-guten-rarity-all-mixed
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3161
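
A minimal generation sketch (not part of the original card); the prompt and sampling settings are illustrative only.

```python
from transformers import pipeline

# Sample a short continuation from the fine-tuned GPT-2 checkpoint.
generator = pipeline("text-generation", model="NasimB/cbt-rarity-all-end-p8k-guten-rarity-all-mixed")
print(generator("Once upon a time", max_new_tokens=50, do_sample=True, top_p=0.95)[0]["generated_text"])
```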
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6958 | 0.29 | 500 | 5.6331 |
| 5.3364 | 0.58 | 1000 | 5.2041 |
| 4.9968 | 0.88 | 1500 | 4.9505 |
| 4.7186 | 1.17 | 2000 | 4.8044 |
| 4.5561 | 1.46 | 2500 | 4.6841 |
| 4.4622 | 1.75 | 3000 | 4.5747 |
| 4.3263 | 2.04 | 3500 | 4.4949 |
| 4.1311 | 2.33 | 4000 | 4.4481 |
| 4.101 | 2.63 | 4500 | 4.3896 |
| 4.0645 | 2.92 | 5000 | 4.3353 |
| 3.871 | 3.21 | 5500 | 4.3306 |
| 3.8006 | 3.5 | 6000 | 4.3048 |
| 3.7879 | 3.79 | 6500 | 4.2723 |
| 3.6977 | 4.08 | 7000 | 4.2640 |
| 3.5167 | 4.38 | 7500 | 4.2617 |
| 3.5203 | 4.67 | 8000 | 4.2466 |
| 3.5051 | 4.96 | 8500 | 4.2353 |
| 3.3506 | 5.25 | 9000 | 4.2461 |
| 3.3237 | 5.54 | 9500 | 4.2458 |
| 3.3231 | 5.83 | 10000 | 4.2450 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
chayanbhansali/clock-tower
|
chayanbhansali
| 2023-07-17T11:07:56Z | 10 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-17T11:03:06Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### clock_tower Dreambooth model trained by chayanbhansali with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
naltatis/distilbert-base-uncased-finetuned-squad
|
naltatis
| 2023-07-17T10:59:14Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-17T09:13:27Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: naltatis/distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# naltatis/distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0002
- Train End Logits Accuracy: 0.7231
- Train Start Logits Accuracy: 0.6883
- Validation Loss: 1.1339
- Validation End Logits Accuracy: 0.6926
- Validation Start Logits Accuracy: 0.6580
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11064, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.5428 | 0.5983 | 0.5604 | 1.1748 | 0.6817 | 0.6417 | 0 |
| 1.0002 | 0.7231 | 0.6883 | 1.1339 | 0.6926 | 0.6580 | 1 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.13.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ajaycompete143/ppo-Huggy
|
ajaycompete143
| 2023-07-17T10:48:41Z | 25 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-17T10:48:36Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ajaycompete143/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
OneNinetySeven/ppo-Huggy
|
OneNinetySeven
| 2023-07-17T10:23:18Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-17T10:23:14Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: OneNinetySeven/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
avichr/hebEMO_anticipation
|
avichr
| 2023-07-17T10:12:57Z | 193 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
# HebEMO - Emotion Recognition Model for Modern Hebrew
<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">
HebEMO is a tool for detecting polarity and extracting emotions from modern Hebrew user-generated content (UGC). It was trained on a unique Covid-19-related dataset that we collected and annotated.
HebEMO yielded a high performance of weighted-average F1-score = 0.96 for polarity classification.
Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results surpass the best previously reported performance, even when compared to English-language models.
## Emotion UGC Data Description
Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites between January and August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences.
~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise and trust.
The percentage of sentences in which each emotion appeared is found in the table below.
| | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment |
|------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------|
| **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 |
## Performance
### Emotion Recognition
| emotion | f1-score | precision | recall |
|-------------|----------|-----------|----------|
| anger | 0.96 | 0.99 | 0.93 |
| disgust | 0.97 | 0.98 | 0.96 |
|anticipation | 0.82 | 0.80 | 0.87 |
| fear | 0.79 | 0.88 | 0.72 |
| joy | 0.90 | 0.97 | 0.84 |
| sadness | 0.90 | 0.86 | 0.94 |
| surprise | 0.40 | 0.44 | 0.37 |
| trust | 0.83 | 0.86 | 0.80 |
*The above metrics are for the positive class (i.e., the emotion is reflected in the text).*
### Sentiment (Polarity) Analysis
| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| neutral | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |
*The sentiment (polarity) analysis model is also available on AWS! For more information, visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda).*
## How to use
### Emotion Recognition Model
An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing)
```
# !pip install pyplutchik==0.0.7
# !pip install transformers==4.14.1
!git clone https://github.com/avichaychriqui/HeBERT.git
from HeBERT.src.HebEMO import *
HebEMO_model = HebEMO()
HebEMO_model.hebemo(input_path = 'data/text_example.txt')
# return analyzed pandas.DataFrame
hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True)
```
<img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" />
### For the sentiment classification model (polarity only):
```
from transformers import AutoTokenizer, AutoModel, pipeline

tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis")  # same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")

# how to use?
sentiment_analysis = pipeline(
    "sentiment-analysis",
    model="avichr/heBERT_sentiment_analysis",
    tokenizer="avichr/heBERT_sentiment_analysis",
    return_all_scores=True
)

sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
>>> [[{'label': 'neutral', 'score': 0.9978172183036804},
>>> {'label': 'positive', 'score': 0.0014792329166084528},
>>> {'label': 'negative', 'score': 0.0007035882445052266}]]

sentiment_analysis('קפה זה טעים')
>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},
>>> {'label': 'possitive', 'score': 0.9994067549705505},
>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]

sentiment_analysis('אני לא אוהב את העולם')
>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05},
>>> {'label': 'possitive', 'score': 8.876807987689972e-05},
>>> {'label': 'negetive', 'score': 0.9998190999031067}]]
```
## Contact us
[Avichay Chriqui](mailto:[email protected]) <br>
[Inbal yahav](mailto:[email protected]) <br>
The Coller Semitic Languages AI Lab <br>
Thank you, תודה, شكرا <br>
## If you used this model, please cite us as:
Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
```
@article{chriqui2021hebert,
title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
author={Chriqui, Avihay and Yahav, Inbal},
journal={INFORMS Journal on Data Science},
year={2022}
}
```
|
avichr/hebEMO_disgust
|
avichr
| 2023-07-17T10:12:48Z | 184 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
# HebEMO - Emotion Recognition Model for Modern Hebrew
<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">
HebEMO is a tool for detecting polarity and extracting emotions from modern Hebrew user-generated content (UGC). It was trained on a unique Covid-19-related dataset that we collected and annotated.
HebEMO yielded a high performance of weighted-average F1-score = 0.96 for polarity classification.
Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results surpass the best previously reported performance, even when compared to English-language models.
## Emotion UGC Data Description
Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites between January and August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences.
~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise and trust.
The percentage of sentences in which each emotion appeared is found in the table below.
| | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment |
|------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------|
| **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 |
## Performance
### Emotion Recognition
| emotion | f1-score | precision | recall |
|-------------|----------|-----------|----------|
| anger | 0.96 | 0.99 | 0.93 |
| disgust | 0.97 | 0.98 | 0.96 |
|anticipation | 0.82 | 0.80 | 0.87 |
| fear | 0.79 | 0.88 | 0.72 |
| joy | 0.90 | 0.97 | 0.84 |
| sadness | 0.90 | 0.86 | 0.94 |
| surprise | 0.40 | 0.44 | 0.37 |
| trust | 0.83 | 0.86 | 0.80 |
*The above metrics are for the positive class (i.e., the emotion is reflected in the text).*
### Sentiment (Polarity) Analysis
| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| neutral | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |
*The sentiment (polarity) analysis model is also available on AWS! For more information, visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda).*
## How to use
### Emotion Recognition Model
An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing)
```
# !pip install pyplutchik==0.0.7
# !pip install transformers==4.14.1
!git clone https://github.com/avichaychriqui/HeBERT.git
from HeBERT.src.HebEMO import *
HebEMO_model = HebEMO()
HebEMO_model.hebemo(input_path = 'data/text_example.txt')
# return analyzed pandas.DataFrame
hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True)
```
<img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" />
### For the sentiment classification model (polarity only):
```
from transformers import AutoTokenizer, AutoModel, pipeline

tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis")  # same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")

# how to use?
sentiment_analysis = pipeline(
    "sentiment-analysis",
    model="avichr/heBERT_sentiment_analysis",
    tokenizer="avichr/heBERT_sentiment_analysis",
    return_all_scores=True
)

sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
>>> [[{'label': 'neutral', 'score': 0.9978172183036804},
>>> {'label': 'positive', 'score': 0.0014792329166084528},
>>> {'label': 'negative', 'score': 0.0007035882445052266}]]

sentiment_analysis('קפה זה טעים')
>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},
>>> {'label': 'possitive', 'score': 0.9994067549705505},
>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]

sentiment_analysis('אני לא אוהב את העולם')
>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05},
>>> {'label': 'possitive', 'score': 8.876807987689972e-05},
>>> {'label': 'negetive', 'score': 0.9998190999031067}]]
```
## Contact us
[Avichay Chriqui](mailto:[email protected]) <br>
[Inbal yahav](mailto:[email protected]) <br>
The Coller Semitic Languages AI Lab <br>
Thank you, תודה, شكرا <br>
## If you used this model, please cite us as:
Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
```
@article{chriqui2021hebert,
title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
author={Chriqui, Avihay and Yahav, Inbal},
journal={INFORMS Journal on Data Science},
year={2022}
}
```
|
avichr/hebEMO_surprise
|
avichr
| 2023-07-17T10:12:14Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
# HebEMO - Emotion Recognition Model for Modern Hebrew
<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">
HebEMO is a tool for detecting polarity and extracting emotions from modern Hebrew user-generated content (UGC). It was trained on a unique Covid-19-related dataset that we collected and annotated.
HebEMO yielded a high performance of weighted-average F1-score = 0.96 for polarity classification.
Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results surpass the best previously reported performance, even when compared to English-language models.
## Emotion UGC Data Description
Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites between January and August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences.
~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise and trust.
The percentage of sentences in which each emotion appeared is found in the table below.
| | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment |
|------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------|
| **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 |
## Performance
### Emotion Recognition
| emotion | f1-score | precision | recall |
|-------------|----------|-----------|----------|
| anger | 0.96 | 0.99 | 0.93 |
| disgust | 0.97 | 0.98 | 0.96 |
|anticipation | 0.82 | 0.80 | 0.87 |
| fear | 0.79 | 0.88 | 0.72 |
| joy | 0.90 | 0.97 | 0.84 |
| sadness | 0.90 | 0.86 | 0.94 |
| surprise | 0.40 | 0.44 | 0.37 |
| trust | 0.83 | 0.86 | 0.80 |
*The above metrics are for the positive class (i.e., the emotion is reflected in the text).*
### Sentiment (Polarity) Analysis
| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| neutral | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |
*The sentiment (polarity) analysis model is also available on AWS! For more information, visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda).*
## How to use
### Emotion Recognition Model
An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing)
```
# !pip install pyplutchik==0.0.7
# !pip install transformers==4.14.1
!git clone https://github.com/avichaychriqui/HeBERT.git
from HeBERT.src.HebEMO import *
HebEMO_model = HebEMO()
HebEMO_model.hebemo(input_path = 'data/text_example.txt')
# return analyzed pandas.DataFrame
hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True)
```
<img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" />
### For the sentiment classification model (polarity only):
```
from transformers import AutoTokenizer, AutoModel, pipeline

tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis")  # same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")

# how to use?
sentiment_analysis = pipeline(
    "sentiment-analysis",
    model="avichr/heBERT_sentiment_analysis",
    tokenizer="avichr/heBERT_sentiment_analysis",
    return_all_scores=True
)

sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
>>> [[{'label': 'neutral', 'score': 0.9978172183036804},
>>> {'label': 'positive', 'score': 0.0014792329166084528},
>>> {'label': 'negative', 'score': 0.0007035882445052266}]]

sentiment_analysis('קפה זה טעים')
>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},
>>> {'label': 'possitive', 'score': 0.9994067549705505},
>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]

sentiment_analysis('אני לא אוהב את העולם')
>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05},
>>> {'label': 'possitive', 'score': 8.876807987689972e-05},
>>> {'label': 'negetive', 'score': 0.9998190999031067}]]
```
## Contact us
[Avichay Chriqui](mailto:[email protected]) <br>
[Inbal yahav](mailto:[email protected]) <br>
The Coller Semitic Languages AI Lab <br>
Thank you, תודה, شكرا <br>
## If you used this model, please cite us as:
Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
```
@article{chriqui2021hebert,
title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
author={Chriqui, Avihay and Yahav, Inbal},
journal={INFORMS Journal on Data Science},
year={2022}
}
```
|
roa7n/gpt2-human_nontata_promoters
|
roa7n
| 2023-07-17T10:01:35Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T10:01:33Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
geolearner/fill-mask-camembert-base
|
geolearner
| 2023-07-17T09:53:32Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"camembert",
"fill-mask",
"en",
"dataset:SetFit/mrpc",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-17T02:45:50Z |
---
license: mit
datasets:
- SetFit/mrpc
language:
- en
metrics:
- f1
pipeline_tag: fill-mask
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
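
Since this section is left empty, here is a hedged starter sketch using the fill-mask pipeline; a French example sentence is used because the base model is CamemBERT, even though the card metadata lists English, and the prediction loop is illustrative only.

```python
from transformers import pipeline

# CamemBERT-style models use "<mask>" as the mask token.
fill_mask = pipeline("fill-mask", model="geolearner/fill-mask-camembert-base")
for prediction in fill_mask("Paris est la <mask> de la France."):
    print(prediction["token_str"], round(prediction["score"], 3))
```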
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
msrtoto/Coral_TB_2
|
msrtoto
| 2023-07-17T09:50:12Z | 237 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-17T09:50:06Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Coral_TB_2
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9777777791023254
---
# Coral_TB_2
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
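
As a usage illustration (not part of the autogenerated card), the classifier can be called through the image-classification pipeline; the image path below is a placeholder.

```python
from transformers import pipeline

# Classify a local image into the animal categories listed below ("animal.jpg" is a placeholder).
classifier = pipeline("image-classification", model="msrtoto/Coral_TB_2")
print(classifier("animal.jpg"))
```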
## Example Images
#### bear

#### beaver

#### bird

#### cat

#### dog

#### human

#### lynx

#### wolf

|
bl4dylion/faster-whisper-small-belarusian
|
bl4dylion
| 2023-07-17T09:41:14Z | 18 | 2 |
transformers
|
[
"transformers",
"audio",
"automatic-speech-recognition",
"be",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-14T09:43:57Z |
---
license: apache-2.0
tags:
- audio
- automatic-speech-recognition
language:
- be
pipeline_tag: automatic-speech-recognition
---
# Whisper small model for CTranslate2
This repository contains the conversion of [ales/whisper-small-belarusian](https://huggingface.co/ales/whisper-small-belarusian) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/guillaumekln/faster-whisper).
## Install faster-whisper
```bash
pip install git+https://github.com/guillaumekln/faster-whisper.git
```
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("bl4dylion/faster-whisper-small-belarusian")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model ales/whisper-small-belarusian --output_dir faster-whisper-small-belarusian \
--copy_files tokenizer_config.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
## More information
**For more information about the original model, see its [model card](https://huggingface.co/ales/whisper-small-belarusian).**
|
TheUpperCaseGuy/Guy-Urdu-TTS
|
TheUpperCaseGuy
| 2023-07-17T09:34:18Z | 203 | 0 |
transformers
|
[
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-17T09:23:10Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Guy-Urdu-TTS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Guy-Urdu-TTS
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the None dataset.
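The card gives no inference snippet, so below is a hedged sketch using the standard SpeechT5 API; the zero speaker embedding is a placeholder (a real embedding matching the fine-tuning speaker should be used), the `microsoft/speecht5_hifigan` vocoder is an assumed pairing, and the Urdu sentence is illustrative only.

```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("TheUpperCaseGuy/Guy-Urdu-TTS")
model = SpeechT5ForTextToSpeech.from_pretrained("TheUpperCaseGuy/Guy-Urdu-TTS")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="آپ کیسے ہیں؟", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder; replace with an embedding for the training speaker
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```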
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 3000
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
FrancescoBonzi/whisper-tiny-en
|
FrancescoBonzi
| 2023-07-17T09:34:18Z | 85 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-13T16:11:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-en
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train[450:]
args: en-US
metrics:
- name: Wer
type: wer
value: 0.34238488783943327
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-en
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6816
- Wer Ortho: 34.3615
- Wer: 0.3424
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0005 | 1.79 | 50 | 0.6816 | 34.3615 | 0.3424 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Aditya78b/my-awesome-model-new
|
Aditya78b
| 2023-07-17T09:28:38Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T09:27:56Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
peterdamn/distil-ast-audioset-finetuned-gtzan-finetuned-gtzan
|
peterdamn
| 2023-07-17T09:25:45Z | 166 | 0 |
transformers
|
[
"transformers",
"pytorch",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-17T07:43:01Z |
---
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distil-ast-audioset-finetuned-gtzan-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distil-ast-audioset-finetuned-gtzan-finetuned-gtzan
This model is a fine-tuned version of [peterdamn/distil-ast-audioset-finetuned-gtzan](https://huggingface.co/peterdamn/distil-ast-audioset-finetuned-gtzan) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8269
- Accuracy: 0.84
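
A minimal usage sketch (not from the original card); the audio clip path is a placeholder.

```python
from transformers import pipeline

# Predict the music genre of a local audio clip ("song.wav" is a placeholder).
classifier = pipeline("audio-classification", model="peterdamn/distil-ast-audioset-finetuned-gtzan-finetuned-gtzan")
print(classifier("song.wav"))
```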
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2642 | 1.0 | 225 | 1.0594 | 0.8 |
| 0.1655 | 2.0 | 450 | 0.9670 | 0.84 |
| 0.0009 | 3.0 | 675 | 0.9774 | 0.79 |
| 0.0093 | 4.0 | 900 | 0.9330 | 0.83 |
| 0.0 | 5.0 | 1125 | 0.8269 | 0.84 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|
SotirisLegkas/Socratic-GODEL
|
SotirisLegkas
| 2023-07-17T09:22:48Z | 96 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-14T15:48:21Z |
Instruction: given a context, respond using Socratic dialogue principles by asking questions, considering various viewpoints, and promoting critical thinking.
|
SotirisLegkas/Socratic-GODEL-2
|
SotirisLegkas
| 2023-07-17T09:21:47Z | 96 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-14T17:16:26Z |
Instruction: given a context, respond using Socratic dialogue principles by asking questions, considering various viewpoints, and promoting critical thinking.
|
Uminosachi/MobileSAM
|
Uminosachi
| 2023-07-17T09:20:51Z | 0 | 2 | null |
[
"arxiv:2306.14289",
"license:apache-2.0",
"region:us"
] | null | 2023-07-17T09:01:14Z |
---
license: apache-2.0
---
TinyViT-based Segment Anything Model from [MobileSAM](https://github.com/ChaoningZhang/MobileSAM).
**Reference**
Zhang, C., Han, D., Qiao, Y., Kim, J. U., Bae, S-H., Lee, S., & Hong, C. S. (2023). [Faster Segment Anything: Towards Lightweight SAM for Mobile Applications](https://arxiv.org/abs/2306.14289). arXiv preprint arXiv:2306.14289.
|
akdeniz27/q-FrozenLake-v1-4x4-noSlippery
|
akdeniz27
| 2023-07-17T09:20:29Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-17T09:20:25Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="akdeniz27/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
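A possible continuation of the snippet above, rolling the loaded policy out greedily; it assumes the pickled dict exposes the table under a `qtable` key (as in the course notebooks) and a gymnasium-style `reset`/`step` API:
```python
import numpy as np

qtable = model["qtable"]  # assumed key name from the course notebooks
env = gym.make(model["env_id"], is_slippery=False)

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action for the current state
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```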
|
hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-bs-v3
|
hafidikhsan
| 2023-07-17T09:17:39Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-17T09:15:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: wav2vec2-large-xlsr-53-english-pronunciation-evaluation-bs-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-english-pronunciation-evaluation-bs-v3
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9769
- Accuracy: 0.816
- F1: 0.8151
- Precision: 0.8156
- Recall: 0.816
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.9796 | 1.0 | 1000 | 0.8390 | 0.59 | 0.5710 | 0.5698 | 0.59 |
| 0.851 | 2.0 | 2000 | 0.7105 | 0.676 | 0.6707 | 0.6717 | 0.676 |
| 1.0458 | 3.0 | 3000 | 0.9331 | 0.718 | 0.7141 | 0.7334 | 0.718 |
| 0.3999 | 4.0 | 4000 | 0.9352 | 0.79 | 0.7873 | 0.7905 | 0.79 |
| 0.0843 | 5.0 | 5000 | 1.0333 | 0.812 | 0.8130 | 0.8148 | 0.812 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
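A minimal inference sketch for this classifier, assuming a 16 kHz mono input file; the label names come from `model.config.id2label`.
```python
import librosa
import torch
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

ckpt = "hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-bs-v3"
feature_extractor = AutoFeatureExtractor.from_pretrained(ckpt)
model = AutoModelForAudioClassification.from_pretrained(ckpt)

speech, _ = librosa.load("utterance.wav", sr=16_000)  # hypothetical input file
inputs = feature_extractor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```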
|
Masterjp123/AnythingV5Nijimix
|
Masterjp123
| 2023-07-17T09:08:11Z | 8 | 0 |
diffusers
|
[
"diffusers",
"art",
"en",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-17T07:18:24Z |
---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
tags:
- art
---
A mix of Anything-v5 with 4 Niji Journey-style LoRAs that tries to recreate a Niji Journey-like style.
**WARNING: I HAVE NOT TESTED THIS MODEL AT ALL!**
Civitai link: https://civitai.com/models/110761/anythingv5nijimix
|
MatthisHoules/t5-large-finetuned-break-qdmr-decomposition
|
MatthisHoules
| 2023-07-17T08:56:04Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:break_data",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-02T17:43:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- break_data
metrics:
- bleu
model-index:
- name: t5-large-finetuned-break-qdmr-decomposition
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: break_data
type: break_data
config: QDMR
split: validation
args: QDMR
metrics:
- name: Bleu
type: bleu
value: 0.22169382457557757
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-large-finetuned-break-qdmr-decomposition
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on the break_data dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1729
- Bleu: 0.2217
- Brevity Penalty: 0.2926
- Length Ratio: 0.4487
- Translation Length: 108954
- Reference Length: 242845
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Brevity Penalty | Length Ratio | Translation Length | Reference Length |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------------:|:------------:|:------------------:|:----------------:|
| No log | 1.0 | 346 | 0.2217 | 0.2190 | 0.2973 | 0.4519 | 109738 | 242845 |
| 0.3597 | 2.0 | 692 | 0.1898 | 0.2213 | 0.2944 | 0.4499 | 109245 | 242845 |
| 0.1943 | 3.0 | 1038 | 0.1780 | 0.2213 | 0.2936 | 0.4494 | 109125 | 242845 |
| 0.1943 | 4.0 | 1385 | 0.1722 | 0.2209 | 0.2926 | 0.4486 | 108943 | 242845 |
| 0.1588 | 5.0 | 1731 | 0.1708 | 0.2221 | 0.2938 | 0.4495 | 109159 | 242845 |
| 0.1395 | 6.0 | 2077 | 0.1699 | 0.2209 | 0.2907 | 0.4473 | 108635 | 242845 |
| 0.1395 | 7.0 | 2423 | 0.1699 | 0.2219 | 0.2927 | 0.4487 | 108964 | 242845 |
| 0.1245 | 8.0 | 2770 | 0.1717 | 0.2215 | 0.2924 | 0.4485 | 108909 | 242845 |
| 0.1152 | 9.0 | 3116 | 0.1724 | 0.2215 | 0.2924 | 0.4485 | 108914 | 242845 |
| 0.1152 | 9.99 | 3460 | 0.1729 | 0.2217 | 0.2926 | 0.4487 | 108954 | 242845 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
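A minimal generation sketch; feeding the plain question text is an assumption, so check the training preprocessing if the decompositions look off.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

ckpt = "MatthisHoules/t5-large-finetuned-break-qdmr-decomposition"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt)

question = "What is the longest river that flows through the largest country in Europe?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # predicted QDMR steps
```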
|
NasimB/cbt-mod-guten-mod-rarity-all-mixed
|
NasimB
| 2023-07-17T08:47:00Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-17T06:49:35Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: cbt-mod-guten-mod-rarity-all-mixed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cbt-mod-guten-mod-rarity-all-mixed
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3316
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6966 | 0.29 | 500 | 5.6441 |
| 5.3409 | 0.58 | 1000 | 5.2046 |
| 4.9948 | 0.88 | 1500 | 4.9627 |
| 4.7265 | 1.17 | 2000 | 4.8189 |
| 4.5651 | 1.46 | 2500 | 4.6894 |
| 4.4539 | 1.75 | 3000 | 4.5863 |
| 4.3346 | 2.05 | 3500 | 4.5066 |
| 4.1409 | 2.34 | 4000 | 4.4585 |
| 4.1117 | 2.63 | 4500 | 4.4013 |
| 4.0669 | 2.92 | 5000 | 4.3496 |
| 3.8709 | 3.22 | 5500 | 4.3442 |
| 3.8157 | 3.51 | 6000 | 4.3154 |
| 3.7926 | 3.8 | 6500 | 4.2830 |
| 3.6943 | 4.09 | 7000 | 4.2806 |
| 3.5299 | 4.39 | 7500 | 4.2754 |
| 3.5211 | 4.68 | 8000 | 4.2625 |
| 3.5137 | 4.97 | 8500 | 4.2477 |
| 3.354 | 5.26 | 9000 | 4.2619 |
| 3.3365 | 5.56 | 9500 | 4.2609 |
| 3.3354 | 5.85 | 10000 | 4.2597 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
cgr28/CartPole-v1
|
cgr28
| 2023-07-17T08:44:20Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-17T08:44:08Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ITG/wav2vec2-large-xlsr-gl
|
ITG
| 2023-07-17T08:35:55Z | 78 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ITG",
"PyTorch",
"Transformers",
"gl",
"dataset:openslr",
"license:cc-by-nc-nd-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-17T08:15:40Z |
---
license: cc-by-nc-nd-4.0
datasets:
- openslr
language:
- gl
pipeline_tag: automatic-speech-recognition
tags:
- ITG
- PyTorch
- Transformers
- wav2vec2
---
# Wav2Vec2 Large XLSR Galician
## Description
This is a fine-tuned version of the [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) pre-trained model for ASR in galician.
---
## Dataset
The dataset used for fine-tuning this model was the [OpenSLR galician](https://huggingface.co/datasets/openslr/viewer/SLR77) dataset, available in the openslr repository.
---
## Example inference script
### Check this example script to run our model in inference mode
```python
import torch
import librosa
from transformers import AutoProcessor, AutoModelForCTC

filename = "demo.wav"  # change this line to the name of your audio file
sample_rate = 16_000

processor = AutoProcessor.from_pretrained('ITG/wav2vec2-large-xlsr-gl')
model = AutoModelForCTC.from_pretrained('ITG/wav2vec2-large-xlsr-gl')
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)

speech_array, _ = librosa.load(filename, sr=sample_rate)
inputs = processor(speech_array, sampling_rate=sample_rate, return_tensors="pt", padding=True).to(device)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
decode_output = processor.batch_decode(torch.argmax(logits, dim=-1))[0]
print(f"ASR Galician wav2vec2-large-xlsr output: {decode_output}")
```
---
## Fine-tuning hyper-parameters
| **Hyper-parameter** | **Value** |
|:----------------------------------------:|:---------------------------:|
| Training batch size | 16 |
| Evaluation batch size | 8 |
| Learning rate | 3e-4 |
| Gradient accumulation steps | 2 |
| Group by length | true |
| Evaluation strategy | steps |
| Max training epochs | 50 |
| Max steps | 4000 |
| Generate max length | 225 |
| FP16 | true |
| Metric for best model | wer |
| Greater is better | false |
## Fine-tuning in a different dataset or style
If you're interested in fine-tuning your own wav2vec2 model, we suggest starting with the [facebook/wav2vec2-large-xlsr-53 model](https://huggingface.co/facebook/wav2vec2-large-xlsr-53). Additionally,
you may find this [fine-tuning on galician notebook by Diego Fustes](https://github.com/diego-fustes/xlsr-fine-tuning-gl/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Galician.ipynb) to be a valuable resource.
This guide served as a helpful reference during the training process of this Galician wav2vec2-large-xlsr model!
|
nolanaatama/mnnrl
|
nolanaatama
| 2023-07-17T08:25:25Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-15T00:50:33Z |
---
license: creativeml-openrail-m
---
|
peterdamn/distilhubert-finetuned-gtzan-finetuned-gtzan
|
peterdamn
| 2023-07-17T08:23:56Z | 162 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-16T09:00:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan-finetuned-gtzan
This model is a fine-tuned version of [NemesisAlm/distilhubert-finetuned-gtzan](https://huggingface.co/NemesisAlm/distilhubert-finetuned-gtzan) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1748
- Accuracy: 0.81
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0023 | 1.0 | 899 | 1.1748 | 0.81 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
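A minimal usage sketch with the `audio-classification` pipeline; GTZAN-style clips are roughly 30 s long, but the pipeline accepts any decodable audio file.
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="peterdamn/distilhubert-finetuned-gtzan-finetuned-gtzan",
)
print(classifier("song_clip.wav", top_k=3))  # top genre predictions with scores
```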
|
yacine-djm/fg-bert-sustainability-20-1e-05-0.02-64
|
yacine-djm
| 2023-07-17T08:16:36Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-17T07:13:31Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: fg-bert-sustainability-20-1e-05-0.02-64
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fg-bert-sustainability-20-1e-05-0.02-64
This model is a fine-tuned version of [Raccourci/fairguest-bert](https://huggingface.co/Raccourci/fairguest-bert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0768
- F1: 0.9111
- Roc Auc: 0.9481
- Accuracy: 0.8721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 55 | 0.3490 | 0.0 | 0.4999 | 0.0946 |
| No log | 2.0 | 110 | 0.3051 | 0.0 | 0.5 | 0.0956 |
| No log | 3.0 | 165 | 0.2361 | 0.2265 | 0.5641 | 0.1611 |
| No log | 4.0 | 220 | 0.1869 | 0.6345 | 0.7492 | 0.4657 |
| No log | 5.0 | 275 | 0.1469 | 0.8934 | 0.9318 | 0.8358 |
| No log | 6.0 | 330 | 0.1197 | 0.9057 | 0.9409 | 0.8555 |
| No log | 7.0 | 385 | 0.1060 | 0.9126 | 0.9507 | 0.8680 |
| No log | 8.0 | 440 | 0.0958 | 0.9151 | 0.9487 | 0.8763 |
| No log | 9.0 | 495 | 0.0912 | 0.9153 | 0.9496 | 0.8721 |
| 0.2274 | 10.0 | 550 | 0.0863 | 0.9163 | 0.9521 | 0.8742 |
| 0.2274 | 11.0 | 605 | 0.0842 | 0.9131 | 0.9507 | 0.8711 |
| 0.2274 | 12.0 | 660 | 0.0816 | 0.9160 | 0.9507 | 0.8773 |
| 0.2274 | 13.0 | 715 | 0.0810 | 0.9156 | 0.9511 | 0.8763 |
| 0.2274 | 14.0 | 770 | 0.0803 | 0.9097 | 0.9484 | 0.8680 |
| 0.2274 | 15.0 | 825 | 0.0790 | 0.9103 | 0.9466 | 0.8690 |
| 0.2274 | 16.0 | 880 | 0.0774 | 0.9100 | 0.9475 | 0.8701 |
| 0.2274 | 17.0 | 935 | 0.0779 | 0.9134 | 0.9499 | 0.8732 |
| 0.2274 | 18.0 | 990 | 0.0767 | 0.9136 | 0.9508 | 0.8763 |
| 0.0682 | 19.0 | 1045 | 0.0767 | 0.9112 | 0.9486 | 0.8732 |
| 0.0682 | 20.0 | 1100 | 0.0768 | 0.9111 | 0.9481 | 0.8721 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
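The reported F1/ROC AUC suggest a multi-label setup, so a hedged inference sketch applies a sigmoid and a threshold (the 0.5 cut-off is an assumption — tune it on validation data):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

ckpt = "yacine-djm/fg-bert-sustainability-20-1e-05-0.02-64"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt)

text = "The hotel uses solar panels and recycles grey water."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]  # per-label probabilities
print([model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5])
```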
|
ykirpichev/speecht5_finetuned_voxpopuli_nl
|
ykirpichev
| 2023-07-17T08:13:17Z | 83 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-07-17T05:53:12Z |
---
license: mit
tags:
- generated_from_trainer
- text-to-speech
datasets:
- facebook/voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4569
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5242 | 4.3 | 1000 | 0.4753 |
| 0.5023 | 8.61 | 2000 | 0.4625 |
| 0.4941 | 12.91 | 3000 | 0.4577 |
| 0.4903 | 17.21 | 4000 | 0.4569 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
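A hedged synthesis sketch: SpeechT5 needs a speaker embedding and a vocoder at inference time; the CMU Arctic x-vector below is the one commonly used in the transformers docs, not something this card prescribes.
```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

ckpt = "ykirpichev/speecht5_finetuned_voxpopuli_nl"
processor = SpeechT5Processor.from_pretrained(ckpt)
model = SpeechT5ForTextToSpeech.from_pretrained(ckpt)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Example speaker embedding (index 7306 is the one used in the transformers docs).
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hallo, dit is een testzin.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```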
|
rtyui123/CartPole-v1
|
rtyui123
| 2023-07-17T08:03:51Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-17T08:03:46Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 124.50 +/- 5.70
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
EhsanElahi/speecht5_finetuned_voxpopuli_nl
|
EhsanElahi
| 2023-07-17T07:48:50Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:common_voice_13_0",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-14T12:16:23Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- common_voice_13_0
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the common_voice_13_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5015
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5771 | 8.61 | 1000 | 0.5219 |
| 0.5411 | 17.22 | 2000 | 0.5064 |
| 0.5352 | 25.83 | 3000 | 0.5012 |
| 0.5324 | 34.45 | 4000 | 0.5015 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
netrough/new-data-model
|
netrough
| 2023-07-17T07:42:24Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2023-07-17T07:36:52Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ethan1278/WizardLM-Uncensored-Falcon-7b-sharded-bf16
|
ethan1278
| 2023-07-17T07:37:34Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"RefinedWebModel",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-17T06:01:19Z |
Copy of [Wizard-Uncensored-Falcon-7b](https://huggingface.co/ehartford/WizardLM-Uncensored-Falcon-7b) but sharded. Please refer to the original repo for details about license/dataset/etc.
|
PranjaliS/my_setiment_analysis_model3
|
PranjaliS
| 2023-07-17T07:27:27Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-17T04:55:42Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: PranjaliS/my_setiment_analysis_model3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# PranjaliS/my_setiment_analysis_model3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1253
- Validation Loss: 0.4902
- Train Accuracy: 0.847
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2665, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.4221 | 0.3816 | 0.843 | 0 |
| 0.2373 | 0.4097 | 0.843 | 1 |
| 0.1253 | 0.4902 | 0.847 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
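A minimal TensorFlow inference sketch; the label mapping depends on how the dataset was encoded, so `id2label` may simply read `LABEL_0`/`LABEL_1`.
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

ckpt = "PranjaliS/my_setiment_analysis_model3"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = TFAutoModelForSequenceClassification.from_pretrained(ckpt)

inputs = tokenizer("This product works exactly as advertised.", return_tensors="tf")
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label[pred])
```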
|
Shinjigen/Mimi
|
Shinjigen
| 2023-07-17T07:17:24Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-17T07:13:49Z |
---
license: creativeml-openrail-m
---
|
hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-bs-v2
|
hafidikhsan
| 2023-07-17T07:14:50Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-17T07:12:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: wav2vec2-large-xlsr-53-english-pronunciation-evaluation-bs-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-english-pronunciation-evaluation-bs-v2
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8697
- Accuracy: 0.78
- F1: 0.7738
- Precision: 0.7735
- Recall: 0.78
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.0774 | 1.0 | 500 | 0.9199 | 0.57 | 0.5728 | 0.6154 | 0.57 |
| 0.6526 | 2.0 | 1000 | 0.6857 | 0.7 | 0.6925 | 0.7167 | 0.7 |
| 0.3767 | 3.0 | 1500 | 0.5830 | 0.79 | 0.7887 | 0.7884 | 0.79 |
| 0.242 | 4.0 | 2000 | 0.7786 | 0.82 | 0.8160 | 0.8163 | 0.82 |
| 0.2691 | 5.0 | 2500 | 0.8399 | 0.814 | 0.8113 | 0.8109 | 0.814 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
StarRing2022/RWKV-4-World-1.5B
|
StarRing2022
| 2023-07-17T06:40:37Z | 124 | 1 |
transformers
|
[
"transformers",
"pytorch",
"rwkv",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-06-26T00:32:37Z |
---
license: apache-2.0
---
RWKV-4-World in Hugging Face format. Because the tokenizer of the new World models differs substantially from the earlier Raven/Pile releases, a new HF adaptation is needed.
ringrwkv is compatible with both the native rwkv library and the transformers RWKV implementation, adds the configuration and code for the World series (the full 1.5B, 3B and 7B range), and fixes a subtle issue in the original HF RWKV forward `RWKVOutput`, mainly by introducing and clarifying `last_hidden_state`. A lightweight usage example follows:
RingRWKV source on GitHub: https://github.com/StarRing2022/RingRWKV
```python
import torch
from ringrwkv.configuration_rwkv_world import RwkvConfig
from ringrwkv.rwkv_tokenizer import TRIE_TOKENIZER
from ringrwkv.modehf_world import RwkvForCausalLM

model = RwkvForCausalLM.from_pretrained("StarRing2022/RWKV-4-World-1.5B")  # or download this model to a local folder
tokenizer = TRIE_TOKENIZER('./ringrwkv/rwkv_vocab_v20230424.txt')

text = "你叫什么名字?"
question = f'Question: {text.strip()}\n\nAnswer:'

input_ids = tokenizer.encode(question)
input_ids = torch.tensor(input_ids).unsqueeze(0)
out = model.generate(input_ids, max_new_tokens=40)

outlist = [i for i in out[0].tolist() if i != 0]  # drop elements with token id 0
answer = tokenizer.decode(outlist)
print(answer)
```
|
ailabturkiye/shaco
|
ailabturkiye
| 2023-07-17T06:35:20Z | 0 | 0 | null |
[
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-17T06:30:09Z |
---
license: openrail
language:
- tr
tags:
- music
---
Created with roughly 5 minutes of data of Shaco, a champion from League of Legends, trained for 250 epochs. A pitch (transpose) of -3 or -5 is recommended. If you share a cover made with this model on any platform, please include our Discord link: discord.gg/ailab
|
StarRing2022/RWKV-4-World-3B
|
StarRing2022
| 2023-07-17T06:31:33Z | 119 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rwkv",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-07-17T00:40:44Z |
---
license: apache-2.0
---
RWKV-4-World in Hugging Face format. Because the tokenizer of the new World models differs substantially from the earlier Raven/Pile releases, a new HF adaptation is needed.
ringrwkv is compatible with both the native rwkv library and the transformers RWKV implementation, adds the configuration and code for the World series (the full 1.5B, 3B and 7B range), and fixes a subtle issue in the original HF RWKV forward `RWKVOutput`, mainly by introducing and clarifying `last_hidden_state`. A lightweight usage example follows:
RingRWKV source on GitHub: https://github.com/StarRing2022/RingRWKV
```python
import torch
from ringrwkv.configuration_rwkv_world import RwkvConfig
from ringrwkv.rwkv_tokenizer import TRIE_TOKENIZER
from ringrwkv.modehf_world import RwkvForCausalLM

model = RwkvForCausalLM.from_pretrained("StarRing2022/RWKV-4-World-3B")  # or download this model to a local folder
tokenizer = TRIE_TOKENIZER('./ringrwkv/rwkv_vocab_v20230424.txt')

text = "你叫什么名字?"
question = f'Question: {text.strip()}\n\nAnswer:'

input_ids = tokenizer.encode(question)
input_ids = torch.tensor(input_ids).unsqueeze(0)
out = model.generate(input_ids, max_new_tokens=40)

outlist = [i for i in out[0].tolist() if i != 0]  # drop elements with token id 0
answer = tokenizer.decode(outlist)
print(answer)
```
|
charlieoneill/falcon-abstracts
|
charlieoneill
| 2023-07-17T06:29:06Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-07-17T00:55:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: falcon-abstracts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon-abstracts
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 2500
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ailabturkiye/rtkamil
|
ailabturkiye
| 2023-07-17T06:25:41Z | 0 | 0 | null |
[
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-17T06:21:55Z |
---
license: openrail
language:
- tr
tags:
- music
---
Created with roughly 3 minutes of data of Kamil, a beloved character from the cartoon Rafadan Tayfa, trained for 1000 epochs. If you share a cover made with this model on any platform, please include our Discord link: discord.gg/ailab
|
NasimB/cbt-mod-log-rarity-all
|
NasimB
| 2023-07-17T06:22:57Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-17T04:11:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: cbt-mod-log-rarity-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cbt-mod-log-rarity-all
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3226
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7026 | 0.29 | 500 | 5.6447 |
| 5.3372 | 0.58 | 1000 | 5.2129 |
| 4.9906 | 0.87 | 1500 | 4.9629 |
| 4.7124 | 1.17 | 2000 | 4.8120 |
| 4.5602 | 1.46 | 2500 | 4.6878 |
| 4.4529 | 1.75 | 3000 | 4.5834 |
| 4.3223 | 2.04 | 3500 | 4.5006 |
| 4.1297 | 2.33 | 4000 | 4.4577 |
| 4.097 | 2.62 | 4500 | 4.3979 |
| 4.0576 | 2.92 | 5000 | 4.3446 |
| 3.8608 | 3.21 | 5500 | 4.3387 |
| 3.7927 | 3.5 | 6000 | 4.3073 |
| 3.7829 | 3.79 | 6500 | 4.2777 |
| 3.6916 | 4.08 | 7000 | 4.2713 |
| 3.5078 | 4.37 | 7500 | 4.2688 |
| 3.5099 | 4.66 | 8000 | 4.2551 |
| 3.4934 | 4.96 | 8500 | 4.2416 |
| 3.3384 | 5.25 | 9000 | 4.2546 |
| 3.3186 | 5.54 | 9500 | 4.2532 |
| 3.3113 | 5.83 | 10000 | 4.2524 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
ailabturkiye/2xciv
|
ailabturkiye
| 2023-07-17T06:22:21Z | 0 | 0 | null |
[
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-17T06:16:23Z |
---
license: openrail
language:
- tr
tags:
- music
---
Created with roughly 5 minutes of data of the VALORANT YouTuber 2xCIV, trained for 250 epochs. If you share a cover made with this model on any platform, please include our Discord link: discord.gg/ailab
|
shivaneej/my_awesome_billsum_model
|
shivaneej
| 2023-07-17T06:19:13Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-14T06:38:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1425
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4536
- Rouge1: 0.1425
- Rouge2: 0.051
- Rougel: 0.1174
- Rougelsum: 0.1176
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.7496 | 0.1275 | 0.0381 | 0.1084 | 0.1082 | 19.0 |
| No log | 2.0 | 124 | 2.5353 | 0.1365 | 0.0475 | 0.1138 | 0.1136 | 19.0 |
| No log | 3.0 | 186 | 2.4718 | 0.1409 | 0.0495 | 0.1157 | 0.1156 | 19.0 |
| No log | 4.0 | 248 | 2.4536 | 0.1425 | 0.051 | 0.1174 | 0.1176 | 19.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ailabturkiye/yasuo
|
ailabturkiye
| 2023-07-17T06:18:49Z | 0 | 0 | null |
[
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-17T06:13:49Z |
---
license: openrail
language:
- tr
tags:
- music
---
Created with roughly 5 minutes of data of Yasuo, a champion from League of Legends, trained for 250 epochs.
If you share a cover made with this model on any platform, please include our Discord link: discord.gg/ailab
|
Althhecow/CattleMix
|
Althhecow
| 2023-07-17T06:00:04Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-16T21:23:09Z |
Model based on Anything v3 and a few older models that I've since lost track of. It was originally mixed over 6 months ago but has remained useful for cartoonish / anthropomorphic subjects, despite newer models having been released since.
|
digiplay/CosplayMix_v2
|
digiplay
| 2023-07-17T05:59:37Z | 10 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-17T05:06:32Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: false
---
Model info:
https://civitai.com/models/34502?modelVersionId=48334
Original author's demo image:

More image info:
https://civitai.com/images/519469
|
MHRDYN7/distilhubert-finetuned-gtzan
|
MHRDYN7
| 2023-07-17T05:48:16Z | 158 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-17T05:37:35Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
hyeongjin99/vit-base-aihub_model-v2
|
hyeongjin99
| 2023-07-17T05:36:33Z | 221 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-17T05:21:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: vit-base-aihub_model-v2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.963855421686747
- name: Precision
type: precision
value: 0.9609609235289817
- name: Recall
type: recall
value: 0.9613676432460462
- name: F1
type: f1
value: 0.9604284776111401
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-aihub_model-v2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3076
- Accuracy: 0.9639
- Precision: 0.9610
- Recall: 0.9614
- F1: 0.9604
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 3 | 1.2753 | 0.8373 | 0.8563 | 0.7993 | 0.8022 |
| No log | 2.0 | 6 | 1.1252 | 0.8675 | 0.8895 | 0.8300 | 0.8333 |
| No log | 3.0 | 9 | 0.9427 | 0.8976 | 0.9185 | 0.8696 | 0.8760 |
| 1.1721 | 4.0 | 12 | 0.7995 | 0.9398 | 0.9474 | 0.9195 | 0.9246 |
| 1.1721 | 5.0 | 15 | 0.6820 | 0.9699 | 0.9704 | 0.9613 | 0.9642 |
| 1.1721 | 6.0 | 18 | 0.5927 | 0.9639 | 0.9603 | 0.9583 | 0.9587 |
| 0.7084 | 7.0 | 21 | 0.5239 | 0.9759 | 0.9725 | 0.9729 | 0.9725 |
| 0.7084 | 8.0 | 24 | 0.4743 | 0.9699 | 0.9665 | 0.9671 | 0.9665 |
| 0.7084 | 9.0 | 27 | 0.4436 | 0.9578 | 0.9558 | 0.9556 | 0.9544 |
| 0.4668 | 10.0 | 30 | 0.4070 | 0.9639 | 0.9610 | 0.9614 | 0.9604 |
| 0.4668 | 11.0 | 33 | 0.3817 | 0.9699 | 0.9665 | 0.9671 | 0.9665 |
| 0.4668 | 12.0 | 36 | 0.3625 | 0.9699 | 0.9665 | 0.9671 | 0.9665 |
| 0.4668 | 13.0 | 39 | 0.3536 | 0.9578 | 0.9558 | 0.9556 | 0.9544 |
| 0.3611 | 14.0 | 42 | 0.3384 | 0.9578 | 0.9558 | 0.9556 | 0.9544 |
| 0.3611 | 15.0 | 45 | 0.3249 | 0.9699 | 0.9665 | 0.9671 | 0.9665 |
| 0.3611 | 16.0 | 48 | 0.3164 | 0.9699 | 0.9665 | 0.9671 | 0.9665 |
| 0.3063 | 17.0 | 51 | 0.3142 | 0.9639 | 0.9610 | 0.9614 | 0.9604 |
| 0.3063 | 18.0 | 54 | 0.3122 | 0.9639 | 0.9610 | 0.9614 | 0.9604 |
| 0.3063 | 19.0 | 57 | 0.3093 | 0.9639 | 0.9610 | 0.9614 | 0.9604 |
| 0.294 | 20.0 | 60 | 0.3076 | 0.9639 | 0.9610 | 0.9614 | 0.9604 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
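A minimal usage sketch with the `image-classification` pipeline; the class names come from the (unnamed) imagefolder dataset, so inspect `model.config.id2label` for the full label set.
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="hyeongjin99/vit-base-aihub_model-v2")
print(classifier("example.jpg", top_k=3))  # top predicted classes with scores
```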
|
zwangab91/q-FrozenLake-v1-4x4-noSlippery
|
zwangab91
| 2023-07-17T05:19:06Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-17T05:19:04Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="zwangab91/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
will99/document-finetuned-orca-mini-v2-7b
|
will99
| 2023-07-17T04:51:26Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T04:51:23Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
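A hedged loading sketch: the card does not name the base model, so read `base_model_name_or_path` from the adapter's `adapter_config.json` and substitute it below.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "base-model-id"  # placeholder — take it from adapter_config.json
adapter_id = "will99/document-finetuned-orca-mini-v2-7b"

# Matches the 4-bit NF4 settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
```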
|
hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-bs-v1
|
hafidikhsan
| 2023-07-17T04:48:17Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-17T04:47:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: wav2vec2-large-xlsr-53-english-pronunciation-evaluation-bs-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-english-pronunciation-evaluation-bs-v1
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9211
- Accuracy: 0.718
- F1: 0.7197
- Precision: 0.7231
- Recall: 0.718
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.9511 | 1.0 | 250 | 0.9034 | 0.548 | 0.5357 | 0.5409 | 0.548 |
| 0.6108 | 2.0 | 500 | 0.7361 | 0.68 | 0.6727 | 0.6731 | 0.68 |
| 0.4412 | 3.0 | 750 | 0.7990 | 0.726 | 0.7188 | 0.7221 | 0.726 |
| 0.2178 | 4.0 | 1000 | 0.7983 | 0.764 | 0.7652 | 0.7674 | 0.764 |
| 0.1726 | 5.0 | 1250 | 0.9572 | 0.764 | 0.7633 | 0.7647 | 0.764 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|