Dataset columns:

| Column | Type | Range / cardinality |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-24 00:43:13 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 573 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-24 00:37:34 |
| card | string | length 11 to 1.01M |
bigmorning/whisper_syl_cv12_pad_lob100__0080 | bigmorning | 2023-08-25T11:09:53Z | 59 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-08-25T11:09:47Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_syl_cv12_pad_lob100__0080
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_syl_cv12_pad_lob100__0080
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0003
- Train Accuracy: 0.0362
- Train Wermet: 1.0294
- Validation Loss: 0.6021
- Validation Accuracy: 0.0240
- Validation Wermet: 2.5022
- Epoch: 79
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
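The optimizer dictionary above corresponds to the `AdamWeightDecay` class shipped with the TensorFlow side of `transformers`. A minimal sketch (not the authors' original training script) of how it could be reconstructed and attached to the Keras model follows; the `exclude_from_weight_decay` list is an assumption.

```python
# Hedged reconstruction of the optimizer listed above; not the original training code.
from transformers import AdamWeightDecay, TFWhisperForConditionalGeneration

optimizer = AdamWeightDecay(
    learning_rate=2e-05,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    # Assumption: norm/bias parameters are usually excluded from weight decay.
    exclude_from_weight_decay=["LayerNorm", "layer_norm", "bias"],
)

model = TFWhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
model.compile(optimizer=optimizer)  # training then proceeds via Keras callbacks, as in this card
```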
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.0233 | 0.0115 | 1.6383 | 3.8616 | 0.0117 | 0.9516 | 0 |
| 4.4412 | 0.0127 | 0.8560 | 3.5410 | 0.0125 | 0.8971 | 1 |
| 4.0719 | 0.0138 | 0.8366 | 3.2944 | 0.0132 | 0.8706 | 2 |
| 3.8091 | 0.0146 | 0.8133 | 3.1691 | 0.0134 | 0.8487 | 3 |
| 3.6239 | 0.0152 | 0.7866 | 3.0647 | 0.0136 | 0.8282 | 4 |
| 3.4749 | 0.0156 | 0.7589 | 2.9835 | 0.0139 | 0.8049 | 5 |
| 3.3444 | 0.0161 | 0.7359 | 2.9351 | 0.0140 | 0.7979 | 6 |
| 3.2215 | 0.0165 | 0.7138 | 2.8468 | 0.0145 | 0.7589 | 7 |
| 3.0754 | 0.0172 | 0.6873 | 2.7530 | 0.0148 | 0.7413 | 8 |
| 2.8713 | 0.0181 | 0.6484 | 2.5226 | 0.0157 | 0.7017 | 9 |
| 2.5469 | 0.0197 | 0.5934 | 2.1931 | 0.0168 | 0.6285 | 10 |
| 2.0233 | 0.0225 | 0.4997 | 1.6411 | 0.0189 | 0.5215 | 11 |
| 1.3808 | 0.0264 | 0.3852 | 1.2401 | 0.0205 | 0.4238 | 12 |
| 0.9722 | 0.0290 | 0.3123 | 1.0195 | 0.0215 | 0.3682 | 13 |
| 0.7388 | 0.0305 | 0.2828 | 0.8773 | 0.0221 | 0.3322 | 14 |
| 0.5787 | 0.0317 | 0.2751 | 0.7970 | 0.0225 | 0.3083 | 15 |
| 0.4642 | 0.0325 | 0.2878 | 0.7315 | 0.0227 | 0.2964 | 16 |
| 0.3752 | 0.0332 | 0.4217 | 0.6897 | 0.0229 | 0.3297 | 17 |
| 0.3042 | 0.0338 | 0.7294 | 0.6572 | 0.0231 | 0.4453 | 18 |
| 0.2444 | 0.0343 | 1.1298 | 0.6369 | 0.0232 | 0.6637 | 19 |
| 0.1949 | 0.0348 | 1.6370 | 0.6180 | 0.0233 | 1.6119 | 20 |
| 0.1544 | 0.0352 | 1.6151 | 0.6149 | 0.0233 | 1.6843 | 21 |
| 0.1212 | 0.0355 | 1.3832 | 0.6066 | 0.0233 | 0.8721 | 22 |
| 0.0931 | 0.0357 | 1.2799 | 0.6034 | 0.0234 | 0.5109 | 23 |
| 0.0725 | 0.0359 | 1.0940 | 0.6102 | 0.0234 | 1.0111 | 24 |
| 0.0551 | 0.0361 | 1.2865 | 0.6000 | 0.0234 | 1.1393 | 25 |
| 0.0411 | 0.0361 | 1.8511 | 0.6037 | 0.0235 | 2.0574 | 26 |
| 0.0311 | 0.0362 | 1.7179 | 0.6018 | 0.0235 | 1.4847 | 27 |
| 0.0253 | 0.0362 | 0.9801 | 0.6010 | 0.0235 | 0.4457 | 28 |
| 0.0231 | 0.0362 | 0.9376 | 0.6046 | 0.0235 | 0.9247 | 29 |
| 0.0196 | 0.0362 | 0.6466 | 0.6078 | 0.0235 | 0.5271 | 30 |
| 0.0177 | 0.0362 | 0.4041 | 0.6155 | 0.0235 | 0.4352 | 31 |
| 0.0139 | 0.0362 | 0.4202 | 0.6037 | 0.0236 | 0.5585 | 32 |
| 0.0137 | 0.0362 | 0.8151 | 0.6015 | 0.0236 | 1.8476 | 33 |
| 0.0122 | 0.0362 | 3.4515 | 0.6043 | 0.0236 | 3.8210 | 34 |
| 0.0098 | 0.0362 | 1.1787 | 0.5985 | 0.0236 | 0.8094 | 35 |
| 0.0071 | 0.0362 | 0.9920 | 0.5992 | 0.0236 | 0.8755 | 36 |
| 0.0055 | 0.0362 | 2.4665 | 0.6047 | 0.0236 | 2.0127 | 37 |
| 0.0124 | 0.0362 | 4.2468 | 0.6089 | 0.0236 | 2.8886 | 38 |
| 0.0109 | 0.0362 | 2.0177 | 0.6097 | 0.0236 | 0.3417 | 39 |
| 0.0073 | 0.0362 | 0.9927 | 0.6057 | 0.0237 | 2.5519 | 40 |
| 0.0080 | 0.0362 | 1.7341 | 0.6099 | 0.0236 | 1.3119 | 41 |
| 0.0063 | 0.0362 | 2.4288 | 0.6058 | 0.0237 | 1.3465 | 42 |
| 0.0038 | 0.0362 | 1.4535 | 0.6022 | 0.0237 | 1.6804 | 43 |
| 0.0028 | 0.0362 | 2.2629 | 0.6001 | 0.0238 | 3.4388 | 44 |
| 0.0021 | 0.0362 | 3.5877 | 0.6018 | 0.0238 | 2.6165 | 45 |
| 0.0017 | 0.0362 | 3.0080 | 0.6043 | 0.0238 | 2.6827 | 46 |
| 0.0061 | 0.0362 | 2.5182 | 0.6545 | 0.0235 | 0.2316 | 47 |
| 0.0126 | 0.0362 | 0.2097 | 0.6206 | 0.0236 | 0.6194 | 48 |
| 0.0071 | 0.0362 | 0.3045 | 0.6047 | 0.0237 | 0.7476 | 49 |
| 0.0053 | 0.0362 | 1.2045 | 0.6010 | 0.0238 | 0.6553 | 50 |
| 0.0040 | 0.0362 | 0.2626 | 0.5964 | 0.0238 | 0.7027 | 51 |
| 0.0021 | 0.0362 | 0.5023 | 0.5950 | 0.0238 | 0.3812 | 52 |
| 0.0014 | 0.0362 | 0.7108 | 0.6233 | 0.0237 | 1.4647 | 53 |
| 0.0017 | 0.0362 | 0.3475 | 0.6087 | 0.0238 | 0.2213 | 54 |
| 0.0011 | 0.0362 | 0.1825 | 0.5984 | 0.0239 | 0.2391 | 55 |
| 0.0021 | 0.0362 | 1.0757 | 0.6211 | 0.0238 | 7.3766 | 56 |
| 0.0078 | 0.0362 | 2.1996 | 0.6349 | 0.0237 | 5.2774 | 57 |
| 0.0071 | 0.0362 | 1.2499 | 0.6225 | 0.0237 | 0.9927 | 58 |
| 0.0045 | 0.0362 | 5.3986 | 0.6088 | 0.0238 | 27.5186 | 59 |
| 0.0027 | 0.0362 | 9.4813 | 0.6035 | 0.0239 | 0.2741 | 60 |
| 0.0015 | 0.0362 | 20.4251 | 0.6005 | 0.0239 | 73.4792 | 61 |
| 0.0012 | 0.0362 | 17.1227 | 0.6148 | 0.0238 | 4.2506 | 62 |
| 0.0024 | 0.0362 | 3.7081 | 0.6249 | 0.0238 | 5.8937 | 63 |
| 0.0050 | 0.0362 | 2.2590 | 0.6136 | 0.0238 | 9.6813 | 64 |
| 0.0026 | 0.0362 | 3.1954 | 0.6060 | 0.0239 | 15.4541 | 65 |
| 0.0032 | 0.0362 | 5.1838 | 0.6233 | 0.0238 | 10.2566 | 66 |
| 0.0053 | 0.0362 | 3.1310 | 0.6178 | 0.0239 | 1.4216 | 67 |
| 0.0030 | 0.0362 | 1.1169 | 0.6106 | 0.0239 | 0.9273 | 68 |
| 0.0018 | 0.0362 | 0.9183 | 0.6034 | 0.0239 | 1.7868 | 69 |
| 0.0011 | 0.0362 | 0.3862 | 0.6116 | 0.0239 | 0.5909 | 70 |
| 0.0014 | 0.0362 | 0.6235 | 0.6143 | 0.0239 | 0.9794 | 71 |
| 0.0025 | 0.0362 | 0.5583 | 0.6510 | 0.0237 | 0.3524 | 72 |
| 0.0058 | 0.0362 | 1.9614 | 0.6179 | 0.0239 | 1.2838 | 73 |
| 0.0029 | 0.0362 | 0.6039 | 0.6222 | 0.0239 | 3.0512 | 74 |
| 0.0013 | 0.0362 | 0.8265 | 0.6088 | 0.0239 | 1.1328 | 75 |
| 0.0008 | 0.0362 | 0.9354 | 0.6003 | 0.0240 | 4.7201 | 76 |
| 0.0008 | 0.0362 | 2.7001 | 0.6041 | 0.0240 | 6.5868 | 77 |
| 0.0005 | 0.0362 | 1.6010 | 0.6025 | 0.0240 | 3.0820 | 78 |
| 0.0003 | 0.0362 | 1.0294 | 0.6021 | 0.0240 | 2.5022 | 79 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
bigmorning/whisper_syl_cv12_pad_lob100__0075 | bigmorning | 2023-08-25T10:56:39Z | 59 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-08-25T10:56:30Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_syl_cv12_pad_lob100__0075
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_syl_cv12_pad_lob100__0075
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0029
- Train Accuracy: 0.0362
- Train Wermet: 0.6039
- Validation Loss: 0.6222
- Validation Accuracy: 0.0239
- Validation Wermet: 3.0512
- Epoch: 74
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.0233 | 0.0115 | 1.6383 | 3.8616 | 0.0117 | 0.9516 | 0 |
| 4.4412 | 0.0127 | 0.8560 | 3.5410 | 0.0125 | 0.8971 | 1 |
| 4.0719 | 0.0138 | 0.8366 | 3.2944 | 0.0132 | 0.8706 | 2 |
| 3.8091 | 0.0146 | 0.8133 | 3.1691 | 0.0134 | 0.8487 | 3 |
| 3.6239 | 0.0152 | 0.7866 | 3.0647 | 0.0136 | 0.8282 | 4 |
| 3.4749 | 0.0156 | 0.7589 | 2.9835 | 0.0139 | 0.8049 | 5 |
| 3.3444 | 0.0161 | 0.7359 | 2.9351 | 0.0140 | 0.7979 | 6 |
| 3.2215 | 0.0165 | 0.7138 | 2.8468 | 0.0145 | 0.7589 | 7 |
| 3.0754 | 0.0172 | 0.6873 | 2.7530 | 0.0148 | 0.7413 | 8 |
| 2.8713 | 0.0181 | 0.6484 | 2.5226 | 0.0157 | 0.7017 | 9 |
| 2.5469 | 0.0197 | 0.5934 | 2.1931 | 0.0168 | 0.6285 | 10 |
| 2.0233 | 0.0225 | 0.4997 | 1.6411 | 0.0189 | 0.5215 | 11 |
| 1.3808 | 0.0264 | 0.3852 | 1.2401 | 0.0205 | 0.4238 | 12 |
| 0.9722 | 0.0290 | 0.3123 | 1.0195 | 0.0215 | 0.3682 | 13 |
| 0.7388 | 0.0305 | 0.2828 | 0.8773 | 0.0221 | 0.3322 | 14 |
| 0.5787 | 0.0317 | 0.2751 | 0.7970 | 0.0225 | 0.3083 | 15 |
| 0.4642 | 0.0325 | 0.2878 | 0.7315 | 0.0227 | 0.2964 | 16 |
| 0.3752 | 0.0332 | 0.4217 | 0.6897 | 0.0229 | 0.3297 | 17 |
| 0.3042 | 0.0338 | 0.7294 | 0.6572 | 0.0231 | 0.4453 | 18 |
| 0.2444 | 0.0343 | 1.1298 | 0.6369 | 0.0232 | 0.6637 | 19 |
| 0.1949 | 0.0348 | 1.6370 | 0.6180 | 0.0233 | 1.6119 | 20 |
| 0.1544 | 0.0352 | 1.6151 | 0.6149 | 0.0233 | 1.6843 | 21 |
| 0.1212 | 0.0355 | 1.3832 | 0.6066 | 0.0233 | 0.8721 | 22 |
| 0.0931 | 0.0357 | 1.2799 | 0.6034 | 0.0234 | 0.5109 | 23 |
| 0.0725 | 0.0359 | 1.0940 | 0.6102 | 0.0234 | 1.0111 | 24 |
| 0.0551 | 0.0361 | 1.2865 | 0.6000 | 0.0234 | 1.1393 | 25 |
| 0.0411 | 0.0361 | 1.8511 | 0.6037 | 0.0235 | 2.0574 | 26 |
| 0.0311 | 0.0362 | 1.7179 | 0.6018 | 0.0235 | 1.4847 | 27 |
| 0.0253 | 0.0362 | 0.9801 | 0.6010 | 0.0235 | 0.4457 | 28 |
| 0.0231 | 0.0362 | 0.9376 | 0.6046 | 0.0235 | 0.9247 | 29 |
| 0.0196 | 0.0362 | 0.6466 | 0.6078 | 0.0235 | 0.5271 | 30 |
| 0.0177 | 0.0362 | 0.4041 | 0.6155 | 0.0235 | 0.4352 | 31 |
| 0.0139 | 0.0362 | 0.4202 | 0.6037 | 0.0236 | 0.5585 | 32 |
| 0.0137 | 0.0362 | 0.8151 | 0.6015 | 0.0236 | 1.8476 | 33 |
| 0.0122 | 0.0362 | 3.4515 | 0.6043 | 0.0236 | 3.8210 | 34 |
| 0.0098 | 0.0362 | 1.1787 | 0.5985 | 0.0236 | 0.8094 | 35 |
| 0.0071 | 0.0362 | 0.9920 | 0.5992 | 0.0236 | 0.8755 | 36 |
| 0.0055 | 0.0362 | 2.4665 | 0.6047 | 0.0236 | 2.0127 | 37 |
| 0.0124 | 0.0362 | 4.2468 | 0.6089 | 0.0236 | 2.8886 | 38 |
| 0.0109 | 0.0362 | 2.0177 | 0.6097 | 0.0236 | 0.3417 | 39 |
| 0.0073 | 0.0362 | 0.9927 | 0.6057 | 0.0237 | 2.5519 | 40 |
| 0.0080 | 0.0362 | 1.7341 | 0.6099 | 0.0236 | 1.3119 | 41 |
| 0.0063 | 0.0362 | 2.4288 | 0.6058 | 0.0237 | 1.3465 | 42 |
| 0.0038 | 0.0362 | 1.4535 | 0.6022 | 0.0237 | 1.6804 | 43 |
| 0.0028 | 0.0362 | 2.2629 | 0.6001 | 0.0238 | 3.4388 | 44 |
| 0.0021 | 0.0362 | 3.5877 | 0.6018 | 0.0238 | 2.6165 | 45 |
| 0.0017 | 0.0362 | 3.0080 | 0.6043 | 0.0238 | 2.6827 | 46 |
| 0.0061 | 0.0362 | 2.5182 | 0.6545 | 0.0235 | 0.2316 | 47 |
| 0.0126 | 0.0362 | 0.2097 | 0.6206 | 0.0236 | 0.6194 | 48 |
| 0.0071 | 0.0362 | 0.3045 | 0.6047 | 0.0237 | 0.7476 | 49 |
| 0.0053 | 0.0362 | 1.2045 | 0.6010 | 0.0238 | 0.6553 | 50 |
| 0.0040 | 0.0362 | 0.2626 | 0.5964 | 0.0238 | 0.7027 | 51 |
| 0.0021 | 0.0362 | 0.5023 | 0.5950 | 0.0238 | 0.3812 | 52 |
| 0.0014 | 0.0362 | 0.7108 | 0.6233 | 0.0237 | 1.4647 | 53 |
| 0.0017 | 0.0362 | 0.3475 | 0.6087 | 0.0238 | 0.2213 | 54 |
| 0.0011 | 0.0362 | 0.1825 | 0.5984 | 0.0239 | 0.2391 | 55 |
| 0.0021 | 0.0362 | 1.0757 | 0.6211 | 0.0238 | 7.3766 | 56 |
| 0.0078 | 0.0362 | 2.1996 | 0.6349 | 0.0237 | 5.2774 | 57 |
| 0.0071 | 0.0362 | 1.2499 | 0.6225 | 0.0237 | 0.9927 | 58 |
| 0.0045 | 0.0362 | 5.3986 | 0.6088 | 0.0238 | 27.5186 | 59 |
| 0.0027 | 0.0362 | 9.4813 | 0.6035 | 0.0239 | 0.2741 | 60 |
| 0.0015 | 0.0362 | 20.4251 | 0.6005 | 0.0239 | 73.4792 | 61 |
| 0.0012 | 0.0362 | 17.1227 | 0.6148 | 0.0238 | 4.2506 | 62 |
| 0.0024 | 0.0362 | 3.7081 | 0.6249 | 0.0238 | 5.8937 | 63 |
| 0.0050 | 0.0362 | 2.2590 | 0.6136 | 0.0238 | 9.6813 | 64 |
| 0.0026 | 0.0362 | 3.1954 | 0.6060 | 0.0239 | 15.4541 | 65 |
| 0.0032 | 0.0362 | 5.1838 | 0.6233 | 0.0238 | 10.2566 | 66 |
| 0.0053 | 0.0362 | 3.1310 | 0.6178 | 0.0239 | 1.4216 | 67 |
| 0.0030 | 0.0362 | 1.1169 | 0.6106 | 0.0239 | 0.9273 | 68 |
| 0.0018 | 0.0362 | 0.9183 | 0.6034 | 0.0239 | 1.7868 | 69 |
| 0.0011 | 0.0362 | 0.3862 | 0.6116 | 0.0239 | 0.5909 | 70 |
| 0.0014 | 0.0362 | 0.6235 | 0.6143 | 0.0239 | 0.9794 | 71 |
| 0.0025 | 0.0362 | 0.5583 | 0.6510 | 0.0237 | 0.3524 | 72 |
| 0.0058 | 0.0362 | 1.9614 | 0.6179 | 0.0239 | 1.2838 | 73 |
| 0.0029 | 0.0362 | 0.6039 | 0.6222 | 0.0239 | 3.0512 | 74 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
klasocki/roberta-large-lora-ner-comma-fixer | klasocki | 2023-08-25T10:56:20Z | 4 | 1 | peft | ["peft", "en", "dataset:wikitext", "arxiv:2106.09685", "license:mit", "region:us"] | null | 2023-08-24T00:05:11Z |
---
library_name: peft
license: mit
datasets:
- wikitext
language:
- en
metrics:
- f1
---
RoBERTa large fine-tuned using [LoRa](https://arxiv.org/pdf/2106.09685.pdf) for predicting comma placement in text. It expects input with commas removed
and classifies each token for whether it should have a comma inserted after it or not.
As a PEFT model, it does not seem to work well with huggingface pipelines, at least not at the time of writing.
Examples of usage and a wrapper class for text-to-text comma fixing can be seen in the [demo](https://huggingface.co/spaces/klasocki/comma-fixer).
Loading the raw model in code:
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForTokenClassification, AutoTokenizer
import torch
id2label = {
0: "O",
1: "B-COMMA"
}
label2id = {
"O": 0,
"B-COMMA": 1
}
peft_model_id = 'klasocki/roberta-large-lora-ner-comma-fixer'
config = PeftConfig.from_pretrained(peft_model_id)
inference_model = AutoModelForTokenClassification.from_pretrained(
config.base_model_name_or_path, num_labels=len(id2label), id2label=id2label, label2id=label2id
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(inference_model, peft_model_id)
text = "This text should have commas here here and there however it does not."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
tokens = inputs.tokens()
predictions = torch.argmax(logits, dim=2)
for token, prediction in zip(tokens, predictions[0].numpy()):
print((token, model.config.id2label[prediction]))
### OUTPUT:
('<s>', 'O')
('This', 'O')
('Ġtext', 'O')
('Ġshould', 'O')
('Ġhave', 'O')
('Ġcomm', 'O')
('as', 'O')
('Ġhere', 'B-COMMA')
('Ġhere', 'O')
('Ġand', 'O')
('Ġthere', 'B-COMMA')
('Ġhowever', 'O')
('Ġit', 'O')
('Ġdoes', 'O')
('Ġnot', 'O')
('.', 'O')
('</s>', 'O')
```
## Evaluation results
Results for commas on the wikitext validation set:
| Model | precision | recall | F1 | support |
|----------|-----------|--------|------|---------|
| baseline* | 0.79 | 0.72 | 0.75 | 10079 |
| ours | 0.84 | 0.84 | 0.84 | 10079 |
*baseline is the [oliverguhr/fullstop-punctuation-multilang-large](https://huggingface.co/oliverguhr/fullstop-punctuation-multilang-large)
model evaluated on commas, out of domain on wikitext. In-domain, the authors report an F1 of 0.819 for English political speeches;
Wikipedia text, however, seems to be more challenging for comma restoration.
## Training procedure
To compare with the baseline, we fine-tune the same model, RoBERTa large, on the wikitext English dataset.
We use a similar approach, where we treat comma-fixing as a NER problem, and for each token predict whether a comma should be inserted after it.
The biggest advantage of this approach is that it preserves the input structure and focuses only on commas, ensuring that nothing else is changed and that the model does not have to learn to repeat the input back when no commas should be inserted.
We use LoRA to reduce training time and cost, and synthesize a training dataset from wikitext.
In the end, the model seems to converge after only about 15,000 training examples, so a small subset of wikitext is more than enough.
Adding more languages and domains can be explored in the future.
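As a rough illustration of the setup described above (the card does not publish the exact adapter configuration), a LoRA token-classification fine-tune could be wired up with `peft` as in the sketch below; the rank, alpha, and dropout values are assumptions.

```python
# Hedged sketch of a LoRA token-classification setup; hyperparameter values are assumptions.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForTokenClassification

base = AutoModelForTokenClassification.from_pretrained(
    "roberta-large", num_labels=2  # labels: "O" and "B-COMMA", as in the usage example above
)
lora_config = LoraConfig(
    task_type=TaskType.TOKEN_CLS,
    r=16,              # assumed rank
    lora_alpha=32,     # assumed scaling
    lora_dropout=0.1,  # assumed dropout
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapters and the classification head are trained
```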
### Framework versions
- PEFT 0.5.0
- Transformers 4.31.0
- Torch 2.0.1
AirinElizabath/JBS-3 | AirinElizabath | 2023-08-25T10:44:48Z | 76 | 0 | transformers | ["transformers", "pytorch", "opt", "text-generation", "generated_from_trainer", "dataset:spider", "base_model:facebook/opt-350m", "base_model:quantized:facebook/opt-350m", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us"] | text-generation | 2023-08-25T10:44:02Z |
---
license: other
base_model: facebook/opt-350m
tags:
- generated_from_trainer
datasets:
- spider
model-index:
- name: JBS-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# JBS-3
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the spider dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 5000
### Training results
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
bigmorning/whisper_syl_cv12_pad_lob100__0070 | bigmorning | 2023-08-25T10:43:22Z | 59 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-08-25T10:43:14Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_syl_cv12_pad_lob100__0070
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_syl_cv12_pad_lob100__0070
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0018
- Train Accuracy: 0.0362
- Train Wermet: 0.9183
- Validation Loss: 0.6034
- Validation Accuracy: 0.0239
- Validation Wermet: 1.7868
- Epoch: 69
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.0233 | 0.0115 | 1.6383 | 3.8616 | 0.0117 | 0.9516 | 0 |
| 4.4412 | 0.0127 | 0.8560 | 3.5410 | 0.0125 | 0.8971 | 1 |
| 4.0719 | 0.0138 | 0.8366 | 3.2944 | 0.0132 | 0.8706 | 2 |
| 3.8091 | 0.0146 | 0.8133 | 3.1691 | 0.0134 | 0.8487 | 3 |
| 3.6239 | 0.0152 | 0.7866 | 3.0647 | 0.0136 | 0.8282 | 4 |
| 3.4749 | 0.0156 | 0.7589 | 2.9835 | 0.0139 | 0.8049 | 5 |
| 3.3444 | 0.0161 | 0.7359 | 2.9351 | 0.0140 | 0.7979 | 6 |
| 3.2215 | 0.0165 | 0.7138 | 2.8468 | 0.0145 | 0.7589 | 7 |
| 3.0754 | 0.0172 | 0.6873 | 2.7530 | 0.0148 | 0.7413 | 8 |
| 2.8713 | 0.0181 | 0.6484 | 2.5226 | 0.0157 | 0.7017 | 9 |
| 2.5469 | 0.0197 | 0.5934 | 2.1931 | 0.0168 | 0.6285 | 10 |
| 2.0233 | 0.0225 | 0.4997 | 1.6411 | 0.0189 | 0.5215 | 11 |
| 1.3808 | 0.0264 | 0.3852 | 1.2401 | 0.0205 | 0.4238 | 12 |
| 0.9722 | 0.0290 | 0.3123 | 1.0195 | 0.0215 | 0.3682 | 13 |
| 0.7388 | 0.0305 | 0.2828 | 0.8773 | 0.0221 | 0.3322 | 14 |
| 0.5787 | 0.0317 | 0.2751 | 0.7970 | 0.0225 | 0.3083 | 15 |
| 0.4642 | 0.0325 | 0.2878 | 0.7315 | 0.0227 | 0.2964 | 16 |
| 0.3752 | 0.0332 | 0.4217 | 0.6897 | 0.0229 | 0.3297 | 17 |
| 0.3042 | 0.0338 | 0.7294 | 0.6572 | 0.0231 | 0.4453 | 18 |
| 0.2444 | 0.0343 | 1.1298 | 0.6369 | 0.0232 | 0.6637 | 19 |
| 0.1949 | 0.0348 | 1.6370 | 0.6180 | 0.0233 | 1.6119 | 20 |
| 0.1544 | 0.0352 | 1.6151 | 0.6149 | 0.0233 | 1.6843 | 21 |
| 0.1212 | 0.0355 | 1.3832 | 0.6066 | 0.0233 | 0.8721 | 22 |
| 0.0931 | 0.0357 | 1.2799 | 0.6034 | 0.0234 | 0.5109 | 23 |
| 0.0725 | 0.0359 | 1.0940 | 0.6102 | 0.0234 | 1.0111 | 24 |
| 0.0551 | 0.0361 | 1.2865 | 0.6000 | 0.0234 | 1.1393 | 25 |
| 0.0411 | 0.0361 | 1.8511 | 0.6037 | 0.0235 | 2.0574 | 26 |
| 0.0311 | 0.0362 | 1.7179 | 0.6018 | 0.0235 | 1.4847 | 27 |
| 0.0253 | 0.0362 | 0.9801 | 0.6010 | 0.0235 | 0.4457 | 28 |
| 0.0231 | 0.0362 | 0.9376 | 0.6046 | 0.0235 | 0.9247 | 29 |
| 0.0196 | 0.0362 | 0.6466 | 0.6078 | 0.0235 | 0.5271 | 30 |
| 0.0177 | 0.0362 | 0.4041 | 0.6155 | 0.0235 | 0.4352 | 31 |
| 0.0139 | 0.0362 | 0.4202 | 0.6037 | 0.0236 | 0.5585 | 32 |
| 0.0137 | 0.0362 | 0.8151 | 0.6015 | 0.0236 | 1.8476 | 33 |
| 0.0122 | 0.0362 | 3.4515 | 0.6043 | 0.0236 | 3.8210 | 34 |
| 0.0098 | 0.0362 | 1.1787 | 0.5985 | 0.0236 | 0.8094 | 35 |
| 0.0071 | 0.0362 | 0.9920 | 0.5992 | 0.0236 | 0.8755 | 36 |
| 0.0055 | 0.0362 | 2.4665 | 0.6047 | 0.0236 | 2.0127 | 37 |
| 0.0124 | 0.0362 | 4.2468 | 0.6089 | 0.0236 | 2.8886 | 38 |
| 0.0109 | 0.0362 | 2.0177 | 0.6097 | 0.0236 | 0.3417 | 39 |
| 0.0073 | 0.0362 | 0.9927 | 0.6057 | 0.0237 | 2.5519 | 40 |
| 0.0080 | 0.0362 | 1.7341 | 0.6099 | 0.0236 | 1.3119 | 41 |
| 0.0063 | 0.0362 | 2.4288 | 0.6058 | 0.0237 | 1.3465 | 42 |
| 0.0038 | 0.0362 | 1.4535 | 0.6022 | 0.0237 | 1.6804 | 43 |
| 0.0028 | 0.0362 | 2.2629 | 0.6001 | 0.0238 | 3.4388 | 44 |
| 0.0021 | 0.0362 | 3.5877 | 0.6018 | 0.0238 | 2.6165 | 45 |
| 0.0017 | 0.0362 | 3.0080 | 0.6043 | 0.0238 | 2.6827 | 46 |
| 0.0061 | 0.0362 | 2.5182 | 0.6545 | 0.0235 | 0.2316 | 47 |
| 0.0126 | 0.0362 | 0.2097 | 0.6206 | 0.0236 | 0.6194 | 48 |
| 0.0071 | 0.0362 | 0.3045 | 0.6047 | 0.0237 | 0.7476 | 49 |
| 0.0053 | 0.0362 | 1.2045 | 0.6010 | 0.0238 | 0.6553 | 50 |
| 0.0040 | 0.0362 | 0.2626 | 0.5964 | 0.0238 | 0.7027 | 51 |
| 0.0021 | 0.0362 | 0.5023 | 0.5950 | 0.0238 | 0.3812 | 52 |
| 0.0014 | 0.0362 | 0.7108 | 0.6233 | 0.0237 | 1.4647 | 53 |
| 0.0017 | 0.0362 | 0.3475 | 0.6087 | 0.0238 | 0.2213 | 54 |
| 0.0011 | 0.0362 | 0.1825 | 0.5984 | 0.0239 | 0.2391 | 55 |
| 0.0021 | 0.0362 | 1.0757 | 0.6211 | 0.0238 | 7.3766 | 56 |
| 0.0078 | 0.0362 | 2.1996 | 0.6349 | 0.0237 | 5.2774 | 57 |
| 0.0071 | 0.0362 | 1.2499 | 0.6225 | 0.0237 | 0.9927 | 58 |
| 0.0045 | 0.0362 | 5.3986 | 0.6088 | 0.0238 | 27.5186 | 59 |
| 0.0027 | 0.0362 | 9.4813 | 0.6035 | 0.0239 | 0.2741 | 60 |
| 0.0015 | 0.0362 | 20.4251 | 0.6005 | 0.0239 | 73.4792 | 61 |
| 0.0012 | 0.0362 | 17.1227 | 0.6148 | 0.0238 | 4.2506 | 62 |
| 0.0024 | 0.0362 | 3.7081 | 0.6249 | 0.0238 | 5.8937 | 63 |
| 0.0050 | 0.0362 | 2.2590 | 0.6136 | 0.0238 | 9.6813 | 64 |
| 0.0026 | 0.0362 | 3.1954 | 0.6060 | 0.0239 | 15.4541 | 65 |
| 0.0032 | 0.0362 | 5.1838 | 0.6233 | 0.0238 | 10.2566 | 66 |
| 0.0053 | 0.0362 | 3.1310 | 0.6178 | 0.0239 | 1.4216 | 67 |
| 0.0030 | 0.0362 | 1.1169 | 0.6106 | 0.0239 | 0.9273 | 68 |
| 0.0018 | 0.0362 | 0.9183 | 0.6034 | 0.0239 | 1.7868 | 69 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
MattStammers/ppo-MsPacmanNoFrameskip-v4 | MattStammers | 2023-08-25T10:41:01Z | 2 | 0 | stable-baselines3 | ["stable-baselines3", "MsPacmanNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-08-25T10:35:42Z |
---
library_name: stable-baselines3
tags:
- MsPacmanNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MsPacmanNoFrameskip-v4
type: MsPacmanNoFrameskip-v4
metrics:
- type: mean_reward
value: 1470.00 +/- 492.52
name: mean_reward
verified: false
---
# **PPO** Agent playing **MsPacmanNoFrameskip-v4**
This is a trained model of a **PPO** agent playing **MsPacmanNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env MsPacmanNoFrameskip-v4 -orga MattStammers -f logs/
python -m rl_zoo3.enjoy --algo ppo --env MsPacmanNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo ppo --env MsPacmanNoFrameskip-v4 -orga MattStammers -f logs/
python -m rl_zoo3.enjoy --algo ppo --env MsPacmanNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo ppo --env MsPacmanNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env MsPacmanNoFrameskip-v4 -f logs/ -orga MattStammers
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 'lin_0.1'),
('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('learning_rate', 'lin_2.5e-4'),
('n_envs', 8),
('n_epochs', 4),
('n_steps', 128),
('n_timesteps', 10000000.0),
('normalize', False),
('policy', 'CnnPolicy'),
('vf_coef', 0.5)])
```
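For reference, the RL Zoo hyperparameters above map roughly onto a plain Stable-Baselines3 setup like the sketch below; the `linear` helper imitates the zoo's `lin_2.5e-4` / `lin_0.1` schedules, and the seed is an arbitrary assumption.

```python
# Hedged sketch: approximating the RL Zoo hyperparameters with plain SB3.
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

def linear(initial):
    """Linear schedule from `initial` down to 0, matching the zoo's 'lin_x' notation."""
    return lambda progress_remaining: progress_remaining * initial

# AtariWrapper is applied by make_atari_env; frame_stack=4 becomes VecFrameStack.
env = VecFrameStack(make_atari_env("MsPacmanNoFrameskip-v4", n_envs=8, seed=0), n_stack=4)

model = PPO(
    "CnnPolicy",
    env,
    n_steps=128,
    n_epochs=4,
    batch_size=256,
    learning_rate=linear(2.5e-4),
    clip_range=linear(0.1),
    ent_coef=0.01,
    vf_coef=0.5,
)
model.learn(total_timesteps=10_000_000)
```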
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
Extended replay is available. Performance is only moderate after 10 million training steps.
rjindal/rohit-bloom-finetuned | rjindal | 2023-08-25T10:21:23Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-25T10:21:16Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
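Expressed in code, the settings above correspond to a `BitsAndBytesConfig` along the lines of the sketch below; the base checkpoint name is a placeholder, since the card does not say which BLOOM variant was fine-tuned.

```python
# Hedged sketch of the 8-bit loading setup implied by the list above.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-560m",  # placeholder: the card does not name the base model
    quantization_config=bnb_config,
    device_map="auto",
)
```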
### Framework versions
- PEFT 0.6.0.dev0
fp16-guy/Meichidark_Mix_fp16_cleaned | fp16-guy | 2023-08-25T10:19:31Z | 0 | 2 | null | ["text-to-image", "region:us"] | text-to-image | 2023-08-21T11:18:42Z |
---
pipeline_tag: text-to-image
---
Meichidark_Mix, but fp16/cleaned - smaller size, same result.
========
///
**[**original checkpoint link**](https://civitai.com/models/69158/meichidarkmix)**
*(all rights to the model belong to JuzuArupukato)*
---
**v4**
*[*grid 01*](https://huggingface.co/datasets/fp16-guy/grids/blob/main/meichidarkV4%2001%2020230820194436-111-meichidarkMix_meichidarkV4_fp16-Euler%20a-6.png) *(1.99gb version)*
*[*grid 02*](https://huggingface.co/datasets/fp16-guy/grids/blob/main/meichidarkV4%2002%2020230820194635-111-meichidarkMix_meichidarkV4_fp16_no_vae-Euler%20a-6.png) *(1.83gb version - no vae)*
*[*grid 03*](https://huggingface.co/datasets/fp16-guy/grids_inp/blob/main/meichidarkV4%20inp%2001%2020230822100253-111-meichidarkMix_meichidarkV4_fp16-Euler%20a-5.5.png) *(1.99gb inpainting version)*
*[*grid 04*](https://huggingface.co/datasets/fp16-guy/grids_inp/blob/main/meichidarkV4%20inp%2002%2020230822100426-111-meichidarkMix_meichidarkV4_fp16_no_vae-Euler%20a-5.5.png) *(1.83gb inpainting version - no vae)*
**v4.5**
*[*grid 01*](https://huggingface.co/datasets/fp16-guy/grids/blob/main/meichidarkV45%2001%2020230825121105-111-meichidarkMix_meichidarkV45_fp16-Euler%20a-6.png) *(1.99gb version)*
*[*grid 02*](https://huggingface.co/datasets/fp16-guy/grids/blob/main/meichidarkV45%2002%2020230825121342-111-meichidarkMix_meichidarkV45_fp16_no_vae-Euler%20a-6.png) *(1.83gb version - no vae)*
*[*grid 03*](https://huggingface.co/datasets/fp16-guy/grids_inp/blob/main/meichidarkV45%20inp%2001%2020230825125534-111-meichidarkMix_meichidarkV45_fp16-Euler%20a-5.5.png) *(1.99gb inpainting version)*
*[*grid 04*](https://huggingface.co/datasets/fp16-guy/grids_inp/blob/main/meichidarkV4%20inp%2002%2020230822100426-111-meichidarkMix_meichidarkV4_fp16_no_vae-Euler%20a-5.5.png) *(1.83gb inpainting version - no vae)*
bigmorning/whisper_syl_cv12_pad_lob100__0060 | bigmorning | 2023-08-25T10:16:59Z | 59 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-08-25T10:16:51Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_syl_cv12_pad_lob100__0060
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_syl_cv12_pad_lob100__0060
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0045
- Train Accuracy: 0.0362
- Train Wermet: 5.3986
- Validation Loss: 0.6088
- Validation Accuracy: 0.0238
- Validation Wermet: 27.5186
- Epoch: 59
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.0233 | 0.0115 | 1.6383 | 3.8616 | 0.0117 | 0.9516 | 0 |
| 4.4412 | 0.0127 | 0.8560 | 3.5410 | 0.0125 | 0.8971 | 1 |
| 4.0719 | 0.0138 | 0.8366 | 3.2944 | 0.0132 | 0.8706 | 2 |
| 3.8091 | 0.0146 | 0.8133 | 3.1691 | 0.0134 | 0.8487 | 3 |
| 3.6239 | 0.0152 | 0.7866 | 3.0647 | 0.0136 | 0.8282 | 4 |
| 3.4749 | 0.0156 | 0.7589 | 2.9835 | 0.0139 | 0.8049 | 5 |
| 3.3444 | 0.0161 | 0.7359 | 2.9351 | 0.0140 | 0.7979 | 6 |
| 3.2215 | 0.0165 | 0.7138 | 2.8468 | 0.0145 | 0.7589 | 7 |
| 3.0754 | 0.0172 | 0.6873 | 2.7530 | 0.0148 | 0.7413 | 8 |
| 2.8713 | 0.0181 | 0.6484 | 2.5226 | 0.0157 | 0.7017 | 9 |
| 2.5469 | 0.0197 | 0.5934 | 2.1931 | 0.0168 | 0.6285 | 10 |
| 2.0233 | 0.0225 | 0.4997 | 1.6411 | 0.0189 | 0.5215 | 11 |
| 1.3808 | 0.0264 | 0.3852 | 1.2401 | 0.0205 | 0.4238 | 12 |
| 0.9722 | 0.0290 | 0.3123 | 1.0195 | 0.0215 | 0.3682 | 13 |
| 0.7388 | 0.0305 | 0.2828 | 0.8773 | 0.0221 | 0.3322 | 14 |
| 0.5787 | 0.0317 | 0.2751 | 0.7970 | 0.0225 | 0.3083 | 15 |
| 0.4642 | 0.0325 | 0.2878 | 0.7315 | 0.0227 | 0.2964 | 16 |
| 0.3752 | 0.0332 | 0.4217 | 0.6897 | 0.0229 | 0.3297 | 17 |
| 0.3042 | 0.0338 | 0.7294 | 0.6572 | 0.0231 | 0.4453 | 18 |
| 0.2444 | 0.0343 | 1.1298 | 0.6369 | 0.0232 | 0.6637 | 19 |
| 0.1949 | 0.0348 | 1.6370 | 0.6180 | 0.0233 | 1.6119 | 20 |
| 0.1544 | 0.0352 | 1.6151 | 0.6149 | 0.0233 | 1.6843 | 21 |
| 0.1212 | 0.0355 | 1.3832 | 0.6066 | 0.0233 | 0.8721 | 22 |
| 0.0931 | 0.0357 | 1.2799 | 0.6034 | 0.0234 | 0.5109 | 23 |
| 0.0725 | 0.0359 | 1.0940 | 0.6102 | 0.0234 | 1.0111 | 24 |
| 0.0551 | 0.0361 | 1.2865 | 0.6000 | 0.0234 | 1.1393 | 25 |
| 0.0411 | 0.0361 | 1.8511 | 0.6037 | 0.0235 | 2.0574 | 26 |
| 0.0311 | 0.0362 | 1.7179 | 0.6018 | 0.0235 | 1.4847 | 27 |
| 0.0253 | 0.0362 | 0.9801 | 0.6010 | 0.0235 | 0.4457 | 28 |
| 0.0231 | 0.0362 | 0.9376 | 0.6046 | 0.0235 | 0.9247 | 29 |
| 0.0196 | 0.0362 | 0.6466 | 0.6078 | 0.0235 | 0.5271 | 30 |
| 0.0177 | 0.0362 | 0.4041 | 0.6155 | 0.0235 | 0.4352 | 31 |
| 0.0139 | 0.0362 | 0.4202 | 0.6037 | 0.0236 | 0.5585 | 32 |
| 0.0137 | 0.0362 | 0.8151 | 0.6015 | 0.0236 | 1.8476 | 33 |
| 0.0122 | 0.0362 | 3.4515 | 0.6043 | 0.0236 | 3.8210 | 34 |
| 0.0098 | 0.0362 | 1.1787 | 0.5985 | 0.0236 | 0.8094 | 35 |
| 0.0071 | 0.0362 | 0.9920 | 0.5992 | 0.0236 | 0.8755 | 36 |
| 0.0055 | 0.0362 | 2.4665 | 0.6047 | 0.0236 | 2.0127 | 37 |
| 0.0124 | 0.0362 | 4.2468 | 0.6089 | 0.0236 | 2.8886 | 38 |
| 0.0109 | 0.0362 | 2.0177 | 0.6097 | 0.0236 | 0.3417 | 39 |
| 0.0073 | 0.0362 | 0.9927 | 0.6057 | 0.0237 | 2.5519 | 40 |
| 0.0080 | 0.0362 | 1.7341 | 0.6099 | 0.0236 | 1.3119 | 41 |
| 0.0063 | 0.0362 | 2.4288 | 0.6058 | 0.0237 | 1.3465 | 42 |
| 0.0038 | 0.0362 | 1.4535 | 0.6022 | 0.0237 | 1.6804 | 43 |
| 0.0028 | 0.0362 | 2.2629 | 0.6001 | 0.0238 | 3.4388 | 44 |
| 0.0021 | 0.0362 | 3.5877 | 0.6018 | 0.0238 | 2.6165 | 45 |
| 0.0017 | 0.0362 | 3.0080 | 0.6043 | 0.0238 | 2.6827 | 46 |
| 0.0061 | 0.0362 | 2.5182 | 0.6545 | 0.0235 | 0.2316 | 47 |
| 0.0126 | 0.0362 | 0.2097 | 0.6206 | 0.0236 | 0.6194 | 48 |
| 0.0071 | 0.0362 | 0.3045 | 0.6047 | 0.0237 | 0.7476 | 49 |
| 0.0053 | 0.0362 | 1.2045 | 0.6010 | 0.0238 | 0.6553 | 50 |
| 0.0040 | 0.0362 | 0.2626 | 0.5964 | 0.0238 | 0.7027 | 51 |
| 0.0021 | 0.0362 | 0.5023 | 0.5950 | 0.0238 | 0.3812 | 52 |
| 0.0014 | 0.0362 | 0.7108 | 0.6233 | 0.0237 | 1.4647 | 53 |
| 0.0017 | 0.0362 | 0.3475 | 0.6087 | 0.0238 | 0.2213 | 54 |
| 0.0011 | 0.0362 | 0.1825 | 0.5984 | 0.0239 | 0.2391 | 55 |
| 0.0021 | 0.0362 | 1.0757 | 0.6211 | 0.0238 | 7.3766 | 56 |
| 0.0078 | 0.0362 | 2.1996 | 0.6349 | 0.0237 | 5.2774 | 57 |
| 0.0071 | 0.0362 | 1.2499 | 0.6225 | 0.0237 | 0.9927 | 58 |
| 0.0045 | 0.0362 | 5.3986 | 0.6088 | 0.0238 | 27.5186 | 59 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
sarwarbeing/accrmwmbg | sarwarbeing | 2023-08-25T10:09:46Z | 3 | 0 | sentence-transformers | ["sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us"] | text-classification | 2023-08-25T10:09:09Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# sarwarbeing/accrmwmbg
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("sarwarbeing/accrmwmbg")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
bigmorning/whisper_syl_cv12_pad_lob100__0055 | bigmorning | 2023-08-25T10:03:47Z | 59 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-08-25T10:03:38Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_syl_cv12_pad_lob100__0055
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_syl_cv12_pad_lob100__0055
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0017
- Train Accuracy: 0.0362
- Train Wermet: 0.3475
- Validation Loss: 0.6087
- Validation Accuracy: 0.0238
- Validation Wermet: 0.2213
- Epoch: 54
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.0233 | 0.0115 | 1.6383 | 3.8616 | 0.0117 | 0.9516 | 0 |
| 4.4412 | 0.0127 | 0.8560 | 3.5410 | 0.0125 | 0.8971 | 1 |
| 4.0719 | 0.0138 | 0.8366 | 3.2944 | 0.0132 | 0.8706 | 2 |
| 3.8091 | 0.0146 | 0.8133 | 3.1691 | 0.0134 | 0.8487 | 3 |
| 3.6239 | 0.0152 | 0.7866 | 3.0647 | 0.0136 | 0.8282 | 4 |
| 3.4749 | 0.0156 | 0.7589 | 2.9835 | 0.0139 | 0.8049 | 5 |
| 3.3444 | 0.0161 | 0.7359 | 2.9351 | 0.0140 | 0.7979 | 6 |
| 3.2215 | 0.0165 | 0.7138 | 2.8468 | 0.0145 | 0.7589 | 7 |
| 3.0754 | 0.0172 | 0.6873 | 2.7530 | 0.0148 | 0.7413 | 8 |
| 2.8713 | 0.0181 | 0.6484 | 2.5226 | 0.0157 | 0.7017 | 9 |
| 2.5469 | 0.0197 | 0.5934 | 2.1931 | 0.0168 | 0.6285 | 10 |
| 2.0233 | 0.0225 | 0.4997 | 1.6411 | 0.0189 | 0.5215 | 11 |
| 1.3808 | 0.0264 | 0.3852 | 1.2401 | 0.0205 | 0.4238 | 12 |
| 0.9722 | 0.0290 | 0.3123 | 1.0195 | 0.0215 | 0.3682 | 13 |
| 0.7388 | 0.0305 | 0.2828 | 0.8773 | 0.0221 | 0.3322 | 14 |
| 0.5787 | 0.0317 | 0.2751 | 0.7970 | 0.0225 | 0.3083 | 15 |
| 0.4642 | 0.0325 | 0.2878 | 0.7315 | 0.0227 | 0.2964 | 16 |
| 0.3752 | 0.0332 | 0.4217 | 0.6897 | 0.0229 | 0.3297 | 17 |
| 0.3042 | 0.0338 | 0.7294 | 0.6572 | 0.0231 | 0.4453 | 18 |
| 0.2444 | 0.0343 | 1.1298 | 0.6369 | 0.0232 | 0.6637 | 19 |
| 0.1949 | 0.0348 | 1.6370 | 0.6180 | 0.0233 | 1.6119 | 20 |
| 0.1544 | 0.0352 | 1.6151 | 0.6149 | 0.0233 | 1.6843 | 21 |
| 0.1212 | 0.0355 | 1.3832 | 0.6066 | 0.0233 | 0.8721 | 22 |
| 0.0931 | 0.0357 | 1.2799 | 0.6034 | 0.0234 | 0.5109 | 23 |
| 0.0725 | 0.0359 | 1.0940 | 0.6102 | 0.0234 | 1.0111 | 24 |
| 0.0551 | 0.0361 | 1.2865 | 0.6000 | 0.0234 | 1.1393 | 25 |
| 0.0411 | 0.0361 | 1.8511 | 0.6037 | 0.0235 | 2.0574 | 26 |
| 0.0311 | 0.0362 | 1.7179 | 0.6018 | 0.0235 | 1.4847 | 27 |
| 0.0253 | 0.0362 | 0.9801 | 0.6010 | 0.0235 | 0.4457 | 28 |
| 0.0231 | 0.0362 | 0.9376 | 0.6046 | 0.0235 | 0.9247 | 29 |
| 0.0196 | 0.0362 | 0.6466 | 0.6078 | 0.0235 | 0.5271 | 30 |
| 0.0177 | 0.0362 | 0.4041 | 0.6155 | 0.0235 | 0.4352 | 31 |
| 0.0139 | 0.0362 | 0.4202 | 0.6037 | 0.0236 | 0.5585 | 32 |
| 0.0137 | 0.0362 | 0.8151 | 0.6015 | 0.0236 | 1.8476 | 33 |
| 0.0122 | 0.0362 | 3.4515 | 0.6043 | 0.0236 | 3.8210 | 34 |
| 0.0098 | 0.0362 | 1.1787 | 0.5985 | 0.0236 | 0.8094 | 35 |
| 0.0071 | 0.0362 | 0.9920 | 0.5992 | 0.0236 | 0.8755 | 36 |
| 0.0055 | 0.0362 | 2.4665 | 0.6047 | 0.0236 | 2.0127 | 37 |
| 0.0124 | 0.0362 | 4.2468 | 0.6089 | 0.0236 | 2.8886 | 38 |
| 0.0109 | 0.0362 | 2.0177 | 0.6097 | 0.0236 | 0.3417 | 39 |
| 0.0073 | 0.0362 | 0.9927 | 0.6057 | 0.0237 | 2.5519 | 40 |
| 0.0080 | 0.0362 | 1.7341 | 0.6099 | 0.0236 | 1.3119 | 41 |
| 0.0063 | 0.0362 | 2.4288 | 0.6058 | 0.0237 | 1.3465 | 42 |
| 0.0038 | 0.0362 | 1.4535 | 0.6022 | 0.0237 | 1.6804 | 43 |
| 0.0028 | 0.0362 | 2.2629 | 0.6001 | 0.0238 | 3.4388 | 44 |
| 0.0021 | 0.0362 | 3.5877 | 0.6018 | 0.0238 | 2.6165 | 45 |
| 0.0017 | 0.0362 | 3.0080 | 0.6043 | 0.0238 | 2.6827 | 46 |
| 0.0061 | 0.0362 | 2.5182 | 0.6545 | 0.0235 | 0.2316 | 47 |
| 0.0126 | 0.0362 | 0.2097 | 0.6206 | 0.0236 | 0.6194 | 48 |
| 0.0071 | 0.0362 | 0.3045 | 0.6047 | 0.0237 | 0.7476 | 49 |
| 0.0053 | 0.0362 | 1.2045 | 0.6010 | 0.0238 | 0.6553 | 50 |
| 0.0040 | 0.0362 | 0.2626 | 0.5964 | 0.0238 | 0.7027 | 51 |
| 0.0021 | 0.0362 | 0.5023 | 0.5950 | 0.0238 | 0.3812 | 52 |
| 0.0014 | 0.0362 | 0.7108 | 0.6233 | 0.0237 | 1.4647 | 53 |
| 0.0017 | 0.0362 | 0.3475 | 0.6087 | 0.0238 | 0.2213 | 54 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
jjimdark/distilbert-base-uncased-finetuned-cola | jjimdark | 2023-08-25T10:02:06Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-08-24T04:35:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5290369945616428
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5219
- Matthews Correlation: 0.5290
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
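A minimal `Trainer` setup consistent with the hyperparameters above might look like the sketch below; the tokenization step and evaluation strategy are assumptions, not details taken from this card.

```python
# Hedged sketch of a Trainer run matching the listed hyperparameters.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

cola = load_dataset("glue", "cola").map(
    lambda batch: tokenizer(batch["sentence"], truncation=True), batched=True
)

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=5,
    seed=42,
    evaluation_strategy="epoch",  # assumption: per-epoch evaluation, as in the results table
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=cola["train"],
    eval_dataset=cola["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via the default data collator
)
trainer.train()
```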
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 268 | 0.5025 | 0.4154 |
| 0.4551 | 2.0 | 536 | 0.5071 | 0.4792 |
| 0.4551 | 3.0 | 804 | 0.5219 | 0.5290 |
| 0.2312 | 4.0 | 1072 | 0.6287 | 0.5089 |
| 0.2312 | 5.0 | 1340 | 0.6631 | 0.5182 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
MateiCv/spa-eng-pos-tagging-v1.3 | MateiCv | 2023-08-25T09:56:04Z | 178 | 0 | transformers | ["transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "base_model:google-bert/bert-base-multilingual-cased", "base_model:finetune:google-bert/bert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2023-08-25T09:55:31Z |
---
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: spa-eng-pos-tagging-v1.3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spa-eng-pos-tagging-v1.3
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1650
- Accuracy: 0.9471
- Precision: 0.9372
- Recall: 0.8815
- F1: 0.8779
- Hamming Loss: 0.0529
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Hamming Loss |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|:------------:|
| 0.3809 | 1.0 | 1744 | 0.2945 | 0.8919 | 0.8798 | 0.8290 | 0.8221 | 0.1081 |
| 0.2625 | 2.0 | 3488 | 0.2725 | 0.8975 | 0.9004 | 0.8279 | 0.8319 | 0.1025 |
| 0.1918 | 3.0 | 5232 | 0.1901 | 0.9317 | 0.9224 | 0.8645 | 0.8618 | 0.0683 |
| 0.1674 | 4.0 | 6976 | 0.1780 | 0.9369 | 0.9319 | 0.8695 | 0.8694 | 0.0631 |
| 0.1478 | 5.0 | 8720 | 0.1816 | 0.9385 | 0.9303 | 0.8735 | 0.8697 | 0.0615 |
| 0.1201 | 6.0 | 10464 | 0.1650 | 0.9471 | 0.9372 | 0.8815 | 0.8779 | 0.0529 |
| 0.096 | 7.0 | 12208 | 0.1663 | 0.9493 | 0.9390 | 0.8851 | 0.8806 | 0.0507 |
| 0.0844 | 8.0 | 13952 | 0.1715 | 0.9500 | 0.9421 | 0.8838 | 0.8815 | 0.0500 |
| 0.0687 | 9.0 | 15696 | 0.1877 | 0.9502 | 0.9433 | 0.8816 | 0.8811 | 0.0498 |
| 0.0573 | 10.0 | 17440 | 0.1949 | 0.9483 | 0.9444 | 0.8781 | 0.8799 | 0.0517 |
| 0.0533 | 11.0 | 19184 | 0.1960 | 0.9544 | 0.9450 | 0.8872 | 0.8847 | 0.0456 |
| 0.0399 | 12.0 | 20928 | 0.2012 | 0.9565 | 0.9494 | 0.8884 | 0.8876 | 0.0435 |
| 0.031 | 13.0 | 22672 | 0.2119 | 0.9571 | 0.9496 | 0.8889 | 0.8879 | 0.0429 |
| 0.0292 | 14.0 | 24416 | 0.2213 | 0.9587 | 0.9512 | 0.8906 | 0.8896 | 0.0413 |
| 0.024 | 15.0 | 26160 | 0.2274 | 0.9587 | 0.9517 | 0.8899 | 0.8895 | 0.0413 |
| 0.0198 | 16.0 | 27904 | 0.2314 | 0.9591 | 0.8894 | 0.8905 | 0.8899 | 0.0409 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Tokenizers 0.13.3
arroyadr/speecht5_finetuned_voxpopuli_it | arroyadr | 2023-08-25T09:53:57Z | 79 | 0 | transformers | ["transformers", "pytorch", "speecht5", "text-to-audio", "generated_from_trainer", "text-to-speech", "dataset:voxpopuli", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "model-index", "endpoints_compatible", "region:us"] | text-to-speech | 2023-08-25T08:40:42Z |
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
- text-to-speech
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_it
results:
- task:
name: text-to-speech
type: text-to-speech
dataset:
name: VOXPOPULI
type: facebook/voxpopuli
config: it
split: train
args: all
metrics:
- name: MSE
type: mse
value: 0.5028
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_it
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5028
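A hedged inference sketch for a fine-tuned SpeechT5 checkpoint such as this one is given below, assuming the processor files were pushed with the checkpoint (otherwise load them from microsoft/speecht5_tts); the zero speaker embedding is only a placeholder (a real 512-dimensional x-vector gives more natural speech), and the vocoder choice is an assumption.

```python
# Hedged usage sketch; the speaker embedding and vocoder are placeholders/assumptions.
import torch
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("arroyadr/speecht5_finetuned_voxpopuli_it")
model = SpeechT5ForTextToSpeech.from_pretrained("arroyadr/speecht5_finetuned_voxpopuli_it")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Buongiorno a tutti.", return_tensors="pt")
speaker_embeddings = torch.zeros(1, 512)  # placeholder; use a real x-vector in practice

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
# `speech` is a 1-D waveform tensor at 16 kHz that can be written out with soundfile.
```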
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5198 | 31.37 | 1000 | 0.5028 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
AndreeaSon/distilbert-dialects-classifier | AndreeaSon | 2023-08-25T09:49:36Z | 61 | 0 | transformers | ["transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-08-25T08:23:54Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: AndreeaSon/distilbert-dialects-classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# AndreeaSon/distilbert-dialects-classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0824
- Validation Loss: 0.1289
- Train Accuracy: 0.9628
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 10390, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
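The optimizer dictionary above amounts to Adam driven by a polynomial (effectively linear, since power=1.0) learning-rate decay. A minimal Keras reconstruction, not the original training script, would be:

```python
# Hedged reconstruction of the optimizer/schedule listed above.
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=10390,
    end_learning_rate=0.0,
    power=1.0,   # linear decay
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08
)
```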
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.6386 | 0.4342 | 0.8371 | 0 |
| 0.2623 | 0.3137 | 0.8901 | 1 |
| 0.0824 | 0.1289 | 0.9628 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.1.0
- Tokenizers 0.13.3
mediaProcessing/Transcriber-Medium | mediaProcessing | 2023-08-25T09:38:36Z | 8 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:dataset_whisper", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-08-10T14:29:43Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- dataset_whisper
metrics:
- wer
model-index:
- name: Transcriber-Medium
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: dataset_whisper
type: dataset_whisper
config: default
split: test
args: default
metrics:
- name: Wer
type: wer
value: 108.52032520325203
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Transcriber-Medium
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the dataset_whisper dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9360
- Wer: 108.5203
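If needed, the checkpoint can be loaded for inference through the standard ASR pipeline, as in the hedged sketch below; the audio file path is a placeholder.

```python
# Hedged usage sketch; "sample.wav" is a placeholder path to a local audio file.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="mediaProcessing/Transcriber-Medium")
print(asr("sample.wav")["text"])
```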
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7536 | 4.02 | 100 | 2.9360 | 108.5203 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.14.1
- Tokenizers 0.13.3
Kashfia/base_model | Kashfia | 2023-08-25T09:31:54Z | 0 | 0 | peft | ["peft", "pytorch", "llama", "8-bit", "bitsandbytes", "region:us"] | null | 2023-08-17T16:59:48Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
bigmorning/whisper_syl_cv12_pad_lob100__0040 | bigmorning | 2023-08-25T09:24:07Z | 59 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-08-25T09:23:58Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_syl_cv12_pad_lob100__0040
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_syl_cv12_pad_lob100__0040
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0109
- Train Accuracy: 0.0362
- Train Wermet: 2.0177
- Validation Loss: 0.6097
- Validation Accuracy: 0.0236
- Validation Wermet: 0.3417
- Epoch: 39
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.0233 | 0.0115 | 1.6383 | 3.8616 | 0.0117 | 0.9516 | 0 |
| 4.4412 | 0.0127 | 0.8560 | 3.5410 | 0.0125 | 0.8971 | 1 |
| 4.0719 | 0.0138 | 0.8366 | 3.2944 | 0.0132 | 0.8706 | 2 |
| 3.8091 | 0.0146 | 0.8133 | 3.1691 | 0.0134 | 0.8487 | 3 |
| 3.6239 | 0.0152 | 0.7866 | 3.0647 | 0.0136 | 0.8282 | 4 |
| 3.4749 | 0.0156 | 0.7589 | 2.9835 | 0.0139 | 0.8049 | 5 |
| 3.3444 | 0.0161 | 0.7359 | 2.9351 | 0.0140 | 0.7979 | 6 |
| 3.2215 | 0.0165 | 0.7138 | 2.8468 | 0.0145 | 0.7589 | 7 |
| 3.0754 | 0.0172 | 0.6873 | 2.7530 | 0.0148 | 0.7413 | 8 |
| 2.8713 | 0.0181 | 0.6484 | 2.5226 | 0.0157 | 0.7017 | 9 |
| 2.5469 | 0.0197 | 0.5934 | 2.1931 | 0.0168 | 0.6285 | 10 |
| 2.0233 | 0.0225 | 0.4997 | 1.6411 | 0.0189 | 0.5215 | 11 |
| 1.3808 | 0.0264 | 0.3852 | 1.2401 | 0.0205 | 0.4238 | 12 |
| 0.9722 | 0.0290 | 0.3123 | 1.0195 | 0.0215 | 0.3682 | 13 |
| 0.7388 | 0.0305 | 0.2828 | 0.8773 | 0.0221 | 0.3322 | 14 |
| 0.5787 | 0.0317 | 0.2751 | 0.7970 | 0.0225 | 0.3083 | 15 |
| 0.4642 | 0.0325 | 0.2878 | 0.7315 | 0.0227 | 0.2964 | 16 |
| 0.3752 | 0.0332 | 0.4217 | 0.6897 | 0.0229 | 0.3297 | 17 |
| 0.3042 | 0.0338 | 0.7294 | 0.6572 | 0.0231 | 0.4453 | 18 |
| 0.2444 | 0.0343 | 1.1298 | 0.6369 | 0.0232 | 0.6637 | 19 |
| 0.1949 | 0.0348 | 1.6370 | 0.6180 | 0.0233 | 1.6119 | 20 |
| 0.1544 | 0.0352 | 1.6151 | 0.6149 | 0.0233 | 1.6843 | 21 |
| 0.1212 | 0.0355 | 1.3832 | 0.6066 | 0.0233 | 0.8721 | 22 |
| 0.0931 | 0.0357 | 1.2799 | 0.6034 | 0.0234 | 0.5109 | 23 |
| 0.0725 | 0.0359 | 1.0940 | 0.6102 | 0.0234 | 1.0111 | 24 |
| 0.0551 | 0.0361 | 1.2865 | 0.6000 | 0.0234 | 1.1393 | 25 |
| 0.0411 | 0.0361 | 1.8511 | 0.6037 | 0.0235 | 2.0574 | 26 |
| 0.0311 | 0.0362 | 1.7179 | 0.6018 | 0.0235 | 1.4847 | 27 |
| 0.0253 | 0.0362 | 0.9801 | 0.6010 | 0.0235 | 0.4457 | 28 |
| 0.0231 | 0.0362 | 0.9376 | 0.6046 | 0.0235 | 0.9247 | 29 |
| 0.0196 | 0.0362 | 0.6466 | 0.6078 | 0.0235 | 0.5271 | 30 |
| 0.0177 | 0.0362 | 0.4041 | 0.6155 | 0.0235 | 0.4352 | 31 |
| 0.0139 | 0.0362 | 0.4202 | 0.6037 | 0.0236 | 0.5585 | 32 |
| 0.0137 | 0.0362 | 0.8151 | 0.6015 | 0.0236 | 1.8476 | 33 |
| 0.0122 | 0.0362 | 3.4515 | 0.6043 | 0.0236 | 3.8210 | 34 |
| 0.0098 | 0.0362 | 1.1787 | 0.5985 | 0.0236 | 0.8094 | 35 |
| 0.0071 | 0.0362 | 0.9920 | 0.5992 | 0.0236 | 0.8755 | 36 |
| 0.0055 | 0.0362 | 2.4665 | 0.6047 | 0.0236 | 2.0127 | 37 |
| 0.0124 | 0.0362 | 4.2468 | 0.6089 | 0.0236 | 2.8886 | 38 |
| 0.0109 | 0.0362 | 2.0177 | 0.6097 | 0.0236 | 0.3417 | 39 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
larabe/testt
|
larabe
| 2023-08-25T09:21:37Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-08-23T22:01:20Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: testt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# testt
This model is a fine-tuned version of [naver-clova-ix/donut-base-finetuned-cord-v2](https://huggingface.co/naver-clova-ix/donut-base-finetuned-cord-v2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.0
- Tokenizers 0.13.3
|
64FC/whisper-tiny-en
|
64FC
| 2023-08-25T09:20:10Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-25T08:36:01Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-en
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.36186540731995276
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-en
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7102
- Wer Ortho: 0.3646
- Wer: 0.3619
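A minimal transcription sketch with the 🤗 Transformers pipeline (the audio file name is a placeholder; any 16 kHz-compatible recording should work):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub and transcribe a local audio file.
asr = pipeline("automatic-speech-recognition", model="64FC/whisper-tiny-en")
print(asr("sample.wav")["text"])
```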
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 23
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0005 | 35.71 | 500 | 0.7102 | 0.3646 | 0.3619 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
AbeShinzo0708/Voicevox_SugaYoshihide
|
AbeShinzo0708
| 2023-08-25T09:18:05Z | 0 | 1 | null |
[
"菅義偉",
"Former Japanese Prime Minister",
"Suga",
"SugaYoshihide",
"Yoshihide",
"ja",
"license:openrail",
"region:us"
] | null | 2023-03-18T09:38:23Z |
---
license: openrail
language:
- ja
tags:
- 菅義偉
- Former Japanese Prime Minister
- Suga
- SugaYoshihide
- Yoshihide
---
|
AbeShinzo0708/so_vits_svc4_AbeShinzo
|
AbeShinzo0708
| 2023-08-25T09:17:02Z | 15 | 3 |
transformers
|
[
"transformers",
"Abe",
"Shinzo",
"AbeShinzo",
"Former Japanese Prime Minister",
"ja",
"license:openrail",
"endpoints_compatible",
"region:us"
] | null | 2023-03-30T05:55:41Z |
---
tags:
- Abe
- Shinzo
- AbeShinzo
- Former Japanese Prime Minister
language:
- ja
license: openrail
---
|
bigmorning/whisper_syl_cv12_pad_lob100__0035
|
bigmorning
| 2023-08-25T09:10:53Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-25T09:10:44Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_syl_cv12_pad_lob100__0035
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_syl_cv12_pad_lob100__0035
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0122
- Train Accuracy: 0.0362
- Train Wermet: 3.4515
- Validation Loss: 0.6043
- Validation Accuracy: 0.0236
- Validation Wermet: 3.8210
- Epoch: 34
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.0233 | 0.0115 | 1.6383 | 3.8616 | 0.0117 | 0.9516 | 0 |
| 4.4412 | 0.0127 | 0.8560 | 3.5410 | 0.0125 | 0.8971 | 1 |
| 4.0719 | 0.0138 | 0.8366 | 3.2944 | 0.0132 | 0.8706 | 2 |
| 3.8091 | 0.0146 | 0.8133 | 3.1691 | 0.0134 | 0.8487 | 3 |
| 3.6239 | 0.0152 | 0.7866 | 3.0647 | 0.0136 | 0.8282 | 4 |
| 3.4749 | 0.0156 | 0.7589 | 2.9835 | 0.0139 | 0.8049 | 5 |
| 3.3444 | 0.0161 | 0.7359 | 2.9351 | 0.0140 | 0.7979 | 6 |
| 3.2215 | 0.0165 | 0.7138 | 2.8468 | 0.0145 | 0.7589 | 7 |
| 3.0754 | 0.0172 | 0.6873 | 2.7530 | 0.0148 | 0.7413 | 8 |
| 2.8713 | 0.0181 | 0.6484 | 2.5226 | 0.0157 | 0.7017 | 9 |
| 2.5469 | 0.0197 | 0.5934 | 2.1931 | 0.0168 | 0.6285 | 10 |
| 2.0233 | 0.0225 | 0.4997 | 1.6411 | 0.0189 | 0.5215 | 11 |
| 1.3808 | 0.0264 | 0.3852 | 1.2401 | 0.0205 | 0.4238 | 12 |
| 0.9722 | 0.0290 | 0.3123 | 1.0195 | 0.0215 | 0.3682 | 13 |
| 0.7388 | 0.0305 | 0.2828 | 0.8773 | 0.0221 | 0.3322 | 14 |
| 0.5787 | 0.0317 | 0.2751 | 0.7970 | 0.0225 | 0.3083 | 15 |
| 0.4642 | 0.0325 | 0.2878 | 0.7315 | 0.0227 | 0.2964 | 16 |
| 0.3752 | 0.0332 | 0.4217 | 0.6897 | 0.0229 | 0.3297 | 17 |
| 0.3042 | 0.0338 | 0.7294 | 0.6572 | 0.0231 | 0.4453 | 18 |
| 0.2444 | 0.0343 | 1.1298 | 0.6369 | 0.0232 | 0.6637 | 19 |
| 0.1949 | 0.0348 | 1.6370 | 0.6180 | 0.0233 | 1.6119 | 20 |
| 0.1544 | 0.0352 | 1.6151 | 0.6149 | 0.0233 | 1.6843 | 21 |
| 0.1212 | 0.0355 | 1.3832 | 0.6066 | 0.0233 | 0.8721 | 22 |
| 0.0931 | 0.0357 | 1.2799 | 0.6034 | 0.0234 | 0.5109 | 23 |
| 0.0725 | 0.0359 | 1.0940 | 0.6102 | 0.0234 | 1.0111 | 24 |
| 0.0551 | 0.0361 | 1.2865 | 0.6000 | 0.0234 | 1.1393 | 25 |
| 0.0411 | 0.0361 | 1.8511 | 0.6037 | 0.0235 | 2.0574 | 26 |
| 0.0311 | 0.0362 | 1.7179 | 0.6018 | 0.0235 | 1.4847 | 27 |
| 0.0253 | 0.0362 | 0.9801 | 0.6010 | 0.0235 | 0.4457 | 28 |
| 0.0231 | 0.0362 | 0.9376 | 0.6046 | 0.0235 | 0.9247 | 29 |
| 0.0196 | 0.0362 | 0.6466 | 0.6078 | 0.0235 | 0.5271 | 30 |
| 0.0177 | 0.0362 | 0.4041 | 0.6155 | 0.0235 | 0.4352 | 31 |
| 0.0139 | 0.0362 | 0.4202 | 0.6037 | 0.0236 | 0.5585 | 32 |
| 0.0137 | 0.0362 | 0.8151 | 0.6015 | 0.0236 | 1.8476 | 33 |
| 0.0122 | 0.0362 | 3.4515 | 0.6043 | 0.0236 | 3.8210 | 34 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
Isotonic/flan-t5-base-trading_candles
|
Isotonic
| 2023-08-25T09:10:33Z | 126 | 11 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:0xMaka/trading-candles-subset-qa-format",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-22T16:32:50Z |
---
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan-t5-base-trading_candles
results: []
datasets:
- 0xMaka/trading-candles-subset-qa-format
widget:
- text: "Context: -30811302.00,464.00,-156202.00,309984.00,276.00,7664.00,4174.00,824467.00,19741.12,19798.04,19860.18,19567.9 Question: identify candle"
- text: "Context: 867553.00,-4282049.00,6306.00,4440418.00,13.00,50962.00,101.00,59152496.00,39512.71,39477.49,39512.71,39380.74 Question: identify candle"
- text: "Context: -206.00,626162.00,-35917428.00,-49739.00,6669939.00,64.00,19988.00,7094559.00,17752.71,17752.71,17752.71,17752.71 Question: find candle: Four Price Doji"
pipeline_tag: text2text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-trading_candles
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the [0xMaka/trading-candles-subset-qa-format](https://huggingface.co/datasets/0xMaka/trading-candles-subset-qa-format) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0061
- Rouge1: 88.3665
- Rouge2: 86.86
- Rougel: 88.3651
- Rougelsum: 88.3665
- Gen Len: 18.9025
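A minimal usage sketch with the 🤗 Transformers pipeline; the prompt reuses the first widget example above, and `max_new_tokens` is an arbitrary choice:
```python
from transformers import pipeline

candles = pipeline("text2text-generation", model="Isotonic/flan-t5-base-trading_candles")
prompt = ("Context: -30811302.00,464.00,-156202.00,309984.00,276.00,7664.00,4174.00,"
          "824467.00,19741.12,19798.04,19860.18,19567.9 Question: identify candle")
print(candles(prompt, max_new_tokens=32)[0]["generated_text"])
```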
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.019 | 1.0 | 70009 | 0.0089 | 88.0774 | 86.4734 | 88.0734 | 88.0748 | 18.9022 |
| 0.0095 | 2.0 | 140018 | 0.0069 | 88.3636 | 86.8542 | 88.3612 | 88.3625 | 18.9016 |
| 0.0071 | 3.0 | 210027 | 0.0061 | 88.3665 | 86.86 | 88.3651 | 88.3665 | 18.9025 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
dmitrijsk/blooms-3b-rick-trainer
|
dmitrijsk
| 2023-08-25T09:05:39Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:bigscience/bloomz-3b",
"base_model:finetune:bigscience/bloomz-3b",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2023-08-25T08:38:53Z |
---
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloomz-3b
tags:
- generated_from_trainer
model-index:
- name: blooms-3b-rick-trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# blooms-3b-rick-trainer
This model is a fine-tuned version of [bigscience/bloomz-3b](https://huggingface.co/bigscience/bloomz-3b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5774
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.5704 | 1.0 | 7 | 3.6198 |
| 3.467 | 2.0 | 14 | 3.5774 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Skie0007/a2c-PandaReachDense-v3
|
Skie0007
| 2023-08-25T08:56:30Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-25T08:50:41Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.22 +/- 0.15
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption based on this repo's name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# The checkpoint filename is assumed from this repo's naming convention.
checkpoint = load_from_hub("Skie0007/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
nishant-glance/model-sd-1-4-priorp-unet-1200
|
nishant-glance
| 2023-08-25T08:55:23Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-25T08:22:30Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - nishant-glance/model-sd-1-4-priorp-unet-1200
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks person using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: True.
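A minimal generation sketch with 🧨 Diffusers, using the instance prompt above (fp16 on CUDA is an assumption; drop `torch_dtype` and `.to("cuda")` to run on CPU):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "nishant-glance/model-sd-1-4-priorp-unet-1200", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of sks person", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks_person.png")
```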
|
diana9m/falcon-7b-sharded-bf16-finetuned-mental-health-NUNA_reevaluated
|
diana9m
| 2023-08-25T08:47:26Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:ybelkada/falcon-7b-sharded-bf16",
"base_model:finetune:ybelkada/falcon-7b-sharded-bf16",
"region:us"
] | null | 2023-08-24T13:06:28Z |
---
base_model: ybelkada/falcon-7b-sharded-bf16
tags:
- generated_from_trainer
model-index:
- name: falcon-7b-sharded-bf16-finetuned-mental-health-NUNA_reevaluated
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon-7b-sharded-bf16-finetuned-mental-health-NUNA_reevaluated
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 320
### Training results
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Vezora/Narwhal-7b
|
Vezora
| 2023-08-25T08:43:13Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"orca",
"stable",
"stability",
"bloke",
"hf",
"7b",
"13b",
"34b",
"70b",
"22b",
"60b",
"coding",
"progaming",
"logic",
"deduction",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-24T07:33:10Z |
---
tags:
- llama
- orca
- stable
- stability
- bloke
- hf
- 7b
- 13b
- 34b
- 70b
- 22b
- 60b
- coding
- progaming
- logic
- deduction
---
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Narwhal-7b Model Overview</title>
<style>
body {
font-family: Arial, sans-serif;
color: #333;
line-height: 1.6;
margin: 30px;
}
h1 {
color: #2c3e50;
}
img {
display: block;
margin: 20px auto;
}
p {
margin: 15px 0;
}
</style>
</head>
<body>
<img src="https://i.imgur.com/FYuPeho.jpg" width="300" alt="Description of the image">
<h1>Narwhal-7b</h1>
<p>Model Created by Vezora</p>
<p>The Narwhal-7b is an innovative model comprising a blend of 60% Stable Beluga and 40% MegaCoder. This combination was further enhanced by integrating 40% Wizard-Math and 40% Llama Chat7b.</p>
<p>This synthesis has led to a model that demonstrates remarkable performance in mathematical tasks. It also maintains a robust ability to respond to various queries. During the testing phase, we employed both Stable Beluga and Llama Chat prompts, with the Llama v2 prompting yielding superior results. This improvement was likely a result of it being the final merge in the development process.</p>
<p>It's worth noting that the Narwhal-7b may stand as one of the best-performing models in its category. However, those interested in utilizing it must be aware of the commercial licenses associated with the underlying models. Due to some datasets used in training originating from OpenAI, this model is explicitly not intended for commercial use.</p>
<p><strong>Benchmarks:</strong> Comprehensive benchmarking details will be available soon.</p>
</body>
</html>
|
JakeYunwooKim/mt5-small-finetuned-amazon-en-es
|
JakeYunwooKim
| 2023-08-25T08:38:47Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-08-25T07:03:41Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0341
- Rouge1: 16.9947
- Rouge2: 8.1917
- Rougel: 16.5751
- Rougelsum: 16.6864
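A minimal usage sketch with the 🤗 Transformers summarization pipeline (the review text is a made-up placeholder):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="JakeYunwooKim/mt5-small-finetuned-amazon-en-es")
review = "I loved this book: the characters were believable and the plot kept me hooked until the end."
print(summarizer(review)[0]["summary_text"])
```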
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 7.0901 | 1.0 | 1209 | 3.2969 | 13.9062 | 5.7456 | 13.4148 | 13.409 |
| 3.9124 | 2.0 | 2418 | 3.1529 | 16.6418 | 8.4375 | 15.85 | 15.9119 |
| 3.5991 | 3.0 | 3627 | 3.1181 | 18.7571 | 9.9189 | 18.0758 | 18.1545 |
| 3.4197 | 4.0 | 4836 | 3.0619 | 17.8796 | 8.8002 | 17.2547 | 17.3509 |
| 3.3215 | 5.0 | 6045 | 3.0706 | 16.9356 | 7.5098 | 16.2641 | 16.468 |
| 3.2448 | 6.0 | 7254 | 3.0455 | 16.7471 | 7.7886 | 16.345 | 16.4044 |
| 3.2033 | 7.0 | 8463 | 3.0349 | 17.0401 | 8.3424 | 16.6741 | 16.7633 |
| 3.177 | 8.0 | 9672 | 3.0341 | 16.9947 | 8.1917 | 16.5751 | 16.6864 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|
Donnaphat/bloomz-560m_PROMPT_TUNING_CAUSAL_LM
|
Donnaphat
| 2023-08-25T08:36:44Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-25T08:36:41Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
googcheng/7b-viggo
|
googcheng
| 2023-08-25T08:32:33Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-08-21T08:25:16Z |
Fine-tuned LLaMA 2 on the ViGGO dataset, just to test something like function calling.
Derived from https://www.anyscale.com/blog/fine-tuning-llama-2-a-comprehensive-case-study-for-tailoring-models-to-unique-applications
|
bigmorning/whisper_syl_cv12_pad_lob100__0020
|
bigmorning
| 2023-08-25T08:31:14Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-25T08:31:05Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_syl_cv12_pad_lob100__0020
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_syl_cv12_pad_lob100__0020
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2444
- Train Accuracy: 0.0343
- Train Wermet: 1.1298
- Validation Loss: 0.6369
- Validation Accuracy: 0.0232
- Validation Wermet: 0.6637
- Epoch: 19
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.0233 | 0.0115 | 1.6383 | 3.8616 | 0.0117 | 0.9516 | 0 |
| 4.4412 | 0.0127 | 0.8560 | 3.5410 | 0.0125 | 0.8971 | 1 |
| 4.0719 | 0.0138 | 0.8366 | 3.2944 | 0.0132 | 0.8706 | 2 |
| 3.8091 | 0.0146 | 0.8133 | 3.1691 | 0.0134 | 0.8487 | 3 |
| 3.6239 | 0.0152 | 0.7866 | 3.0647 | 0.0136 | 0.8282 | 4 |
| 3.4749 | 0.0156 | 0.7589 | 2.9835 | 0.0139 | 0.8049 | 5 |
| 3.3444 | 0.0161 | 0.7359 | 2.9351 | 0.0140 | 0.7979 | 6 |
| 3.2215 | 0.0165 | 0.7138 | 2.8468 | 0.0145 | 0.7589 | 7 |
| 3.0754 | 0.0172 | 0.6873 | 2.7530 | 0.0148 | 0.7413 | 8 |
| 2.8713 | 0.0181 | 0.6484 | 2.5226 | 0.0157 | 0.7017 | 9 |
| 2.5469 | 0.0197 | 0.5934 | 2.1931 | 0.0168 | 0.6285 | 10 |
| 2.0233 | 0.0225 | 0.4997 | 1.6411 | 0.0189 | 0.5215 | 11 |
| 1.3808 | 0.0264 | 0.3852 | 1.2401 | 0.0205 | 0.4238 | 12 |
| 0.9722 | 0.0290 | 0.3123 | 1.0195 | 0.0215 | 0.3682 | 13 |
| 0.7388 | 0.0305 | 0.2828 | 0.8773 | 0.0221 | 0.3322 | 14 |
| 0.5787 | 0.0317 | 0.2751 | 0.7970 | 0.0225 | 0.3083 | 15 |
| 0.4642 | 0.0325 | 0.2878 | 0.7315 | 0.0227 | 0.2964 | 16 |
| 0.3752 | 0.0332 | 0.4217 | 0.6897 | 0.0229 | 0.3297 | 17 |
| 0.3042 | 0.0338 | 0.7294 | 0.6572 | 0.0231 | 0.4453 | 18 |
| 0.2444 | 0.0343 | 1.1298 | 0.6369 | 0.0232 | 0.6637 | 19 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
Vertti/TuumaPEFTDialogue01
|
Vertti
| 2023-08-25T08:29:37Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-25T08:29:18Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
arroyadr/speecht5_finetuned_voxpopuli_nl
|
arroyadr
| 2023-08-25T08:26:45Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-08-24T21:59:06Z |
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5937
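A minimal synthesis sketch (assumptions: the processor files are included in this repo — otherwise load them from `microsoft/speecht5_tts` — and the zero speaker embedding is only a placeholder; a real x-vector, e.g. from `Matthijs/cmu-arctic-xvectors`, gives much better voices):
```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

repo = "arroyadr/speecht5_finetuned_voxpopuli_nl"
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hallo, dit is een test.", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder x-vector
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("tts_nl.wav", speech.numpy(), samplerate=16000)
```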
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6482 | 3.14 | 100 | 0.5937 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
allen37/llama2-qlora-finetuined-frech
|
allen37
| 2023-08-25T08:21:47Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-25T08:21:30Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
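For reference, a sketch of an equivalent `transformers.BitsAndBytesConfig` matching the values above (an illustration, not code from the original training script):
```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with float16 compute, as listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```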
### Framework versions
- PEFT 0.6.0.dev0
|
bigmorning/whisper_syl_cv12_pad_lob100__0015
|
bigmorning
| 2023-08-25T08:18:04Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-25T08:17:55Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_syl_cv12_pad_lob100__0015
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_syl_cv12_pad_lob100__0015
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7388
- Train Accuracy: 0.0305
- Train Wermet: 0.2828
- Validation Loss: 0.8773
- Validation Accuracy: 0.0221
- Validation Wermet: 0.3322
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.0233 | 0.0115 | 1.6383 | 3.8616 | 0.0117 | 0.9516 | 0 |
| 4.4412 | 0.0127 | 0.8560 | 3.5410 | 0.0125 | 0.8971 | 1 |
| 4.0719 | 0.0138 | 0.8366 | 3.2944 | 0.0132 | 0.8706 | 2 |
| 3.8091 | 0.0146 | 0.8133 | 3.1691 | 0.0134 | 0.8487 | 3 |
| 3.6239 | 0.0152 | 0.7866 | 3.0647 | 0.0136 | 0.8282 | 4 |
| 3.4749 | 0.0156 | 0.7589 | 2.9835 | 0.0139 | 0.8049 | 5 |
| 3.3444 | 0.0161 | 0.7359 | 2.9351 | 0.0140 | 0.7979 | 6 |
| 3.2215 | 0.0165 | 0.7138 | 2.8468 | 0.0145 | 0.7589 | 7 |
| 3.0754 | 0.0172 | 0.6873 | 2.7530 | 0.0148 | 0.7413 | 8 |
| 2.8713 | 0.0181 | 0.6484 | 2.5226 | 0.0157 | 0.7017 | 9 |
| 2.5469 | 0.0197 | 0.5934 | 2.1931 | 0.0168 | 0.6285 | 10 |
| 2.0233 | 0.0225 | 0.4997 | 1.6411 | 0.0189 | 0.5215 | 11 |
| 1.3808 | 0.0264 | 0.3852 | 1.2401 | 0.0205 | 0.4238 | 12 |
| 0.9722 | 0.0290 | 0.3123 | 1.0195 | 0.0215 | 0.3682 | 13 |
| 0.7388 | 0.0305 | 0.2828 | 0.8773 | 0.0221 | 0.3322 | 14 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
raygx/sushantNGPT-NepSA
|
raygx
| 2023-08-25T08:14:05Z | 64 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-classification",
"generated_from_keras_callback",
"license:bsd-3-clause-clear",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-25T07:51:23Z |
---
license: bsd-3-clause-clear
tags:
- generated_from_keras_callback
model-index:
- name: sushantNGPT-NepSA
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# sushantNGPT-NepSA
This model is a fine-tuned version of [Shushant/thesis_nepaliGPT](https://huggingface.co/Shushant/thesis_nepaliGPT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6003
- Validation Loss: 0.6551
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 6.99e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.001}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.8939 | 0.7639 | 0 |
| 0.7120 | 0.7073 | 1 |
| 0.6481 | 0.6529 | 2 |
| 0.6003 | 0.6551 | 3 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.11.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
4i-ai/BERT_disfluency_cls
|
4i-ai
| 2023-08-25T08:09:58Z | 149 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"disfluency identification",
"en",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-23T14:27:16Z |
---
license: cc-by-nc-sa-4.0
language:
- en
tags:
- disfluency identification
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This BERT model classifies a dialogue system's user utterance as fluent or disfluent.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** 4i Intelligent Insights
- **Model type:** BERT base cased
- **Language(s) (NLP):** English
- **License:** cc-by-nc-sa-4.0
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** http://research.4i.ai/code/BERT_disfluency_cls
- **Paper:** https://aclanthology.org/2023.findings-acl.728/
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
The model is intended to be used for classifying English utterances of users interacting with a dialogue system. In our evaluation, the user utterances were speech transcriptions.
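A minimal classification sketch with the 🤗 Transformers pipeline (the utterance is a made-up example transcription):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="4i-ai/BERT_disfluency_cls")
print(clf("I want to uh I mean I would like to book a flight"))
```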
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
This model has not been evaluated to be used on machine-generated text.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This model may not be accurate with non-native English speakers.
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The model has been fine-tuned on the Fisher English Corpus:
http://github.com/joshua-decoder/fisher-callhome-corpus
|
zimhe/controlnet-wall-constrained-floorplan
|
zimhe
| 2023-08-25T08:06:23Z | 3 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"dataset:zimhe/wall-constrained-floorplans-10k",
"region:us"
] | null | 2023-08-23T09:05:20Z |
---
datasets:
- zimhe/wall-constrained-floorplans-10k
---
|
LibrAI/longformer-harmful-ro
|
LibrAI
| 2023-08-25T07:57:41Z | 18,067 | 1 |
transformers
|
[
"transformers",
"pytorch",
"longformer",
"text-classification",
"generated_from_trainer",
"base_model:allenai/longformer-base-4096",
"base_model:finetune:allenai/longformer-base-4096",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-25T07:40:06Z |
---
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: longformer-harmful-ro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longformer-harmful-ro
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0102
- Accuracy: 0.996
- Precision: 0.998
- Recall: 0.955
- F1: 0.975
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:-----:|
| No log | 1.0 | 89 | 0.0972 | 0.978 | 0.989 | 0.75 | 0.828 |
| No log | 2.0 | 178 | 0.0337 | 0.986 | 0.993 | 0.841 | 0.902 |
| No log | 3.0 | 267 | 0.0102 | 0.996 | 0.998 | 0.955 | 0.975 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
chrisrtt/gbert-multi-class-german-hate
|
chrisrtt
| 2023-08-25T07:53:41Z | 645 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-25T07:39:21Z |
# Model Card for German Hate Speech Classifier
## Model Details
### Introduction
This model was developed to explore the potential of German language models for multi-class classification of hate speech in German online journals. It is a fine-tuned version of the GBERT model of Chan, Schweter, and Möller (2020).
### Dataset
The dataset used for training is a consolidation of three pre-existing German hate speech datasets:
- **RP (Assenmacher et al., 2021)**
- **DeTox (Demus et al., 2022)**
- **Twitter dataset (Glasenbach, 2022)**
The combined dataset underwent cleaning to minimize biases and remove redundant data.
## Performance
Our experiments delivered promising results, with the model reliably classifying comments into:
- **No Hate Speech**
- **Other Hate Speech (Threat, Insult, Profanity)**
- **Political Hate Speech**
- **Racist Hate Speech**
- **Sexist Hate Speech**
The model achieved a macro F1-score of 0.775. However, further improvements are needed to reduce misclassifications; in particular, short comments are disproportionately classified as Sexist Hate Speech.
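A minimal usage sketch with the 🤗 Transformers pipeline (the comment is a harmless placeholder):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="chrisrtt/gbert-multi-class-german-hate")
print(clf("Das ist ein ganz normaler Kommentar."))
```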
|
LibrAI/bert-action-ro
|
LibrAI
| 2023-08-25T07:39:45Z | 113 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-24T12:10:09Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert-action-ro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-action-ro
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1567
- Accuracy: 0.958
- Precision: 0.949
- Recall: 0.941
- F1: 0.944
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:-----:|
| No log | 1.0 | 89 | 0.3700 | 0.876 | 0.836 | 0.809 | 0.815 |
| No log | 2.0 | 178 | 0.2057 | 0.936 | 0.927 | 0.924 | 0.924 |
| No log | 3.0 | 267 | 0.1567 | 0.958 | 0.949 | 0.941 | 0.944 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
LibrAI/bert-harmful-ro
|
LibrAI
| 2023-08-25T07:39:06Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-25T07:20:22Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert-harmful-ro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-harmful-ro
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0200
- Accuracy: 0.994
- Precision: 0.997
- Recall: 0.921
- F1: 0.956
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:-----:|
| No log | 1.0 | 89 | 0.1010 | 0.972 | 0.986 | 0.632 | 0.701 |
| No log | 2.0 | 178 | 0.0376 | 0.99 | 0.995 | 0.868 | 0.922 |
| No log | 3.0 | 267 | 0.0200 | 0.994 | 0.997 | 0.921 | 0.956 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
starboi/env_semeval_bigscience_bloomz-560m_PROMPT_TUNING_CAUSAL_LM_v1_50.pt
|
starboi
| 2023-08-25T07:38:05Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-25T07:38:02Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
natsusakiyomi/AsagaoMix
|
natsusakiyomi
| 2023-08-25T07:32:49Z | 13 | 7 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"ja",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-29T03:42:53Z |
---
license: creativeml-openrail-m
language:
- ja
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
library_name: diffusers
---
---
<h4>📄 ライセンス / License</h4>
<div class="px-2">
<table class="table-fixed border mt-0 text-xs">
<tbody>
<tr>
<td class="px-4 text-base" colspan="2">
<a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">
修正 CreativeML OpenRAIL-M ライセンス / Modified CreativeML OpenRAIL-M license
</a>
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルのクレジットを入れずに使用する<br>
Use the model without crediting the creator
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルで生成した画像を商用利用する<br>
Sell images they generate
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデルを商用の画像生成サービスで利用する</br>
Run on services that generate images for money
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルを使用したマージモデルを共有する<br>
Share merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデル、またはこのモデルをマージしたモデルを販売する</br>
Sell this model or merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデルをマージしたモデルに異なる権限を設定する</br>
Have different permissions when sharing merges
</td>
</tr>
</tbody>
</table>
</div>
|
MateiCv/spa-eng-pos-tagging-v6
|
MateiCv
| 2023-08-25T07:27:55Z | 180 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:finetune:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-25T07:27:22Z |
---
license: apache-2.0
base_model: distilbert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: spa-eng-pos-tagging-v6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spa-eng-pos-tagging-v6
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3128
- Accuracy: 0.9056
- Precision: 0.9032
- Recall: 0.8293
- F1: 0.8345
- Hamming Loss: 0.0944
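A minimal usage sketch with the 🤗 Transformers token-classification pipeline (the code-switched sentence is a placeholder):
```python
from transformers import pipeline

tagger = pipeline("token-classification", model="MateiCv/spa-eng-pos-tagging-v6")
print(tagger("I like comer tacos los viernes"))
```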
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 14
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Hamming Loss |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|:------------:|
| 1.0141 | 1.0 | 1744 | 0.7804 | 0.7158 | 0.7328 | 0.6183 | 0.6345 | 0.2842 |
| 0.6292 | 2.0 | 3488 | 0.5384 | 0.7973 | 0.8111 | 0.7029 | 0.7213 | 0.2027 |
| 0.4438 | 3.0 | 5232 | 0.4236 | 0.8462 | 0.8346 | 0.7762 | 0.7732 | 0.1538 |
| 0.3626 | 4.0 | 6976 | 0.3856 | 0.8651 | 0.8524 | 0.7933 | 0.7903 | 0.1349 |
| 0.3141 | 5.0 | 8720 | 0.3697 | 0.8712 | 0.8688 | 0.7998 | 0.8028 | 0.1288 |
| 0.2575 | 6.0 | 10464 | 0.3689 | 0.8751 | 0.8758 | 0.8003 | 0.8058 | 0.1249 |
| 0.2117 | 7.0 | 12208 | 0.3329 | 0.8890 | 0.8832 | 0.8169 | 0.8184 | 0.1110 |
| 0.1864 | 8.0 | 13952 | 0.3235 | 0.9010 | 0.8946 | 0.8278 | 0.8293 | 0.0990 |
| 0.1555 | 9.0 | 15696 | 0.3128 | 0.9056 | 0.9032 | 0.8293 | 0.8345 | 0.0944 |
| 0.1322 | 10.0 | 17440 | 0.3311 | 0.9088 | 0.9010 | 0.8376 | 0.8377 | 0.0912 |
| 0.1111 | 11.0 | 19184 | 0.3394 | 0.9101 | 0.9081 | 0.8319 | 0.8383 | 0.0899 |
| 0.0874 | 12.0 | 20928 | 0.3472 | 0.9148 | 0.9100 | 0.8407 | 0.8440 | 0.0852 |
| 0.0659 | 13.0 | 22672 | 0.3635 | 0.9131 | 0.9072 | 0.8400 | 0.8422 | 0.0869 |
| 0.0608 | 14.0 | 24416 | 0.3560 | 0.9187 | 0.9140 | 0.8452 | 0.8482 | 0.0813 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Tokenizers 0.13.3
|
abhiramsatuluri34/roberta-finetuned-subjqa-movies_2
|
abhiramsatuluri34
| 2023-08-25T07:15:26Z | 122 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:deepset/roberta-base-squad2",
"base_model:finetune:deepset/roberta-base-squad2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-24T16:19:42Z |
---
license: cc-by-4.0
base_model: deepset/roberta-base-squad2
tags:
- generated_from_trainer
model-index:
- name: roberta-finetuned-subjqa-movies_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-subjqa-movies_2
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
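A minimal usage sketch with the 🤗 Transformers question-answering pipeline (question and context are placeholders):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="abhiramsatuluri34/roberta-finetuned-subjqa-movies_2")
print(qa(question="How was the acting?", context="The plot dragged a little, but the acting was superb."))
```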
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Datactive/BERT_sud_queries_classification
|
Datactive
| 2023-08-25T07:14:42Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-24T17:02:17Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Datactive/BERT_sud_queries_classification
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Datactive/BERT_sud_queries_classification
This model is a fine-tuned version of [ai-forever/ruBert-base](https://huggingface.co/ai-forever/ruBert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0277
- Validation Loss: 0.0188
- Train F1: 0.9958
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1419, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train F1 | Epoch |
|:----------:|:---------------:|:--------:|:-----:|
| 0.0277 | 0.0188 | 0.9958 | 0 |
### Framework versions
- Transformers 4.29.0.dev0
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Kaisergura/AceTaffy_sovits4.0
|
Kaisergura
| 2023-08-25T07:00:29Z | 2 | 1 |
transformers
|
[
"transformers",
"license:creativeml-openrail-m",
"endpoints_compatible",
"region:us"
] | null | 2023-08-25T06:50:08Z |
---
license: creativeml-openrail-m
---
|
danrothman/sungal-starter-app
|
danrothman
| 2023-08-25T06:53:24Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2023-08-25T06:40:27Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Dan Rothman]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
litagin/rvc_jikken
|
litagin
| 2023-08-25T06:42:22Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-24T09:50:32Z |
---
license: creativeml-openrail-m
---
|
bogeumkim/polyglot-1.3b-qlora-emotion-classification
|
bogeumkim
| 2023-08-25T06:35:41Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-25T06:23:51Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
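As a rough illustration (not taken from the original training script; names and usage are assumptions), the quantization config listed above corresponds approximately to the following `transformers` `BitsAndBytesConfig`:
```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and bfloat16 compute, matching the list above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```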
### Framework versions
- PEFT 0.5.0
|
rachit221195/rachit-trained-xl-colab
|
rachit221195
| 2023-08-25T06:27:16Z | 6 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-08-25T06:04:55Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks human
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - rachit221195/rachit-trained-xl-colab
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of sks human using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
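A minimal loading sketch (assuming the standard `diffusers` LoRA-loading API applies to these weights; generation settings are illustrative):
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base model and attach these LoRA adaptation weights
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("rachit221195/rachit-trained-xl-colab")

# Generate with the instance prompt used during training
image = pipe("a photo of sks human", num_inference_steps=30).images[0]
```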
|
wineminem/results
|
wineminem
| 2023-08-25T06:26:24Z | 2 | 0 |
transformers
|
[
"transformers",
"generated_from_trainer",
"base_model:ybelkada/falcon-7b-sharded-bf16",
"base_model:finetune:ybelkada/falcon-7b-sharded-bf16",
"endpoints_compatible",
"region:us"
] | null | 2023-08-25T05:08:31Z |
---
base_model: ybelkada/falcon-7b-sharded-bf16
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 300
### Training results
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
54data/xlm-roberta-base-finetuned-panx-it
|
54data
| 2023-08-25T06:24:54Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-25T06:21:57Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.it
split: validation
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.818144666939109
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2607
- F1: 0.8181
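A minimal inference sketch (assumes the standard `transformers` token-classification pipeline; the example sentence is illustrative only):
```python
from transformers import pipeline

# Named-entity recognition on Italian text with this fine-tuned checkpoint
ner = pipeline(
    "token-classification",
    model="54data/xlm-roberta-base-finetuned-panx-it",
    aggregation_strategy="simple",
)
print(ner("Giuseppe Verdi è nato a Busseto, in Italia."))
```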
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.822 | 1.0 | 70 | 0.3305 | 0.7049 |
| 0.2972 | 2.0 | 140 | 0.2715 | 0.7781 |
| 0.1979 | 3.0 | 210 | 0.2607 | 0.8181 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
54data/xlm-roberta-base-finetuned-panx-fr
|
54data
| 2023-08-25T06:21:45Z | 122 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-25T06:17:13Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.fr
split: validation
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8463611859838274
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2792
- F1: 0.8464
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5766 | 1.0 | 191 | 0.3445 | 0.7611 |
| 0.2638 | 2.0 | 382 | 0.2696 | 0.8355 |
| 0.1752 | 3.0 | 573 | 0.2792 | 0.8464 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
nishant-glance/model-sd-1-4-priorp-lowlr-unet
|
nishant-glance
| 2023-08-25T06:19:48Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-25T05:41:42Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - nishant-glance/model-sd-1-4-priorp-lowlr-unet
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks person using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: True.
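A minimal loading sketch (assuming the repo contains a standard `diffusers` `StableDiffusionPipeline`, as the tags suggest; generation settings are illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load this DreamBooth checkpoint and generate with the instance prompt
pipe = StableDiffusionPipeline.from_pretrained(
    "nishant-glance/model-sd-1-4-priorp-lowlr-unet", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of sks person", num_inference_steps=50).images[0]
image.save("sks_person.png")
```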
|
eunbi-jeong/gpt2
|
eunbi-jeong
| 2023-08-25T06:19:07Z | 0 | 0 | null |
[
"translation",
"en",
"dataset:hellaswag",
"region:us"
] |
translation
| 2023-08-25T06:17:58Z |
---
datasets:
- hellaswag
language:
- en
pipeline_tag: translation
---
|
VkStyle/roma
|
VkStyle
| 2023-08-25T06:18:05Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2023-08-22T20:25:11Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
54data/xlm-roberta-base-finetuned-panx-de-fr
|
54data
| 2023-08-25T06:16:07Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-25T06:03:26Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1658
- F1: 0.8588
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2908 | 1.0 | 715 | 0.1909 | 0.8125 |
| 0.1466 | 2.0 | 1430 | 0.1613 | 0.8492 |
| 0.0945 | 3.0 | 2145 | 0.1658 | 0.8588 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
nithiroj/wav2vec2-base-finetuned-gtzan
|
nithiroj
| 2023-08-25T06:07:33Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-08-25T03:44:15Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.81
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-gtzan
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6608
- Accuracy: 0.81
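A minimal inference sketch (assumes the standard `transformers` audio-classification pipeline; the audio path is a placeholder):
```python
from transformers import pipeline

# Classify the genre of a music clip with this fine-tuned checkpoint
classifier = pipeline("audio-classification", model="nithiroj/wav2vec2-base-finetuned-gtzan")
predictions = classifier("path/to/clip.wav")
print(predictions[:3])  # top-scoring genres
```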
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9578 | 1.0 | 113 | 1.8537 | 0.28 |
| 1.4644 | 2.0 | 226 | 1.5867 | 0.5 |
| 0.9624 | 3.0 | 339 | 1.1706 | 0.66 |
| 0.8329 | 4.0 | 452 | 0.8807 | 0.76 |
| 0.5047 | 5.0 | 565 | 0.9421 | 0.73 |
| 0.4525 | 6.0 | 678 | 0.7879 | 0.73 |
| 0.5111 | 7.0 | 791 | 0.6493 | 0.79 |
| 0.1836 | 8.0 | 904 | 0.5938 | 0.85 |
| 0.1806 | 9.0 | 1017 | 0.5787 | 0.84 |
| 0.1338 | 10.0 | 1130 | 0.6608 | 0.81 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
rachit221195/lora-trained-xl-colab
|
rachit221195
| 2023-08-25T06:00:12Z | 4 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-08-25T05:56:20Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks human
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - rachit221195/lora-trained-xl-colab
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of sks human using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
dt-and-vanilla-ardt/ardt-vanilla-arrl_sgld_train_halfcheetah_high-2508_0437-33
|
dt-and-vanilla-ardt
| 2023-08-25T05:48:01Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-25T03:38:59Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-vanilla-arrl_sgld_train_halfcheetah_high-2508_0437-33
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-vanilla-arrl_sgld_train_halfcheetah_high-2508_0437-33
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
darkbloodevil/bloomz-560m_PROMPT_TUNING_CAUSAL_LM
|
darkbloodevil
| 2023-08-25T05:43:48Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-25T05:43:45Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
dkimds/ppo-LunarLander-v2
|
dkimds
| 2023-08-25T05:17:39Z | 2 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2023-08-01T04:24:12Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -132.01 +/- 71.74
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'dkimds/ppo-LunarLander-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
feliciamj/ppo-LunarLander-v2
|
feliciamj
| 2023-08-25T05:08:34Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-25T05:08:12Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 260.17 +/- 22.91
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename below is an assumption; check the repo's files for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load the trained agent
checkpoint = load_from_hub("feliciamj/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Unmand/en_procare_referrer_organisation
|
Unmand
| 2023-08-25T04:16:32Z | 0 | 0 |
spacy
|
[
"spacy",
"text-classification",
"en",
"region:us"
] |
text-classification
| 2023-08-25T04:03:16Z |
---
tags:
- spacy
- text-classification
language:
- en
model-index:
- name: en_procare_referrer_organisation
results: []
---
| Feature | Description |
| --- | --- |
| **Name** | `en_procare_referrer_organisation` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.5.4,<3.6.0` |
| **Default Pipeline** | `textcat_multilabel` |
| **Components** | `textcat_multilabel` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (726 labels for 1 component)</summary>
| Component | Labels |
| --- | --- |
| **`textcat_multilabel`** | `H D PROJECTS PTY LTD`, `McCabe Curwood`, `Dept of Education, Skills & Employment`, `StateCover Mutual Limited`, `Perth Orthopaedic & Sports Medicine`, `Queensland Child Care Service Pty Ltd Ttee`, `Allianz Australia Insurance Limited c/- Jensen McConaghy Lawyers`, `Catholic Care Diocese of Broken Bay`, `Helping Hand New Aged Care`, `Suncorp Life`, `Qantas Airways Limited`, `Department of Defence`, `Master Builders Association of SA`, `HWL Ebsworth Lawyers`, `Alexander Watson`, `Zoetis`, `RSL Care`, `P&N Bank`, `University of NSW`, `Uber Technologies, Inc.`, `Finlay Plumbing Services Pty Ltd`, `Hays Specialist Recruitment`, `KENNARDS HIRE PTY LIMITED`, `Carer Solutions Australia`, `Unitingcare`, `No. 1 Riverside Quay Proprietary Limited`, `Gallagher Basset`, `Department of the Chief MInister and Cabinet`, `CHEP Australia`, `Minda Incorporated`, `The Star`, `Tas Water`, `Feros Care`, `Roshana Group`, `Atradius Crédito y Caución S.A de Seguros y Reaseguros`, `Services Australia`, `RT Consulting`, `The Australian Electoral Commission`, `Federal Court of Australia`, `NRMA INSURANCE`, `Catholic Education Office`, `Svitzer Australia Pty Ltd`, `QBE acting as the agent of NSW Self Insurance Corporation`, `LAWRENCE & HANSON`, `UnitingCare Queensland`, `LibertyGFG`, `Australian Tax Office`, `Alvaro Transport Pty Ltd`, `GIO Workers Compensation ACT`, `Cso Diocese Of Broken Bay`, `Glencore`, `EASTERN HOSPITAL`, `BOC Limited, a member of the Linde Group`, `INVOCARE AUSTRALIA PTY LIMITED`, `UNITRANS ASIA PACIFIC PTY LTD`, `Services Australia (Dept of Human Services)`, `VEOLIA ENVIRONMENTAL SERVICES (AUSTRALIA) PTY LTD `, `Vickilynn Pty Ltd`, `Coles Team Cover`, `MLC Life Insurance`, `Sparke Helmore Lawyers`, `RSL Lifecare Limited`, `QBE Workers Compensation TAS`, `Kimberley Clark Australia`, `The Personnel Group Ltd`, `Insurance Australia Group`, `Canberra Sand & Gravel`, `Viva Energy Australia Pty Ltd`, `Moran Aged Care Engadine`, `Australian Taxation Office`, `Youis Group Pty Ltd`, `Cleanaway`, `Mosaic Brands (Rockmans)`, `Children Hospital Foundation`, `Civil Aviation Safety Authority`, `QBE Workers Compensation WA`, `United Protestant Association`, `PSC Capital Insurance Brokers`, `Woolworths Group Limited`, `Kilcoy Global Foods`, `American Express Australia Limited`, `Palios Meegan Nicholson`, `Uniting`, `Coles Group Supply Chain Pty Ltd`, `QBE`, `OBE Organic`, `Cyprium Metals Limited`, `Kincare Health Services Pty Ltd`, `StateCover Mutual Ltd`, `FIRE RESCUE VICTORIA`, `N2N Claims Solutions`, `WesFarmers – Group TeamCover`, `NDIS Quality and Safeguards Commission`, `HD Projects Pty Ltd`, `St Finn Barr's Catholic Primary School - Lanceston`, `Power and Water Corporation`, `EML VIC Pty Ltd`, `Wanton Kearney`, `Kmart Australia Ltd`, `Territory Families – Housing & Communities`, `Calvary Community Care`, `Sedgwick`, `Leonora Contracting P/L`, `NSW Health Pathology`, `Kilcoy Pastoral Company Ltd`, `GIO CTP ACT`, `DXC Claims Management Services - VIC`, `Schindler Lifts Australia Pty Ltd`, `Meridian Lawyers`, `GIO Workers Compensation WA`, `AUB Group Limited`, `Coateshire`, `Aurizon`, `JWLand`, `Trusted Support Coordination`, `Gosford Quarries Pty Ltd`, `GIO NSW Workers Compensation`, `DESE`, `Busways Group`, `Gallagher Bassett Workers Compensation NSW`, `Allianz Australia Insurance Limited C/- McInnes Wilson Lawyers`, `oOh!Media`, `West Gate Tunnel Project`, `KOMATSU MARKETING SUPPORT AUST`, `Mills Oakley Lawyers`, `Hall & Wilcox`, `Skybridge Group Pty Limited`, `Retirement 
Living Business & Financial Services`, `Allianz Workers Compensation NT`, `Environmental Industries Pty Ltd`, `EML Workers Insurance NSW`, `Department of Agriculture, Water and the Environment`, `MS Australia`, `CSIRO`, `Orange Health Service`, `AHI Insurance`, `Bupa`, `Allianz Australia Workers Compensation (Victoria) Ltd`, `Cappello Civil Contracting Services Pty Ltd`, `LAF Group`, `RTozerconsulting`, `St Michaels College`, `Gallagher Bassett for Opal Healthcare`, `Department of Families, Fairness and Housing`, `WESTHAVEN LIMITED`, `Integrity Care`, `GPC Asia Pacific`, `Department of Primary Industries`, `Mosaic Brands Limited`, `QBE Workers Compensation NT`, `Coredev`, `South Western Sydney Local Health District`, `CGU Workers Compensation ACT`, `Tas Prison Service`, `Sonic Healthcare`, `Workcover C/BT Lawyers`, `PSC WCS`, `CPB Contractors Pty Ltd`, `Cookie Steelfixing and Construction`, `Warner Bros`, `CGU Workers Compensation NT`, `CMET`, `AnglicareSA`, `St Vincent’s Care Services Carseldine`, `Tasmanian Catholic Education Office`, `Allianz Australia Insurance Ltd`, `Roussos Legal Advisory`, `BGIS Technical Services`, `AAMI NSW CTP`, `Wotton Kearney`, `Galllgher Bassett Workers Compensation VIC`, `Brisbane Fire Pty Ltd`, `QBE Workers Compensation NSW`, `Sunshine Coast Hospital and Health Service`, `Oaks Hotels & Resorts Limited - 9004`, `Ausgrid`, `Boral Limited`, `Aerison Pty Ltd`, `Cooper Grace Ward Lawyers`, `Hsswa Pty Ltd`, `Weir Minerals Australia Ltd`, `Labour Force Pty Ltd`, `Barry Nilsson Lawyers`, `Liberty Oil Australia Pty Ltd`, `ABPhillips`, `Austral Risk`, `AAI Limited trading as GIO - Agent for the Workers Compensation Nominal Insurer`, `OCEAN GARDENS INC`, `Roshana Group Pty Ltd`, `GIO CTP NSW`, `Lachlan Shire Council`, `Allianz Workers Compensation WA`, `United Equipment Pty Ltd`, `PFD FOOD SERVICES PTY LTD`, `Phoenix Insurance Brokers`, `Blumers`, `Department of Home Affairs`, `Anglo Coal (Grosvenor Management) Pty Ltd c/- Ashurst Australia`, `Anglicare Southern QLD`, `Lifetime Support`, `The Trustee for The Roshana Family Trust`, `Zurich Australian Insurance Ltd`, `Dept of Education & Training - School Cleaners`, `DXC Claims Management Services`, `The Medical Clinic Millicent`, `Melbourne Water`, `COMPASS GROUP AUSTRALIA PTY LTD`, `Andreasens Green NSW Andreasens Green QLD`, `Astridge and Murray`, `EML Plus`, `Philips Electronics P/L`, `ISS Facility Services Australia Ltd`, `Busy Bees Early Learning Australia Pty Ltd`, `Coates Hire`, `Sydney Trains`, `Catholic Schools Parramatta Diocese Limited`, `CGU Workers Compensation TAS`, `Mercer`, `COFFS HARBOUR SUPPORT SERVICES LTD`, `I-MED GROUP`, `One Path`, `Transport Accident Commission`, `Department of Corporate and Digital Development Northern Territory Government`, `Boral Insurance Pty Limited`, `Department of Justice`, `AB Phillips Pty Ltd`, `Irwin & Hartshorn`, `Pacific Labour Facility`, `Suncorp Staff Pty Ltd`, `Vilis Bakery`, `NRMA`, `The Hospitals Contribution Fund Of Australia Ltd`, `SCE Group`, `Our Lady of Mercy College Parramatta`, `DOSER Freight Forwarding`, `Employers Mutual NSW Limited`, `Cappello Hydraulics & Civil Pty Ltd`, `Buderim Kindergarten`, `ACT Recycling Pty Ltd`, `Bupa Medical Visa Services`, `Allianz CTP SA`, `Auspost`, `Liverpool Plains Shire Council`, `Corporate Services Network Pty Ltd`, `DP World Australia Pty Ltd`, `Complete Personnel Recruitment`, `DXC Integrated Services`, `QBE Workers Compensation - ACT`, `BINGO PTY LTD`, `The Arnott’s Group`, `EML Agent for icare Workers Insurance`, `IHG 
Irwin Hartshorn Group`, `Civilmart`, `ORAMS Agencies`, `Liberty GFG`, `QBE NSW Treasury Managed Fund`, `EML (NSW Treasury Managed Fund)`, `Hays Recruitment`, `Mosaic Group Ltd Pty`, `BlueCare`, `Gallagher Bassett Services`, `Ernst & Young (EY)`, `Cootharinga North Queensland`, `BUPA AGED CARE AUSTRALIA P/L`, `Toll Self Insurance`, `Corporate Services Network`, `ACT GOV`, `SA Health Northern Adelaide Local Health Network`, `Inghams Enterprises Pty Ltd`, `Centrewest Insurance Brokers`, `Department of Foreign Affairs and Trade (DFAT)`, `RSL Life Care`, `Star of the Sea School`, `Chubb`, `Suncorp CTP QLD`, `JACANA ENERGY`, `Toll Group`, `Corporeal Health`, `Mosaic Brands (Noni B Limited)`, `QBE CTP Insurance`, `Q Super`, `Bartier Perry Lawyers`, `Queensland Government`, `Department of Health and Human Services Tasmania`, `Hall and Wilcox Lawyers`, `Griffin Coal`, `Cappello Commercial Hydraulics and Civil Pty Ltd`, `Bolton Clarke`, `Australian Unity`, `Gallagher Bassett Services Pty Ltd`, `St John Ambulance Western Australia Ltd`, `Geocon Group Pty Ltd`, `Allianz Australia Insurance Limited c/ Jensen McConaghy Lawyers`, `UAA Pty Ltd`, `Tamex Transport Services Pty Ltd`, `WFI Insurance Limited`, `Programmed Skilled Workforce Limited`, `Bartier Perry`, `Australian Competition & Consumer Commission`, `Queensland Health`, `Holcim (Australia) Pty Ltd`, `Southern NSW Local Health District`, `Blue Care`, `Gallagher Bassett Workers Compensation VIC`, `Point Insurance`, `Workers Compensation & Risk Specialists (WCRS) services render for Philips electronics P/L`, `Country Wide Insurance Brokers (CWIB)`, `Allianz Australia Insurance Ltd C/ - Moray and Agnew Lawyers`, `CHUBB AUSTRALASIA`, `Sirius Support & Industrious People`, `BORG MANUFACTURING P/L`, `Department of Climate Change, Energy, the Environment and Water`, `Hireup Pty. Ltd.`, `Workcover QLD`, `Greenham Tasmania `, `Fantastic Furniture Ltd`, `CGU Workers Compensation VIC`, `Lawson Risk Management Services Pty Ltd`, `SGP Civil`, `Moray & Agnew`, `Edwards Michael Lawyers`, `Jensen McConarchy`, `Cyprium Metals`, `Hunter New England Local Health District`, `EML TMF, Insurance for NSW`, `RACQ Insurance`, `Blue Care ATF The Uniting Church in Aust. 
Property Trust (Q)`, `ENERGYAUSTRALIA SERVICES P/L`, `AAMI CTP`, `Bupa Asia Pacific`, `The Good Shepherd Home`, `Department of Corporate and Digital Development`, `Allianz CTP Claims NSW`, `Sedgwick Australia`, `Racing NSW`, `GCI Group`, `Australia Post`, `Coles Group Limited`, `Minter Ellison`, `MCCOLL'S OPERATIONS P/L`, `Apprenticeship Support Australia`, `AIA Australia Limited`, `Ernst & Young Services Pty Limited`, `North Metropolitan Health Service`, `St Vincent de Paul Society Canberra/Goulburn (Inc)`, `DP WORLD AUSTRALIA FREMANTLE TERMINAL`, `Moray and Agnew`, `Mosaic Group`, `Ovato`, `ACT Formwork Pty Ltd`, `DORMAKABA AUSTRALIA PTY LTD`, `Jones Harley Toole`, `QBE Accident and Health`, `Crawford Legal`, `REA Group Ltd`, `Amadeus IT Pacific Pty Ltd`, `DXC Integrated Services Victoria Pty Ltd`, `Vellex Pty Ltd`, `3M Australia`, `RTC Consulting`, `Somerset College Ltd`, `Bupa Care Services`, `IKEA North Lakes`, `Australian Criminal Intelligence Commission`, `McInnes Wilson Lawyers`, `UnitingCare Queensland `, `Anglican Community Care Incorporated (trading as ac.care)`, `Electrolux Home Products Pty Ltd`, `Gen Leads`, `FUSE RECRUITMENT MELBOURNE P/L`, `Zurich Financial Services Australia Limited`, `Wesfarmers Group TeamCover`, `Connect Infrastructure`, `Oji Fibre Solutions (Aus) Pty Ltd`, `Quality Bakers Australia Pty Limited`, `Workers Compensation & Risk Specialists`, `Civil Aviation Safety Authority (CASA)`, `Endeavour Foundation`, `The Territory Boundless Possible`, `Territory Families – Housing & Communities`, `Ampol Australia Petroleum Pty Ltd`, `Seven Network (Operations) Ltd`, `HopgoodGanim Lawyers`, `Coal Mines Insurance`, `QBE Insurance Australia`, `UGL Limited`, `QBE Accident and Health `, `C.INC`, `Ikea Logan`, `VERO`, `Geodis Australia`, `McCabes Lawyers`, `Programmed`, `UNSW Canberra`, `EML, Agent for ReturnToWorkSA`, `TEST ORG 2. 
EML Workers Insurance NSW`, `Kings Group`, `Maney Transport`, `South Western Sydney Lhd`, `Force Fire and Safety Pty Ltd`, `Astridge & Murray Solicitors `, `Rankin Ellison Lawyers`, `EML Insurance`, `ACCC/AER`, `Facilities First`, `Turks Legal`, `Jenson McConaghy Lawyers`, `CGU Insurance`, `AAI Limited trading as GIO`, `BP Australia Limited C/ Collin Biggers & Paisley Lawyers`, `O’Neill & Brown Electrical Services Pty Ltd`, `St Kilda PCYC`, `Justice Services Pty Ltd`, `American Express International Inc`, `Gillis Delaney Lawyers`, `Cabra Dominican College Ltd.`, `Trident Services Cleaning Pty Ltd`, `Hicksons Lawyers`, `Healthscope Operations Pty Ltd`, `GSK CX Healthcare Pty Ltd`, `ACT Government`, `AJ Bush & Sons Pty Ltd`, `OMB Solicitors`, `EML Self Insurance`, `Cooper Grace Ward`, `GC Legal`, `Centacare Catholic Family Services`, `Etex Australia Pty Ltd`, `Allianz Australia Ltd`, `Envirolab Service`, `Ikea `, `Allianz Australia Insurance Limited`, `WorkCover Queensland`, `Allianz Workers Compensation ACT`, `GIO Workers Compensation NSW`, `GenesisCare`, `Rocklea Pressed Metal Pty Ltd `, `Australian Digital Health Agency`, `HWL Ebsworth`, `Museum and Art Gallery Northern Territory (MAGNT)`, `CSR`, `Connell`, `4cRisk`, `HBA Legal`, `Coles Supermarkets Australia Pty Ltd`, `The University of Queensland`, `VENTIA SERVICES GROUP P/L,VENT`, `Point Underwriting Agency Pty Ltd`, `Youi CTP SA`, `Allianz Workers Compensation NSW`, `Detmold Packaging Pty Ltd`, `KENNARDS HIRE PTY LTD`, `QBE CTP QLD`, `Insurance House Group`, `Kilcoy Pastoral Company Limited`, `SRG Global Mining (Australia) Pty Ltd`, `Hunter Imaging Group`, `Park Hyatt Melbourne`, `Enviro Lab`, `QBE Australia Insurance Limited`, `EML c/o Moray`, `Catholic Church Insurance Limited`, `NV EMPLOYMENT PTY LTD`, `IP Australia`, `Qantas`, `Wesfarmer Limited`, `Melton City Council`, `Workcover Employer For Special Policies`, `Allianz Australia Workers Compensation (NSW) Ltd.`, `Uniting Care Health`, `Staff Australia Payroll Services Pty Ltd`, `WN Group`, `Infrabuild`, `Western NSW Local Health District`, `APS Group`, `DXC Claims Management Services - VIC`, `GIO`, `Northern Adelaide Local Health Network `, `Austbrokers Canberra`, `Department of Treasury and Finance Northern Territory Government`, `PSC Workers Compensation & Consulting`, `Alinta Energy`, `Sunline ACT Pty Ltd`, `Allianz Australia Workers' Compensation (Victoria)`, `Suncorp`, `JW Land Construction`, `Comcare - VIC`, `IKEA Pty Limited`, `KENNARDS HIRE`, `IRI Worldwide`, `RFI Technology Solutions`, `Engage TSS Internal Resources`, `St Vincent’s Care Services Mitchelton`, `Cappello Concreting Services Pty Ltd`, `Correct Care Australasia P/L`, `Coal Services`, `VELLA TRANSPORT ADMINISTRATION PTY LTD`, `CGU Workers Compensation WA`, `CORPORATE SERVICE NETWORK`, `BGIS`, `SCENTRE LIMITED`, `Employers Mutual Limited`, `RAPE & DOMESTIC VIOLENCE SERVICES AUSTRALIA`, `PSC Insurance`, `Allianz Australia Insurance Ltd ACT`, `Big W`, `Coverforce Pty Ltd`, `AAMI SA CTP Claims`, `EML Workers Insurance`, `Emjay Insurance Brokers`, `EML Victoria`, `WorkSafe Claims and Recovery Support team`, `Adcor`, `Territory Families, Housing and Communities (TFHC)`, `Nazareth Catholic Community`, `Gallagher Bassett Workers Compensation SA`, `INVOCARE AUSTRALIA P/L`, `Hardman Risk Management`, `The Sydney Childrens Hospital Network`, `The Junction Works Limited`, `PEM DEMO`, `Queensland Ambulance Service`, `Fel Child Care Centres 1 Pty Ltd`, `Allianz CTP QLD`, `Moray & Agnew Lawyers`, `Programmed Maintenance 
Services Ltd (Self Insured)`, `iag`, `Barnardos`, `eReports `, `Youi Pty Ltd`, `HM Focus Pty Ltd`, `Allianz Workers Compensation VIC`, `iCare Workers Insurance`, `Procare Group`, `Kemp & Co Lawyers`, `AAMI Insurance`, `Combined Insurance`, `STAWELL GOLD MINES P/L`, `QBE CTP NSW`, `SA Health`, `Gilshenan & Luton Legal Practice`, `Genesis Care`, `SOUTH AUSTRALIA POLICE`, `Wollongong City Council`, `TUTT BRYANT GROUP LTD`, `Endeavour Energy`, `Tasmanian Health Service`, `IC Formwork Services Pty Ltd`, `Humdrum`, `Comcare`, `The Gowrie (Qld) Inc`, `Australian Government Department of Education, Skills and Employment`, `Gair Legal`, `Dept of Territory Families, Housing and Communities`, `McArthur River Mining PTY Ltd`, `Kincare Management Pty Ltd`, `CFA`, `Department of Territory Families, Housing and Communities Division Library & Archives NT`, `Department for Education and Child Development`, `Core Building Group Pty Ltd`, `ACH Group`, `Busy Bees Australia Operations Pty Ltd.`, `Wesfarmers Ltd`, `JBC Corporate`, `NULL`, `No Employer - ADL`, `BT Lawyers`, `InfraBuild Steel Centre`, `Kimberly-Clark`, `Tas TAFE`, `EML National Self Insurance`, `National Disability Insurance Agency`, `Colin Biggers & Paisley Pty`, `DP World Brisbane Pty Ltd`, `Australian Trade and Investment Commission (Austrade)`, `Allianz Australia Limited c/- McInnes Wilson Lawyers`, `Community Solutions`, `RFI`, `RACQ Insurance Limited ABN 50 009 704 152`, `AAI Limited trading as GIO`, `Gallagher Bassett Services Workers Compensation Vic Pty Ltd`, `Department of Infrastructure, Transport and Regional Development`, `PSC Insurance Group`, `Allianz CTP NSW`, `CSR Limited`, `Kimberly-Clark Australia P/L`, `Hall and Willcox Lawyers`, `Page Seager Lawyers`, `Iconic Hotels Management`, `St John Medical Centre`, `Department of Veterans Affairs`, `Allianz QLD CTP`, `Morgan & Agnew Lawyers`, `Bureau of Meteorology`, `Forest Coach Lines Pty / Ltd`, `Shaw's Darwin Transport Pty Ltd`, `Dynamic Diesel Mechanical Services Pty Ltd`, `Hall & Wilcox Lawyers`, `Moran Aged Care`, `[email protected]`, `Gallagher Bassett Self Insurance NSW`, `EML as agent for icare Workers Insurance NSW`, `Minter Ellison Lawyers`, `Lee Legal Group`, `Child and Adolescent Health Service (CAHS)`, `Holman Webb Lawyers`, `Dept of Home Affairs`, `QSuper`, `TIO Motor Accidents Compensation `, `Allianz Australia Workers' Compensation (Victoria) Limited`, `Perpetual Limited`, `Barwang Pty Ltd`, `CTP QLD Claims Division`, `InvoCare`, `Australian Border Force`, `I MED Radiology Network`, `Ensure Pty Ltd`, `CITY OF PALMERSTON`, `AKUBRA HATS PTY LTD`, `Secom Australia`, `GIO Workers Compensation NT`, `Pialligo Estate`, `Berry Buddle Wilkins`, `Department of Infrastructure, Transport, Regional Development and Communications`, `Aussie Skip Bins Services P/L`, `BGIS Pty Ltd`, `NSW Police Force`, `GIO Workers Compensation TAS`, `Eighteen33 Pty Ltd`, `Crown Law`, `Paramatta Council`, `Northern Territory Government`, `Australian Electoral Commission`, `Department of Health`, `Hunt & Hunt Lawyers`, `Batemans Bay Soldiers Club`, `Allianz Workers Compensation Tasmania`, `SMK Lawyers`, `Envirolab Group`, `WorkSafe Victoria`, `Allianz Australia Insurance Limited, c/- Moray & Agnew`, `Allianz Australia Insurance Limited ABN 15 000 122 850, c/- Moray & Agnew`, `City of Parramatta`, `UES International Pty Ltd`, `Westpac Group`, `Logistics & Stores (Mailroom, Stores & Transport) Services CHW`, `Device Technologies Australia Pty Ltd`, `Willis Towers Watson`, `Hsswa Pty Ltd & HSS Resources 
Pty Ltd & Other`, `Kingspan Water & Energy Pty Limited`, `SAPOL`, `Guild Insurance`, `Westpac Banking Group`, `St Hilarion Aged Care`, `AAI Limited trading as GIO - Agent for the Workers Compensation Nominal Insurer ABN 83 564 379 108`, `Roshana Pty Ltd`, `QBE Insurance (Australia) Limited (ABN 78003191035)`, `Service Australia`, `BOC Limited `, `HWLE Lawyers`, `NRMA CTP NSW`, `RACQ Insurance Limited ABN 50009704152/ C- Cooper Grace Ward`, `CALVARY ADMINISTRATION PTY LTD`, `Cappello Group`, `Wesfarmers Limited`, `GIO NSW CTP `, `FK Gardner Services (Qld) Pty Ltd`, `Challenge Implements Holdings`, `Bartier Perry Pty Limited`, `Chubb Insurance Australia Limited`, `EMP Michael Lawyers`, `I-MED RADIOLOGY NETWORK LIMITED`, `Gilchrist Connell Legal`, `Premier Office Relocations`, `Nominal Defendant c/- Jensen McConaghy Lawyers`, `Detmold Mental Health Training`, `EML`, `Premise`, `Balance Rehab`, `Xchanging Workers Compensation - NSW`, `Coogee Chemicals Pty Ltd`, `Safe Work Australia`, `Jensen McConaghy Lawyers`, `Hawkesbury City Council`, `Toll Global Express`, `The Corporation of the Synod of the Diocese of Brisbane`, `NRMA CTP SA`, `Ambulance Victoria`, `APSystems`, `Austbrokers (Finsura)`, `SCENTRE GROUP`, `Ikea Australia`, `Department of Treasury and Finance`, `Gallagher Bassett Services Workers Compensation NSW`, `NONI B HOLDINGS PTY LIMITED`, `QBE Workers Compensation SA`, `The Star Entertainment Group Self Insurance Unit`, `Catholic Care Diocese of Bathurst`, `GAIR LEGAL PTY LIMITED`, `QBE CTP SA`, `Wesfarmers Group`, `Rod Pilon Transport`, `TG Legal`, `Department of the Prime Minister and Cabinet`, `UNSW`, `RACQ Group`, `REMONDIS Australia Pty Ltd`, `Australian Federal Police`, `Marshall & Brougham Constructions `, `Chandler Macleod Group`, `University of Tasmania`, `Goodman Fielder Pty Limited`, `SONIC HEALTHCARE GROUP`, `Hastings Medical Centre`, `Hospitality Employers Mutual`, `HCF`, `Colin Biggers Paisley Lawyers`, `Department Veterans Affairs`, `Maddocks Lawyers`, `SRG Group`, `Australian Personnel Solutions (APS Group)`, `EY Business Solutions Pty Ltd`, `National Indigenous Australians Agency`, `St Catherine's School, Berwick`, `Transport for NSW`, `South Australian Native Titles Services` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `CATS_SCORE` | 32.28 |
| `CATS_MICRO_P` | 71.89 |
| `CATS_MICRO_R` | 23.49 |
| `CATS_MICRO_F` | 35.41 |
| `CATS_MACRO_P` | 7.06 |
| `CATS_MACRO_R` | 3.40 |
| `CATS_MACRO_F` | 4.32 |
| `CATS_MACRO_AUC` | 32.28 |
| `TEXTCAT_MULTILABEL_LOSS` | 7.88 |
|
vodkaslime/codellama-7b-hf
|
vodkaslime
| 2023-08-25T04:09:00Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-2",
"code",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-25T04:03:43Z |
---
language:
- code
pipeline_tag: text-generation
tags:
- llama-2
license: llama2
---
# **Code Llama**
Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the base 7B version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom.
| | Base Model | Python | Instruct |
| --- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- |
| 7B | [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) |
| 13B | [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) | [codellama/CodeLlama-13b-Python-hf](https://huggingface.co/codellama/CodeLlama-13b-Python-hf) | [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) |
| 34B | [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf) | [codellama/CodeLlama-34b-Python-hf](https://huggingface.co/codellama/CodeLlama-34b-Python-hf) | [codellama/CodeLlama-34b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) |
Make sure to use this temporary branch of transformers until support is fully merged and released.
```bash
pip install git+https://github.com/huggingface/transformers.git@refs/pull/25740/head accelerate
```
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
sequences = pipeline(
'import socket\n\ndef ping_exponential_backoff(host: str):',
do_sample=True,
top_k=10,
temperature=0.1,
top_p=0.95,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
max_length=200,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Model Details
*Note: Use of this model is governed by the Meta license. Meta developed and publicly released the Code Llama family of large language models (LLMs).
**Model Developers** Meta
**Variations** Code Llama comes in three model sizes, and three variants:
* Code Llama: base models designed for general code synthesis and understanding
* Code Llama - Python: designed specifically for Python
* Code Llama - Instruct: for instruction following and safer deployment
All variants are available in sizes of 7B, 13B and 34B parameters.
**This repository contains the base model of 7B parameters.**
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture.
**Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)".
## Intended Use
**Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.
**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.
## Hardware and Software
**Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta’s Research Super Cluster.
**Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program.
## Training Data
All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details).
## Evaluation Results
See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.
## Ethical Considerations and Limitations
Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide).
|
zhang-yice/spt-absa-bert-10k
|
zhang-yice
| 2023-08-25T04:06:13Z | 33 | 3 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2023-05-23T10:36:20Z |
---
license: cc-by-4.0
---
## SPT-ABSA
We continue to pre-train BERT-base via sentiment-enhanced pre-training (SPT).
- Title: An Empirical Study of Sentiment-Enhanced Pre-Training for Aspect-Based Sentiment Analysis
- Author: Yice Zhang, Yifan Yang, Bin Liang, Shiwei Chen, Bing Qin, and Ruifeng Xu
- Conference: ACL-2023 Finding (Long)
GitHub Repository: https://github.com/HITSZ-HLT/SPT-ABSA
### What Did We Do?
Aspect-Based Sentiment Analysis (ABSA) is an important problem in sentiment analysis.
Its goal is to recognize opinions and sentiments towards specific aspects from user-generated content.
Many research efforts leverage pre-training techniques to learn sentiment-aware representations and achieve significant gains in various ABSA tasks.
We conduct an empirical study of SPT-ABSA to systematically investigate and analyze the effectiveness of the existing approaches.
We mainly concentrate on the following questions:
- (a) what impact do different types of sentiment knowledge have on downstream ABSA tasks?;
- (b) which knowledge integration method is most effective?; and
- (c) does injecting non-sentiment-specific linguistic knowledge (e.g., part-of-speech tags and syntactic relations) into pre-training have positive impacts?
Based on the experimental investigation of these questions, we eventually obtain a powerful sentiment-enhanced pre-trained model.
This model has two versions, namely [zhang-yice/spt-absa-bert-400k](https://huggingface.co/zhang-yice/spt-absa-bert-400k) and [zhang-yice/spt-absa-bert-10k](https://huggingface.co/zhang-yice/spt-absa-bert-10k), which integrate three types of knowledge:
- aspect words: masking aspects' context and predicting them.
- review's rating score: rating prediction.
- syntax knowledge:
- part-of-speech,
- dependency direction,
- dependency distance.
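For reference, a minimal loading sketch (not part of the original card): it assumes the checkpoint ships standard BERT-base weights and tokenizer files loadable via `AutoModel`/`AutoTokenizer`; a task-specific head would still need to be added and fine-tuned for downstream ABSA tasks.
```python
from transformers import AutoTokenizer, AutoModel

# Hypothetical usage: load the sentiment-enhanced checkpoint like any BERT-base encoder
tokenizer = AutoTokenizer.from_pretrained("zhang-yice/spt-absa-bert-10k")
model = AutoModel.from_pretrained("zhang-yice/spt-absa-bert-10k")

inputs = tokenizer("The battery life is great, but the screen is dim.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_size)
```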
### Experimental Results
<img width="75%" alt="image" src="https://github.com/HITSZ-HLT/SPT-ABSA/assets/9134454/38fc2db0-6ccf-47a7-a93c-cf54667e1a23">
<img width="75%" alt="image" src="https://github.com/HITSZ-HLT/SPT-ABSA/assets/9134454/20c5a976-014e-433f-a2ec-4bb259e5a382">
|
abdiharyadi/IndoT5-base-amr-to-text-linearized-penman-ilmy-epochs-10-with-lemma-and-upos-and-voice
|
abdiharyadi
| 2023-08-25T04:00:46Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Wikidepia/IndoT5-base",
"base_model:finetune:Wikidepia/IndoT5-base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-25T03:06:44Z |
---
base_model: Wikidepia/IndoT5-base
tags:
- generated_from_trainer
model-index:
- name: IndoT5-base-amr-to-text-linearized-penman-ilmy-epochs-10-with-lemma-and-upos-and-voice
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IndoT5-base-amr-to-text-linearized-penman-ilmy-epochs-10-with-lemma-and-upos-and-voice
This model is a fine-tuned version of [Wikidepia/IndoT5-base](https://huggingface.co/Wikidepia/IndoT5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2948
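A hedged inference sketch (not from the original card): the model id is the repository name above, and the linearized-PENMAN string below is only a placeholder — the real input format, including the lemma/UPOS/voice features named in the title, follows the authors' preprocessing.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "abdiharyadi/IndoT5-base-amr-to-text-linearized-penman-ilmy-epochs-10-with-lemma-and-upos-and-voice"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder linearized AMR; the actual feature-augmented format is produced by the authors' pipeline.
linearized_amr = "( makan :ARG0 ( saya ) :ARG1 ( nasi ) )"
inputs = tokenizer(linearized_amr, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```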
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 331 | 0.7072 |
| 0.4599 | 2.0 | 662 | 0.7502 |
| 0.4599 | 3.0 | 993 | 0.8377 |
| 0.0605 | 4.0 | 1324 | 1.0332 |
| 0.0253 | 5.0 | 1655 | 1.1047 |
| 0.0253 | 6.0 | 1986 | 1.0692 |
| 0.016 | 7.0 | 2317 | 1.1282 |
| 0.0093 | 8.0 | 2648 | 1.2508 |
| 0.0093 | 9.0 | 2979 | 1.2754 |
| 0.0067 | 10.0 | 3310 | 1.2948 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
LarryAIDraw/GenshinImpactRosaria
|
LarryAIDraw
| 2023-08-25T03:59:13Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-25T03:54:18Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/9132/rosaria-genshin-impact
|
LarryAIDraw/rosaria
|
LarryAIDraw
| 2023-08-25T03:58:15Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-25T03:52:53Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/101711/rosaria-genshin-impact-or-goofy-ai
|
AdanLee/Reinforce-CartPole-v1
|
AdanLee
| 2023-08-25T03:45:01Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-25T03:44:49Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
volodya-leveryev/wav2vec2-sah
|
volodya-leveryev
| 2023-08-25T03:38:46Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-25T03:38:02Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-sah
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-sah
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4177
- Wer: 0.4480
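A minimal usage sketch (an assumption: the repository contains a full processor so the ASR pipeline can load it directly; the audio path is a placeholder).
```python
from transformers import pipeline

# Transcribe a Sakha (sah) recording; the pipeline expects a 16 kHz audio file.
asr = pipeline("automatic-speech-recognition", model="volodya-leveryev/wav2vec2-sah")
print(asr("sample.wav"))  # placeholder path to a local audio file
```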
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.3675 | 4.5 | 500 | 2.2690 | 1.0 |
| 0.8695 | 9.01 | 1000 | 0.4878 | 0.5811 |
| 0.3469 | 13.51 | 1500 | 0.4021 | 0.4973 |
| 0.2236 | 18.02 | 2000 | 0.4299 | 0.4750 |
| 0.1685 | 22.52 | 2500 | 0.4266 | 0.4612 |
| 0.1383 | 27.03 | 3000 | 0.4177 | 0.4480 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
spear1/q-Taxi-v3
|
spear1
| 2023-08-25T03:31:57Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-25T03:31:56Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.72
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub is the download helper defined in the Deep RL Course notebooks
model = load_from_hub(repo_id="spear1/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
DataMonke/bert-base-uncased-finetuned-review-sentiment-analysis
|
DataMonke
| 2023-08-25T03:27:20Z | 4 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"en",
"dataset:amazon_us_reviews",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-21T14:55:06Z |
---
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
datasets:
- amazon_us_reviews
---
# E-Commerce Product Sentiment Analysis
This model classifies texts into star-rating categories ranging from 1 to 5. It has a BERT base and was further fine-tuned on Amazon and e-commerce clothing product reviews.
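A hedged usage sketch — it assumes the repository loads directly into the `text-classification` pipeline and that the output labels correspond to the 1–5 star categories.
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="DataMonke/bert-base-uncased-finetuned-review-sentiment-analysis")
print(classifier("The fabric feels cheap, but the fit is perfect."))
```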
|
AdanLee/dqn-SpaceInvadersNoFrameskip-v4
|
AdanLee
| 2023-08-25T03:15:50Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-25T03:15:07Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 690.50 +/- 213.28
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AdanLee -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AdanLee -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga AdanLee
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
DunnBC22/trocr-large-printed-cmc7_tesseract_MICR_ocr
|
DunnBC22
| 2023-08-25T03:15:01Z | 77 | 4 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"image-to-text",
"en",
"base_model:microsoft/trocr-large-printed",
"base_model:finetune:microsoft/trocr-large-printed",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2023-07-23T18:53:50Z |
---
base_model: microsoft/trocr-large-printed
tags:
- generated_from_trainer
model-index:
- name: trocr-large-printed-cmc7_tesseract_MICR_ocr
results: []
license: bsd-3-clause
language:
- en
metrics:
- cer
pipeline_tag: image-to-text
---
# trocr-large-printed-cmc7_tesseract_MICR_ocr
This model is a fine-tuned version of [microsoft/trocr-large-printed](https://huggingface.co/microsoft/trocr-large-printed).
## Model description
For more information on how it was created, check out the following link: https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/blob/main/Optical%20Character%20Recognition%20(OCR)/Tesseract%20MICR%20(CMC7%20Dataset)/TrOCR_cmc7_tesseractMICR.ipynb
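A hedged inference sketch (not from the original notebook); it assumes the repository includes the TrOCR processor files, and the image path is a placeholder.
```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

model_id = "DunnBC22/trocr-large-printed-cmc7_tesseract_MICR_ocr"
processor = TrOCRProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)

# Read a MICR (CMC-7) code line from a cropped image
image = Image.open("micr_line.png").convert("RGB")  # placeholder image path
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```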
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology. You are welcome to test and experiment with this model, but it is at your own risk/peril.
## Training and evaluation data
Dataset Source: https://github.com/DoubangoTelecom/tesseractMICR/tree/master/datasets/cmc7
**Histogram of Label Character Lengths**
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
The Character Error Rate (CER) for this model is 0.004970720413999727.
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
tyayoi/xlm-roberta-base-finetuned-panx-all
|
tyayoi
| 2023-08-25T03:05:19Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-21T11:01:34Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1761
- F1: 0.8555
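A minimal usage sketch, assuming the checkpoint loads directly into the `token-classification` pipeline (the example sentence is arbitrary).
```python
from transformers import pipeline

# PAN-X style named-entity tagging with the fine-tuned multilingual checkpoint
ner = pipeline("token-classification",
               model="tyayoi/xlm-roberta-base-finetuned-panx-all",
               aggregation_strategy="simple")
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```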
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.303 | 1.0 | 835 | 0.1887 | 0.8212 |
| 0.1582 | 2.0 | 1670 | 0.1708 | 0.8409 |
| 0.1034 | 3.0 | 2505 | 0.1761 | 0.8555 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
tyayoi/xlm-roberta-base-finetuned-panx-de-fr
|
tyayoi
| 2023-08-25T02:53:58Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-21T09:18:03Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1642
- F1: 0.8561
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2932 | 1.0 | 715 | 0.1829 | 0.8220 |
| 0.1486 | 2.0 | 1430 | 0.1612 | 0.8463 |
| 0.0925 | 3.0 | 2145 | 0.1642 | 0.8561 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
debadas/dog
|
debadas
| 2023-08-25T02:35:28Z | 1 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-25T02:28:07Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - debadas/dog
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
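A hedged usage sketch for applying these LoRA weights with `diffusers`, using the attention-processor loading API available at the time; the weight file layout in this repository is assumed to match what `load_attn_procs` expects.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5",
                                               torch_dtype=torch.float16).to("cuda")
pipe.unet.load_attn_procs("debadas/dog")  # attach the LoRA attention weights from this repo
image = pipe("a photo of sks dog in a bucket", num_inference_steps=25).images[0]
image.save("sks_dog.png")
```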
|
thanhnew2001/sport
|
thanhnew2001
| 2023-08-25T02:29:59Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:bigscience/bloom-560m",
"base_model:finetune:bigscience/bloom-560m",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2023-08-25T01:34:22Z |
---
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloom-560m
tags:
- generated_from_trainer
model-index:
- name: sport
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sport
This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
dkqjrm/20230825091928
|
dkqjrm
| 2023-08-25T02:29:35Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-25T00:19:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230825091928'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230825091928
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1543
- Accuracy: 0.7437
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 156 | 0.6113 | 0.5307 |
| No log | 2.0 | 312 | 0.9432 | 0.4693 |
| No log | 3.0 | 468 | 0.9610 | 0.4729 |
| 0.8937 | 4.0 | 624 | 0.5415 | 0.5487 |
| 0.8937 | 5.0 | 780 | 0.4722 | 0.6209 |
| 0.8937 | 6.0 | 936 | 0.4314 | 0.6390 |
| 0.7579 | 7.0 | 1092 | 0.7937 | 0.5704 |
| 0.7579 | 8.0 | 1248 | 0.4160 | 0.6282 |
| 0.7579 | 9.0 | 1404 | 0.3071 | 0.6787 |
| 0.7059 | 10.0 | 1560 | 0.4325 | 0.6498 |
| 0.7059 | 11.0 | 1716 | 0.7958 | 0.5090 |
| 0.7059 | 12.0 | 1872 | 0.3046 | 0.6823 |
| 0.654 | 13.0 | 2028 | 0.3405 | 0.7220 |
| 0.654 | 14.0 | 2184 | 0.2875 | 0.6751 |
| 0.654 | 15.0 | 2340 | 0.4266 | 0.6426 |
| 0.654 | 16.0 | 2496 | 0.5710 | 0.5957 |
| 0.6649 | 17.0 | 2652 | 0.3009 | 0.7256 |
| 0.6649 | 18.0 | 2808 | 0.7588 | 0.6534 |
| 0.6649 | 19.0 | 2964 | 0.2785 | 0.7292 |
| 0.5523 | 20.0 | 3120 | 0.2400 | 0.6895 |
| 0.5523 | 21.0 | 3276 | 0.2582 | 0.6859 |
| 0.5523 | 22.0 | 3432 | 0.3514 | 0.6462 |
| 0.511 | 23.0 | 3588 | 0.2163 | 0.7112 |
| 0.511 | 24.0 | 3744 | 0.2226 | 0.7076 |
| 0.511 | 25.0 | 3900 | 0.2138 | 0.7148 |
| 0.4948 | 26.0 | 4056 | 0.2851 | 0.7437 |
| 0.4948 | 27.0 | 4212 | 0.2584 | 0.7220 |
| 0.4948 | 28.0 | 4368 | 0.2217 | 0.7401 |
| 0.4342 | 29.0 | 4524 | 0.2014 | 0.7076 |
| 0.4342 | 30.0 | 4680 | 0.1907 | 0.7184 |
| 0.4342 | 31.0 | 4836 | 0.2176 | 0.7076 |
| 0.4342 | 32.0 | 4992 | 0.1863 | 0.7184 |
| 0.4098 | 33.0 | 5148 | 0.1862 | 0.7292 |
| 0.4098 | 34.0 | 5304 | 0.2253 | 0.7292 |
| 0.4098 | 35.0 | 5460 | 0.1960 | 0.7256 |
| 0.3743 | 36.0 | 5616 | 0.2416 | 0.7401 |
| 0.3743 | 37.0 | 5772 | 0.1988 | 0.7292 |
| 0.3743 | 38.0 | 5928 | 0.2031 | 0.7076 |
| 0.3477 | 39.0 | 6084 | 0.1847 | 0.7292 |
| 0.3477 | 40.0 | 6240 | 0.2001 | 0.7220 |
| 0.3477 | 41.0 | 6396 | 0.1955 | 0.7401 |
| 0.3221 | 42.0 | 6552 | 0.2075 | 0.7329 |
| 0.3221 | 43.0 | 6708 | 0.1751 | 0.7365 |
| 0.3221 | 44.0 | 6864 | 0.2256 | 0.7148 |
| 0.3034 | 45.0 | 7020 | 0.1913 | 0.7329 |
| 0.3034 | 46.0 | 7176 | 0.1867 | 0.7437 |
| 0.3034 | 47.0 | 7332 | 0.1842 | 0.7292 |
| 0.3034 | 48.0 | 7488 | 0.1719 | 0.7365 |
| 0.2656 | 49.0 | 7644 | 0.1810 | 0.7617 |
| 0.2656 | 50.0 | 7800 | 0.2172 | 0.7256 |
| 0.2656 | 51.0 | 7956 | 0.2065 | 0.7545 |
| 0.2676 | 52.0 | 8112 | 0.1682 | 0.7473 |
| 0.2676 | 53.0 | 8268 | 0.1819 | 0.7329 |
| 0.2676 | 54.0 | 8424 | 0.1703 | 0.7509 |
| 0.2396 | 55.0 | 8580 | 0.1971 | 0.7509 |
| 0.2396 | 56.0 | 8736 | 0.1889 | 0.7365 |
| 0.2396 | 57.0 | 8892 | 0.2933 | 0.6968 |
| 0.2355 | 58.0 | 9048 | 0.1650 | 0.7509 |
| 0.2355 | 59.0 | 9204 | 0.1760 | 0.7473 |
| 0.2355 | 60.0 | 9360 | 0.1553 | 0.7581 |
| 0.2196 | 61.0 | 9516 | 0.1707 | 0.7437 |
| 0.2196 | 62.0 | 9672 | 0.1933 | 0.7401 |
| 0.2196 | 63.0 | 9828 | 0.1726 | 0.7401 |
| 0.2196 | 64.0 | 9984 | 0.1654 | 0.7509 |
| 0.2114 | 65.0 | 10140 | 0.1783 | 0.7401 |
| 0.2114 | 66.0 | 10296 | 0.1724 | 0.7473 |
| 0.2114 | 67.0 | 10452 | 0.1647 | 0.7473 |
| 0.208 | 68.0 | 10608 | 0.1734 | 0.7437 |
| 0.208 | 69.0 | 10764 | 0.1640 | 0.7365 |
| 0.208 | 70.0 | 10920 | 0.1953 | 0.7329 |
| 0.2014 | 71.0 | 11076 | 0.1550 | 0.7509 |
| 0.2014 | 72.0 | 11232 | 0.1781 | 0.7509 |
| 0.2014 | 73.0 | 11388 | 0.1687 | 0.7365 |
| 0.1906 | 74.0 | 11544 | 0.1695 | 0.7473 |
| 0.1906 | 75.0 | 11700 | 0.1560 | 0.7509 |
| 0.1906 | 76.0 | 11856 | 0.1532 | 0.7509 |
| 0.1864 | 77.0 | 12012 | 0.1524 | 0.7401 |
| 0.1864 | 78.0 | 12168 | 0.1537 | 0.7545 |
| 0.1864 | 79.0 | 12324 | 0.1531 | 0.7509 |
| 0.1864 | 80.0 | 12480 | 0.1543 | 0.7437 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Alignment-Lab-AI/Big-Boy-Code-Instruct
|
Alignment-Lab-AI
| 2023-08-25T02:26:00Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-08-25T02:20:16Z |
Experimental code-instruct pretraining run on the RWKV-5 architecture; unbenchmarked and untested.
|
ad019el/m2m100_418M-finetuned-tq-to-ar-1
|
ad019el
| 2023-08-25T02:24:05Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"base_model:ad019el/m2m100_418M-finetuned-tq-to-ar",
"base_model:finetune:ad019el/m2m100_418M-finetuned-tq-to-ar",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-23T02:51:54Z |
---
base_model: ad019el/m2m100_418M-finetuned-tq-to-ar
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: m2m100_418M-finetuned-tq-to-ar-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# m2m100_418M-finetuned-tq-to-ar-1
This model is a fine-tuned version of [ad019el/m2m100_418M-finetuned-tq-to-ar](https://huggingface.co/ad019el/m2m100_418M-finetuned-tq-to-ar) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2002
- Bleu: 3.6349
- Gen Len: 35.5271
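A hedged usage sketch: Tamasheq ("tq") is not among M2M100's built-in language codes, so no source-language code is set here and the input sentence is a placeholder — only the Arabic target-language id is forced at generation time.
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_id = "ad019el/m2m100_418M-finetuned-tq-to-ar-1"
tokenizer = M2M100Tokenizer.from_pretrained(model_id)
model = M2M100ForConditionalGeneration.from_pretrained(model_id)

text = "..."  # Tamasheq input sentence (placeholder)
inputs = tokenizer(text, return_tensors="pt")
# Force Arabic as the target language
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("ar"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```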
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 2.7537 | 0.71 | 500 | 2.2710 | 4.2969 | 35.4312 |
| 2.6442 | 1.42 | 1000 | 2.2373 | 4.0784 | 35.1062 |
| 2.6329 | 2.13 | 1500 | 2.2257 | 3.8894 | 36.225 |
| 2.564 | 2.84 | 2000 | 2.2210 | 3.5513 | 36.076 |
| 2.5352 | 3.56 | 2500 | 2.2151 | 3.7339 | 35.0885 |
| 2.4991 | 4.27 | 3000 | 2.2078 | 3.4662 | 36.3333 |
| 2.4782 | 4.98 | 3500 | 2.2100 | 3.3332 | 36.4062 |
| 2.4363 | 5.69 | 4000 | 2.2085 | 3.3587 | 36.3135 |
| 2.4411 | 6.4 | 4500 | 2.2034 | 3.8744 | 34.5073 |
| 2.4002 | 7.11 | 5000 | 2.2036 | 3.6693 | 36.3448 |
| 2.3841 | 7.82 | 5500 | 2.2030 | 3.7486 | 35.076 |
| 2.3619 | 8.53 | 6000 | 2.1970 | 3.5687 | 35.8271 |
| 2.3627 | 9.25 | 6500 | 2.2016 | 3.5394 | 35.3583 |
| 2.3451 | 9.96 | 7000 | 2.1996 | 3.5863 | 34.9271 |
| 2.3323 | 10.67 | 7500 | 2.2002 | 3.6349 | 35.5271 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Unmand/procare_referrer_org_build2
|
Unmand
| 2023-08-25T02:04:44Z | 0 | 0 |
spacy
|
[
"spacy",
"text-classification",
"en",
"region:us"
] |
text-classification
| 2023-08-25T01:36:02Z |
---
tags:
- spacy
- text-classification
language:
- en
model-index:
- name: en_procare_referrer_organisation
results: []
---
| Feature | Description |
| --- | --- |
| **Name** | `en_procare_referrer_organisation` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.5.4,<3.6.0` |
| **Default Pipeline** | `textcat_multilabel` |
| **Components** | `textcat_multilabel` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (726 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`textcat_multilabel`** | `H D PROJECTS PTY LTD`, `McCabe Curwood`, `Dept of Education, Skills & Employment`, `StateCover Mutual Limited`, `Perth Orthopaedic & Sports Medicine`, `Queensland Child Care Service Pty Ltd Ttee`, `Allianz Australia Insurance Limited c/- Jensen McConaghy Lawyers`, `Catholic Care Diocese of Broken Bay`, `Helping Hand New Aged Care`, `Suncorp Life`, `Qantas Airways Limited`, `Department of Defence`, `Master Builders Association of SA`, `HWL Ebsworth Lawyers`, `Alexander Watson`, `Zoetis`, `RSL Care`, `P&N Bank`, `University of NSW`, `Uber Technologies, Inc.`, `Finlay Plumbing Services Pty Ltd`, `Hays Specialist Recruitment`, `KENNARDS HIRE PTY LIMITED`, `Carer Solutions Australia`, `Unitingcare`, `No. 1 Riverside Quay Proprietary Limited`, `Gallagher Basset`, `Department of the Chief MInister and Cabinet`, `CHEP Australia`, `Minda Incorporated`, `The Star`, `Tas Water`, `Feros Care`, `Roshana Group`, `Atradius Crédito y Caución S.A de Seguros y Reaseguros`, `Services Australia`, `RT Consulting`, `The Australian Electoral Commission`, `Federal Court of Australia`, `NRMA INSURANCE`, `Catholic Education Office`, `Svitzer Australia Pty Ltd`, `QBE acting as the agent of NSW Self Insurance Corporation`, `LAWRENCE & HANSON`, `UnitingCare Queensland`, `LibertyGFG`, `Australian Tax Office`, `Alvaro Transport Pty Ltd`, `GIO Workers Compensation ACT`, `Cso Diocese Of Broken Bay`, `Glencore`, `EASTERN HOSPITAL`, `BOC Limited, a member of the Linde Group`, `INVOCARE AUSTRALIA PTY LIMITED`, `UNITRANS ASIA PACIFIC PTY LTD`, `Services Australia (Dept of Human Services)`, `VEOLIA ENVIRONMENTAL SERVICES (AUSTRALIA) PTY LTD `, `Vickilynn Pty Ltd`, `Coles Team Cover`, `MLC Life Insurance`, `Sparke Helmore Lawyers`, `RSL Lifecare Limited`, `QBE Workers Compensation TAS`, `Kimberley Clark Australia`, `The Personnel Group Ltd`, `Insurance Australia Group`, `Canberra Sand & Gravel`, `Viva Energy Australia Pty Ltd`, `Moran Aged Care Engadine`, `Australian Taxation Office`, `Youis Group Pty Ltd`, `Cleanaway`, `Mosaic Brands (Rockmans)`, `Children Hospital Foundation`, `Civil Aviation Safety Authority`, `QBE Workers Compensation WA`, `United Protestant Association`, `PSC Capital Insurance Brokers`, `Woolworths Group Limited`, `Kilcoy Global Foods`, `American Express Australia Limited`, `Palios Meegan Nicholson`, `Uniting`, `Coles Group Supply Chain Pty Ltd`, `QBE`, `OBE Organic`, `Cyprium Metals Limited`, `Kincare Health Services Pty Ltd`, `StateCover Mutual Ltd`, `FIRE RESCUE VICTORIA`, `N2N Claims Solutions`, `WesFarmers – Group TeamCover`, `NDIS Quality and Safeguards Commission`, `HD Projects Pty Ltd`, `St Finn Barr's Catholic Primary School - Lanceston`, `Power and Water Corporation`, `EML VIC Pty Ltd`, `Wanton Kearney`, `Kmart Australia Ltd`, `Territory Families – Housing & Communities`, `Calvary Community Care`, `Sedgwick`, `Leonora Contracting P/L`, `NSW Health Pathology`, `Kilcoy Pastoral Company Ltd`, `GIO CTP ACT`, `DXC Claims Management Services - VIC`, `Schindler Lifts Australia Pty Ltd`, `Meridian Lawyers`, `GIO Workers Compensation WA`, `AUB Group Limited`, `Coateshire`, `Aurizon`, `JWLand`, `Trusted Support Coordination`, `Gosford Quarries Pty Ltd`, `GIO NSW Workers Compensation`, `DESE`, `Busways Group`, `Gallagher Bassett Workers Compensation NSW`, `Allianz Australia Insurance Limited C/- McInnes Wilson Lawyers`, `oOh!Media`, `West Gate Tunnel Project`, `KOMATSU MARKETING SUPPORT AUST`, `Mills Oakley Lawyers`, `Hall & Wilcox`, `Skybridge Group Pty Limited`, `Retirement 
Living Business & Financial Services`, `Allianz Workers Compensation NT`, `Environmental Industries Pty Ltd`, `EML Workers Insurance NSW`, `Department of Agriculture, Water and the Environment`, `MS Australia`, `CSIRO`, `Orange Health Service`, `AHI Insurance`, `Bupa`, `Allianz Australia Workers Compensation (Victoria) Ltd`, `Cappello Civil Contracting Services Pty Ltd`, `LAF Group`, `RTozerconsulting`, `St Michaels College`, `Gallagher Bassett for Opal Healthcare`, `Department of Families, Fairness and Housing`, `WESTHAVEN LIMITED`, `Integrity Care`, `GPC Asia Pacific`, `Department of Primary Industries`, `Mosaic Brands Limited`, `QBE Workers Compensation NT`, `Coredev`, `South Western Sydney Local Health District`, `CGU Workers Compensation ACT`, `Tas Prison Service`, `Sonic Healthcare`, `Workcover C/BT Lawyers`, `PSC WCS`, `CPB Contractors Pty Ltd`, `Cookie Steelfixing and Construction`, `Warner Bros`, `CGU Workers Compensation NT`, `CMET`, `AnglicareSA`, `St Vincent’s Care Services Carseldine`, `Tasmanian Catholic Education Office`, `Allianz Australia Insurance Ltd`, `Roussos Legal Advisory`, `BGIS Technical Services`, `AAMI NSW CTP`, `Wotton Kearney`, `Galllgher Bassett Workers Compensation VIC`, `Brisbane Fire Pty Ltd`, `QBE Workers Compensation NSW`, `Sunshine Coast Hospital and Health Service`, `Oaks Hotels & Resorts Limited - 9004`, `Ausgrid`, `Boral Limited`, `Aerison Pty Ltd`, `Cooper Grace Ward Lawyers`, `Hsswa Pty Ltd`, `Weir Minerals Australia Ltd`, `Labour Force Pty Ltd`, `Barry Nilsson Lawyers`, `Liberty Oil Australia Pty Ltd`, `ABPhillips`, `Austral Risk`, `AAI Limited trading as GIO - Agent for the Workers Compensation Nominal Insurer`, `OCEAN GARDENS INC`, `Roshana Group Pty Ltd`, `GIO CTP NSW`, `Lachlan Shire Council`, `Allianz Workers Compensation WA`, `United Equipment Pty Ltd`, `PFD FOOD SERVICES PTY LTD`, `Phoenix Insurance Brokers`, `Blumers`, `Department of Home Affairs`, `Anglo Coal (Grosvenor Management) Pty Ltd c/- Ashurst Australia`, `Anglicare Southern QLD`, `Lifetime Support`, `The Trustee for The Roshana Family Trust`, `Zurich Australian Insurance Ltd`, `Dept of Education & Training - School Cleaners`, `DXC Claims Management Services`, `The Medical Clinic Millicent`, `Melbourne Water`, `COMPASS GROUP AUSTRALIA PTY LTD`, `Andreasens Green NSW Andreasens Green QLD`, `Astridge and Murray`, `EML Plus`, `Philips Electronics P/L`, `ISS Facility Services Australia Ltd`, `Busy Bees Early Learning Australia Pty Ltd`, `Coates Hire`, `Sydney Trains`, `Catholic Schools Parramatta Diocese Limited`, `CGU Workers Compensation TAS`, `Mercer`, `COFFS HARBOUR SUPPORT SERVICES LTD`, `I-MED GROUP`, `One Path`, `Transport Accident Commission`, `Department of Corporate and Digital Development Northern Territory Government`, `Boral Insurance Pty Limited`, `Department of Justice`, `AB Phillips Pty Ltd`, `Irwin & Hartshorn`, `Pacific Labour Facility`, `Suncorp Staff Pty Ltd`, `Vilis Bakery`, `NRMA`, `The Hospitals Contribution Fund Of Australia Ltd`, `SCE Group`, `Our Lady of Mercy College Parramatta`, `DOSER Freight Forwarding`, `Employers Mutual NSW Limited`, `Cappello Hydraulics & Civil Pty Ltd`, `Buderim Kindergarten`, `ACT Recycling Pty Ltd`, `Bupa Medical Visa Services`, `Allianz CTP SA`, `Auspost`, `Liverpool Plains Shire Council`, `Corporate Services Network Pty Ltd`, `DP World Australia Pty Ltd`, `Complete Personnel Recruitment`, `DXC Integrated Services`, `QBE Workers Compensation - ACT`, `BINGO PTY LTD`, `The Arnott’s Group`, `EML Agent for icare Workers Insurance`, `IHG 
Irwin Hartshorn Group`, `Civilmart`, `ORAMS Agencies`, `Liberty GFG`, `QBE NSW Treasury Managed Fund`, `EML (NSW Treasury Managed Fund)`, `Hays Recruitment`, `Mosaic Group Ltd Pty`, `BlueCare`, `Gallagher Bassett Services`, `Ernst & Young (EY)`, `Cootharinga North Queensland`, `BUPA AGED CARE AUSTRALIA P/L`, `Toll Self Insurance`, `Corporate Services Network`, `ACT GOV`, `SA Health Northern Adelaide Local Health Network`, `Inghams Enterprises Pty Ltd`, `Centrewest Insurance Brokers`, `Department of Foreign Affairs and Trade (DFAT)`, `RSL Life Care`, `Star of the Sea School`, `Chubb`, `Suncorp CTP QLD`, `JACANA ENERGY`, `Toll Group`, `Corporeal Health`, `Mosaic Brands (Noni B Limited)`, `QBE CTP Insurance`, `Q Super`, `Bartier Perry Lawyers`, `Queensland Government`, `Department of Health and Human Services Tasmania`, `Hall and Wilcox Lawyers`, `Griffin Coal`, `Cappello Commercial Hydraulics and Civil Pty Ltd`, `Bolton Clarke`, `Australian Unity`, `Gallagher Bassett Services Pty Ltd`, `St John Ambulance Western Australia Ltd`, `Geocon Group Pty Ltd`, `Allianz Australia Insurance Limited c/ Jensen McConaghy Lawyers`, `UAA Pty Ltd`, `Tamex Transport Services Pty Ltd`, `WFI Insurance Limited`, `Programmed Skilled Workforce Limited`, `Bartier Perry`, `Australian Competition & Consumer Commission`, `Queensland Health`, `Holcim (Australia) Pty Ltd`, `Southern NSW Local Health District`, `Blue Care`, `Gallagher Bassett Workers Compensation VIC`, `Point Insurance`, `Workers Compensation & Risk Specialists (WCRS) services render for Philips electronics P/L`, `Country Wide Insurance Brokers (CWIB)`, `Allianz Australia Insurance Ltd C/ - Moray and Agnew Lawyers`, `CHUBB AUSTRALASIA`, `Sirius Support & Industrious People`, `BORG MANUFACTURING P/L`, `Department of Climate Change, Energy, the Environment and Water`, `Hireup Pty. Ltd.`, `Workcover QLD`, `Greenham Tasmania `, `Fantastic Furniture Ltd`, `CGU Workers Compensation VIC`, `Lawson Risk Management Services Pty Ltd`, `SGP Civil`, `Moray & Agnew`, `Edwards Michael Lawyers`, `Jensen McConarchy`, `Cyprium Metals`, `Hunter New England Local Health District`, `EML TMF, Insurance for NSW`, `RACQ Insurance`, `Blue Care ATF The Uniting Church in Aust. 
Property Trust (Q)`, `ENERGYAUSTRALIA SERVICES P/L`, `AAMI CTP`, `Bupa Asia Pacific`, `The Good Shepherd Home`, `Department of Corporate and Digital Development`, `Allianz CTP Claims NSW`, `Sedgwick Australia`, `Racing NSW`, `GCI Group`, `Australia Post`, `Coles Group Limited`, `Minter Ellison`, `MCCOLL'S OPERATIONS P/L`, `Apprenticeship Support Australia`, `AIA Australia Limited`, `Ernst & Young Services Pty Limited`, `North Metropolitan Health Service`, `St Vincent de Paul Society Canberra/Goulburn (Inc)`, `DP WORLD AUSTRALIA FREMANTLE TERMINAL`, `Moray and Agnew`, `Mosaic Group`, `Ovato`, `ACT Formwork Pty Ltd`, `DORMAKABA AUSTRALIA PTY LTD`, `Jones Harley Toole`, `QBE Accident and Health`, `Crawford Legal`, `REA Group Ltd`, `Amadeus IT Pacific Pty Ltd`, `DXC Integrated Services Victoria Pty Ltd`, `Vellex Pty Ltd`, `3M Australia`, `RTC Consulting`, `Somerset College Ltd`, `Bupa Care Services`, `IKEA North Lakes`, `Australian Criminal Intelligence Commission`, `McInnes Wilson Lawyers`, `UnitingCare Queensland `, `Anglican Community Care Incorporated (trading as ac.care)`, `Electrolux Home Products Pty Ltd`, `Gen Leads`, `FUSE RECRUITMENT MELBOURNE P/L`, `Zurich Financial Services Australia Limited`, `Wesfarmers Group TeamCover`, `Connect Infrastructure`, `Oji Fibre Solutions (Aus) Pty Ltd`, `Quality Bakers Australia Pty Limited`, `Workers Compensation & Risk Specialists`, `Civil Aviation Safety Authority (CASA)`, `Endeavour Foundation`, `The Territory Boundless Possible`, `Territory Families – Housing & Communities`, `Ampol Australia Petroleum Pty Ltd`, `Seven Network (Operations) Ltd`, `HopgoodGanim Lawyers`, `Coal Mines Insurance`, `QBE Insurance Australia`, `UGL Limited`, `QBE Accident and Health `, `C.INC`, `Ikea Logan`, `VERO`, `Geodis Australia`, `McCabes Lawyers`, `Programmed`, `UNSW Canberra`, `EML, Agent for ReturnToWorkSA`, `TEST ORG 2. 
EML Workers Insurance NSW`, `Kings Group`, `Maney Transport`, `South Western Sydney Lhd`, `Force Fire and Safety Pty Ltd`, `Astridge & Murray Solicitors `, `Rankin Ellison Lawyers`, `EML Insurance`, `ACCC/AER`, `Facilities First`, `Turks Legal`, `Jenson McConaghy Lawyers`, `CGU Insurance`, `AAI Limited trading as GIO`, `BP Australia Limited C/ Collin Biggers & Paisley Lawyers`, `O’Neill & Brown Electrical Services Pty Ltd`, `St Kilda PCYC`, `Justice Services Pty Ltd`, `American Express International Inc`, `Gillis Delaney Lawyers`, `Cabra Dominican College Ltd.`, `Trident Services Cleaning Pty Ltd`, `Hicksons Lawyers`, `Healthscope Operations Pty Ltd`, `GSK CX Healthcare Pty Ltd`, `ACT Government`, `AJ Bush & Sons Pty Ltd`, `OMB Solicitors`, `EML Self Insurance`, `Cooper Grace Ward`, `GC Legal`, `Centacare Catholic Family Services`, `Etex Australia Pty Ltd`, `Allianz Australia Ltd`, `Envirolab Service`, `Ikea `, `Allianz Australia Insurance Limited`, `WorkCover Queensland`, `Allianz Workers Compensation ACT`, `GIO Workers Compensation NSW`, `GenesisCare`, `Rocklea Pressed Metal Pty Ltd `, `Australian Digital Health Agency`, `HWL Ebsworth`, `Museum and Art Gallery Northern Territory (MAGNT)`, `CSR`, `Connell`, `4cRisk`, `HBA Legal`, `Coles Supermarkets Australia Pty Ltd`, `The University of Queensland`, `VENTIA SERVICES GROUP P/L,VENT`, `Point Underwriting Agency Pty Ltd`, `Youi CTP SA`, `Allianz Workers Compensation NSW`, `Detmold Packaging Pty Ltd`, `KENNARDS HIRE PTY LTD`, `QBE CTP QLD`, `Insurance House Group`, `Kilcoy Pastoral Company Limited`, `SRG Global Mining (Australia) Pty Ltd`, `Hunter Imaging Group`, `Park Hyatt Melbourne`, `Enviro Lab`, `QBE Australia Insurance Limited`, `EML c/o Moray`, `Catholic Church Insurance Limited`, `NV EMPLOYMENT PTY LTD`, `IP Australia`, `Qantas`, `Wesfarmer Limited`, `Melton City Council`, `Workcover Employer For Special Policies`, `Allianz Australia Workers Compensation (NSW) Ltd.`, `Uniting Care Health`, `Staff Australia Payroll Services Pty Ltd`, `WN Group`, `Infrabuild`, `Western NSW Local Health District`, `APS Group`, `DXC Claims Management Services - VIC`, `GIO`, `Northern Adelaide Local Health Network `, `Austbrokers Canberra`, `Department of Treasury and Finance Northern Territory Government`, `PSC Workers Compensation & Consulting`, `Alinta Energy`, `Sunline ACT Pty Ltd`, `Allianz Australia Workers' Compensation (Victoria)`, `Suncorp`, `JW Land Construction`, `Comcare - VIC`, `IKEA Pty Limited`, `KENNARDS HIRE`, `IRI Worldwide`, `RFI Technology Solutions`, `Engage TSS Internal Resources`, `St Vincent’s Care Services Mitchelton`, `Cappello Concreting Services Pty Ltd`, `Correct Care Australasia P/L`, `Coal Services`, `VELLA TRANSPORT ADMINISTRATION PTY LTD`, `CGU Workers Compensation WA`, `CORPORATE SERVICE NETWORK`, `BGIS`, `SCENTRE LIMITED`, `Employers Mutual Limited`, `RAPE & DOMESTIC VIOLENCE SERVICES AUSTRALIA`, `PSC Insurance`, `Allianz Australia Insurance Ltd ACT`, `Big W`, `Coverforce Pty Ltd`, `AAMI SA CTP Claims`, `EML Workers Insurance`, `Emjay Insurance Brokers`, `EML Victoria`, `WorkSafe Claims and Recovery Support team`, `Adcor`, `Territory Families, Housing and Communities (TFHC)`, `Nazareth Catholic Community`, `Gallagher Bassett Workers Compensation SA`, `INVOCARE AUSTRALIA P/L`, `Hardman Risk Management`, `The Sydney Childrens Hospital Network`, `The Junction Works Limited`, `PEM DEMO`, `Queensland Ambulance Service`, `Fel Child Care Centres 1 Pty Ltd`, `Allianz CTP QLD`, `Moray & Agnew Lawyers`, `Programmed Maintenance 
Services Ltd (Self Insured)`, `iag`, `Barnardos`, `eReports `, `Youi Pty Ltd`, `HM Focus Pty Ltd`, `Allianz Workers Compensation VIC`, `iCare Workers Insurance`, `Procare Group`, `Kemp & Co Lawyers`, `AAMI Insurance`, `Combined Insurance`, `STAWELL GOLD MINES P/L`, `QBE CTP NSW`, `SA Health`, `Gilshenan & Luton Legal Practice`, `Genesis Care`, `SOUTH AUSTRALIA POLICE`, `Wollongong City Council`, `TUTT BRYANT GROUP LTD`, `Endeavour Energy`, `Tasmanian Health Service`, `IC Formwork Services Pty Ltd`, `Humdrum`, `Comcare`, `The Gowrie (Qld) Inc`, `Australian Government Department of Education, Skills and Employment`, `Gair Legal`, `Dept of Territory Families, Housing and Communities`, `McArthur River Mining PTY Ltd`, `Kincare Management Pty Ltd`, `CFA`, `Department of Territory Families, Housing and Communities Division Library & Archives NT`, `Department for Education and Child Development`, `Core Building Group Pty Ltd`, `ACH Group`, `Busy Bees Australia Operations Pty Ltd.`, `Wesfarmers Ltd`, `JBC Corporate`, `NULL`, `No Employer - ADL`, `BT Lawyers`, `InfraBuild Steel Centre`, `Kimberly-Clark`, `Tas TAFE`, `EML National Self Insurance`, `National Disability Insurance Agency`, `Colin Biggers & Paisley Pty`, `DP World Brisbane Pty Ltd`, `Australian Trade and Investment Commission (Austrade)`, `Allianz Australia Limited c/- McInnes Wilson Lawyers`, `Community Solutions`, `RFI`, `RACQ Insurance Limited ABN 50 009 704 152`, `AAI Limited trading as GIO`, `Gallagher Bassett Services Workers Compensation Vic Pty Ltd`, `Department of Infrastructure, Transport and Regional Development`, `PSC Insurance Group`, `Allianz CTP NSW`, `CSR Limited`, `Kimberly-Clark Australia P/L`, `Hall and Willcox Lawyers`, `Page Seager Lawyers`, `Iconic Hotels Management`, `St John Medical Centre`, `Department of Veterans Affairs`, `Allianz QLD CTP`, `Morgan & Agnew Lawyers`, `Bureau of Meteorology`, `Forest Coach Lines Pty / Ltd`, `Shaw's Darwin Transport Pty Ltd`, `Dynamic Diesel Mechanical Services Pty Ltd`, `Hall & Wilcox Lawyers`, `Moran Aged Care`, `[email protected]`, `Gallagher Bassett Self Insurance NSW`, `EML as agent for icare Workers Insurance NSW`, `Minter Ellison Lawyers`, `Lee Legal Group`, `Child and Adolescent Health Service (CAHS)`, `Holman Webb Lawyers`, `Dept of Home Affairs`, `QSuper`, `TIO Motor Accidents Compensation `, `Allianz Australia Workers' Compensation (Victoria) Limited`, `Perpetual Limited`, `Barwang Pty Ltd`, `CTP QLD Claims Division`, `InvoCare`, `Australian Border Force`, `I MED Radiology Network`, `Ensure Pty Ltd`, `CITY OF PALMERSTON`, `AKUBRA HATS PTY LTD`, `Secom Australia`, `GIO Workers Compensation NT`, `Pialligo Estate`, `Berry Buddle Wilkins`, `Department of Infrastructure, Transport, Regional Development and Communications`, `Aussie Skip Bins Services P/L`, `BGIS Pty Ltd`, `NSW Police Force`, `GIO Workers Compensation TAS`, `Eighteen33 Pty Ltd`, `Crown Law`, `Paramatta Council`, `Northern Territory Government`, `Australian Electoral Commission`, `Department of Health`, `Hunt & Hunt Lawyers`, `Batemans Bay Soldiers Club`, `Allianz Workers Compensation Tasmania`, `SMK Lawyers`, `Envirolab Group`, `WorkSafe Victoria`, `Allianz Australia Insurance Limited, c/- Moray & Agnew`, `Allianz Australia Insurance Limited ABN 15 000 122 850, c/- Moray & Agnew`, `City of Parramatta`, `UES International Pty Ltd`, `Westpac Group`, `Logistics & Stores (Mailroom, Stores & Transport) Services CHW`, `Device Technologies Australia Pty Ltd`, `Willis Towers Watson`, `Hsswa Pty Ltd & HSS Resources 
Pty Ltd & Other`, `Kingspan Water & Energy Pty Limited`, `SAPOL`, `Guild Insurance`, `Westpac Banking Group`, `St Hilarion Aged Care`, `AAI Limited trading as GIO - Agent for the Workers Compensation Nominal Insurer ABN 83 564 379 108`, `Roshana Pty Ltd`, `QBE Insurance (Australia) Limited (ABN 78003191035)`, `Service Australia`, `BOC Limited `, `HWLE Lawyers`, `NRMA CTP NSW`, `RACQ Insurance Limited ABN 50009704152/ C- Cooper Grace Ward`, `CALVARY ADMINISTRATION PTY LTD`, `Cappello Group`, `Wesfarmers Limited`, `GIO NSW CTP `, `FK Gardner Services (Qld) Pty Ltd`, `Challenge Implements Holdings`, `Bartier Perry Pty Limited`, `Chubb Insurance Australia Limited`, `EMP Michael Lawyers`, `I-MED RADIOLOGY NETWORK LIMITED`, `Gilchrist Connell Legal`, `Premier Office Relocations`, `Nominal Defendant c/- Jensen McConaghy Lawyers`, `Detmold Mental Health Training`, `EML`, `Premise`, `Balance Rehab`, `Xchanging Workers Compensation - NSW`, `Coogee Chemicals Pty Ltd`, `Safe Work Australia`, `Jensen McConaghy Lawyers`, `Hawkesbury City Council`, `Toll Global Express`, `The Corporation of the Synod of the Diocese of Brisbane`, `NRMA CTP SA`, `Ambulance Victoria`, `APSystems`, `Austbrokers (Finsura)`, `SCENTRE GROUP`, `Ikea Australia`, `Department of Treasury and Finance`, `Gallagher Bassett Services Workers Compensation NSW`, `NONI B HOLDINGS PTY LIMITED`, `QBE Workers Compensation SA`, `The Star Entertainment Group Self Insurance Unit`, `Catholic Care Diocese of Bathurst`, `GAIR LEGAL PTY LIMITED`, `QBE CTP SA`, `Wesfarmers Group`, `Rod Pilon Transport`, `TG Legal`, `Department of the Prime Minister and Cabinet`, `UNSW`, `RACQ Group`, `REMONDIS Australia Pty Ltd`, `Australian Federal Police`, `Marshall & Brougham Constructions `, `Chandler Macleod Group`, `University of Tasmania`, `Goodman Fielder Pty Limited`, `SONIC HEALTHCARE GROUP`, `Hastings Medical Centre`, `Hospitality Employers Mutual`, `HCF`, `Colin Biggers Paisley Lawyers`, `Department Veterans Affairs`, `Maddocks Lawyers`, `SRG Group`, `Australian Personnel Solutions (APS Group)`, `EY Business Solutions Pty Ltd`, `National Indigenous Australians Agency`, `St Catherine's School, Berwick`, `Transport for NSW`, `South Australian Native Titles Services` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `CATS_SCORE` | 32.28 |
| `CATS_MICRO_P` | 71.89 |
| `CATS_MICRO_R` | 23.49 |
| `CATS_MICRO_F` | 35.41 |
| `CATS_MACRO_P` | 7.06 |
| `CATS_MACRO_R` | 3.40 |
| `CATS_MACRO_F` | 4.32 |
| `CATS_MACRO_AUC` | 32.28 |
| `TEXTCAT_MULTILABEL_LOSS` | 7.88 |
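A minimal usage sketch, assuming the pipeline package from this repository has been installed first (e.g. via the wheel published alongside the model; the exact filename is not shown here).
```python
import spacy

# Load the installed package named in the feature table above
nlp = spacy.load("en_procare_referrer_organisation")
doc = nlp("Referral received from Allianz Australia Insurance Limited.")
# Multilabel scores per organisation label; print the top five
print(sorted(doc.cats.items(), key=lambda kv: kv[1], reverse=True)[:5])
```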
|
hemlataC/llama-2-7b-hindie2-v4
|
hemlataC
| 2023-08-25T02:04:02Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-25T02:02:53Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
- PEFT 0.6.0.dev0
|
Bugsys0302/CharactersLoRA
|
Bugsys0302
| 2023-08-25T01:54:36Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-25T01:54:36Z |
---
license: creativeml-openrail-m
---
|
Vasanth/idefics-mscoco-captioner
|
Vasanth
| 2023-08-25T01:49:11Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-25T01:49:08Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: ['lm_head', 'embed_tokens']
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
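The same settings can be reconstructed as a `BitsAndBytesConfig` — a hedged sketch mapping the fields listed above onto the corresponding `transformers` argument names.
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_skip_modules=["lm_head", "embed_tokens"],
)
# Pass as quantization_config=bnb_config to the base model's from_pretrained(...) call
```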
### Framework versions
- PEFT 0.6.0.dev0
|
AdanLee/q-Taxi-v3
|
AdanLee
| 2023-08-25T01:41:15Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-25T01:30:27Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.70
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import pickle

import gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    """
    Download a model from Hugging Face Hub.
    :param repo_id: id of the model repository from the Hugging Face Hub
    :param filename: name of the model zip file from the repository
    """
    # Get the model from the Hub, download and cache the model on your local disk
    pickle_model = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_model, "rb") as f:
        downloaded_model_file = pickle.load(f)
    return downloaded_model_file

model = load_from_hub(repo_id="AdanLee/q-Taxi-v3", filename="q-learning.pkl")
```
Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
```python
env = gym.make(model["env_id"])
# evaluate_agent is the evaluation helper defined in the course notebooks
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
dt-and-vanilla-ardt/ardt-vanilla-arrl_train_halfcheetah_high-2508_0016-66
|
dt-and-vanilla-ardt
| 2023-08-25T01:27:59Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-24T23:17:55Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-vanilla-arrl_train_halfcheetah_high-2508_0016-66
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-vanilla-arrl_train_halfcheetah_high-2508_0016-66
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
abdiharyadi/IndoT5-base-amr-to-text-linearized-penman-ilmy-epochs-3-with-lemma-and-upos-and-voice
|
abdiharyadi
| 2023-08-25T01:00:24Z | 160 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Wikidepia/IndoT5-base",
"base_model:finetune:Wikidepia/IndoT5-base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-25T00:38:53Z |
---
base_model: Wikidepia/IndoT5-base
tags:
- generated_from_trainer
model-index:
- name: IndoT5-base-amr-to-text-linearized-penman-ilmy-epochs-3-with-lemma-and-upos-and-voice
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IndoT5-base-amr-to-text-linearized-penman-ilmy-epochs-3-with-lemma-and-upos-and-voice
This model is a fine-tuned version of [Wikidepia/IndoT5-base](https://huggingface.co/Wikidepia/IndoT5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7899
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 331 | 0.6963 |
| 0.4398 | 2.0 | 662 | 0.7256 |
| 0.4398 | 3.0 | 993 | 0.7899 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
dkqjrm/20230825070638
|
dkqjrm
| 2023-08-25T00:19:17Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-24T22:06:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230825070638'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230825070638
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3456
- Accuracy: 0.7329
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 156 | 0.7894 | 0.5271 |
| No log | 2.0 | 312 | 0.6658 | 0.5379 |
| No log | 3.0 | 468 | 0.6408 | 0.5054 |
| 0.886 | 4.0 | 624 | 0.7134 | 0.4729 |
| 0.886 | 5.0 | 780 | 0.6234 | 0.5560 |
| 0.886 | 6.0 | 936 | 0.4782 | 0.6318 |
| 0.7765 | 7.0 | 1092 | 1.1394 | 0.5776 |
| 0.7765 | 8.0 | 1248 | 0.5214 | 0.6534 |
| 0.7765 | 9.0 | 1404 | 0.4206 | 0.6570 |
| 0.7206 | 10.0 | 1560 | 0.5019 | 0.6643 |
| 0.7206 | 11.0 | 1716 | 0.7680 | 0.5343 |
| 0.7206 | 12.0 | 1872 | 0.3433 | 0.7220 |
| 0.6543 | 13.0 | 2028 | 0.3834 | 0.7292 |
| 0.6543 | 14.0 | 2184 | 0.4588 | 0.6751 |
| 0.6543 | 15.0 | 2340 | 0.3413 | 0.7040 |
| 0.6543 | 16.0 | 2496 | 0.4874 | 0.6426 |
| 0.5973 | 17.0 | 2652 | 0.3283 | 0.7256 |
| 0.5973 | 18.0 | 2808 | 0.3605 | 0.7329 |
| 0.5973 | 19.0 | 2964 | 0.3314 | 0.7256 |
| 0.5433 | 20.0 | 3120 | 0.5998 | 0.6606 |
| 0.5433 | 21.0 | 3276 | 0.3489 | 0.6931 |
| 0.5433 | 22.0 | 3432 | 0.4316 | 0.6715 |
| 0.5373 | 23.0 | 3588 | 0.3328 | 0.7076 |
| 0.5373 | 24.0 | 3744 | 0.3379 | 0.7220 |
| 0.5373 | 25.0 | 3900 | 0.3580 | 0.7148 |
| 0.4923 | 26.0 | 4056 | 0.3141 | 0.7329 |
| 0.4923 | 27.0 | 4212 | 0.4341 | 0.7365 |
| 0.4923 | 28.0 | 4368 | 0.3386 | 0.7220 |
| 0.4513 | 29.0 | 4524 | 0.3038 | 0.7220 |
| 0.4513 | 30.0 | 4680 | 0.3775 | 0.7220 |
| 0.4513 | 31.0 | 4836 | 0.4197 | 0.7076 |
| 0.4513 | 32.0 | 4992 | 0.4666 | 0.7220 |
| 0.4041 | 33.0 | 5148 | 0.3355 | 0.7365 |
| 0.4041 | 34.0 | 5304 | 0.3147 | 0.7329 |
| 0.4041 | 35.0 | 5460 | 0.3810 | 0.7184 |
| 0.3705 | 36.0 | 5616 | 0.3184 | 0.7256 |
| 0.3705 | 37.0 | 5772 | 0.3668 | 0.7076 |
| 0.3705 | 38.0 | 5928 | 0.3859 | 0.7220 |
| 0.3556 | 39.0 | 6084 | 0.3010 | 0.7329 |
| 0.3556 | 40.0 | 6240 | 0.3201 | 0.7220 |
| 0.3556 | 41.0 | 6396 | 0.3304 | 0.7329 |
| 0.3089 | 42.0 | 6552 | 0.3634 | 0.7365 |
| 0.3089 | 43.0 | 6708 | 0.3844 | 0.7184 |
| 0.3089 | 44.0 | 6864 | 0.3320 | 0.7220 |
| 0.3015 | 45.0 | 7020 | 0.3696 | 0.7220 |
| 0.3015 | 46.0 | 7176 | 0.3665 | 0.7220 |
| 0.3015 | 47.0 | 7332 | 0.3355 | 0.7256 |
| 0.3015 | 48.0 | 7488 | 0.3568 | 0.7292 |
| 0.2709 | 49.0 | 7644 | 0.3450 | 0.7329 |
| 0.2709 | 50.0 | 7800 | 0.3790 | 0.7148 |
| 0.2709 | 51.0 | 7956 | 0.3516 | 0.7112 |
| 0.2681 | 52.0 | 8112 | 0.3741 | 0.7329 |
| 0.2681 | 53.0 | 8268 | 0.3615 | 0.7220 |
| 0.2681 | 54.0 | 8424 | 0.3479 | 0.7292 |
| 0.2477 | 55.0 | 8580 | 0.3401 | 0.7184 |
| 0.2477 | 56.0 | 8736 | 0.3766 | 0.7329 |
| 0.2477 | 57.0 | 8892 | 0.3562 | 0.7148 |
| 0.2344 | 58.0 | 9048 | 0.3412 | 0.7220 |
| 0.2344 | 59.0 | 9204 | 0.3782 | 0.7437 |
| 0.2344 | 60.0 | 9360 | 0.3723 | 0.7040 |
| 0.2126 | 61.0 | 9516 | 0.3852 | 0.7292 |
| 0.2126 | 62.0 | 9672 | 0.3901 | 0.7256 |
| 0.2126 | 63.0 | 9828 | 0.3698 | 0.7112 |
| 0.2126 | 64.0 | 9984 | 0.3249 | 0.7220 |
| 0.2127 | 65.0 | 10140 | 0.3979 | 0.7004 |
| 0.2127 | 66.0 | 10296 | 0.3705 | 0.7365 |
| 0.2127 | 67.0 | 10452 | 0.3317 | 0.7220 |
| 0.199 | 68.0 | 10608 | 0.3322 | 0.7329 |
| 0.199 | 69.0 | 10764 | 0.3706 | 0.7220 |
| 0.199 | 70.0 | 10920 | 0.3628 | 0.7148 |
| 0.1959 | 71.0 | 11076 | 0.3600 | 0.7437 |
| 0.1959 | 72.0 | 11232 | 0.3349 | 0.7437 |
| 0.1959 | 73.0 | 11388 | 0.3650 | 0.7184 |
| 0.184 | 74.0 | 11544 | 0.3337 | 0.7365 |
| 0.184 | 75.0 | 11700 | 0.3309 | 0.7329 |
| 0.184 | 76.0 | 11856 | 0.3237 | 0.7365 |
| 0.183 | 77.0 | 12012 | 0.3430 | 0.7256 |
| 0.183 | 78.0 | 12168 | 0.3567 | 0.7329 |
| 0.183 | 79.0 | 12324 | 0.3541 | 0.7329 |
| 0.183 | 80.0 | 12480 | 0.3456 | 0.7329 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
IALABS/Arturosfastfood
|
IALABS
| 2023-08-25T00:14:58Z | 0 | 1 | null |
[
"conversational",
"es",
"license:other",
"region:us"
] |
text-generation
| 2023-08-24T23:32:33Z |
---
license: other
language:
- es
pipeline_tag: conversational
---
|
sianbru/bert_product_classifier_final
|
sianbru
| 2023-08-25T00:09:13Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-24T21:52:59Z |
---
license: apache-2.0
base_model: bert-base-multilingual-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: bert_product_classifier_final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_product_classifier_final
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2344
- Accuracy: 0.9470
- F1: 0.9466
- Precision: 0.9467
- Recall: 0.9470
## Model description
More information needed
## Intended uses & limitations
More information needed
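A minimal inference sketch; the product title below is a hypothetical example, and the category labels come from the model's config (they are not documented in this card).
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="sianbru/bert_product_classifier_final",
)

# Hypothetical multilingual product title.
print(classifier("Samsung Galaxy S21 128GB smartphone, phantom gray"))
```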
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.85 | 1.0 | 960 | 0.2943 | 0.9090 | 0.9074 | 0.9091 | 0.9090 |
| 0.2538 | 2.0 | 1920 | 0.2250 | 0.9332 | 0.9331 | 0.9331 | 0.9332 |
| 0.1468 | 3.0 | 2880 | 0.2372 | 0.9384 | 0.9388 | 0.9396 | 0.9384 |
| 0.0937 | 4.0 | 3840 | 0.2344 | 0.9470 | 0.9466 | 0.9467 | 0.9470 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
qwe1256/PLLM-llama-7b
|
qwe1256
| 2023-08-24T23:52:12Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-24T19:53:47Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
|
slmnpl/stable-diffusion-webui-master
|
slmnpl
| 2023-08-24T23:28:46Z | 0 | 0 | null |
[
"arxiv:2211.06679",
"region:us"
] | null | 2023-08-24T23:18:23Z |
# Stable Diffusion web UI
A browser interface based on the Gradio library for Stable Diffusion.

## Features
[Detailed feature showcase with images](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features):
- Original txt2img and img2img modes
- One click install and run script (but you still must install python and git)
- Outpainting
- Inpainting
- Color Sketch
- Prompt Matrix
- Stable Diffusion Upscale
- Attention, specify parts of text that the model should pay more attention to
- a man in a `((tuxedo))` - will pay more attention to tuxedo
- a man in a `(tuxedo:1.21)` - alternative syntax
- select text and press `Ctrl+Up` or `Ctrl+Down` (or `Command+Up` or `Command+Down` if you're on macOS) to automatically adjust attention to selected text (code contributed by anonymous user)
- Loopback, run img2img processing multiple times
- X/Y/Z plot, a way to draw a 3-dimensional plot of images with different parameters
- Textual Inversion
- have as many embeddings as you want and use any names you like for them
- use multiple embeddings with different numbers of vectors per token
- works with half precision floating point numbers
- train embeddings on 8GB (also reports of 6GB working)
- Extras tab with:
- GFPGAN, neural network that fixes faces
- CodeFormer, face restoration tool as an alternative to GFPGAN
- RealESRGAN, neural network upscaler
- ESRGAN, neural network upscaler with a lot of third party models
- SwinIR and Swin2SR ([see here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2092)), neural network upscalers
- LDSR, Latent diffusion super resolution upscaling
- Resizing aspect ratio options
- Sampling method selection
- Adjust sampler eta values (noise multiplier)
- More advanced noise setting options
- Interrupt processing at any time
- 4GB video card support (also reports of 2GB working)
- Correct seeds for batches
- Live prompt token length validation
- Generation parameters
- parameters you used to generate images are saved with that image
- in PNG chunks for PNG, in EXIF for JPEG
- can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI
- can be disabled in settings
- drag and drop an image/text-parameters to promptbox
- Read Generation Parameters Button, loads parameters in promptbox to UI
- Settings page
- Running arbitrary python code from UI (must run with `--allow-code` to enable)
- Mouseover hints for most UI elements
- Possible to change defaults/min/max/step values for UI elements via text config
- Tiling support, a checkbox to create images that can be tiled like textures
- Progress bar and live image generation preview
- Can use a separate neural network to produce previews with almost no VRAM or compute requirement
- Negative prompt, an extra text field that allows you to list what you don't want to see in the generated image
- Styles, a way to save parts of a prompt and easily apply them via a dropdown later
- Variations, a way to generate the same image but with tiny differences
- Seed resizing, a way to generate the same image but at a slightly different resolution
- CLIP interrogator, a button that tries to guess prompt from an image
- Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway
- Batch Processing, process a group of files using img2img
- Img2img Alternative, reverse Euler method of cross attention control
- Highres Fix, a convenience option to produce high resolution pictures in one click without usual distortions
- Reloading checkpoints on the fly
- Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one
- [Custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) with many extensions from community
- [Composable-Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/), a way to use multiple prompts at once
- separate prompts using uppercase `AND`
- also supports weights for prompts: `a cat :1.2 AND a dog AND a penguin :2.2`
- No token limit for prompts (original stable diffusion lets you use up to 75 tokens)
- DeepDanbooru integration, creates danbooru style tags for anime prompts
- [xformers](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers), major speed increase for select cards: (add `--xformers` to commandline args)
- via extension: [History tab](https://github.com/yfszzx/stable-diffusion-webui-images-browser): view, direct and delete images conveniently within the UI
- Generate forever option
- Training tab
- hypernetworks and embeddings options
- Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime)
- Clip skip
- Hypernetworks
- Loras (same as Hypernetworks but more pretty)
- A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt
- Can select to load a different VAE from settings screen
- Estimated completion time in progress bar
- API
- Support for dedicated [inpainting model](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion) by RunwayML
- via extension: [Aesthetic Gradients](https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients), a way to generate images with a specific aesthetic by using clip images embeds (implementation of [https://github.com/vicgalle/stable-diffusion-aesthetic-gradients](https://github.com/vicgalle/stable-diffusion-aesthetic-gradients))
- [Stable Diffusion 2.0](https://github.com/Stability-AI/stablediffusion) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20) for instructions
- [Alt-Diffusion](https://arxiv.org/abs/2211.06679) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#alt-diffusion) for instructions
- Now without any bad letters!
- Load checkpoints in safetensors format
- Eased resolution restriction: the generated image's dimensions must be a multiple of 8 rather than 64
- Now with a license!
- Reorder elements in the UI from settings screen
## Installation and Running
Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for both [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended) and [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs.
Alternatively, use online services (like Google Colab):
- [List of Online Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services)
### Installation on Windows 10/11 with NVidia-GPUs using release package
1. Download `sd.webui.zip` from [v1.0.0-pre](https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.0.0-pre) and extract its contents.
2. Run `update.bat`.
3. Run `run.bat`.
> For more details see [Install-and-Run-on-NVidia-GPUs](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs)
### Automatic Installation on Windows
1. Install [Python 3.10.6](https://www.python.org/downloads/release/python-3106/) (newer versions of Python do not support torch), checking "Add Python to PATH".
2. Install [git](https://git-scm.com/download/win).
3. Download the stable-diffusion-webui repository, for example by running `git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git`.
4. Run `webui-user.bat` from Windows Explorer as normal, non-administrator, user.
### Automatic Installation on Linux
1. Install the dependencies:
```bash
# Debian-based:
sudo apt install wget git python3 python3-venv
# Red Hat-based:
sudo dnf install wget git python3
# Arch-based:
sudo pacman -S wget git python3
```
2. Navigate to the directory you would like the webui to be installed and execute the following command:
```bash
bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh)
```
3. Run `webui.sh`.
4. Check `webui-user.sh` for options.
### Installation on Apple Silicon
Find the instructions [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon).
## Contributing
Here's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
## Documentation
The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki).
For the purposes of getting Google and other search engines to crawl the wiki, here's a link to the (not for humans) [crawlable wiki](https://github-wiki-see.page/m/AUTOMATIC1111/stable-diffusion-webui/wiki).
## Credits
Licenses for borrowed code can be found in `Settings -> Licenses` screen, and also in `html/licenses.html` file.
- Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers
- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
- GFPGAN - https://github.com/TencentARC/GFPGAN.git
- CodeFormer - https://github.com/sczhou/CodeFormer
- ESRGAN - https://github.com/xinntao/ESRGAN
- SwinIR - https://github.com/JingyunLiang/SwinIR
- Swin2SR - https://github.com/mv-lab/swin2sr
- LDSR - https://github.com/Hafiidz/latent-diffusion
- MiDaS - https://github.com/isl-org/MiDaS
- Ideas for optimizations - https://github.com/basujindal/stable-diffusion
- Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.
- Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
- Sub-quadratic Cross Attention layer optimization - Alex Birch (https://github.com/Birch-san/diffusers/pull/1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention)
- Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).
- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd
- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot
- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator
- Idea for Composable Diffusion - https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch
- xformers - https://github.com/facebookresearch/xformers
- DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru
- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6)
- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - https://github.com/timothybrooks/instruct-pix2pix
- Security advice - RyotaK
- UniPC sampler - Wenliang Zhao - https://github.com/wl-zhao/UniPC
- TAESD - Ollin Boer Bohan - https://github.com/madebyollin/taesd
- LyCORIS - KohakuBlueleaf
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
- (You)
|
FernandoD95/q-FrozenLake-v1-4x4-noSlippery
|
FernandoD95
| 2023-08-24T23:02:10Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-24T23:02:06Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="FernandoD95/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Siyoun/my_poliglot_5.8_peft_model
|
Siyoun
| 2023-08-24T22:41:21Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-24T22:41:15Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
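A minimal loading sketch that mirrors this quantization config; the base checkpoint id is a placeholder (per the repository name it is likely a Polyglot 5.8B-class model, but the card does not say).
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "<base-polyglot-5.8b-checkpoint>"  # placeholder: not stated in this card
adapter_id = "Siyoun/my_poliglot_5.8_peft_model"

# Mirrors the 4-bit settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float32,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)
```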
### Framework versions
- PEFT 0.5.0
|