| modelId (stringlengths 5–139) | author (stringlengths 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-02 12:32:32) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (stringclasses, 534 values) | tags (listlengths 1–4.05k) | pipeline_tag (stringclasses, 55 values) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-02 12:31:20) | card (stringlengths 11–1.01M) |
|---|---|---|---|---|---|---|---|---|---|
asenella/mmnist_MMVAEPlusconfig_adapted_resnets_seed_0_ratio_0_c
|
asenella
| 2023-07-11T03:00:48Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-11T03:00:34Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded with the `load_from_hf_hub` method:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
Azizslanguagesmodels/turkishReviews-ds-mini
|
Azizslanguagesmodels
| 2023-07-11T02:48:42Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-11T02:43:12Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: Azizslanguagesmodels/turkishReviews-ds-mini
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Azizslanguagesmodels/turkishReviews-ds-mini
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 9.1781
- Validation Loss: 9.2629
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -896, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
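The serialized optimizer above has the shape produced by `transformers.create_optimizer` (AdamWeightDecay with a WarmUp-wrapped PolynomialDecay schedule). A minimal sketch of that assumption; the step counts are read off the config above, not the original training script:
```python
from transformers import create_optimizer

# Reconstructs the AdamWeightDecay + WarmUp/PolynomialDecay schedule above.
# num_train_steps = warmup_steps (1000) + decay_steps (-896) from the config;
# the odd value simply mirrors the serialized config.
optimizer, lr_schedule = create_optimizer(
    init_lr=5e-5,
    num_train_steps=104,
    num_warmup_steps=1000,
    weight_decay_rate=0.01,
)
```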
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.2821 | 9.9897 | 0 |
| 9.6595 | 9.6377 | 1 |
| 9.1781 | 9.2629 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
alex2awesome/source-role-model
|
alex2awesome
| 2023-07-11T02:46:00Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2023-07-11T02:14:08Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: source-role-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# source-role-model
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5543
- F1: 0.5814
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
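The hyperparameters above map directly onto Hugging Face `TrainingArguments`; a minimal sketch (the output directory is hypothetical and the actual training script is not included in this card):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="source-role-model",   # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=5,
    per_device_eval_batch_size=5,
    seed=42,
    lr_scheduler_type="linear",       # Adam with the listed betas/epsilon is the default optimizer
    num_train_epochs=10.0,
)
```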
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.12 | 100 | 1.0000 | 0.3391 |
| No log | 0.25 | 200 | 0.8371 | 0.5055 |
| No log | 0.37 | 300 | 0.8684 | 0.5019 |
| No log | 0.49 | 400 | 0.8668 | 0.5208 |
| 0.9644 | 0.62 | 500 | 0.8473 | 0.5422 |
| 0.9644 | 0.74 | 600 | 0.8852 | 0.4956 |
| 0.9644 | 0.86 | 700 | 0.8368 | 0.5124 |
| 0.9644 | 0.99 | 800 | 0.7913 | 0.5848 |
| 0.9644 | 1.11 | 900 | 1.0570 | 0.4950 |
| 0.8375 | 1.23 | 1000 | 0.9402 | 0.5280 |
| 0.8375 | 1.35 | 1100 | 0.8023 | 0.5084 |
| 0.8375 | 1.48 | 1200 | 0.9299 | 0.4807 |
| 0.8375 | 1.6 | 1300 | 0.9661 | 0.5194 |
| 0.8375 | 1.72 | 1400 | 0.8014 | 0.6016 |
| 0.8149 | 1.85 | 1500 | 0.8608 | 0.6105 |
| 0.8149 | 1.97 | 1600 | 0.9195 | 0.5741 |
| 0.8149 | 2.09 | 1700 | 1.2378 | 0.5964 |
| 0.8149 | 2.22 | 1800 | 1.0415 | 0.5902 |
| 0.8149 | 2.34 | 1900 | 1.0499 | 0.5526 |
| 0.6932 | 2.46 | 2000 | 1.0600 | 0.5832 |
| 0.6932 | 2.59 | 2100 | 0.9368 | 0.6074 |
| 0.6932 | 2.71 | 2200 | 1.0872 | 0.6270 |
| 0.6932 | 2.83 | 2300 | 1.0912 | 0.5707 |
| 0.6932 | 2.96 | 2400 | 0.8815 | 0.5602 |
| 0.6214 | 3.08 | 2500 | 1.1650 | 0.5993 |
| 0.6214 | 3.2 | 2600 | 1.4485 | 0.5821 |
| 0.6214 | 3.33 | 2700 | 1.5382 | 0.5775 |
| 0.6214 | 3.45 | 2800 | 1.3999 | 0.5696 |
| 0.6214 | 3.57 | 2900 | 1.3702 | 0.6114 |
| 0.5686 | 3.69 | 3000 | 1.3840 | 0.5635 |
| 0.5686 | 3.82 | 3100 | 1.3547 | 0.5403 |
| 0.5686 | 3.94 | 3200 | 1.0283 | 0.5723 |
| 0.5686 | 4.06 | 3300 | 1.3593 | 0.6242 |
| 0.5686 | 4.19 | 3400 | 1.5985 | 0.6004 |
| 0.4807 | 4.31 | 3500 | 1.5351 | 0.6177 |
| 0.4807 | 4.43 | 3600 | 1.4109 | 0.5779 |
| 0.4807 | 4.56 | 3700 | 1.6972 | 0.5637 |
| 0.4807 | 4.68 | 3800 | 1.5336 | 0.6047 |
| 0.4807 | 4.8 | 3900 | 1.7811 | 0.5909 |
| 0.4387 | 4.93 | 4000 | 1.5862 | 0.5869 |
| 0.4387 | 5.05 | 4100 | 1.7106 | 0.5637 |
| 0.4387 | 5.17 | 4200 | 1.5251 | 0.5624 |
| 0.4387 | 5.3 | 4300 | 1.5519 | 0.5944 |
| 0.4387 | 5.42 | 4400 | 1.7315 | 0.5908 |
| 0.3219 | 5.54 | 4500 | 1.7588 | 0.6015 |
| 0.3219 | 5.67 | 4600 | 1.9277 | 0.5635 |
| 0.3219 | 5.79 | 4700 | 1.7663 | 0.5891 |
| 0.3219 | 5.91 | 4800 | 1.8401 | 0.5917 |
| 0.3219 | 6.03 | 4900 | 2.0516 | 0.5845 |
| 0.2311 | 6.16 | 5000 | 2.0510 | 0.6166 |
| 0.2311 | 6.28 | 5100 | 2.1673 | 0.5732 |
| 0.2311 | 6.4 | 5200 | 2.0931 | 0.5819 |
| 0.2311 | 6.53 | 5300 | 2.2803 | 0.5961 |
| 0.2311 | 6.65 | 5400 | 1.9985 | 0.6010 |
| 0.1669 | 6.77 | 5500 | 2.1742 | 0.5664 |
| 0.1669 | 6.9 | 5600 | 2.1021 | 0.5732 |
| 0.1669 | 7.02 | 5700 | 2.2043 | 0.5641 |
| 0.1669 | 7.14 | 5800 | 2.2018 | 0.5837 |
| 0.1669 | 7.27 | 5900 | 2.3575 | 0.5721 |
| 0.1698 | 7.39 | 6000 | 2.4663 | 0.5662 |
| 0.1698 | 7.51 | 6100 | 2.2658 | 0.5851 |
| 0.1698 | 7.64 | 6200 | 2.1585 | 0.5676 |
| 0.1698 | 7.76 | 6300 | 2.1755 | 0.5774 |
| 0.1698 | 7.88 | 6400 | 2.2680 | 0.5696 |
| 0.1378 | 8.0 | 6500 | 2.3505 | 0.5615 |
| 0.1378 | 8.13 | 6600 | 2.2773 | 0.5705 |
| 0.1378 | 8.25 | 6700 | 2.3112 | 0.5662 |
| 0.1378 | 8.37 | 6800 | 2.4572 | 0.5679 |
| 0.1378 | 8.5 | 6900 | 2.4642 | 0.5766 |
| 0.0756 | 8.62 | 7000 | 2.4643 | 0.5885 |
| 0.0756 | 8.74 | 7100 | 2.5096 | 0.5779 |
| 0.0756 | 8.87 | 7200 | 2.4261 | 0.5789 |
| 0.0756 | 8.99 | 7300 | 2.3973 | 0.5757 |
| 0.0756 | 9.11 | 7400 | 2.4137 | 0.5906 |
| 0.0842 | 9.24 | 7500 | 2.4577 | 0.5844 |
| 0.0842 | 9.36 | 7600 | 2.5034 | 0.5840 |
| 0.0842 | 9.48 | 7700 | 2.5176 | 0.5810 |
| 0.0842 | 9.61 | 7800 | 2.5240 | 0.5852 |
| 0.0842 | 9.73 | 7900 | 2.5141 | 0.5824 |
| 0.0634 | 9.85 | 8000 | 2.5482 | 0.5814 |
| 0.0634 | 9.98 | 8100 | 2.5543 | 0.5814 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
vimonteglione/ppo-Huggy
|
vimonteglione
| 2023-07-11T02:42:10Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-11T02:42:00Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: vimonteglione/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
NasimB/gpt2-concat-cbt-mod-formatting-iorder-rarity-all-4k
|
NasimB
| 2023-07-11T02:39:48Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-11T00:45:49Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-cbt-mod-formatting-iorder-rarity-all-4k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-cbt-mod-formatting-iorder-rarity-all-4k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3158
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6962 | 0.29 | 500 | 5.6482 |
| 5.3352 | 0.59 | 1000 | 5.2168 |
| 4.9963 | 0.88 | 1500 | 4.9671 |
| 4.7147 | 1.17 | 2000 | 4.8164 |
| 4.5508 | 1.46 | 2500 | 4.6852 |
| 4.4503 | 1.76 | 3000 | 4.5766 |
| 4.3233 | 2.05 | 3500 | 4.4995 |
| 4.1239 | 2.34 | 4000 | 4.4513 |
| 4.0934 | 2.63 | 4500 | 4.3905 |
| 4.0645 | 2.93 | 5000 | 4.3376 |
| 3.8538 | 3.22 | 5500 | 4.3338 |
| 3.7937 | 3.51 | 6000 | 4.3034 |
| 3.781 | 3.8 | 6500 | 4.2718 |
| 3.6821 | 4.1 | 7000 | 4.2702 |
| 3.5082 | 4.39 | 7500 | 4.2633 |
| 3.5078 | 4.68 | 8000 | 4.2471 |
| 3.4936 | 4.97 | 8500 | 4.2346 |
| 3.34 | 5.27 | 9000 | 4.2492 |
| 3.3145 | 5.56 | 9500 | 4.2471 |
| 3.315 | 5.85 | 10000 | 4.2463 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
alex2awesome/source-affiliation-model
|
alex2awesome
| 2023-07-11T02:37:57Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2023-07-10T23:11:23Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: source-affiliation-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# source-affiliation-model
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3321
- F1: 0.5348
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.12 | 100 | 1.4535 | 0.2435 |
| No log | 0.25 | 200 | 1.3128 | 0.3899 |
| No log | 0.37 | 300 | 1.2888 | 0.4413 |
| No log | 0.49 | 400 | 1.1560 | 0.4614 |
| 1.4848 | 0.62 | 500 | 1.0988 | 0.4477 |
| 1.4848 | 0.74 | 600 | 1.1211 | 0.4583 |
| 1.4848 | 0.86 | 700 | 1.1152 | 0.4693 |
| 1.4848 | 0.99 | 800 | 1.0176 | 0.5018 |
| 1.4848 | 1.11 | 900 | 1.0942 | 0.4774 |
| 1.1019 | 1.23 | 1000 | 1.1785 | 0.5119 |
| 1.1019 | 1.35 | 1100 | 1.0751 | 0.4797 |
| 1.1019 | 1.48 | 1200 | 1.0759 | 0.5206 |
| 1.1019 | 1.6 | 1300 | 1.0756 | 0.5231 |
| 1.1019 | 1.72 | 1400 | 1.1329 | 0.4547 |
| 0.9431 | 1.85 | 1500 | 1.0617 | 0.4852 |
| 0.9431 | 1.97 | 1600 | 1.1046 | 0.5254 |
| 0.9431 | 2.09 | 1700 | 1.2489 | 0.5069 |
| 0.9431 | 2.22 | 1800 | 1.2113 | 0.5363 |
| 0.9431 | 2.34 | 1900 | 1.1782 | 0.5546 |
| 0.7589 | 2.46 | 2000 | 1.0453 | 0.5862 |
| 0.7589 | 2.59 | 2100 | 1.0810 | 0.5223 |
| 0.7589 | 2.71 | 2200 | 1.1470 | 0.5872 |
| 0.7589 | 2.83 | 2300 | 1.1522 | 0.5553 |
| 0.7589 | 2.96 | 2400 | 1.0712 | 0.6273 |
| 0.6875 | 3.08 | 2500 | 1.3458 | 0.5768 |
| 0.6875 | 3.2 | 2600 | 1.7052 | 0.5491 |
| 0.6875 | 3.33 | 2700 | 1.5080 | 0.6582 |
| 0.6875 | 3.45 | 2800 | 1.5851 | 0.5965 |
| 0.6875 | 3.57 | 2900 | 1.4771 | 0.5691 |
| 0.5391 | 3.69 | 3000 | 1.6717 | 0.5350 |
| 0.5391 | 3.82 | 3100 | 1.5607 | 0.5448 |
| 0.5391 | 3.94 | 3200 | 1.5464 | 0.6062 |
| 0.5391 | 4.06 | 3300 | 1.7645 | 0.5755 |
| 0.5391 | 4.19 | 3400 | 1.6715 | 0.5504 |
| 0.4928 | 4.31 | 3500 | 1.7604 | 0.5626 |
| 0.4928 | 4.43 | 3600 | 1.8984 | 0.5142 |
| 0.4928 | 4.56 | 3700 | 1.8012 | 0.5763 |
| 0.4928 | 4.68 | 3800 | 1.7107 | 0.5671 |
| 0.4928 | 4.8 | 3900 | 1.7697 | 0.5598 |
| 0.4233 | 4.93 | 4000 | 1.6296 | 0.6084 |
| 0.4233 | 5.05 | 4100 | 2.0418 | 0.5343 |
| 0.4233 | 5.17 | 4200 | 1.8203 | 0.5526 |
| 0.4233 | 5.3 | 4300 | 1.9760 | 0.5292 |
| 0.4233 | 5.42 | 4400 | 2.0136 | 0.5153 |
| 0.2518 | 5.54 | 4500 | 2.0137 | 0.5121 |
| 0.2518 | 5.67 | 4600 | 2.0053 | 0.5257 |
| 0.2518 | 5.79 | 4700 | 1.9539 | 0.5423 |
| 0.2518 | 5.91 | 4800 | 2.0159 | 0.5686 |
| 0.2518 | 6.03 | 4900 | 2.0411 | 0.5817 |
| 0.2234 | 6.16 | 5000 | 2.0025 | 0.5780 |
| 0.2234 | 6.28 | 5100 | 2.1189 | 0.5413 |
| 0.2234 | 6.4 | 5200 | 2.1936 | 0.5628 |
| 0.2234 | 6.53 | 5300 | 2.1825 | 0.5210 |
| 0.2234 | 6.65 | 5400 | 2.0767 | 0.5471 |
| 0.1829 | 6.77 | 5500 | 1.9747 | 0.5587 |
| 0.1829 | 6.9 | 5600 | 2.1182 | 0.5847 |
| 0.1829 | 7.02 | 5700 | 2.1597 | 0.5437 |
| 0.1829 | 7.14 | 5800 | 2.0307 | 0.5629 |
| 0.1829 | 7.27 | 5900 | 2.0912 | 0.5450 |
| 0.1226 | 7.39 | 6000 | 2.2383 | 0.5379 |
| 0.1226 | 7.51 | 6100 | 2.2311 | 0.5834 |
| 0.1226 | 7.64 | 6200 | 2.2456 | 0.5438 |
| 0.1226 | 7.76 | 6300 | 2.2423 | 0.5860 |
| 0.1226 | 7.88 | 6400 | 2.2922 | 0.5245 |
| 0.0883 | 8.0 | 6500 | 2.3304 | 0.5650 |
| 0.0883 | 8.13 | 6600 | 2.3929 | 0.5288 |
| 0.0883 | 8.25 | 6700 | 2.3928 | 0.5344 |
| 0.0883 | 8.37 | 6800 | 2.3854 | 0.5266 |
| 0.0883 | 8.5 | 6900 | 2.4275 | 0.5339 |
| 0.044 | 8.62 | 7000 | 2.3929 | 0.5380 |
| 0.044 | 8.74 | 7100 | 2.3587 | 0.5339 |
| 0.044 | 8.87 | 7200 | 2.3372 | 0.5423 |
| 0.044 | 8.99 | 7300 | 2.3488 | 0.5424 |
| 0.044 | 9.11 | 7400 | 2.3543 | 0.5818 |
| 0.0558 | 9.24 | 7500 | 2.3397 | 0.5554 |
| 0.0558 | 9.36 | 7600 | 2.3255 | 0.5394 |
| 0.0558 | 9.48 | 7700 | 2.3184 | 0.5557 |
| 0.0558 | 9.61 | 7800 | 2.3293 | 0.5669 |
| 0.0558 | 9.73 | 7900 | 2.3358 | 0.5666 |
| 0.0323 | 9.85 | 8000 | 2.3307 | 0.5344 |
| 0.0323 | 9.98 | 8100 | 2.3321 | 0.5348 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
IS4XD/Ris3
|
IS4XD
| 2023-07-11T02:36:03Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-07-11T02:36:03Z |
---
license: bigscience-openrail-m
---
|
RavenFangsk/chronoborous-33B-GPTQ
|
RavenFangsk
| 2023-07-11T02:28:20Z | 5 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-10T03:26:46Z |
An AutoGPTQ-quantized version of https://huggingface.co/Henk717/chronoboros-33B.
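A minimal loading sketch, assuming the standard AutoGPTQ API and that the quantized weights live in this repository:
```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo = "RavenFangsk/chronoborous-33B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=True)
# Device placement is an assumption; adjust to your hardware and the repo's files.
model = AutoGPTQForCausalLM.from_quantized(repo, device="cuda:0")
```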
|
sl8425/troubleshooting_steps_classification_model
|
sl8425
| 2023-07-11T02:20:13Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-10T19:07:39Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: sl8425/troubleshooting_steps_classification_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# sl8425/troubleshooting_steps_classification_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6664
- Validation Loss: 0.7197
- Train Accuracy: 0.7923
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 921, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.5468 | 0.9262 | 0.7317 | 0 |
| 0.8223 | 0.7546 | 0.7830 | 1 |
| 0.6664 | 0.7197 | 0.7923 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jordyvl/vit-small_rvl_cdip_100_examples_per_class_kd_CEKD_t1.5_a0.9
|
jordyvl
| 2023-07-11T02:14:34Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-11T01:01:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-small_rvl_cdip_100_examples_per_class_kd_CEKD_t1.5_a0.9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-small_rvl_cdip_100_examples_per_class_kd_CEKD_t1.5_a0.9
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2366
- Accuracy: 0.63
- Brier Loss: 0.5035
- Nll: 2.8588
- F1 Micro: 0.63
- F1 Macro: 0.6311
- Ece: 0.1649
- Aurc: 0.1472
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 25 | 2.8887 | 0.1225 | 0.9306 | 15.9457 | 0.1225 | 0.1226 | 0.1434 | 0.8620 |
| No log | 2.0 | 50 | 2.2120 | 0.3775 | 0.7577 | 9.7500 | 0.3775 | 0.3483 | 0.1992 | 0.3776 |
| No log | 3.0 | 75 | 1.7681 | 0.495 | 0.6387 | 5.6935 | 0.495 | 0.4838 | 0.1885 | 0.2491 |
| No log | 4.0 | 100 | 1.6420 | 0.5225 | 0.6038 | 5.2427 | 0.5225 | 0.5242 | 0.1757 | 0.2301 |
| No log | 5.0 | 125 | 1.5877 | 0.545 | 0.5986 | 4.6187 | 0.545 | 0.5282 | 0.1808 | 0.2248 |
| No log | 6.0 | 150 | 1.6460 | 0.5125 | 0.6162 | 3.9942 | 0.5125 | 0.5060 | 0.1962 | 0.2295 |
| No log | 7.0 | 175 | 1.8436 | 0.5125 | 0.6538 | 4.1740 | 0.5125 | 0.4932 | 0.2299 | 0.2451 |
| No log | 8.0 | 200 | 1.8205 | 0.545 | 0.6453 | 5.0752 | 0.545 | 0.5234 | 0.2057 | 0.2432 |
| No log | 9.0 | 225 | 1.7399 | 0.55 | 0.6260 | 4.5896 | 0.55 | 0.5460 | 0.2057 | 0.2258 |
| No log | 10.0 | 250 | 1.8559 | 0.55 | 0.6521 | 5.0532 | 0.55 | 0.5368 | 0.2209 | 0.2560 |
| No log | 11.0 | 275 | 1.8636 | 0.5625 | 0.6488 | 4.6642 | 0.5625 | 0.5544 | 0.2335 | 0.2187 |
| No log | 12.0 | 300 | 1.7461 | 0.55 | 0.6356 | 4.1298 | 0.55 | 0.5638 | 0.2047 | 0.2313 |
| No log | 13.0 | 325 | 1.7468 | 0.5625 | 0.6281 | 4.5451 | 0.5625 | 0.5570 | 0.2224 | 0.2214 |
| No log | 14.0 | 350 | 1.9616 | 0.545 | 0.6884 | 3.7999 | 0.545 | 0.5484 | 0.2691 | 0.2624 |
| No log | 15.0 | 375 | 2.0977 | 0.5175 | 0.7138 | 4.3792 | 0.5175 | 0.5055 | 0.2658 | 0.2917 |
| No log | 16.0 | 400 | 2.0238 | 0.5275 | 0.6896 | 4.5299 | 0.5275 | 0.5177 | 0.2664 | 0.2603 |
| No log | 17.0 | 425 | 1.8687 | 0.535 | 0.6534 | 3.7356 | 0.535 | 0.5388 | 0.2490 | 0.2448 |
| No log | 18.0 | 450 | 1.8210 | 0.5575 | 0.6492 | 4.3823 | 0.5575 | 0.5537 | 0.2533 | 0.2268 |
| No log | 19.0 | 475 | 1.7610 | 0.555 | 0.6325 | 3.9697 | 0.555 | 0.5503 | 0.2292 | 0.2161 |
| 0.5398 | 20.0 | 500 | 1.7125 | 0.5825 | 0.6125 | 3.4176 | 0.5825 | 0.5731 | 0.2140 | 0.1859 |
| 0.5398 | 21.0 | 525 | 1.6296 | 0.5775 | 0.6163 | 3.6014 | 0.5775 | 0.5871 | 0.2236 | 0.2051 |
| 0.5398 | 22.0 | 550 | 1.5965 | 0.57 | 0.5908 | 3.7668 | 0.57 | 0.5712 | 0.2058 | 0.1883 |
| 0.5398 | 23.0 | 575 | 1.4828 | 0.5875 | 0.5646 | 3.7028 | 0.5875 | 0.5854 | 0.1944 | 0.1714 |
| 0.5398 | 24.0 | 600 | 1.3983 | 0.6075 | 0.5481 | 3.3608 | 0.6075 | 0.6107 | 0.1966 | 0.1628 |
| 0.5398 | 25.0 | 625 | 1.5241 | 0.5925 | 0.5866 | 3.3669 | 0.5925 | 0.6019 | 0.2069 | 0.1886 |
| 0.5398 | 26.0 | 650 | 1.5540 | 0.58 | 0.5780 | 3.5184 | 0.58 | 0.5710 | 0.2131 | 0.1857 |
| 0.5398 | 27.0 | 675 | 1.4653 | 0.6 | 0.5768 | 2.9877 | 0.6 | 0.6043 | 0.2166 | 0.1781 |
| 0.5398 | 28.0 | 700 | 1.4883 | 0.5925 | 0.5646 | 3.7789 | 0.5925 | 0.5910 | 0.2096 | 0.1746 |
| 0.5398 | 29.0 | 725 | 1.5738 | 0.59 | 0.5914 | 4.0558 | 0.59 | 0.5879 | 0.2150 | 0.1957 |
| 0.5398 | 30.0 | 750 | 1.4017 | 0.6025 | 0.5583 | 3.4791 | 0.6025 | 0.6023 | 0.2150 | 0.1752 |
| 0.5398 | 31.0 | 775 | 1.3500 | 0.61 | 0.5365 | 3.2560 | 0.61 | 0.6157 | 0.1988 | 0.1579 |
| 0.5398 | 32.0 | 800 | 1.2977 | 0.6375 | 0.5140 | 3.0503 | 0.6375 | 0.6395 | 0.1847 | 0.1534 |
| 0.5398 | 33.0 | 825 | 1.3471 | 0.6175 | 0.5406 | 3.1888 | 0.6175 | 0.6104 | 0.2077 | 0.1689 |
| 0.5398 | 34.0 | 850 | 1.2992 | 0.615 | 0.5219 | 2.8944 | 0.615 | 0.6191 | 0.1826 | 0.1574 |
| 0.5398 | 35.0 | 875 | 1.2733 | 0.6225 | 0.5124 | 2.9352 | 0.6225 | 0.6238 | 0.1588 | 0.1505 |
| 0.5398 | 36.0 | 900 | 1.2821 | 0.6175 | 0.5231 | 3.0142 | 0.6175 | 0.6169 | 0.1672 | 0.1553 |
| 0.5398 | 37.0 | 925 | 1.2819 | 0.61 | 0.5200 | 2.6874 | 0.61 | 0.6116 | 0.1847 | 0.1540 |
| 0.5398 | 38.0 | 950 | 1.2664 | 0.615 | 0.5145 | 2.9287 | 0.615 | 0.6159 | 0.1961 | 0.1528 |
| 0.5398 | 39.0 | 975 | 1.2584 | 0.6225 | 0.5134 | 3.0058 | 0.6225 | 0.6230 | 0.1747 | 0.1508 |
| 0.0507 | 40.0 | 1000 | 1.2562 | 0.615 | 0.5114 | 2.9269 | 0.615 | 0.6169 | 0.1815 | 0.1504 |
| 0.0507 | 41.0 | 1025 | 1.2525 | 0.6225 | 0.5101 | 2.9199 | 0.6225 | 0.6239 | 0.1770 | 0.1496 |
| 0.0507 | 42.0 | 1050 | 1.2573 | 0.62 | 0.5133 | 2.9195 | 0.62 | 0.6221 | 0.1824 | 0.1511 |
| 0.0507 | 43.0 | 1075 | 1.2536 | 0.6125 | 0.5131 | 2.9026 | 0.6125 | 0.6121 | 0.1820 | 0.1511 |
| 0.0507 | 44.0 | 1100 | 1.2543 | 0.6225 | 0.5109 | 3.0693 | 0.6225 | 0.6235 | 0.1647 | 0.1500 |
| 0.0507 | 45.0 | 1125 | 1.2526 | 0.6125 | 0.5117 | 2.9018 | 0.6125 | 0.6141 | 0.1788 | 0.1500 |
| 0.0507 | 46.0 | 1150 | 1.2432 | 0.615 | 0.5068 | 2.9042 | 0.615 | 0.6167 | 0.1762 | 0.1484 |
| 0.0507 | 47.0 | 1175 | 1.2485 | 0.6275 | 0.5098 | 2.8927 | 0.6275 | 0.6251 | 0.1590 | 0.1496 |
| 0.0507 | 48.0 | 1200 | 1.2576 | 0.6125 | 0.5140 | 2.8956 | 0.6125 | 0.6137 | 0.1824 | 0.1524 |
| 0.0507 | 49.0 | 1225 | 1.2468 | 0.62 | 0.5094 | 2.8918 | 0.62 | 0.6204 | 0.1832 | 0.1496 |
| 0.0507 | 50.0 | 1250 | 1.2479 | 0.6175 | 0.5102 | 2.8921 | 0.6175 | 0.6178 | 0.1706 | 0.1491 |
| 0.0507 | 51.0 | 1275 | 1.2393 | 0.6225 | 0.5057 | 2.8813 | 0.6225 | 0.6229 | 0.1784 | 0.1486 |
| 0.0507 | 52.0 | 1300 | 1.2463 | 0.6175 | 0.5085 | 2.8959 | 0.6175 | 0.6184 | 0.1669 | 0.1495 |
| 0.0507 | 53.0 | 1325 | 1.2391 | 0.62 | 0.5061 | 2.8828 | 0.62 | 0.6215 | 0.1803 | 0.1471 |
| 0.0507 | 54.0 | 1350 | 1.2538 | 0.6175 | 0.5121 | 2.8795 | 0.6175 | 0.6167 | 0.1680 | 0.1512 |
| 0.0507 | 55.0 | 1375 | 1.2407 | 0.625 | 0.5064 | 2.8830 | 0.625 | 0.6259 | 0.1842 | 0.1482 |
| 0.0507 | 56.0 | 1400 | 1.2488 | 0.62 | 0.5099 | 2.8769 | 0.62 | 0.6198 | 0.1568 | 0.1499 |
| 0.0507 | 57.0 | 1425 | 1.2402 | 0.625 | 0.5052 | 2.8778 | 0.625 | 0.6260 | 0.1616 | 0.1481 |
| 0.0507 | 58.0 | 1450 | 1.2457 | 0.625 | 0.5077 | 2.8786 | 0.625 | 0.6260 | 0.1759 | 0.1474 |
| 0.0507 | 59.0 | 1475 | 1.2430 | 0.6275 | 0.5073 | 2.8744 | 0.6275 | 0.6266 | 0.1652 | 0.1486 |
| 0.0319 | 60.0 | 1500 | 1.2399 | 0.625 | 0.5056 | 2.8767 | 0.625 | 0.6256 | 0.1701 | 0.1474 |
| 0.0319 | 61.0 | 1525 | 1.2460 | 0.63 | 0.5087 | 2.8758 | 0.63 | 0.6329 | 0.1865 | 0.1491 |
| 0.0319 | 62.0 | 1550 | 1.2410 | 0.6225 | 0.5058 | 2.8719 | 0.6225 | 0.6229 | 0.1752 | 0.1477 |
| 0.0319 | 63.0 | 1575 | 1.2418 | 0.63 | 0.5060 | 2.8746 | 0.63 | 0.6319 | 0.1692 | 0.1484 |
| 0.0319 | 64.0 | 1600 | 1.2424 | 0.6275 | 0.5069 | 2.8672 | 0.6275 | 0.6279 | 0.1903 | 0.1475 |
| 0.0319 | 65.0 | 1625 | 1.2413 | 0.63 | 0.5061 | 2.8747 | 0.63 | 0.6304 | 0.1737 | 0.1471 |
| 0.0319 | 66.0 | 1650 | 1.2385 | 0.6325 | 0.5039 | 2.8726 | 0.6325 | 0.6358 | 0.1792 | 0.1473 |
| 0.0319 | 67.0 | 1675 | 1.2368 | 0.625 | 0.5047 | 2.8661 | 0.625 | 0.6261 | 0.1843 | 0.1467 |
| 0.0319 | 68.0 | 1700 | 1.2370 | 0.6275 | 0.5039 | 2.8691 | 0.6275 | 0.6294 | 0.1724 | 0.1471 |
| 0.0319 | 69.0 | 1725 | 1.2382 | 0.63 | 0.5050 | 2.8659 | 0.63 | 0.6317 | 0.1698 | 0.1472 |
| 0.0319 | 70.0 | 1750 | 1.2396 | 0.6275 | 0.5051 | 2.8670 | 0.6275 | 0.6290 | 0.1790 | 0.1474 |
| 0.0319 | 71.0 | 1775 | 1.2378 | 0.625 | 0.5045 | 2.8637 | 0.625 | 0.6268 | 0.1742 | 0.1476 |
| 0.0319 | 72.0 | 1800 | 1.2360 | 0.625 | 0.5037 | 2.8669 | 0.625 | 0.6269 | 0.1778 | 0.1468 |
| 0.0319 | 73.0 | 1825 | 1.2390 | 0.63 | 0.5049 | 2.8638 | 0.63 | 0.6310 | 0.1711 | 0.1474 |
| 0.0319 | 74.0 | 1850 | 1.2372 | 0.625 | 0.5045 | 2.8640 | 0.625 | 0.6269 | 0.1817 | 0.1475 |
| 0.0319 | 75.0 | 1875 | 1.2375 | 0.63 | 0.5044 | 2.8640 | 0.63 | 0.6313 | 0.1703 | 0.1472 |
| 0.0319 | 76.0 | 1900 | 1.2372 | 0.6275 | 0.5041 | 2.8621 | 0.6275 | 0.6290 | 0.1794 | 0.1473 |
| 0.0319 | 77.0 | 1925 | 1.2374 | 0.63 | 0.5041 | 2.8629 | 0.63 | 0.6313 | 0.1722 | 0.1472 |
| 0.0319 | 78.0 | 1950 | 1.2367 | 0.6275 | 0.5039 | 2.8620 | 0.6275 | 0.6294 | 0.1704 | 0.1474 |
| 0.0319 | 79.0 | 1975 | 1.2371 | 0.6275 | 0.5039 | 2.8619 | 0.6275 | 0.6294 | 0.1639 | 0.1474 |
| 0.0314 | 80.0 | 2000 | 1.2372 | 0.63 | 0.5041 | 2.8612 | 0.63 | 0.6310 | 0.1750 | 0.1474 |
| 0.0314 | 81.0 | 2025 | 1.2368 | 0.63 | 0.5038 | 2.8613 | 0.63 | 0.6309 | 0.1648 | 0.1473 |
| 0.0314 | 82.0 | 2050 | 1.2370 | 0.63 | 0.5038 | 2.8607 | 0.63 | 0.6305 | 0.1782 | 0.1473 |
| 0.0314 | 83.0 | 2075 | 1.2368 | 0.63 | 0.5038 | 2.8609 | 0.63 | 0.6307 | 0.1686 | 0.1472 |
| 0.0314 | 84.0 | 2100 | 1.2368 | 0.63 | 0.5037 | 2.8603 | 0.63 | 0.6305 | 0.1667 | 0.1472 |
| 0.0314 | 85.0 | 2125 | 1.2366 | 0.63 | 0.5036 | 2.8601 | 0.63 | 0.6309 | 0.1686 | 0.1473 |
| 0.0314 | 86.0 | 2150 | 1.2367 | 0.6325 | 0.5037 | 2.8600 | 0.6325 | 0.6335 | 0.1751 | 0.1471 |
| 0.0314 | 87.0 | 2175 | 1.2369 | 0.63 | 0.5037 | 2.8598 | 0.63 | 0.6307 | 0.1730 | 0.1473 |
| 0.0314 | 88.0 | 2200 | 1.2367 | 0.63 | 0.5036 | 2.8595 | 0.63 | 0.6307 | 0.1657 | 0.1472 |
| 0.0314 | 89.0 | 2225 | 1.2366 | 0.63 | 0.5036 | 2.8597 | 0.63 | 0.6307 | 0.1680 | 0.1472 |
| 0.0314 | 90.0 | 2250 | 1.2366 | 0.63 | 0.5036 | 2.8594 | 0.63 | 0.6307 | 0.1580 | 0.1472 |
| 0.0314 | 91.0 | 2275 | 1.2366 | 0.63 | 0.5035 | 2.8593 | 0.63 | 0.6307 | 0.1677 | 0.1472 |
| 0.0314 | 92.0 | 2300 | 1.2367 | 0.63 | 0.5035 | 2.8593 | 0.63 | 0.6307 | 0.1616 | 0.1472 |
| 0.0314 | 93.0 | 2325 | 1.2366 | 0.63 | 0.5035 | 2.8590 | 0.63 | 0.6307 | 0.1625 | 0.1472 |
| 0.0314 | 94.0 | 2350 | 1.2366 | 0.6325 | 0.5035 | 2.8590 | 0.6325 | 0.6333 | 0.1586 | 0.1470 |
| 0.0314 | 95.0 | 2375 | 1.2366 | 0.63 | 0.5035 | 2.8591 | 0.63 | 0.6307 | 0.1580 | 0.1472 |
| 0.0314 | 96.0 | 2400 | 1.2366 | 0.63 | 0.5035 | 2.8589 | 0.63 | 0.6307 | 0.1695 | 0.1471 |
| 0.0314 | 97.0 | 2425 | 1.2366 | 0.63 | 0.5035 | 2.8589 | 0.63 | 0.6311 | 0.1648 | 0.1472 |
| 0.0314 | 98.0 | 2450 | 1.2366 | 0.63 | 0.5035 | 2.8588 | 0.63 | 0.6311 | 0.1695 | 0.1471 |
| 0.0314 | 99.0 | 2475 | 1.2366 | 0.6325 | 0.5035 | 2.8589 | 0.6325 | 0.6337 | 0.1724 | 0.1470 |
| 0.0312 | 100.0 | 2500 | 1.2366 | 0.63 | 0.5035 | 2.8588 | 0.63 | 0.6311 | 0.1649 | 0.1472 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.12.0
- Tokenizers 0.12.1
|
hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-aod-cut-oversampling-augmented
|
hafidikhsan
| 2023-07-11T02:12:58Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-11T02:10:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: wav2vec2-large-xlsr-53-english-pronunciation-evaluation-aod-cut-oversampling-augmented
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-english-pronunciation-evaluation-aod-cut-oversampling-augmented
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0403
- Accuracy: 0.744
- F1: 0.7432
- Precision: 0.7436
- Recall: 0.744
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.8567 | 1.0 | 313 | 0.9539 | 0.5388 | 0.5159 | 0.5387 | 0.5388 |
| 0.665 | 2.0 | 626 | 0.7520 | 0.6512 | 0.6545 | 0.6625 | 0.6512 |
| 0.629 | 3.0 | 939 | 0.7775 | 0.7008 | 0.6980 | 0.6978 | 0.7008 |
| 0.4793 | 4.0 | 1252 | 0.8696 | 0.7268 | 0.7295 | 0.7365 | 0.7268 |
| 0.2273 | 5.0 | 1565 | 1.0403 | 0.744 | 0.7432 | 0.7436 | 0.744 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
zwtharry/PPO-rocket
|
zwtharry
| 2023-07-11T02:09:34Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T02:09:13Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 234.64 +/- 40.86
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
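A minimal loading sketch, assuming the checkpoint was pushed with `huggingface_sb3` under a standard filename (the exact filename is an assumption; check the repository's files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it; the filename is assumed.
checkpoint = load_from_hub(repo_id="zwtharry/PPO-rocket", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```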
|
manhtt-079/vipubmed-deberta-base
|
manhtt-079
| 2023-07-11T01:59:35Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"deberta-v2",
"transformer",
"vietnamese",
"nlp",
"bert",
"deberta",
"fill-mask",
"vi",
"dataset:VietAI/vi_pubmed",
"license:mit",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-05-06T10:35:36Z |
---
language:
- vi
metrics:
- f1
pipeline_tag: fill-mask
license: mit
datasets:
- VietAI/vi_pubmed
tags:
- transformer
- vietnamese
- nlp
- bert
- deberta
- deberta-v2
---
# ViPubMedDeBERTa: A Vietnamese pretrained biomedical language representation model
## Model description
## Model variations
- `vipubmed-deberta-xsmall`: 22M backbone parameters
- `vipubmed-deberta-base`: 86M backbone parameters
## How to use
You can use this model directly with a pipeline for masked language modeling:<br>
**_NOTE:_** The input text should be already word-segmented, you can use [Pyvi](https://github.com/trungtv/pyvi) (Python Vietnamese Core NLP Toolkit) to segment word before passing to the model.
```python
>>> from transformers import pipeline
>>> model = pipeline('fill-mask', model='manhtt-079/vipubmed-deberta-base')
>>> text_with_mask = """Chúng_tôi mô_tả một trường_hợp bệnh_nhân nữ 44 tuổi được chẩn_đoán sarcoma tế_bào tua nang ( FDCS ) . FDCS là bệnh rất hiếm ảnh_hưởng đến tế_bào trình_diện kháng_nguyên đuôi gai và thường bị chẩn_đoán nhầm . Phẫu_thuật được coi là phương_thức điều_trị tốt nhất , tiếp_theo là hóa_trị . Trong trường_hợp của chúng_tôi , [MASK] cắt bỏ không_thể thực_hiện được , do đó bệnh_nhân được hóa_trị hai dòng , sau đó là cấy_ghép tủy xương , sau đó là hóa_trị ba với đáp_ứng trao_đổi chất hoàn_toàn được thấy trên"""
>>> model(text_with_mask)
[{'score': 0.8480948805809021,
'token': 1621,
'token_str': 'phẫu_thuật',
'sequence': 'Chúng_tôi mô_tả một trường_hợp bệnh_nhân nữ 44 tuổi được chẩn_đoán sarcoma tế_bào tua nang ( FDCS ). FDCS là bệnh rất hiếm ảnh_hưởng đến tế_bào trình_diện kháng_nguyên đuôi gai và thường bị chẩn_đoán nhầm. Phẫu_thuật được coi là phương_thức điều_trị tốt nhất, tiếp_theo là hóa_trị. Trong trường_hợp của chúng_tôi, phẫu_thuật cắt bỏ không_thể thực_hiện được, do đó bệnh_nhân được hóa_trị hai dòng, sau đó là cấy_ghép tủy xương, sau đó là hóa_trị ba với đáp_ứng trao_đổi chất hoàn_toàn được thấy trên'},
{'score': 0.1136574074625969,
'token': 83,
'token_str': 'việc',
'sequence': 'Chúng_tôi mô_tả một trường_hợp bệnh_nhân nữ 44 tuổi được chẩn_đoán sarcoma tế_bào tua nang ( FDCS ). FDCS là bệnh rất hiếm ảnh_hưởng đến tế_bào trình_diện kháng_nguyên đuôi gai và thường bị chẩn_đoán nhầm. Phẫu_thuật được coi là phương_thức điều_trị tốt nhất, tiếp_theo là hóa_trị. Trong trường_hợp của chúng_tôi, việc cắt bỏ không_thể thực_hiện được, do đó bệnh_nhân được hóa_trị hai dòng, sau đó là cấy_ghép tủy xương, sau đó là hóa_trị ba với đáp_ứng trao_đổi chất hoàn_toàn được thấy trên'},
{'score': 0.014141257852315903,
'token': 589,
'token_str': 'phương_pháp',
'sequence': 'Chúng_tôi mô_tả một trường_hợp bệnh_nhân nữ 44 tuổi được chẩn_đoán sarcoma tế_bào tua nang ( FDCS ). FDCS là bệnh rất hiếm ảnh_hưởng đến tế_bào trình_diện kháng_nguyên đuôi gai và thường bị chẩn_đoán nhầm. Phẫu_thuật được coi là phương_thức điều_trị tốt nhất, tiếp_theo là hóa_trị. Trong trường_hợp của chúng_tôi, phương_pháp cắt bỏ không_thể thực_hiện được, do đó bệnh_nhân được hóa_trị hai dòng, sau đó là cấy_ghép tủy xương, sau đó là hóa_trị ba với đáp_ứng trao_đổi chất hoàn_toàn được thấy trên'},
{'score': 0.0024715897161513567,
'token': 454,
'token_str': 'điều_trị',
'sequence': 'Chúng_tôi mô_tả một trường_hợp bệnh_nhân nữ 44 tuổi được chẩn_đoán sarcoma tế_bào tua nang ( FDCS ). FDCS là bệnh rất hiếm ảnh_hưởng đến tế_bào trình_diện kháng_nguyên đuôi gai và thường bị chẩn_đoán nhầm. Phẫu_thuật được coi là phương_thức điều_trị tốt nhất, tiếp_theo là hóa_trị. Trong trường_hợp của chúng_tôi, điều_trị cắt bỏ không_thể thực_hiện được, do đó bệnh_nhân được hóa_trị hai dòng, sau đó là cấy_ghép tủy xương, sau đó là hóa_trị ba với đáp_ứng trao_đổi chất hoàn_toàn được thấy trên'},
{'score': 0.002370780799537897,
'token': 485,
'token_str': 'quá_trình',
'sequence': 'Chúng_tôi mô_tả một trường_hợp bệnh_nhân nữ 44 tuổi được chẩn_đoán sarcoma tế_bào tua nang ( FDCS ). FDCS là bệnh rất hiếm ảnh_hưởng đến tế_bào trình_diện kháng_nguyên đuôi gai và thường bị chẩn_đoán nhầm. Phẫu_thuật được coi là phương_thức điều_trị tốt nhất, tiếp_theo là hóa_trị. Trong trường_hợp của chúng_tôi, quá_trình cắt bỏ không_thể thực_hiện được, do đó bệnh_nhân được hóa_trị hai dòng, sau đó là cấy_ghép tủy xương, sau đó là hóa_trị ba với đáp_ứng trao_đổi chất hoàn_toàn được thấy trên'}]
```
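The pipeline example above uses pre-segmented text; a minimal segmentation sketch with Pyvi (the input sentence is illustrative):
```python
from pyvi import ViTokenizer

# ViTokenizer.tokenize joins the syllables of each multi-syllable word with
# underscores, which is the segmentation format this model expects.
text = "Chúng tôi mô tả một trường hợp bệnh nhân nữ 44 tuổi."
segmented = ViTokenizer.tokenize(text)
print(segmented)
```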
#### Get features:
- With PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('manhtt-079/vipubmed-deberta-base')
model = AutoModel.from_pretrained("manhtt-079/vipubmed-deberta-base")
text = "Chúng_tôi mô_tả một trường_hợp bệnh_nhân nữ 44 tuổi được chẩn_đoán sarcoma tế_bào tua nang ( FDCS )."
model_inputs = tokenizer(text, return_tensors='pt')
outputs = model(**model_inputs)
```
- With TensorFlow
```python
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained('manhtt-079/vipubmed-deberta-base')
model = TFAutoModel.from_pretrained("manhtt-079/vipubmed-deberta-base")
text = "Chúng_tôi mô_tả một trường_hợp bệnh_nhân nữ 44 tuổi được chẩn_đoán sarcoma tế_bào tua nang ( FDCS )."
model_inputs = tokenizer(text, return_tensors='tf')
outputs = model(**model_inputs)
```
## Pre-training data
The ViPubMedDeBERTa model was pre-trained on [ViPubmed](https://github.com/vietai/ViPubmed), a dataset of 20M Vietnamese biomedical abstracts generated by large-scale translation.
## Training procedure
### Data deduplication
A fuzzy deduplication, targeting documents with high overlap, was conducted at the document level to enhance quality and address overfitting. Employing Locality Sensitive Hashing (LSH) with a threshold of 0.9 ensured the removal of documents with overlap exceeding 90%. This process resulted in an average reduction of the dataset's size by 3%.
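The card does not name the deduplication implementation; a sketch of this kind of fuzzy deduplication with MinHash LSH, assuming the `datasketch` library and simple whitespace shingling:
```python
from datasketch import MinHash, MinHashLSH

def minhash(text, num_perm=128):
    m = MinHash(num_perm=num_perm)
    for token in set(text.split()):
        m.update(token.encode("utf-8"))
    return m

# Threshold 0.9 drops documents whose estimated overlap with an already-kept document exceeds 90%.
lsh = MinHashLSH(threshold=0.9, num_perm=128)
documents = ["sample abstract one", "sample abstract one", "a different abstract"]  # illustrative
kept = []
for i, doc in enumerate(documents):
    m = minhash(doc)
    if not lsh.query(m):          # no near-duplicate already kept
        lsh.insert(str(i), m)
        kept.append(doc)
```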
### Pretraining
We base our model on the [ViDeBERTa](https://github.com/HySonLab/ViDeBERTa) architecture and leverage its pre-trained checkpoint to continue pre-training. Our model was trained on a single A100 GPU (40GB) for 350 thousand steps, with a batch size of 16 and gradient accumulation steps set to 4 (an effective batch size of 64). The sequence length was limited to 512 tokens, and the peak learning rate was 1e-4.
## Evaluation results
|
casque/TemplarAssassinv0.2
|
casque
| 2023-07-11T01:29:41Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-11T01:26:51Z |
---
license: creativeml-openrail-m
---
|
liyingjian/Reinforce-policy-gradient
|
liyingjian
| 2023-07-11T01:28:57Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T01:28:48Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-policy-gradient
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 403.00 +/- 194.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
AndreNasci/distilbert-base-uncased-finetuned-cola
|
AndreNasci
| 2023-07-11T01:24:44Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-10T23:58:13Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: AndreNasci/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# AndreNasci/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1932
- Validation Loss: 0.5147
- Train Matthews Correlation: 0.5469
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5120 | 0.4538 | 0.4858 | 0 |
| 0.3206 | 0.4722 | 0.5116 | 1 |
| 0.1932 | 0.5147 | 0.5469 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
MDelan/distilbert-base-uncased-finetuned-cola
|
MDelan
| 2023-07-11T01:19:40Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-11T01:14:40Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MDelan/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MDelan/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1879
- Validation Loss: 0.5580
- Train Matthews Correlation: 0.5127
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5181 | 0.4661 | 0.4379 | 0 |
| 0.3140 | 0.4981 | 0.4774 | 1 |
| 0.1879 | 0.5580 | 0.5127 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
beomi/kollama-7b
|
beomi
| 2023-07-11T01:18:13Z | 71 | 10 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"KoLLAMA",
"KoreanGPT",
"ko",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-10T15:43:35Z |
---
license: mit
language:
- ko
- en
metrics:
- perplexity
- accuracy
pipeline_tag: text-generation
tags:
- llama
- KoLLAMA
- KoreanGPT
---
> 🚧 Note: this repo is under construction 🚧
## Todo
✅ - finished
⏳ - currently working on it
- ✅ Train new BBPE Tokenizer
- ✅ Test train code on TPUv4 Pods (with model parallel)
- ✅ Converting test (jax to PyTorch)
- ✅ LM train validation on minimal dataset (1 sentence 1000 step)
- ⏳ Build Data Shuffler (curriculum learning)
- ⏳ Train 7B Model
- Train 13B Model
- Train 33B Model
- Train 65B Model
# KoLLaMA Model Card
KoLLaMA (7B) is trained on a Korean/English/Code dataset with the LLaMA architecture via JAX,
with the warm support of the [Google TPU Research Cloud program](https://sites.research.google/trc/about/), which provided part of the computation resources.
## Model details
**Researcher developing the model**
Junbum Lee (aka Beomi)
**Model date**
KoLLaMA was trained starting from 2022.04.
**Model version**
This is alpha version of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
(This repo contains the 7B model!)
**Paper or resources for more information**
More information can be found in the paper “LLaMA, Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
More info for KoAlpaca:
[TBD]
**Citations details**
KoLLAMA: [TBD]
LLAMA: https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
MIT
**Where to send questions or comments about the model**
Questions and comments about KoLLaMA can be sent via the [GitHub repository](https://github.com/beomi/KoLLAMA) of the project , by opening an issue.
## Intended use
**Primary intended uses**
The primary use of KoLLaMA is research on Korean open-source large language models.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
## Evaluation datasets
[TBD]
## Training dataset
[TBD]
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
|
lucs1265/distilbert-base-uncased-finetuned-cola
|
lucs1265
| 2023-07-11T01:11:57Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-11T01:06:54Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: lucs1265/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# lucs1265/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1898
- Validation Loss: 0.5233
- Train Matthews Correlation: 0.5286
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5194 | 0.4536 | 0.4725 | 0 |
| 0.3249 | 0.4763 | 0.4867 | 1 |
| 0.1898 | 0.5233 | 0.5286 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
mrovejaxd/ABL_b
|
mrovejaxd
| 2023-07-11T01:07:35Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-11T00:07:08Z |
---
tags:
- generated_from_trainer
model-index:
- name: ABL_b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ABL_b
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0+cu117
- Datasets 2.12.0
- Tokenizers 0.13.2
|
casque/Windrunnerv0.2
|
casque
| 2023-07-11T01:03:37Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-11T01:00:46Z |
---
license: creativeml-openrail-m
---
|
hopkins/strict-small-4
|
hopkins
| 2023-07-11T00:43:51Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-13T21:25:31Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: strict-small-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# strict-small-4
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8588
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 9
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.9925 | 1.83 | 1000 | 4.2033 |
| 3.7647 | 3.67 | 2000 | 3.9152 |
| 3.3569 | 5.5 | 3000 | 3.8495 |
| 3.0079 | 7.34 | 4000 | 3.8588 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
casque/CrystalMaidenv0.2
|
casque
| 2023-07-11T00:42:48Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-11T00:39:34Z |
---
license: creativeml-openrail-m
---
|
ALM-AHME/swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-LungCancer-LC25000-AH
|
ALM-AHME
| 2023-07-11T00:40:15Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swinv2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-10T02:43:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-LungCancer-LC25000-AH
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: Augmented-Final
split: train
args: Augmented-Final
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-LungCancer-LC25000-AH
This model is a fine-tuned version of [microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft](https://huggingface.co/microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0002
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.5
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0929 | 1.0 | 281 | 0.0919 | 0.9657 |
| 0.0908 | 2.0 | 562 | 0.0127 | 0.9967 |
| 0.0525 | 3.0 | 843 | 0.0133 | 0.9947 |
| 0.1301 | 4.0 | 1125 | 0.0270 | 0.9927 |
| 0.0624 | 5.0 | 1406 | 0.0064 | 0.9973 |
| 0.0506 | 6.0 | 1687 | 0.0025 | 0.999 |
| 0.0001 | 6.99 | 1967 | 0.0002 | 1.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
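A minimal usage sketch: since this is an image-classification fine-tune, the checkpoint should load with the generic `transformers` pipeline. The image path below is a placeholder.

```python
from transformers import pipeline

# Load the fine-tuned SwinV2 classifier from the Hub.
classifier = pipeline(
    "image-classification",
    model="ALM-AHME/swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-LungCancer-LC25000-AH",
)

# "slide.png" is a placeholder path to a local histopathology image.
print(classifier("slide.png"))
```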
|
Corran/all_mini_lm_paraphrase_L3_v2_12tr_5t
|
Corran
| 2023-07-11T00:37:54Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-07-11T00:37:49Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# Corran/all_mini_lm_paraphrase_L3_v2_12tr_5t
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("Corran/all_mini_lm_paraphrase_L3_v2_12tr_5t")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
jpherrerap/ner-bert-base-spanish-wwm-uncased
|
jpherrerap
| 2023-07-11T00:35:25Z | 125 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"es",
"dataset:jpherrerap/competencia2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-10T23:53:37Z |
---
language:
- es
tags:
- generated_from_trainer
datasets:
- jpherrerap/competencia2
model-index:
- name: ner-bert-base-spanish-wwm-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner-bert-base-spanish-wwm-uncased
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on the jpherrerap/competencia2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5112
- Body Part Precision: 0.0
- Body Part Recall: 0.0
- Body Part F1: 0.0
- Body Part Number: 0
- Disease Precision: 0.0
- Disease Recall: 0.0
- Disease F1: 0.0
- Disease Number: 0
- Family Member Precision: 0.0
- Family Member Recall: 0.0
- Family Member F1: 0.0
- Family Member Number: 0
- Medication Precision: 0.0
- Medication Recall: 0.0
- Medication F1: 0.0
- Medication Number: 0
- Procedure Precision: 0.0
- Procedure Recall: 0.0
- Procedure F1: 0.0
- Procedure Number: 0
- Overall Precision: 0.0
- Overall Recall: 0.0
- Overall F1: 0.0
- Overall Accuracy: 0.6713
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 13
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Body Part Precision | Body Part Recall | Body Part F1 | Body Part Number | Disease Precision | Disease Recall | Disease F1 | Disease Number | Family Member Precision | Family Member Recall | Family Member F1 | Family Member Number | Medication Precision | Medication Recall | Medication F1 | Medication Number | Procedure Precision | Procedure Recall | Procedure F1 | Procedure Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------------------:|:--------------------:|:----------------:|:--------------------:|:--------------------:|:-----------------:|:-------------:|:-----------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.3372 | 1.0 | 1004 | 1.5112 | 0.0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0.0 | 0.6713 |
| 0.1611 | 2.0 | 2008 | 1.7235 | 0.0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0.0 | 0.6705 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
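To inspect the model's predictions despite the zero entity scores reported above, it can be loaded with the token-classification pipeline; the example sentence is invented for illustration.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="jpherrerap/ner-bert-base-spanish-wwm-uncased",
    aggregation_strategy="simple",  # group sub-tokens into entity spans
)

print(ner("El paciente presenta dolor en la rodilla izquierda y toma paracetamol."))
```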
|
Unmand/procare_business_unit
|
Unmand
| 2023-07-11T00:33:12Z | 4 | 0 |
spacy
|
[
"spacy",
"text-classification",
"en",
"region:us"
] |
text-classification
| 2023-07-11T00:31:49Z |
---
tags:
- spacy
- text-classification
language:
- en
model-index:
- name: en_procare_business_unit
results: []
---
| Feature | Description |
| --- | --- |
| **Name** | `en_procare_business_unit` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.5.4,<3.6.0` |
| **Default Pipeline** | `textcat_multilabel` |
| **Components** | `textcat_multilabel` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (30 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`textcat_multilabel`** | `Health and Wellbeing`, `NT Consulting`, `SA Workers Comp`, `VIC Workers Comp`, `ACT Consultancy`, `NSW Consulting`, `Staff Cards`, `NT MAC`, `NSW CTP`, `NSW Workers Comp`, `SA Consulting`, `ACT Workers Comp`, `Life and A&H`, `QLD Workers Comp`, `NT Workers Comp`, `Treatment`, `State Authorities Superannuation Scheme`, `ACT CTP`, `NULL`, `National Consulting`, `WA Workers Comp`, `QLD CTP`, `VIC TAC`, `WA Consulting`, `TAS Consulting`, `QLD Consulting`, `VIC Consulting`, `Comcare`, `TAS Workers Comp`, `SA CTP` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `CATS_SCORE` | 81.21 |
| `CATS_MICRO_P` | 89.37 |
| `CATS_MICRO_R` | 59.54 |
| `CATS_MICRO_F` | 71.47 |
| `CATS_MACRO_P` | 67.87 |
| `CATS_MACRO_R` | 33.52 |
| `CATS_MACRO_F` | 42.52 |
| `CATS_MACRO_AUC` | 81.21 |
| `TEXTCAT_MULTILABEL_LOSS` | 73.72 |
|
layoric/openllama-7b-qlora-orca
|
layoric
| 2023-07-11T00:31:19Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-09T23:58:03Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
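### Loading the adapter

A loading sketch following the usual PEFT workflow: the adapter config stores the base model id, so it does not need to be hard-coded here. Quantization flags are omitted for brevity.

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "layoric/openllama-7b-qlora-orca"

# The adapter config records which base model the LoRA weights were trained on.
config = PeftConfig.from_pretrained(adapter_id)
base = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Attach the QLoRA adapter weights on top of the base model.
model = PeftModel.from_pretrained(base, adapter_id)
```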
|
bobobert4/poca-SoccerTwos
|
bobobert4
| 2023-07-11T00:18:04Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-07-11T00:16:06Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: bobobert4/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
bastianchinchon/nominal-groups-recognition-beto-clinical-wl-es
|
bastianchinchon
| 2023-07-10T23:58:42Z | 107 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"es",
"dataset:bastianchinchon/spanish_nominal_groups_conll2003",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-10T22:32:12Z |
---
language:
- es
tags:
- generated_from_trainer
datasets:
- bastianchinchon/spanish_nominal_groups_conll2003
model-index:
- name: nominal-groups-recognition-beto-clinical-wl-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nominal-groups-recognition-beto-clinical-wl-es
This model is a fine-tuned version of [plncmm/beto-clinical-wl-es](https://huggingface.co/plncmm/beto-clinical-wl-es) on the bastianchinchon/spanish_nominal_groups_conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2338
- Body Part Precision: 0.7894
- Body Part Recall: 0.8257
- Body Part F1: 0.8071
- Body Part Number: 413
- Disease Precision: 0.7790
- Disease Recall: 0.8133
- Disease F1: 0.7958
- Disease Number: 975
- Family Member Precision: 0.8286
- Family Member Recall: 0.9667
- Family Member F1: 0.8923
- Family Member Number: 30
- Medication Precision: 0.8913
- Medication Recall: 0.8817
- Medication F1: 0.8865
- Medication Number: 93
- Procedure Precision: 0.7130
- Procedure Recall: 0.7910
- Procedure F1: 0.75
- Procedure Number: 311
- Overall Precision: 0.7758
- Overall Recall: 0.8183
- Overall F1: 0.7965
- Overall Accuracy: 0.9382
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 13
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Body Part Precision | Body Part Recall | Body Part F1 | Body Part Number | Disease Precision | Disease Recall | Disease F1 | Disease Number | Family Member Precision | Family Member Recall | Family Member F1 | Family Member Number | Medication Precision | Medication Recall | Medication F1 | Medication Number | Procedure Precision | Procedure Recall | Procedure F1 | Procedure Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------------------:|:--------------------:|:----------------:|:--------------------:|:--------------------:|:-----------------:|:-------------:|:-----------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.2998 | 1.0 | 1004 | 0.2127 | 0.7460 | 0.7893 | 0.7671 | 413 | 0.7612 | 0.7815 | 0.7713 | 975 | 0.9062 | 0.9667 | 0.9355 | 30 | 0.8462 | 0.8280 | 0.8370 | 93 | 0.6583 | 0.7556 | 0.7036 | 311 | 0.7450 | 0.7843 | 0.7642 | 0.9331 |
| 0.1566 | 2.0 | 2008 | 0.2278 | 0.7780 | 0.8232 | 0.8 | 413 | 0.7847 | 0.8 | 0.7923 | 975 | 0.8529 | 0.9667 | 0.9062 | 30 | 0.8710 | 0.8710 | 0.8710 | 93 | 0.7346 | 0.7653 | 0.7496 | 311 | 0.7800 | 0.8057 | 0.7927 | 0.9367 |
| 0.1089 | 3.0 | 3012 | 0.2338 | 0.7894 | 0.8257 | 0.8071 | 413 | 0.7790 | 0.8133 | 0.7958 | 975 | 0.8286 | 0.9667 | 0.8923 | 30 | 0.8913 | 0.8817 | 0.8865 | 93 | 0.7130 | 0.7910 | 0.75 | 311 | 0.7758 | 0.8183 | 0.7965 | 0.9382 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
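A minimal inference sketch; the clinical sentence is invented for illustration.

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_id = "bastianchinchon/nominal-groups-recognition-beto-clinical-wl-es"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

text = "Paciente con dolor abdominal, se indica omeprazol y control con ecografía."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring label id of each token back to its tag name.
predictions = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, pred in zip(tokens, predictions):
    print(token, model.config.id2label[int(pred)])
```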
|
jz0214/sd-class-butterflies-64
|
jz0214
| 2023-07-10T23:52:24Z | 30 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-07-10T23:50:42Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('jz0214/sd-class-butterflies-64')
image = pipeline().images[0]
image
```
|
aliceBG/distilbert-base-uncased-finetuned-cola
|
aliceBG
| 2023-07-10T23:38:28Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-09T23:52:39Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: aliceBG/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# aliceBG/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1834
- Validation Loss: 0.5540
- Train Matthews Correlation: 0.5495
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5170 | 0.4723 | 0.4122 | 0 |
| 0.3177 | 0.4714 | 0.5232 | 1 |
| 0.1834 | 0.5540 | 0.5495 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
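An inference sketch for the TensorFlow checkpoint; the label mapping mentioned in the comment follows the usual CoLA convention and is an assumption, since the card does not name the dataset.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "aliceBG/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("The book was read by the student.", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)

# Assumption: index 1 = "acceptable", index 0 = "unacceptable" (standard CoLA convention).
print(probs.numpy())
```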
|
digiplay/Juggernaut_final
|
digiplay
| 2023-07-10T23:21:23Z | 1,591 | 15 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-10T22:56:03Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/46422?modelVersionId=114770
Sample image I made through Hugging Face's inference API:

Original author's demo images:




|
jz0214/sd-class-butterflies-32
|
jz0214
| 2023-07-10T23:09:47Z | 30 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-07-10T23:08:46Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('jz0214/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
PistonPalacios/Piston
|
PistonPalacios
| 2023-07-10T23:04:10Z | 0 | 0 |
diffusers
|
[
"diffusers",
"legal",
"es",
"dataset:fka/awesome-chatgpt-prompts",
"doi:10.57967/hf/0876",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-10T22:50:18Z |
---
license: creativeml-openrail-m
datasets:
- fka/awesome-chatgpt-prompts
language:
- es
library_name: diffusers
tags:
- legal
---
|
trevorj/dqn-SpaceInvadersNoFrameskip-v4
|
trevorj
| 2023-07-10T22:41:49Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T22:41:13Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 523.00 +/- 142.73
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga trevorj -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga trevorj -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga trevorj
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
jordyvl/vit-small_tobacco3482_kd_CEKD_t5.0_a0.9
|
jordyvl
| 2023-07-10T22:40:13Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-10T22:00:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-small_tobacco3482_kd_CEKD_t5.0_a0.9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-small_tobacco3482_kd_CEKD_t5.0_a0.9
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5373
- Accuracy: 0.85
- Brier Loss: 0.2432
- Nll: 1.1157
- F1 Micro: 0.85
- F1 Macro: 0.8450
- Ece: 0.1621
- Aurc: 0.0427
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 2.1036 | 0.215 | 0.8753 | 5.3195 | 0.2150 | 0.1264 | 0.2571 | 0.6923 |
| No log | 2.0 | 14 | 1.6952 | 0.405 | 0.7407 | 3.4929 | 0.405 | 0.2416 | 0.2907 | 0.4040 |
| No log | 3.0 | 21 | 1.1843 | 0.62 | 0.5633 | 2.0113 | 0.62 | 0.5725 | 0.2740 | 0.2014 |
| No log | 4.0 | 28 | 0.8797 | 0.71 | 0.4080 | 1.7043 | 0.7100 | 0.6683 | 0.2024 | 0.1125 |
| No log | 5.0 | 35 | 0.8570 | 0.715 | 0.3837 | 1.6476 | 0.715 | 0.7280 | 0.2189 | 0.1079 |
| No log | 6.0 | 42 | 0.7484 | 0.775 | 0.3285 | 1.5962 | 0.775 | 0.7668 | 0.1873 | 0.0816 |
| No log | 7.0 | 49 | 0.7337 | 0.79 | 0.3131 | 1.5377 | 0.79 | 0.7779 | 0.1904 | 0.0771 |
| No log | 8.0 | 56 | 0.6709 | 0.795 | 0.3012 | 1.2156 | 0.795 | 0.7776 | 0.1939 | 0.0761 |
| No log | 9.0 | 63 | 0.6901 | 0.795 | 0.3069 | 1.4725 | 0.795 | 0.7916 | 0.1882 | 0.0769 |
| No log | 10.0 | 70 | 0.7960 | 0.75 | 0.3586 | 1.4426 | 0.75 | 0.7406 | 0.1868 | 0.0976 |
| No log | 11.0 | 77 | 0.7489 | 0.77 | 0.3296 | 1.6202 | 0.7700 | 0.7794 | 0.2020 | 0.0878 |
| No log | 12.0 | 84 | 0.7068 | 0.785 | 0.3270 | 1.4127 | 0.785 | 0.7812 | 0.1922 | 0.0759 |
| No log | 13.0 | 91 | 0.6687 | 0.79 | 0.3050 | 1.3820 | 0.79 | 0.7945 | 0.1818 | 0.0625 |
| No log | 14.0 | 98 | 0.6052 | 0.79 | 0.2854 | 1.0602 | 0.79 | 0.7716 | 0.1702 | 0.0590 |
| No log | 15.0 | 105 | 0.6369 | 0.795 | 0.2959 | 1.0580 | 0.795 | 0.7953 | 0.1709 | 0.0603 |
| No log | 16.0 | 112 | 0.6204 | 0.81 | 0.2816 | 1.1886 | 0.81 | 0.8050 | 0.1657 | 0.0702 |
| No log | 17.0 | 119 | 0.5648 | 0.83 | 0.2475 | 1.2506 | 0.83 | 0.8241 | 0.1347 | 0.0612 |
| No log | 18.0 | 126 | 0.5849 | 0.83 | 0.2672 | 1.2245 | 0.83 | 0.8155 | 0.1646 | 0.0601 |
| No log | 19.0 | 133 | 0.5536 | 0.835 | 0.2475 | 1.0514 | 0.835 | 0.8254 | 0.1683 | 0.0531 |
| No log | 20.0 | 140 | 0.5689 | 0.835 | 0.2513 | 1.2369 | 0.835 | 0.8437 | 0.1722 | 0.0489 |
| No log | 21.0 | 147 | 0.5540 | 0.83 | 0.2485 | 1.2139 | 0.83 | 0.8165 | 0.1641 | 0.0608 |
| No log | 22.0 | 154 | 0.5352 | 0.835 | 0.2402 | 1.0108 | 0.835 | 0.8295 | 0.1408 | 0.0430 |
| No log | 23.0 | 161 | 0.5380 | 0.84 | 0.2403 | 1.2280 | 0.8400 | 0.8347 | 0.1405 | 0.0436 |
| No log | 24.0 | 168 | 0.5422 | 0.835 | 0.2471 | 1.0204 | 0.835 | 0.8324 | 0.1606 | 0.0445 |
| No log | 25.0 | 175 | 0.5342 | 0.85 | 0.2404 | 1.0767 | 0.85 | 0.8487 | 0.1469 | 0.0432 |
| No log | 26.0 | 182 | 0.5374 | 0.84 | 0.2429 | 1.0774 | 0.8400 | 0.8334 | 0.1420 | 0.0462 |
| No log | 27.0 | 189 | 0.5311 | 0.85 | 0.2395 | 1.0748 | 0.85 | 0.8487 | 0.1439 | 0.0446 |
| No log | 28.0 | 196 | 0.5298 | 0.85 | 0.2384 | 1.1337 | 0.85 | 0.8487 | 0.1570 | 0.0437 |
| No log | 29.0 | 203 | 0.5387 | 0.845 | 0.2435 | 1.1319 | 0.845 | 0.8424 | 0.1539 | 0.0458 |
| No log | 30.0 | 210 | 0.5361 | 0.85 | 0.2430 | 1.0648 | 0.85 | 0.8450 | 0.1679 | 0.0431 |
| No log | 31.0 | 217 | 0.5339 | 0.85 | 0.2413 | 1.0676 | 0.85 | 0.8487 | 0.1646 | 0.0428 |
| No log | 32.0 | 224 | 0.5345 | 0.85 | 0.2421 | 1.0709 | 0.85 | 0.8487 | 0.1476 | 0.0440 |
| No log | 33.0 | 231 | 0.5343 | 0.85 | 0.2421 | 1.1236 | 0.85 | 0.8450 | 0.1621 | 0.0431 |
| No log | 34.0 | 238 | 0.5353 | 0.845 | 0.2426 | 1.1244 | 0.845 | 0.8424 | 0.1710 | 0.0428 |
| No log | 35.0 | 245 | 0.5346 | 0.85 | 0.2423 | 1.0649 | 0.85 | 0.8487 | 0.1520 | 0.0440 |
| No log | 36.0 | 252 | 0.5356 | 0.855 | 0.2422 | 1.1241 | 0.855 | 0.8517 | 0.1814 | 0.0429 |
| No log | 37.0 | 259 | 0.5357 | 0.85 | 0.2426 | 1.1237 | 0.85 | 0.8450 | 0.1670 | 0.0425 |
| No log | 38.0 | 266 | 0.5356 | 0.845 | 0.2426 | 1.1226 | 0.845 | 0.8419 | 0.1607 | 0.0435 |
| No log | 39.0 | 273 | 0.5347 | 0.855 | 0.2420 | 1.0739 | 0.855 | 0.8517 | 0.1597 | 0.0427 |
| No log | 40.0 | 280 | 0.5356 | 0.855 | 0.2423 | 1.1203 | 0.855 | 0.8517 | 0.1676 | 0.0435 |
| No log | 41.0 | 287 | 0.5365 | 0.85 | 0.2431 | 1.1199 | 0.85 | 0.8450 | 0.1780 | 0.0429 |
| No log | 42.0 | 294 | 0.5356 | 0.85 | 0.2426 | 1.1173 | 0.85 | 0.8450 | 0.1653 | 0.0430 |
| No log | 43.0 | 301 | 0.5363 | 0.85 | 0.2428 | 1.1189 | 0.85 | 0.8450 | 0.1550 | 0.0435 |
| No log | 44.0 | 308 | 0.5345 | 0.85 | 0.2418 | 1.1193 | 0.85 | 0.8450 | 0.1590 | 0.0428 |
| No log | 45.0 | 315 | 0.5374 | 0.85 | 0.2435 | 1.1202 | 0.85 | 0.8450 | 0.1633 | 0.0435 |
| No log | 46.0 | 322 | 0.5355 | 0.85 | 0.2423 | 1.1183 | 0.85 | 0.8450 | 0.1564 | 0.0428 |
| No log | 47.0 | 329 | 0.5354 | 0.85 | 0.2425 | 1.1176 | 0.85 | 0.8450 | 0.1509 | 0.0429 |
| No log | 48.0 | 336 | 0.5369 | 0.85 | 0.2433 | 1.1177 | 0.85 | 0.8450 | 0.1517 | 0.0432 |
| No log | 49.0 | 343 | 0.5361 | 0.85 | 0.2428 | 1.1182 | 0.85 | 0.8450 | 0.1490 | 0.0428 |
| No log | 50.0 | 350 | 0.5364 | 0.85 | 0.2431 | 1.1179 | 0.85 | 0.8450 | 0.1654 | 0.0430 |
| No log | 51.0 | 357 | 0.5365 | 0.85 | 0.2428 | 1.1185 | 0.85 | 0.8450 | 0.1729 | 0.0432 |
| No log | 52.0 | 364 | 0.5364 | 0.85 | 0.2430 | 1.1165 | 0.85 | 0.8450 | 0.1614 | 0.0429 |
| No log | 53.0 | 371 | 0.5362 | 0.85 | 0.2429 | 1.1167 | 0.85 | 0.8450 | 0.1694 | 0.0430 |
| No log | 54.0 | 378 | 0.5369 | 0.85 | 0.2432 | 1.1170 | 0.85 | 0.8450 | 0.1597 | 0.0432 |
| No log | 55.0 | 385 | 0.5368 | 0.85 | 0.2430 | 1.1168 | 0.85 | 0.8450 | 0.1670 | 0.0429 |
| No log | 56.0 | 392 | 0.5367 | 0.85 | 0.2430 | 1.1180 | 0.85 | 0.8450 | 0.1619 | 0.0430 |
| No log | 57.0 | 399 | 0.5364 | 0.85 | 0.2429 | 1.1163 | 0.85 | 0.8450 | 0.1649 | 0.0429 |
| No log | 58.0 | 406 | 0.5364 | 0.85 | 0.2430 | 1.1156 | 0.85 | 0.8450 | 0.1611 | 0.0429 |
| No log | 59.0 | 413 | 0.5365 | 0.85 | 0.2428 | 1.1163 | 0.85 | 0.8450 | 0.1591 | 0.0429 |
| No log | 60.0 | 420 | 0.5364 | 0.85 | 0.2429 | 1.1155 | 0.85 | 0.8450 | 0.1588 | 0.0429 |
| No log | 61.0 | 427 | 0.5370 | 0.85 | 0.2432 | 1.1158 | 0.85 | 0.8450 | 0.1772 | 0.0432 |
| No log | 62.0 | 434 | 0.5367 | 0.85 | 0.2429 | 1.1167 | 0.85 | 0.8450 | 0.1622 | 0.0429 |
| No log | 63.0 | 441 | 0.5362 | 0.85 | 0.2428 | 1.1162 | 0.85 | 0.8450 | 0.1503 | 0.0428 |
| No log | 64.0 | 448 | 0.5372 | 0.85 | 0.2433 | 1.1161 | 0.85 | 0.8450 | 0.1616 | 0.0432 |
| No log | 65.0 | 455 | 0.5371 | 0.85 | 0.2431 | 1.1162 | 0.85 | 0.8450 | 0.1499 | 0.0429 |
| No log | 66.0 | 462 | 0.5367 | 0.85 | 0.2430 | 1.1160 | 0.85 | 0.8450 | 0.1591 | 0.0427 |
| No log | 67.0 | 469 | 0.5367 | 0.85 | 0.2430 | 1.1164 | 0.85 | 0.8450 | 0.1562 | 0.0428 |
| No log | 68.0 | 476 | 0.5368 | 0.85 | 0.2430 | 1.1168 | 0.85 | 0.8450 | 0.1556 | 0.0427 |
| No log | 69.0 | 483 | 0.5368 | 0.85 | 0.2431 | 1.1158 | 0.85 | 0.8450 | 0.1593 | 0.0428 |
| No log | 70.0 | 490 | 0.5372 | 0.85 | 0.2432 | 1.1162 | 0.85 | 0.8450 | 0.1628 | 0.0428 |
| No log | 71.0 | 497 | 0.5371 | 0.85 | 0.2432 | 1.1163 | 0.85 | 0.8450 | 0.1599 | 0.0429 |
| 0.1708 | 72.0 | 504 | 0.5370 | 0.85 | 0.2430 | 1.1161 | 0.85 | 0.8450 | 0.1559 | 0.0430 |
| 0.1708 | 73.0 | 511 | 0.5372 | 0.85 | 0.2433 | 1.1154 | 0.85 | 0.8450 | 0.1556 | 0.0428 |
| 0.1708 | 74.0 | 518 | 0.5370 | 0.85 | 0.2429 | 1.1165 | 0.85 | 0.8450 | 0.1540 | 0.0428 |
| 0.1708 | 75.0 | 525 | 0.5371 | 0.85 | 0.2431 | 1.1161 | 0.85 | 0.8450 | 0.1616 | 0.0427 |
| 0.1708 | 76.0 | 532 | 0.5369 | 0.85 | 0.2431 | 1.1161 | 0.85 | 0.8450 | 0.1619 | 0.0427 |
| 0.1708 | 77.0 | 539 | 0.5369 | 0.85 | 0.2430 | 1.1156 | 0.85 | 0.8450 | 0.1623 | 0.0429 |
| 0.1708 | 78.0 | 546 | 0.5372 | 0.85 | 0.2432 | 1.1158 | 0.85 | 0.8450 | 0.1619 | 0.0427 |
| 0.1708 | 79.0 | 553 | 0.5375 | 0.85 | 0.2433 | 1.1162 | 0.85 | 0.8450 | 0.1688 | 0.0429 |
| 0.1708 | 80.0 | 560 | 0.5372 | 0.85 | 0.2432 | 1.1160 | 0.85 | 0.8450 | 0.1623 | 0.0429 |
| 0.1708 | 81.0 | 567 | 0.5373 | 0.85 | 0.2432 | 1.1162 | 0.85 | 0.8450 | 0.1620 | 0.0428 |
| 0.1708 | 82.0 | 574 | 0.5374 | 0.85 | 0.2433 | 1.1160 | 0.85 | 0.8450 | 0.1622 | 0.0428 |
| 0.1708 | 83.0 | 581 | 0.5372 | 0.85 | 0.2432 | 1.1159 | 0.85 | 0.8450 | 0.1622 | 0.0428 |
| 0.1708 | 84.0 | 588 | 0.5371 | 0.85 | 0.2431 | 1.1157 | 0.85 | 0.8450 | 0.1621 | 0.0427 |
| 0.1708 | 85.0 | 595 | 0.5372 | 0.85 | 0.2432 | 1.1158 | 0.85 | 0.8450 | 0.1687 | 0.0426 |
| 0.1708 | 86.0 | 602 | 0.5372 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1619 | 0.0426 |
| 0.1708 | 87.0 | 609 | 0.5374 | 0.85 | 0.2432 | 1.1159 | 0.85 | 0.8450 | 0.1687 | 0.0428 |
| 0.1708 | 88.0 | 616 | 0.5373 | 0.85 | 0.2432 | 1.1160 | 0.85 | 0.8450 | 0.1620 | 0.0427 |
| 0.1708 | 89.0 | 623 | 0.5373 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1620 | 0.0427 |
| 0.1708 | 90.0 | 630 | 0.5373 | 0.85 | 0.2432 | 1.1156 | 0.85 | 0.8450 | 0.1620 | 0.0427 |
| 0.1708 | 91.0 | 637 | 0.5372 | 0.85 | 0.2432 | 1.1156 | 0.85 | 0.8450 | 0.1620 | 0.0427 |
| 0.1708 | 92.0 | 644 | 0.5373 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1620 | 0.0427 |
| 0.1708 | 93.0 | 651 | 0.5372 | 0.85 | 0.2432 | 1.1156 | 0.85 | 0.8450 | 0.1620 | 0.0427 |
| 0.1708 | 94.0 | 658 | 0.5373 | 0.85 | 0.2432 | 1.1158 | 0.85 | 0.8450 | 0.1620 | 0.0427 |
| 0.1708 | 95.0 | 665 | 0.5373 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1621 | 0.0427 |
| 0.1708 | 96.0 | 672 | 0.5372 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1621 | 0.0427 |
| 0.1708 | 97.0 | 679 | 0.5372 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1620 | 0.0427 |
| 0.1708 | 98.0 | 686 | 0.5373 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1621 | 0.0427 |
| 0.1708 | 99.0 | 693 | 0.5373 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1621 | 0.0427 |
| 0.1708 | 100.0 | 700 | 0.5373 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1621 | 0.0427 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
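Because the card reports calibration metrics (Brier loss, ECE) alongside accuracy, the per-class probabilities are the quantity of interest at inference time. A minimal sketch for obtaining them; the image path is a placeholder.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "jordyvl/vit-small_tobacco3482_kd_CEKD_t5.0_a0.9"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

# "document.png" is a placeholder path to a scanned document image.
image = Image.open("document.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

# Calibration metrics such as ECE and Brier loss are computed from these probabilities.
top = torch.argmax(probs)
print(model.config.id2label[int(top)], float(probs[top]))
```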
|
Raizel123/SNoonzlora
|
Raizel123
| 2023-07-10T22:35:30Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-10T22:32:26Z |
---
license: creativeml-openrail-m
---
|
Raizel123/Mbyonglora
|
Raizel123
| 2023-07-10T22:31:20Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-10T22:27:47Z |
---
license: creativeml-openrail-m
---
|
jordyvl/vit-small_rvl_cdip_100_examples_per_class_kd_MSE
|
jordyvl
| 2023-07-10T22:30:03Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-10T21:13:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-small_rvl_cdip_100_examples_per_class_kd_MSE
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-small_rvl_cdip_100_examples_per_class_kd_MSE
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4673
- Accuracy: 0.6425
- Brier Loss: 0.4763
- Nll: 3.0680
- F1 Micro: 0.6425
- F1 Macro: 0.6485
- Ece: 0.1946
- Aurc: 0.1381
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 25 | 4.4851 | 0.06 | 0.9565 | 13.8276 | 0.06 | 0.0556 | 0.1688 | 0.9385 |
| No log | 2.0 | 50 | 3.5619 | 0.3775 | 0.7827 | 6.2649 | 0.3775 | 0.3611 | 0.2331 | 0.3882 |
| No log | 3.0 | 75 | 2.8990 | 0.5025 | 0.6453 | 4.7376 | 0.5025 | 0.4858 | 0.1689 | 0.2658 |
| No log | 4.0 | 100 | 2.5972 | 0.515 | 0.5980 | 4.4210 | 0.515 | 0.4895 | 0.1605 | 0.2249 |
| No log | 5.0 | 125 | 2.4353 | 0.56 | 0.5762 | 3.4885 | 0.56 | 0.5566 | 0.1548 | 0.2100 |
| No log | 6.0 | 150 | 2.4157 | 0.5475 | 0.5864 | 3.8261 | 0.5475 | 0.5323 | 0.1837 | 0.2167 |
| No log | 7.0 | 175 | 2.1786 | 0.6075 | 0.5203 | 3.4565 | 0.6075 | 0.6103 | 0.1403 | 0.1670 |
| No log | 8.0 | 200 | 2.1082 | 0.63 | 0.5040 | 3.3570 | 0.63 | 0.6246 | 0.1580 | 0.1530 |
| No log | 9.0 | 225 | 2.0472 | 0.625 | 0.5042 | 3.8572 | 0.625 | 0.6184 | 0.1552 | 0.1530 |
| No log | 10.0 | 250 | 2.0589 | 0.6025 | 0.5468 | 3.5723 | 0.6025 | 0.5982 | 0.1781 | 0.1785 |
| No log | 11.0 | 275 | 1.8965 | 0.65 | 0.4755 | 3.4466 | 0.65 | 0.6497 | 0.1605 | 0.1475 |
| No log | 12.0 | 300 | 1.9014 | 0.6325 | 0.5066 | 3.0881 | 0.6325 | 0.6359 | 0.1658 | 0.1591 |
| No log | 13.0 | 325 | 1.7904 | 0.6175 | 0.5162 | 3.4673 | 0.6175 | 0.6141 | 0.1525 | 0.1598 |
| No log | 14.0 | 350 | 1.8624 | 0.625 | 0.5173 | 3.6824 | 0.625 | 0.6179 | 0.1567 | 0.1624 |
| No log | 15.0 | 375 | 1.7083 | 0.6625 | 0.4817 | 3.1296 | 0.6625 | 0.6686 | 0.1651 | 0.1405 |
| No log | 16.0 | 400 | 1.8848 | 0.59 | 0.5478 | 4.3761 | 0.59 | 0.5913 | 0.2083 | 0.1696 |
| No log | 17.0 | 425 | 1.7238 | 0.6125 | 0.5229 | 3.1232 | 0.6125 | 0.6052 | 0.1833 | 0.1553 |
| No log | 18.0 | 450 | 1.7126 | 0.625 | 0.5152 | 2.9267 | 0.625 | 0.6284 | 0.1747 | 0.1565 |
| No log | 19.0 | 475 | 1.6459 | 0.6275 | 0.5024 | 2.9078 | 0.6275 | 0.6219 | 0.1766 | 0.1527 |
| 1.0542 | 20.0 | 500 | 1.6029 | 0.6275 | 0.4855 | 3.0931 | 0.6275 | 0.6316 | 0.1720 | 0.1414 |
| 1.0542 | 21.0 | 525 | 1.6566 | 0.6525 | 0.4847 | 3.0998 | 0.6525 | 0.6479 | 0.1558 | 0.1438 |
| 1.0542 | 22.0 | 550 | 1.6169 | 0.645 | 0.4894 | 3.0081 | 0.645 | 0.6471 | 0.1687 | 0.1400 |
| 1.0542 | 23.0 | 575 | 1.5322 | 0.6525 | 0.4557 | 3.3587 | 0.6525 | 0.6520 | 0.1428 | 0.1247 |
| 1.0542 | 24.0 | 600 | 1.5991 | 0.6475 | 0.4787 | 2.9349 | 0.6475 | 0.6444 | 0.1580 | 0.1450 |
| 1.0542 | 25.0 | 625 | 1.5625 | 0.6375 | 0.4926 | 3.0245 | 0.6375 | 0.6378 | 0.1641 | 0.1433 |
| 1.0542 | 26.0 | 650 | 1.5366 | 0.64 | 0.4884 | 3.3388 | 0.64 | 0.6461 | 0.1595 | 0.1453 |
| 1.0542 | 27.0 | 675 | 1.5686 | 0.65 | 0.4765 | 3.5120 | 0.65 | 0.6504 | 0.1625 | 0.1359 |
| 1.0542 | 28.0 | 700 | 1.5562 | 0.6475 | 0.4817 | 3.0348 | 0.6475 | 0.6488 | 0.1459 | 0.1388 |
| 1.0542 | 29.0 | 725 | 1.5213 | 0.6475 | 0.4719 | 3.2628 | 0.6475 | 0.6475 | 0.1634 | 0.1326 |
| 1.0542 | 30.0 | 750 | 1.5492 | 0.6675 | 0.4730 | 3.1693 | 0.6675 | 0.6679 | 0.1469 | 0.1415 |
| 1.0542 | 31.0 | 775 | 1.5311 | 0.65 | 0.4896 | 3.0881 | 0.65 | 0.6504 | 0.1815 | 0.1380 |
| 1.0542 | 32.0 | 800 | 1.5556 | 0.6475 | 0.4821 | 3.1829 | 0.6475 | 0.6491 | 0.1640 | 0.1405 |
| 1.0542 | 33.0 | 825 | 1.5471 | 0.6375 | 0.4846 | 3.4190 | 0.6375 | 0.6407 | 0.1628 | 0.1415 |
| 1.0542 | 34.0 | 850 | 1.4809 | 0.6575 | 0.4714 | 2.9136 | 0.6575 | 0.6612 | 0.1729 | 0.1338 |
| 1.0542 | 35.0 | 875 | 1.5256 | 0.66 | 0.4773 | 3.2303 | 0.66 | 0.6650 | 0.1746 | 0.1368 |
| 1.0542 | 36.0 | 900 | 1.4929 | 0.6675 | 0.4671 | 3.2360 | 0.6675 | 0.6698 | 0.1698 | 0.1309 |
| 1.0542 | 37.0 | 925 | 1.4923 | 0.645 | 0.4880 | 3.0567 | 0.645 | 0.6564 | 0.1764 | 0.1395 |
| 1.0542 | 38.0 | 950 | 1.5038 | 0.665 | 0.4672 | 3.2116 | 0.665 | 0.6661 | 0.1588 | 0.1343 |
| 1.0542 | 39.0 | 975 | 1.4708 | 0.6625 | 0.4669 | 3.1420 | 0.6625 | 0.6675 | 0.1683 | 0.1301 |
| 0.0522 | 40.0 | 1000 | 1.5153 | 0.6475 | 0.4865 | 3.1796 | 0.6475 | 0.6447 | 0.1639 | 0.1400 |
| 0.0522 | 41.0 | 1025 | 1.4705 | 0.6575 | 0.4642 | 3.2196 | 0.6575 | 0.6626 | 0.1440 | 0.1308 |
| 0.0522 | 42.0 | 1050 | 1.4844 | 0.6575 | 0.4722 | 3.2445 | 0.6575 | 0.6595 | 0.1746 | 0.1328 |
| 0.0522 | 43.0 | 1075 | 1.4957 | 0.6425 | 0.4828 | 3.1456 | 0.6425 | 0.6468 | 0.1499 | 0.1417 |
| 0.0522 | 44.0 | 1100 | 1.5179 | 0.645 | 0.4910 | 3.3921 | 0.645 | 0.6470 | 0.1861 | 0.1433 |
| 0.0522 | 45.0 | 1125 | 1.4878 | 0.6425 | 0.4839 | 3.2139 | 0.6425 | 0.6478 | 0.1720 | 0.1403 |
| 0.0522 | 46.0 | 1150 | 1.4666 | 0.655 | 0.4741 | 2.9333 | 0.655 | 0.6601 | 0.1813 | 0.1347 |
| 0.0522 | 47.0 | 1175 | 1.4954 | 0.6575 | 0.4776 | 3.2102 | 0.6575 | 0.6604 | 0.1842 | 0.1390 |
| 0.0522 | 48.0 | 1200 | 1.4976 | 0.645 | 0.4856 | 3.1539 | 0.645 | 0.6493 | 0.1549 | 0.1407 |
| 0.0522 | 49.0 | 1225 | 1.4772 | 0.64 | 0.4780 | 2.9845 | 0.64 | 0.6445 | 0.1826 | 0.1388 |
| 0.0522 | 50.0 | 1250 | 1.4584 | 0.65 | 0.4703 | 3.0776 | 0.65 | 0.6533 | 0.1685 | 0.1352 |
| 0.0522 | 51.0 | 1275 | 1.4828 | 0.6325 | 0.4844 | 3.1425 | 0.6325 | 0.6377 | 0.1641 | 0.1409 |
| 0.0522 | 52.0 | 1300 | 1.4676 | 0.6525 | 0.4737 | 3.1483 | 0.6525 | 0.6565 | 0.1773 | 0.1358 |
| 0.0522 | 53.0 | 1325 | 1.4675 | 0.6475 | 0.4791 | 3.1411 | 0.6475 | 0.6515 | 0.1820 | 0.1388 |
| 0.0522 | 54.0 | 1350 | 1.4724 | 0.645 | 0.4764 | 3.0744 | 0.645 | 0.6499 | 0.1847 | 0.1382 |
| 0.0522 | 55.0 | 1375 | 1.4689 | 0.6425 | 0.4769 | 3.2256 | 0.6425 | 0.6476 | 0.1839 | 0.1376 |
| 0.0522 | 56.0 | 1400 | 1.4660 | 0.6425 | 0.4760 | 2.9907 | 0.6425 | 0.6479 | 0.1906 | 0.1378 |
| 0.0522 | 57.0 | 1425 | 1.4663 | 0.645 | 0.4757 | 3.0722 | 0.645 | 0.6514 | 0.1705 | 0.1367 |
| 0.0522 | 58.0 | 1450 | 1.4678 | 0.65 | 0.4770 | 3.0710 | 0.65 | 0.6546 | 0.1794 | 0.1371 |
| 0.0522 | 59.0 | 1475 | 1.4717 | 0.64 | 0.4786 | 3.0737 | 0.64 | 0.6455 | 0.1889 | 0.1392 |
| 0.0064 | 60.0 | 1500 | 1.4691 | 0.645 | 0.4768 | 3.0688 | 0.645 | 0.6499 | 0.1815 | 0.1378 |
| 0.0064 | 61.0 | 1525 | 1.4689 | 0.64 | 0.4767 | 3.0688 | 0.64 | 0.6452 | 0.1846 | 0.1382 |
| 0.0064 | 62.0 | 1550 | 1.4689 | 0.64 | 0.4770 | 3.0674 | 0.64 | 0.6455 | 0.1937 | 0.1383 |
| 0.0064 | 63.0 | 1575 | 1.4687 | 0.6425 | 0.4767 | 3.0700 | 0.6425 | 0.6485 | 0.1897 | 0.1381 |
| 0.0064 | 64.0 | 1600 | 1.4674 | 0.6425 | 0.4764 | 3.0675 | 0.6425 | 0.6472 | 0.1855 | 0.1375 |
| 0.0064 | 65.0 | 1625 | 1.4681 | 0.6425 | 0.4766 | 3.0694 | 0.6425 | 0.6485 | 0.1917 | 0.1381 |
| 0.0064 | 66.0 | 1650 | 1.4681 | 0.6425 | 0.4766 | 3.0687 | 0.6425 | 0.6472 | 0.1905 | 0.1378 |
| 0.0064 | 67.0 | 1675 | 1.4667 | 0.645 | 0.4757 | 3.0681 | 0.645 | 0.6505 | 0.1899 | 0.1375 |
| 0.0064 | 68.0 | 1700 | 1.4683 | 0.6425 | 0.4771 | 3.0686 | 0.6425 | 0.6474 | 0.1871 | 0.1379 |
| 0.0064 | 69.0 | 1725 | 1.4672 | 0.64 | 0.4760 | 3.0679 | 0.64 | 0.6455 | 0.1932 | 0.1380 |
| 0.0064 | 70.0 | 1750 | 1.4673 | 0.6425 | 0.4763 | 3.0683 | 0.6425 | 0.6474 | 0.1955 | 0.1376 |
| 0.0064 | 71.0 | 1775 | 1.4676 | 0.645 | 0.4763 | 3.0680 | 0.645 | 0.6505 | 0.1921 | 0.1376 |
| 0.0064 | 72.0 | 1800 | 1.4674 | 0.6425 | 0.4763 | 3.0683 | 0.6425 | 0.6474 | 0.1946 | 0.1376 |
| 0.0064 | 73.0 | 1825 | 1.4675 | 0.6425 | 0.4763 | 3.0682 | 0.6425 | 0.6474 | 0.1946 | 0.1377 |
| 0.0064 | 74.0 | 1850 | 1.4674 | 0.6425 | 0.4763 | 3.0682 | 0.6425 | 0.6485 | 0.1945 | 0.1380 |
| 0.0064 | 75.0 | 1875 | 1.4674 | 0.64 | 0.4763 | 3.0680 | 0.64 | 0.6455 | 0.1960 | 0.1380 |
| 0.0064 | 76.0 | 1900 | 1.4675 | 0.64 | 0.4764 | 3.0682 | 0.64 | 0.6455 | 0.1972 | 0.1381 |
| 0.0064 | 77.0 | 1925 | 1.4675 | 0.6425 | 0.4763 | 3.0681 | 0.6425 | 0.6485 | 0.1947 | 0.1380 |
| 0.0064 | 78.0 | 1950 | 1.4674 | 0.6425 | 0.4763 | 3.0681 | 0.6425 | 0.6485 | 0.1958 | 0.1381 |
| 0.0064 | 79.0 | 1975 | 1.4674 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6474 | 0.1935 | 0.1376 |
| 0.0 | 80.0 | 2000 | 1.4673 | 0.6425 | 0.4763 | 3.0681 | 0.6425 | 0.6485 | 0.1958 | 0.1380 |
| 0.0 | 81.0 | 2025 | 1.4674 | 0.6425 | 0.4763 | 3.0681 | 0.6425 | 0.6485 | 0.1946 | 0.1380 |
| 0.0 | 82.0 | 2050 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1935 | 0.1380 |
| 0.0 | 83.0 | 2075 | 1.4674 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 84.0 | 2100 | 1.4674 | 0.6425 | 0.4763 | 3.0681 | 0.6425 | 0.6485 | 0.1958 | 0.1381 |
| 0.0 | 85.0 | 2125 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 86.0 | 2150 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 87.0 | 2175 | 1.4673 | 0.6425 | 0.4763 | 3.0681 | 0.6425 | 0.6485 | 0.1958 | 0.1381 |
| 0.0 | 88.0 | 2200 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 89.0 | 2225 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 90.0 | 2250 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 91.0 | 2275 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 92.0 | 2300 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 93.0 | 2325 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 94.0 | 2350 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1909 | 0.1381 |
| 0.0 | 95.0 | 2375 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 96.0 | 2400 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 97.0 | 2425 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 98.0 | 2450 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 99.0 | 2475 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 100.0 | 2500 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.12.0
- Tokenizers 0.12.1
|
Raizel123/Alfilora
|
Raizel123
| 2023-07-10T22:23:13Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-10T22:18:35Z |
---
license: creativeml-openrail-m
---
|
NasimB/gpt2-dp-mod-datasets-txt-processing-rarity-all
|
NasimB
| 2023-07-10T22:14:46Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-10T19:52:59Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-dp-mod-datasets-txt-processing-rarity-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-dp-mod-datasets-txt-processing-rarity-all
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4242
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7606 | 0.29 | 500 | 5.6933 |
| 5.4375 | 0.59 | 1000 | 5.2559 |
| 5.0937 | 0.88 | 1500 | 5.0171 |
| 4.8204 | 1.18 | 2000 | 4.8701 |
| 4.6728 | 1.47 | 2500 | 4.7593 |
| 4.574 | 1.77 | 3000 | 4.6587 |
| 4.4456 | 2.06 | 3500 | 4.5885 |
| 4.258 | 2.36 | 4000 | 4.5468 |
| 4.2423 | 2.65 | 4500 | 4.4860 |
| 4.2036 | 2.94 | 5000 | 4.4302 |
| 3.9737 | 3.24 | 5500 | 4.4364 |
| 3.9439 | 3.53 | 6000 | 4.4019 |
| 3.9271 | 3.83 | 6500 | 4.3632 |
| 3.7901 | 4.12 | 7000 | 4.3689 |
| 3.6474 | 4.42 | 7500 | 4.3662 |
| 3.6414 | 4.71 | 8000 | 4.3472 |
| 3.6338 | 5.01 | 8500 | 4.3344 |
| 3.3764 | 5.3 | 9000 | 4.3618 |
| 3.3821 | 5.59 | 9500 | 4.3568 |
| 3.3777 | 5.89 | 10000 | 4.3513 |
| 3.2752 | 6.18 | 10500 | 4.3602 |
| 3.2228 | 6.48 | 11000 | 4.3652 |
| 3.2172 | 6.77 | 11500 | 4.3656 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
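Since the card reports only cross-entropy loss, the corresponding validation perplexity follows directly from it; a one-line check:

```python
import math

# The validation loss reported above is the mean cross-entropy in nats,
# so perplexity is simply its exponential.
print(math.exp(4.4242))  # ≈ 83.4
```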
|
MnLgt/swivel_inversion
|
MnLgt
| 2023-07-10T22:11:42Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-07-10T22:11:41Z |
---
license: mit
---
### swivel_inversion on Stable Diffusion
This is the `<swivel-chair>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:

























|
jordyvl/vit-small_tobacco3482_kd_CEKD_t5.0_a0.7
|
jordyvl
| 2023-07-10T21:59:33Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-10T21:19:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-small_tobacco3482_kd_CEKD_t5.0_a0.7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-small_tobacco3482_kd_CEKD_t5.0_a0.7
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4918
- Accuracy: 0.85
- Brier Loss: 0.2583
- Nll: 1.0894
- F1 Micro: 0.85
- F1 Macro: 0.8374
- Ece: 0.1917
- Aurc: 0.0470
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 1.8329 | 0.225 | 0.8761 | 5.2731 | 0.225 | 0.1384 | 0.2607 | 0.6977 |
| No log | 2.0 | 14 | 1.4785 | 0.405 | 0.7460 | 3.4067 | 0.405 | 0.2289 | 0.3097 | 0.4085 |
| No log | 3.0 | 21 | 1.0406 | 0.6 | 0.5725 | 1.8722 | 0.6 | 0.5345 | 0.3050 | 0.2010 |
| No log | 4.0 | 28 | 0.8087 | 0.725 | 0.4192 | 1.6096 | 0.7250 | 0.6767 | 0.2345 | 0.1149 |
| No log | 5.0 | 35 | 0.7666 | 0.735 | 0.3731 | 1.6189 | 0.735 | 0.7350 | 0.2377 | 0.1011 |
| No log | 6.0 | 42 | 0.6960 | 0.78 | 0.3413 | 1.5230 | 0.78 | 0.7592 | 0.2295 | 0.0868 |
| No log | 7.0 | 49 | 0.6490 | 0.805 | 0.3110 | 1.4861 | 0.805 | 0.7864 | 0.2138 | 0.0785 |
| No log | 8.0 | 56 | 0.6238 | 0.795 | 0.3069 | 1.2098 | 0.795 | 0.7816 | 0.2065 | 0.0698 |
| No log | 9.0 | 63 | 0.5755 | 0.83 | 0.2866 | 1.1943 | 0.83 | 0.8117 | 0.1937 | 0.0694 |
| No log | 10.0 | 70 | 0.6360 | 0.77 | 0.3164 | 1.2608 | 0.7700 | 0.7550 | 0.1785 | 0.0677 |
| No log | 11.0 | 77 | 0.6548 | 0.785 | 0.3335 | 1.4895 | 0.785 | 0.7707 | 0.2281 | 0.0885 |
| No log | 12.0 | 84 | 0.5847 | 0.805 | 0.3002 | 1.4317 | 0.805 | 0.7807 | 0.2264 | 0.0756 |
| No log | 13.0 | 91 | 0.5956 | 0.81 | 0.3040 | 1.2590 | 0.81 | 0.7928 | 0.2241 | 0.0556 |
| No log | 14.0 | 98 | 0.5692 | 0.81 | 0.3025 | 1.2119 | 0.81 | 0.8043 | 0.2235 | 0.0665 |
| No log | 15.0 | 105 | 0.5223 | 0.83 | 0.2762 | 1.1162 | 0.83 | 0.8221 | 0.1798 | 0.0552 |
| No log | 16.0 | 112 | 0.4981 | 0.84 | 0.2523 | 1.0864 | 0.8400 | 0.8372 | 0.1868 | 0.0396 |
| No log | 17.0 | 119 | 0.5207 | 0.805 | 0.2741 | 1.0416 | 0.805 | 0.7897 | 0.1960 | 0.0551 |
| No log | 18.0 | 126 | 0.5165 | 0.84 | 0.2723 | 1.1596 | 0.8400 | 0.8325 | 0.1942 | 0.0506 |
| No log | 19.0 | 133 | 0.4979 | 0.845 | 0.2573 | 1.2329 | 0.845 | 0.8297 | 0.1825 | 0.0444 |
| No log | 20.0 | 140 | 0.4953 | 0.855 | 0.2565 | 1.1213 | 0.855 | 0.8442 | 0.1844 | 0.0474 |
| No log | 21.0 | 147 | 0.5296 | 0.82 | 0.2792 | 1.0000 | 0.82 | 0.8218 | 0.1768 | 0.0523 |
| No log | 22.0 | 154 | 0.5027 | 0.835 | 0.2625 | 0.9926 | 0.835 | 0.8238 | 0.2035 | 0.0481 |
| No log | 23.0 | 161 | 0.5027 | 0.84 | 0.2642 | 1.0500 | 0.8400 | 0.8299 | 0.1616 | 0.0482 |
| No log | 24.0 | 168 | 0.5017 | 0.84 | 0.2616 | 1.0560 | 0.8400 | 0.8314 | 0.1819 | 0.0497 |
| No log | 25.0 | 175 | 0.4942 | 0.85 | 0.2594 | 1.1003 | 0.85 | 0.8407 | 0.1793 | 0.0483 |
| No log | 26.0 | 182 | 0.4943 | 0.83 | 0.2586 | 1.0436 | 0.83 | 0.8140 | 0.1869 | 0.0518 |
| No log | 27.0 | 189 | 0.4950 | 0.835 | 0.2613 | 1.0817 | 0.835 | 0.8224 | 0.2039 | 0.0504 |
| No log | 28.0 | 196 | 0.4957 | 0.85 | 0.2599 | 1.1109 | 0.85 | 0.8309 | 0.2058 | 0.0485 |
| No log | 29.0 | 203 | 0.4956 | 0.845 | 0.2599 | 1.0914 | 0.845 | 0.8304 | 0.1916 | 0.0492 |
| No log | 30.0 | 210 | 0.4893 | 0.84 | 0.2561 | 1.0890 | 0.8400 | 0.8214 | 0.2071 | 0.0482 |
| No log | 31.0 | 217 | 0.4920 | 0.835 | 0.2587 | 1.0907 | 0.835 | 0.8270 | 0.2031 | 0.0482 |
| No log | 32.0 | 224 | 0.4927 | 0.83 | 0.2601 | 1.0879 | 0.83 | 0.8157 | 0.2093 | 0.0500 |
| No log | 33.0 | 231 | 0.4925 | 0.835 | 0.2593 | 1.0886 | 0.835 | 0.8270 | 0.1810 | 0.0484 |
| No log | 34.0 | 238 | 0.4909 | 0.845 | 0.2578 | 1.0871 | 0.845 | 0.8304 | 0.1916 | 0.0478 |
| No log | 35.0 | 245 | 0.4927 | 0.845 | 0.2591 | 1.0866 | 0.845 | 0.8378 | 0.1943 | 0.0473 |
| No log | 36.0 | 252 | 0.4919 | 0.85 | 0.2581 | 1.0891 | 0.85 | 0.8342 | 0.2193 | 0.0475 |
| No log | 37.0 | 259 | 0.4908 | 0.845 | 0.2579 | 1.0867 | 0.845 | 0.8346 | 0.2215 | 0.0474 |
| No log | 38.0 | 266 | 0.4929 | 0.85 | 0.2590 | 1.0873 | 0.85 | 0.8407 | 0.1884 | 0.0471 |
| No log | 39.0 | 273 | 0.4913 | 0.85 | 0.2584 | 1.0861 | 0.85 | 0.8374 | 0.1944 | 0.0474 |
| No log | 40.0 | 280 | 0.4933 | 0.835 | 0.2595 | 1.0871 | 0.835 | 0.8248 | 0.1893 | 0.0491 |
| No log | 41.0 | 287 | 0.4936 | 0.84 | 0.2599 | 1.0863 | 0.8400 | 0.8276 | 0.1860 | 0.0486 |
| No log | 42.0 | 294 | 0.4911 | 0.85 | 0.2580 | 1.0861 | 0.85 | 0.8374 | 0.2186 | 0.0474 |
| No log | 43.0 | 301 | 0.4915 | 0.85 | 0.2581 | 1.0860 | 0.85 | 0.8374 | 0.2023 | 0.0475 |
| No log | 44.0 | 308 | 0.4921 | 0.85 | 0.2586 | 1.0874 | 0.85 | 0.8374 | 0.2013 | 0.0477 |
| No log | 45.0 | 315 | 0.4915 | 0.85 | 0.2583 | 1.0862 | 0.85 | 0.8374 | 0.1941 | 0.0475 |
| No log | 46.0 | 322 | 0.4918 | 0.85 | 0.2584 | 1.0878 | 0.85 | 0.8374 | 0.1852 | 0.0473 |
| No log | 47.0 | 329 | 0.4916 | 0.85 | 0.2583 | 1.0873 | 0.85 | 0.8374 | 0.2089 | 0.0473 |
| No log | 48.0 | 336 | 0.4921 | 0.85 | 0.2586 | 1.0879 | 0.85 | 0.8374 | 0.2026 | 0.0477 |
| No log | 49.0 | 343 | 0.4918 | 0.845 | 0.2584 | 1.0884 | 0.845 | 0.8282 | 0.1963 | 0.0478 |
| No log | 50.0 | 350 | 0.4922 | 0.85 | 0.2587 | 1.0871 | 0.85 | 0.8374 | 0.2102 | 0.0474 |
| No log | 51.0 | 357 | 0.4920 | 0.85 | 0.2585 | 1.0879 | 0.85 | 0.8374 | 0.2095 | 0.0474 |
| No log | 52.0 | 364 | 0.4926 | 0.85 | 0.2589 | 1.0878 | 0.85 | 0.8374 | 0.2022 | 0.0477 |
| No log | 53.0 | 371 | 0.4920 | 0.85 | 0.2586 | 1.0888 | 0.85 | 0.8374 | 0.2027 | 0.0475 |
| No log | 54.0 | 378 | 0.4921 | 0.85 | 0.2586 | 1.0886 | 0.85 | 0.8374 | 0.2020 | 0.0474 |
| No log | 55.0 | 385 | 0.4921 | 0.85 | 0.2587 | 1.0890 | 0.85 | 0.8374 | 0.1929 | 0.0471 |
| No log | 56.0 | 392 | 0.4925 | 0.85 | 0.2589 | 1.0881 | 0.85 | 0.8374 | 0.1946 | 0.0473 |
| No log | 57.0 | 399 | 0.4917 | 0.85 | 0.2583 | 1.0893 | 0.85 | 0.8374 | 0.1932 | 0.0472 |
| No log | 58.0 | 406 | 0.4921 | 0.85 | 0.2586 | 1.0877 | 0.85 | 0.8374 | 0.1948 | 0.0476 |
| No log | 59.0 | 413 | 0.4917 | 0.85 | 0.2583 | 1.0883 | 0.85 | 0.8374 | 0.1931 | 0.0472 |
| No log | 60.0 | 420 | 0.4918 | 0.85 | 0.2583 | 1.0882 | 0.85 | 0.8374 | 0.1945 | 0.0475 |
| No log | 61.0 | 427 | 0.4916 | 0.85 | 0.2582 | 1.0883 | 0.85 | 0.8374 | 0.1936 | 0.0472 |
| No log | 62.0 | 434 | 0.4920 | 0.85 | 0.2586 | 1.0882 | 0.85 | 0.8374 | 0.1942 | 0.0473 |
| No log | 63.0 | 441 | 0.4922 | 0.85 | 0.2587 | 1.0889 | 0.85 | 0.8374 | 0.1935 | 0.0473 |
| No log | 64.0 | 448 | 0.4921 | 0.85 | 0.2586 | 1.0885 | 0.85 | 0.8374 | 0.1848 | 0.0473 |
| No log | 65.0 | 455 | 0.4916 | 0.85 | 0.2582 | 1.0887 | 0.85 | 0.8374 | 0.1848 | 0.0474 |
| No log | 66.0 | 462 | 0.4917 | 0.85 | 0.2583 | 1.0883 | 0.85 | 0.8374 | 0.1849 | 0.0472 |
| No log | 67.0 | 469 | 0.4917 | 0.85 | 0.2584 | 1.0887 | 0.85 | 0.8374 | 0.1848 | 0.0472 |
| No log | 68.0 | 476 | 0.4920 | 0.85 | 0.2585 | 1.0888 | 0.85 | 0.8374 | 0.2011 | 0.0471 |
| No log | 69.0 | 483 | 0.4918 | 0.85 | 0.2584 | 1.0889 | 0.85 | 0.8374 | 0.2007 | 0.0471 |
| No log | 70.0 | 490 | 0.4919 | 0.85 | 0.2584 | 1.0886 | 0.85 | 0.8374 | 0.1848 | 0.0474 |
| No log | 71.0 | 497 | 0.4920 | 0.85 | 0.2585 | 1.0888 | 0.85 | 0.8374 | 0.1940 | 0.0474 |
| 0.1824 | 72.0 | 504 | 0.4919 | 0.85 | 0.2584 | 1.0889 | 0.85 | 0.8374 | 0.2011 | 0.0471 |
| 0.1824 | 73.0 | 511 | 0.4917 | 0.85 | 0.2583 | 1.0887 | 0.85 | 0.8374 | 0.1848 | 0.0472 |
| 0.1824 | 74.0 | 518 | 0.4920 | 0.85 | 0.2585 | 1.0890 | 0.85 | 0.8374 | 0.1848 | 0.0472 |
| 0.1824 | 75.0 | 525 | 0.4920 | 0.85 | 0.2585 | 1.0892 | 0.85 | 0.8374 | 0.1846 | 0.0472 |
| 0.1824 | 76.0 | 532 | 0.4918 | 0.85 | 0.2583 | 1.0889 | 0.85 | 0.8374 | 0.1930 | 0.0472 |
| 0.1824 | 77.0 | 539 | 0.4917 | 0.85 | 0.2582 | 1.0891 | 0.85 | 0.8374 | 0.2005 | 0.0472 |
| 0.1824 | 78.0 | 546 | 0.4919 | 0.85 | 0.2584 | 1.0892 | 0.85 | 0.8374 | 0.1928 | 0.0472 |
| 0.1824 | 79.0 | 553 | 0.4920 | 0.85 | 0.2585 | 1.0893 | 0.85 | 0.8374 | 0.1845 | 0.0473 |
| 0.1824 | 80.0 | 560 | 0.4919 | 0.85 | 0.2584 | 1.0890 | 0.85 | 0.8374 | 0.1929 | 0.0473 |
| 0.1824 | 81.0 | 567 | 0.4920 | 0.85 | 0.2585 | 1.0892 | 0.85 | 0.8374 | 0.1925 | 0.0471 |
| 0.1824 | 82.0 | 574 | 0.4920 | 0.85 | 0.2585 | 1.0895 | 0.85 | 0.8374 | 0.1844 | 0.0471 |
| 0.1824 | 83.0 | 581 | 0.4919 | 0.85 | 0.2584 | 1.0892 | 0.85 | 0.8374 | 0.1916 | 0.0471 |
| 0.1824 | 84.0 | 588 | 0.4918 | 0.85 | 0.2584 | 1.0890 | 0.85 | 0.8374 | 0.1926 | 0.0471 |
| 0.1824 | 85.0 | 595 | 0.4918 | 0.85 | 0.2584 | 1.0892 | 0.85 | 0.8374 | 0.1844 | 0.0471 |
| 0.1824 | 86.0 | 602 | 0.4918 | 0.85 | 0.2584 | 1.0893 | 0.85 | 0.8374 | 0.1927 | 0.0472 |
| 0.1824 | 87.0 | 609 | 0.4918 | 0.85 | 0.2584 | 1.0895 | 0.85 | 0.8374 | 0.1844 | 0.0471 |
| 0.1824 | 88.0 | 616 | 0.4918 | 0.85 | 0.2584 | 1.0892 | 0.85 | 0.8374 | 0.1844 | 0.0471 |
| 0.1824 | 89.0 | 623 | 0.4918 | 0.85 | 0.2583 | 1.0895 | 0.85 | 0.8374 | 0.1917 | 0.0471 |
| 0.1824 | 90.0 | 630 | 0.4919 | 0.85 | 0.2584 | 1.0892 | 0.85 | 0.8374 | 0.1998 | 0.0471 |
| 0.1824 | 91.0 | 637 | 0.4919 | 0.85 | 0.2584 | 1.0894 | 0.85 | 0.8374 | 0.1916 | 0.0471 |
| 0.1824 | 92.0 | 644 | 0.4918 | 0.85 | 0.2583 | 1.0895 | 0.85 | 0.8374 | 0.1917 | 0.0470 |
| 0.1824 | 93.0 | 651 | 0.4918 | 0.85 | 0.2583 | 1.0893 | 0.85 | 0.8374 | 0.1917 | 0.0471 |
| 0.1824 | 94.0 | 658 | 0.4918 | 0.85 | 0.2583 | 1.0894 | 0.85 | 0.8374 | 0.1844 | 0.0471 |
| 0.1824 | 95.0 | 665 | 0.4918 | 0.85 | 0.2583 | 1.0894 | 0.85 | 0.8374 | 0.1917 | 0.0470 |
| 0.1824 | 96.0 | 672 | 0.4918 | 0.85 | 0.2583 | 1.0894 | 0.85 | 0.8374 | 0.1917 | 0.0470 |
| 0.1824 | 97.0 | 679 | 0.4918 | 0.85 | 0.2583 | 1.0895 | 0.85 | 0.8374 | 0.1916 | 0.0471 |
| 0.1824 | 98.0 | 686 | 0.4918 | 0.85 | 0.2583 | 1.0895 | 0.85 | 0.8374 | 0.1917 | 0.0470 |
| 0.1824 | 99.0 | 693 | 0.4918 | 0.85 | 0.2583 | 1.0894 | 0.85 | 0.8374 | 0.1917 | 0.0470 |
| 0.1824 | 100.0 | 700 | 0.4918 | 0.85 | 0.2583 | 1.0894 | 0.85 | 0.8374 | 0.1917 | 0.0470 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
umanlp/babelbert-ft-xlm-r
|
umanlp
| 2023-07-10T21:57:04Z | 160 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-07-07T21:22:09Z |
This model is one of the artifacts of the paper [Massively Multilingual Lexical Specialization of Multilingual Transformers](https://aclanthology.org/2023.acl-long.426/).
It was obtained by fine-tuning the representations of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the dataset [babelbert-dataset](https://huggingface.co/datasets/umanlp/babelbert-dataset).
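The snippet below is a minimal usage sketch for extracting lexical representations with the standard `transformers` API; the example words and the mean-pooling step are illustrative assumptions, not part of the original training setup.
```python
import torch
from transformers import AutoModel, AutoTokenizer

# Load the lexically specialized XLM-R encoder
tokenizer = AutoTokenizer.from_pretrained("umanlp/babelbert-ft-xlm-r")
model = AutoModel.from_pretrained("umanlp/babelbert-ft-xlm-r")
model.eval()

words = ["dog", "Hund", "perro"]  # illustrative inputs
inputs = tokenizer(words, padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, seq_len, 768)

# Mean-pool over non-padding tokens to get one vector per word (assumed aggregation)
mask = inputs["attention_mask"].unsqueeze(-1)
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # torch.Size([3, 768])
```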
|
voyzan/unit1-bonus1-Huggy-A01
|
voyzan
| 2023-07-10T21:19:37Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-10T21:19:36Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: voyzan/unit1-bonus1-Huggy-A01
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
jordyvl/vit-tiny_rvl_cdip_100_examples_per_class_kd_MSE
|
jordyvl
| 2023-07-10T21:13:05Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-10T20:08:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-tiny_rvl_cdip_100_examples_per_class_kd_MSE
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-tiny_rvl_cdip_100_examples_per_class_kd_MSE
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7723
- Accuracy: 0.6025
- Brier Loss: 0.5295
- Nll: 3.6748
- F1 Micro: 0.6025
- F1 Macro: 0.6055
- Ece: 0.1688
- Aurc: 0.1708
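
A minimal inference sketch, assuming the checkpoint ships the standard ViT image-classification head and label mapping from fine-tuning; `document_page.png` is a placeholder input image.
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

checkpoint = "jordyvl/vit-tiny_rvl_cdip_100_examples_per_class_kd_MSE"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForImageClassification.from_pretrained(checkpoint)

image = Image.open("document_page.png").convert("RGB")  # placeholder document scan
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```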
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 25 | 4.7870 | 0.065 | 0.9655 | 17.0930 | 0.065 | 0.0550 | 0.1747 | 0.9357 |
| No log | 2.0 | 50 | 3.9498 | 0.205 | 0.8858 | 9.5780 | 0.205 | 0.1863 | 0.1692 | 0.6618 |
| No log | 3.0 | 75 | 3.3698 | 0.3675 | 0.7672 | 6.4908 | 0.3675 | 0.3392 | 0.1676 | 0.4195 |
| No log | 4.0 | 100 | 2.9935 | 0.4075 | 0.6958 | 5.5595 | 0.4075 | 0.3820 | 0.1828 | 0.3327 |
| No log | 5.0 | 125 | 2.8351 | 0.455 | 0.6591 | 4.8619 | 0.455 | 0.4351 | 0.1561 | 0.2833 |
| No log | 6.0 | 150 | 2.8196 | 0.4725 | 0.6595 | 4.7785 | 0.4725 | 0.4367 | 0.1808 | 0.2790 |
| No log | 7.0 | 175 | 2.6352 | 0.5075 | 0.6234 | 4.9881 | 0.5075 | 0.4886 | 0.1563 | 0.2493 |
| No log | 8.0 | 200 | 2.5325 | 0.525 | 0.6162 | 4.3297 | 0.525 | 0.5026 | 0.1724 | 0.2365 |
| No log | 9.0 | 225 | 2.5459 | 0.53 | 0.6099 | 5.1608 | 0.53 | 0.5148 | 0.1944 | 0.2350 |
| No log | 10.0 | 250 | 2.5573 | 0.5325 | 0.6161 | 5.4495 | 0.5325 | 0.5212 | 0.2052 | 0.2397 |
| No log | 11.0 | 275 | 2.3199 | 0.5675 | 0.5828 | 4.1247 | 0.5675 | 0.5626 | 0.1849 | 0.2071 |
| No log | 12.0 | 300 | 2.2917 | 0.565 | 0.5758 | 4.1738 | 0.565 | 0.5694 | 0.1992 | 0.2023 |
| No log | 13.0 | 325 | 2.2744 | 0.555 | 0.5974 | 4.2323 | 0.555 | 0.5544 | 0.1982 | 0.2203 |
| No log | 14.0 | 350 | 2.1638 | 0.5625 | 0.5807 | 4.2049 | 0.5625 | 0.5629 | 0.1868 | 0.2049 |
| No log | 15.0 | 375 | 2.1934 | 0.5575 | 0.5903 | 4.3813 | 0.5575 | 0.5614 | 0.1868 | 0.2022 |
| No log | 16.0 | 400 | 2.1092 | 0.5625 | 0.5702 | 3.6094 | 0.5625 | 0.5700 | 0.1846 | 0.2011 |
| No log | 17.0 | 425 | 2.0379 | 0.5875 | 0.5642 | 4.4351 | 0.5875 | 0.5822 | 0.2036 | 0.1959 |
| No log | 18.0 | 450 | 2.0303 | 0.5825 | 0.5558 | 3.6847 | 0.5825 | 0.5820 | 0.1684 | 0.1881 |
| No log | 19.0 | 475 | 2.0506 | 0.57 | 0.5749 | 4.0014 | 0.57 | 0.5708 | 0.1725 | 0.2027 |
| 1.5026 | 20.0 | 500 | 1.9932 | 0.5875 | 0.5524 | 3.8003 | 0.5875 | 0.5914 | 0.1843 | 0.1831 |
| 1.5026 | 21.0 | 525 | 2.0131 | 0.565 | 0.5643 | 4.0681 | 0.565 | 0.5635 | 0.1776 | 0.1957 |
| 1.5026 | 22.0 | 550 | 2.0162 | 0.5725 | 0.5712 | 3.7068 | 0.5725 | 0.5766 | 0.1934 | 0.1955 |
| 1.5026 | 23.0 | 575 | 1.9093 | 0.605 | 0.5381 | 3.7930 | 0.605 | 0.6032 | 0.1539 | 0.1749 |
| 1.5026 | 24.0 | 600 | 1.9607 | 0.575 | 0.5561 | 4.5740 | 0.575 | 0.5789 | 0.1782 | 0.1902 |
| 1.5026 | 25.0 | 625 | 1.8971 | 0.5825 | 0.5408 | 3.7290 | 0.5825 | 0.5754 | 0.1836 | 0.1751 |
| 1.5026 | 26.0 | 650 | 1.9217 | 0.5775 | 0.5537 | 3.8085 | 0.5775 | 0.5844 | 0.1725 | 0.1843 |
| 1.5026 | 27.0 | 675 | 1.9493 | 0.585 | 0.5606 | 3.6743 | 0.585 | 0.5953 | 0.1755 | 0.1882 |
| 1.5026 | 28.0 | 700 | 1.8884 | 0.585 | 0.5437 | 3.7865 | 0.585 | 0.5828 | 0.1801 | 0.1822 |
| 1.5026 | 29.0 | 725 | 1.9242 | 0.585 | 0.5479 | 3.9607 | 0.585 | 0.5856 | 0.1619 | 0.1817 |
| 1.5026 | 30.0 | 750 | 1.8767 | 0.5975 | 0.5470 | 3.7995 | 0.5975 | 0.5966 | 0.1599 | 0.1790 |
| 1.5026 | 31.0 | 775 | 1.8723 | 0.5925 | 0.5337 | 3.8962 | 0.5925 | 0.5972 | 0.1678 | 0.1729 |
| 1.5026 | 32.0 | 800 | 1.9093 | 0.585 | 0.5545 | 3.8776 | 0.585 | 0.5830 | 0.1902 | 0.1841 |
| 1.5026 | 33.0 | 825 | 1.8667 | 0.595 | 0.5363 | 3.8926 | 0.595 | 0.5917 | 0.1772 | 0.1745 |
| 1.5026 | 34.0 | 850 | 1.8403 | 0.59 | 0.5521 | 3.8560 | 0.59 | 0.5953 | 0.1711 | 0.1800 |
| 1.5026 | 35.0 | 875 | 1.8464 | 0.5925 | 0.5380 | 4.0376 | 0.5925 | 0.5970 | 0.1719 | 0.1756 |
| 1.5026 | 36.0 | 900 | 1.8441 | 0.5975 | 0.5411 | 3.7193 | 0.5975 | 0.6008 | 0.1569 | 0.1753 |
| 1.5026 | 37.0 | 925 | 1.8599 | 0.5875 | 0.5402 | 3.9139 | 0.5875 | 0.5908 | 0.1779 | 0.1789 |
| 1.5026 | 38.0 | 950 | 1.8559 | 0.6 | 0.5458 | 3.8970 | 0.6 | 0.5991 | 0.1583 | 0.1804 |
| 1.5026 | 39.0 | 975 | 1.8285 | 0.61 | 0.5370 | 3.6292 | 0.61 | 0.6155 | 0.1623 | 0.1722 |
| 0.0745 | 40.0 | 1000 | 1.8309 | 0.5975 | 0.5432 | 3.6865 | 0.5975 | 0.6017 | 0.1663 | 0.1821 |
| 0.0745 | 41.0 | 1025 | 1.8237 | 0.59 | 0.5348 | 3.6213 | 0.59 | 0.5921 | 0.1695 | 0.1738 |
| 0.0745 | 42.0 | 1050 | 1.8421 | 0.605 | 0.5360 | 3.8592 | 0.605 | 0.6048 | 0.1601 | 0.1743 |
| 0.0745 | 43.0 | 1075 | 1.8158 | 0.5975 | 0.5300 | 3.4537 | 0.5975 | 0.5953 | 0.1696 | 0.1707 |
| 0.0745 | 44.0 | 1100 | 1.8238 | 0.5875 | 0.5358 | 3.7706 | 0.5875 | 0.5923 | 0.1797 | 0.1754 |
| 0.0745 | 45.0 | 1125 | 1.8214 | 0.595 | 0.5463 | 3.4742 | 0.595 | 0.5981 | 0.1800 | 0.1770 |
| 0.0745 | 46.0 | 1150 | 1.8162 | 0.5925 | 0.5317 | 3.9260 | 0.5925 | 0.5950 | 0.1646 | 0.1733 |
| 0.0745 | 47.0 | 1175 | 1.8050 | 0.5975 | 0.5392 | 3.8322 | 0.5975 | 0.5979 | 0.1794 | 0.1763 |
| 0.0745 | 48.0 | 1200 | 1.8214 | 0.5975 | 0.5347 | 3.7965 | 0.5975 | 0.6009 | 0.1555 | 0.1746 |
| 0.0745 | 49.0 | 1225 | 1.7813 | 0.6 | 0.5294 | 3.8398 | 0.6 | 0.6005 | 0.1674 | 0.1688 |
| 0.0745 | 50.0 | 1250 | 1.8179 | 0.6075 | 0.5336 | 3.4690 | 0.6075 | 0.6112 | 0.1743 | 0.1748 |
| 0.0745 | 51.0 | 1275 | 1.7953 | 0.595 | 0.5380 | 3.7781 | 0.595 | 0.5990 | 0.1380 | 0.1727 |
| 0.0745 | 52.0 | 1300 | 1.7897 | 0.6 | 0.5323 | 3.7412 | 0.6 | 0.6013 | 0.1603 | 0.1707 |
| 0.0745 | 53.0 | 1325 | 1.8072 | 0.59 | 0.5428 | 3.5993 | 0.59 | 0.5947 | 0.1571 | 0.1773 |
| 0.0745 | 54.0 | 1350 | 1.7834 | 0.605 | 0.5219 | 3.7600 | 0.605 | 0.6049 | 0.1563 | 0.1671 |
| 0.0745 | 55.0 | 1375 | 1.7920 | 0.595 | 0.5361 | 3.5986 | 0.595 | 0.5978 | 0.1512 | 0.1717 |
| 0.0745 | 56.0 | 1400 | 1.8074 | 0.5925 | 0.5387 | 3.5383 | 0.5925 | 0.5962 | 0.1669 | 0.1741 |
| 0.0745 | 57.0 | 1425 | 1.7893 | 0.605 | 0.5346 | 3.6929 | 0.605 | 0.6039 | 0.1641 | 0.1681 |
| 0.0745 | 58.0 | 1450 | 1.7787 | 0.6 | 0.5317 | 3.7652 | 0.6 | 0.6004 | 0.1850 | 0.1726 |
| 0.0745 | 59.0 | 1475 | 1.7888 | 0.595 | 0.5323 | 3.4558 | 0.595 | 0.5975 | 0.1797 | 0.1732 |
| 0.0231 | 60.0 | 1500 | 1.8064 | 0.58 | 0.5332 | 3.7773 | 0.58 | 0.5839 | 0.1819 | 0.1762 |
| 0.0231 | 61.0 | 1525 | 1.7795 | 0.6075 | 0.5298 | 3.7998 | 0.6075 | 0.6086 | 0.1678 | 0.1704 |
| 0.0231 | 62.0 | 1550 | 1.7826 | 0.595 | 0.5318 | 3.6741 | 0.595 | 0.5916 | 0.1550 | 0.1715 |
| 0.0231 | 63.0 | 1575 | 1.7704 | 0.5925 | 0.5325 | 3.5942 | 0.5925 | 0.5941 | 0.1619 | 0.1712 |
| 0.0231 | 64.0 | 1600 | 1.7901 | 0.6025 | 0.5289 | 3.4459 | 0.6025 | 0.6054 | 0.2022 | 0.1712 |
| 0.0231 | 65.0 | 1625 | 1.7944 | 0.59 | 0.5381 | 3.7591 | 0.59 | 0.5910 | 0.1599 | 0.1756 |
| 0.0231 | 66.0 | 1650 | 1.7721 | 0.605 | 0.5256 | 3.5227 | 0.605 | 0.6045 | 0.1525 | 0.1677 |
| 0.0231 | 67.0 | 1675 | 1.7779 | 0.5975 | 0.5306 | 3.6792 | 0.5975 | 0.5994 | 0.1667 | 0.1714 |
| 0.0231 | 68.0 | 1700 | 1.7724 | 0.6 | 0.5250 | 3.7552 | 0.6 | 0.6022 | 0.1818 | 0.1683 |
| 0.0231 | 69.0 | 1725 | 1.7765 | 0.6025 | 0.5283 | 3.4264 | 0.6025 | 0.6019 | 0.1671 | 0.1700 |
| 0.0231 | 70.0 | 1750 | 1.7784 | 0.6 | 0.5276 | 3.6887 | 0.6 | 0.6053 | 0.1715 | 0.1703 |
| 0.0231 | 71.0 | 1775 | 1.7659 | 0.6 | 0.5282 | 3.6051 | 0.6 | 0.6006 | 0.1722 | 0.1691 |
| 0.0231 | 72.0 | 1800 | 1.7882 | 0.5975 | 0.5329 | 3.5950 | 0.5975 | 0.6016 | 0.1981 | 0.1716 |
| 0.0231 | 73.0 | 1825 | 1.7678 | 0.6 | 0.5287 | 3.6691 | 0.6 | 0.6032 | 0.1733 | 0.1696 |
| 0.0231 | 74.0 | 1850 | 1.7716 | 0.6 | 0.5286 | 3.7576 | 0.6 | 0.6013 | 0.1734 | 0.1692 |
| 0.0231 | 75.0 | 1875 | 1.7704 | 0.6 | 0.5299 | 3.5917 | 0.6 | 0.6016 | 0.1645 | 0.1709 |
| 0.0231 | 76.0 | 1900 | 1.7729 | 0.6 | 0.5298 | 3.6758 | 0.6 | 0.6024 | 0.1766 | 0.1710 |
| 0.0231 | 77.0 | 1925 | 1.7749 | 0.6 | 0.5308 | 3.6022 | 0.6 | 0.6030 | 0.1604 | 0.1717 |
| 0.0231 | 78.0 | 1950 | 1.7720 | 0.6 | 0.5294 | 3.6759 | 0.6 | 0.6017 | 0.1786 | 0.1708 |
| 0.0231 | 79.0 | 1975 | 1.7734 | 0.6025 | 0.5288 | 3.6765 | 0.6025 | 0.6048 | 0.1673 | 0.1698 |
| 0.0059 | 80.0 | 2000 | 1.7709 | 0.6 | 0.5286 | 3.6755 | 0.6 | 0.6020 | 0.1749 | 0.1704 |
| 0.0059 | 81.0 | 2025 | 1.7730 | 0.6 | 0.5295 | 3.6760 | 0.6 | 0.6020 | 0.1677 | 0.1708 |
| 0.0059 | 82.0 | 2050 | 1.7723 | 0.6025 | 0.5295 | 3.6756 | 0.6025 | 0.6055 | 0.1626 | 0.1708 |
| 0.0059 | 83.0 | 2075 | 1.7721 | 0.6025 | 0.5295 | 3.6741 | 0.6025 | 0.6055 | 0.1709 | 0.1708 |
| 0.0059 | 84.0 | 2100 | 1.7725 | 0.6025 | 0.5297 | 3.6747 | 0.6025 | 0.6048 | 0.1627 | 0.1709 |
| 0.0059 | 85.0 | 2125 | 1.7724 | 0.6025 | 0.5295 | 3.6751 | 0.6025 | 0.6055 | 0.1639 | 0.1707 |
| 0.0059 | 86.0 | 2150 | 1.7724 | 0.6025 | 0.5296 | 3.6751 | 0.6025 | 0.6055 | 0.1630 | 0.1708 |
| 0.0059 | 87.0 | 2175 | 1.7724 | 0.6025 | 0.5295 | 3.6749 | 0.6025 | 0.6055 | 0.1638 | 0.1707 |
| 0.0059 | 88.0 | 2200 | 1.7722 | 0.6025 | 0.5295 | 3.6752 | 0.6025 | 0.6055 | 0.1645 | 0.1708 |
| 0.0059 | 89.0 | 2225 | 1.7723 | 0.6025 | 0.5295 | 3.6747 | 0.6025 | 0.6055 | 0.1639 | 0.1708 |
| 0.0059 | 90.0 | 2250 | 1.7723 | 0.6025 | 0.5294 | 3.6750 | 0.6025 | 0.6055 | 0.1643 | 0.1708 |
| 0.0059 | 91.0 | 2275 | 1.7723 | 0.6025 | 0.5294 | 3.6750 | 0.6025 | 0.6055 | 0.1643 | 0.1708 |
| 0.0059 | 92.0 | 2300 | 1.7723 | 0.6025 | 0.5295 | 3.6747 | 0.6025 | 0.6055 | 0.1639 | 0.1708 |
| 0.0059 | 93.0 | 2325 | 1.7723 | 0.6025 | 0.5295 | 3.6749 | 0.6025 | 0.6055 | 0.1637 | 0.1707 |
| 0.0059 | 94.0 | 2350 | 1.7722 | 0.6025 | 0.5295 | 3.6749 | 0.6025 | 0.6055 | 0.1688 | 0.1708 |
| 0.0059 | 95.0 | 2375 | 1.7723 | 0.6025 | 0.5295 | 3.6748 | 0.6025 | 0.6055 | 0.1643 | 0.1708 |
| 0.0059 | 96.0 | 2400 | 1.7723 | 0.6025 | 0.5294 | 3.6748 | 0.6025 | 0.6055 | 0.1643 | 0.1707 |
| 0.0059 | 97.0 | 2425 | 1.7723 | 0.6025 | 0.5295 | 3.6748 | 0.6025 | 0.6055 | 0.1688 | 0.1708 |
| 0.0059 | 98.0 | 2450 | 1.7723 | 0.6025 | 0.5295 | 3.6749 | 0.6025 | 0.6055 | 0.1643 | 0.1708 |
| 0.0059 | 99.0 | 2475 | 1.7723 | 0.6025 | 0.5295 | 3.6749 | 0.6025 | 0.6055 | 0.1688 | 0.1708 |
| 0.0 | 100.0 | 2500 | 1.7723 | 0.6025 | 0.5295 | 3.6748 | 0.6025 | 0.6055 | 0.1688 | 0.1708 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.12.0
- Tokenizers 0.12.1
|
skrl/IsaacGymEnvs-FactoryTaskNutBoltScrew-PPO
|
skrl
| 2023-07-10T21:06:55Z | 0 | 0 |
skrl
|
[
"skrl",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T19:47:47Z |
---
library_name: skrl
tags:
- deep-reinforcement-learning
- reinforcement-learning
- skrl
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -8.89 +/- 10.3
name: Total reward (mean)
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: IsaacGymEnvs-FactoryTaskNutBoltScrew
type: IsaacGymEnvs-FactoryTaskNutBoltScrew
---
<!-- ---
torch: -21.51 +/- 14.99
jax: -35.77 +/- 0.39
numpy: -8.89 +/- 10.3
--- -->
# IsaacGymEnvs-FactoryTaskNutBoltScrew-PPO
Trained agent for [NVIDIA Isaac Gym Preview](https://github.com/NVIDIA-Omniverse/IsaacGymEnvs) environments.
- **Task:** FactoryTaskNutBoltScrew
- **Agent:** [PPO](https://skrl.readthedocs.io/en/latest/api/agents/ppo.html)
# Usage (with skrl)
Note: Visit the skrl [Examples](https://skrl.readthedocs.io/en/latest/intro/examples.html) section to access the scripts.
* PyTorch
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-FactoryTaskNutBoltScrew-PPO", filename="agent.pt")
agent.load(path)
```
* JAX
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-FactoryTaskNutBoltScrew-PPO", filename="agent.pickle")
agent.load(path)
```
# Hyperparameters
Note: parameters not listed here keep their default values.
```python
# https://skrl.readthedocs.io/en/latest/api/agents/ppo.html#configuration-and-hyperparameters
cfg = PPO_DEFAULT_CONFIG.copy()
cfg["rollouts"] = 128 # memory_size
cfg["learning_epochs"] = 8
cfg["mini_batches"] = 32 # 128 * 128 / 512
cfg["discount_factor"] = 0.99
cfg["lambda"] = 0.95
cfg["learning_rate"] = 1e-4
cfg["random_timesteps"] = 0
cfg["learning_starts"] = 0
cfg["grad_norm_clip"] = 0
cfg["ratio_clip"] = 0.2
cfg["value_clip"] = 0.2
cfg["clip_predicted_values"] = True
cfg["entropy_loss_scale"] = 0.0
cfg["value_loss_scale"] = 1.0
cfg["kl_threshold"] = 0.016
cfg["rewards_shaper"] = None
cfg["state_preprocessor"] = RunningStandardScaler
cfg["state_preprocessor_kwargs"] = {"size": env.observation_space, "device": device}
cfg["value_preprocessor"] = RunningStandardScaler
cfg["value_preprocessor_kwargs"] = {"size": 1, "device": device}
```
|
skrl/IsaacGymEnvs-FactoryTaskNutBoltPick-PPO
|
skrl
| 2023-07-10T20:49:13Z | 0 | 0 |
skrl
|
[
"skrl",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T19:46:39Z |
---
library_name: skrl
tags:
- deep-reinforcement-learning
- reinforcement-learning
- skrl
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -13.83 +/- 0.26
name: Total reward (mean)
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: IsaacGymEnvs-FactoryTaskNutBoltPick
type: IsaacGymEnvs-FactoryTaskNutBoltPick
---
<!-- ---
torch: -14.79 +/- 2.68
jax: -13.87 +/- 0.06
numpy: -13.83 +/- 0.26
--- -->
# IsaacGymEnvs-FactoryTaskNutBoltPick-PPO
Trained agent for [NVIDIA Isaac Gym Preview](https://github.com/NVIDIA-Omniverse/IsaacGymEnvs) environments.
- **Task:** FactoryTaskNutBoltPick
- **Agent:** [PPO](https://skrl.readthedocs.io/en/latest/api/agents/ppo.html)
# Usage (with skrl)
Note: Visit the skrl [Examples](https://skrl.readthedocs.io/en/latest/intro/examples.html) section to access the scripts.
* PyTorch
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-FactoryTaskNutBoltPick-PPO", filename="agent.pt")
agent.load(path)
```
* JAX
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-FactoryTaskNutBoltPick-PPO", filename="agent.pickle")
agent.load(path)
```
# Hyperparameters
Note: parameters not listed here keep their default values.
```python
# https://skrl.readthedocs.io/en/latest/api/agents/ppo.html#configuration-and-hyperparameters
cfg = PPO_DEFAULT_CONFIG.copy()
cfg["rollouts"] = 120 # memory_size
cfg["learning_epochs"] = 8
cfg["mini_batches"] = 30 # 120 * 128 / 512
cfg["discount_factor"] = 0.99
cfg["lambda"] = 0.95
cfg["learning_rate"] = 1e-4
cfg["random_timesteps"] = 0
cfg["learning_starts"] = 0
cfg["grad_norm_clip"] = 0
cfg["ratio_clip"] = 0.2
cfg["value_clip"] = 0.2
cfg["clip_predicted_values"] = True
cfg["entropy_loss_scale"] = 0.0
cfg["value_loss_scale"] = 1.0
cfg["kl_threshold"] = 0.016
cfg["rewards_shaper"] = None
cfg["state_preprocessor"] = RunningStandardScaler
cfg["state_preprocessor_kwargs"] = {"size": env.observation_space, "device": device}
cfg["value_preprocessor"] = RunningStandardScaler
cfg["value_preprocessor_kwargs"] = {"size": 1, "device": device}
```
|
vk21/ppo-SnowballTarget-unit5
|
vk21
| 2023-07-10T20:34:28Z | 10 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-07-10T20:34:22Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: vk21/ppo-SnowballTarget-unit5
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
DarkAirforce/dqn-SpaceInvadersNoFrameskip-v4
|
DarkAirforce
| 2023-07-10T20:33:23Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-07T19:24:04Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 534.00 +/- 175.24
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga DarkAirforce -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga DarkAirforce -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
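Alternatively, the downloaded checkpoint can be loaded directly with Stable-Baselines3; the path below is an assumption about where the RL Zoo stores the zip file and may need adjusting.
```python
from stable_baselines3 import DQN

# Hypothetical path: adjust to wherever rl_zoo3.load_from_hub placed the checkpoint
model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")
print(model.policy)
```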
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga DarkAirforce
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
grace-pro/afriberta-large-finetuned-hausa
|
grace-pro
| 2023-07-10T20:28:21Z | 127 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-10T19:28:18Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: afriberta-large-finetuned-hausa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afriberta-large-finetuned-hausa
This model is a fine-tuned version of [castorini/afriberta_large](https://huggingface.co/castorini/afriberta_large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1448
- Precision: 0.7114
- Recall: 0.5238
- F1: 0.6034
- Accuracy: 0.9652
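
A minimal inference sketch using the generic token-classification pipeline; it assumes the checkpoint stores its NER label mapping, and the Hausa sentence is only an illustrative input.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="grace-pro/afriberta-large-finetuned-hausa",
    aggregation_strategy="simple",  # merge subword pieces into whole entities
)
print(ner("Shugaba Muhammadu Buhari ya ziyarci Kano a ranar Litinin."))  # illustrative sentence
```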
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1373 | 1.0 | 2624 | 0.1267 | 0.6804 | 0.4519 | 0.5431 | 0.9612 |
| 0.1102 | 2.0 | 5248 | 0.1186 | 0.6927 | 0.5020 | 0.5821 | 0.9635 |
| 0.0849 | 3.0 | 7872 | 0.1269 | 0.7114 | 0.5036 | 0.5897 | 0.9645 |
| 0.0683 | 4.0 | 10496 | 0.1341 | 0.7159 | 0.5078 | 0.5941 | 0.9650 |
| 0.0567 | 5.0 | 13120 | 0.1448 | 0.7114 | 0.5238 | 0.6034 | 0.9652 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
skrl/IsaacGymEnvs-FactoryTaskNutBoltPlace-PPO
|
skrl
| 2023-07-10T20:15:49Z | 0 | 0 |
skrl
|
[
"skrl",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T19:47:18Z |
---
library_name: skrl
tags:
- deep-reinforcement-learning
- reinforcement-learning
- skrl
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -38.54 +/- 17.49
name: Total reward (mean)
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: IsaacGymEnvs-FactoryTaskNutBoltPlace
type: IsaacGymEnvs-FactoryTaskNutBoltPlace
---
<!-- ---
torch: -38.54 +/- 17.49
jax: -60.9 +/- 0.84
numpy: -58.9 +/- 1.8
--- -->
# IsaacGymEnvs-FactoryTaskNutBoltPlace-PPO
Trained agent for [NVIDIA Isaac Gym Preview](https://github.com/NVIDIA-Omniverse/IsaacGymEnvs) environments.
- **Task:** FactoryTaskNutBoltPlace
- **Agent:** [PPO](https://skrl.readthedocs.io/en/latest/api/agents/ppo.html)
# Usage (with skrl)
Note: Visit the skrl [Examples](https://skrl.readthedocs.io/en/latest/intro/examples.html) section to access the scripts.
* PyTorch
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-FactoryTaskNutBoltPlace-PPO", filename="agent.pt")
agent.load(path)
```
* JAX
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-FactoryTaskNutBoltPlace-PPO", filename="agent.pickle")
agent.load(path)
```
# Hyperparameters
Note: parameters not listed here keep their default values.
```python
# https://skrl.readthedocs.io/en/latest/api/agents/ppo.html#configuration-and-hyperparameters
cfg = PPO_DEFAULT_CONFIG.copy()
cfg["rollouts"] = 120 # memory_size
cfg["learning_epochs"] = 8
cfg["mini_batches"] = 30 # 120 * 128 / 512
cfg["discount_factor"] = 0.99
cfg["lambda"] = 0.95
cfg["learning_rate"] = 1e-4
cfg["random_timesteps"] = 0
cfg["learning_starts"] = 0
cfg["grad_norm_clip"] = 0
cfg["ratio_clip"] = 0.2
cfg["value_clip"] = 0.2
cfg["clip_predicted_values"] = True
cfg["entropy_loss_scale"] = 0.0
cfg["value_loss_scale"] = 1.0
cfg["kl_threshold"] = 0.016
cfg["rewards_shaper"] = None
cfg["state_preprocessor"] = RunningStandardScaler
cfg["state_preprocessor_kwargs"] = {"size": env.observation_space, "device": device}
cfg["value_preprocessor"] = RunningStandardScaler
cfg["value_preprocessor_kwargs"] = {"size": 1, "device": device}
```
|
MaitreHibou/Reinforce-Pixelcopter-PLE-v0
|
MaitreHibou
| 2023-07-10T20:12:10Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T19:26:33Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 35.00 +/- 20.11
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
blzncz/segformer-finetuned-4ss1st3r_s3gs3m-10k-steps
|
blzncz
| 2023-07-10T20:04:14Z | 188 | 0 |
transformers
|
[
"transformers",
"pytorch",
"segformer",
"image-segmentation",
"vision",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2023-07-10T10:49:12Z |
---
license: other
tags:
- image-segmentation
- vision
- generated_from_trainer
model-index:
- name: segformer-finetuned-4ss1st3r_s3gs3m-10k-steps
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-finetuned-4ss1st3r_s3gs3m-10k-steps
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the blzncz/4ss1st3r_s3gs3m dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3966
- Mean Iou: 0.5967
- Mean Accuracy: 0.8460
- Overall Accuracy: 0.9344
- Accuracy Bg: nan
- Accuracy Fallo cohesivo: 0.9510
- Accuracy Fallo malla: 0.8524
- Accuracy Fallo adhesivo: 0.9362
- Accuracy Fallo burbuja: 0.6444
- Iou Bg: 0.0
- Iou Fallo cohesivo: 0.9239
- Iou Fallo malla: 0.7125
- Iou Fallo adhesivo: 0.8335
- Iou Fallo burbuja: 0.5139
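
A minimal inference sketch following the standard SegFormer semantic-segmentation pattern; `sample.png` is a placeholder input and the upsampling step is the usual way to map logits back to the input resolution.
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

checkpoint = "blzncz/segformer-finetuned-4ss1st3r_s3gs3m-10k-steps"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)

image = Image.open("sample.png").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)

# Upsample to the original resolution and take the per-pixel argmax
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred_mask = upsampled.argmax(dim=1)[0]  # class index per pixel
print(pred_mask.shape)
```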
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Bg | Accuracy Fallo cohesivo | Accuracy Fallo malla | Accuracy Fallo adhesivo | Accuracy Fallo burbuja | Iou Bg | Iou Fallo cohesivo | Iou Fallo malla | Iou Fallo adhesivo | Iou Fallo burbuja |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:-----------:|:-----------------------:|:--------------------:|:-----------------------:|:----------------------:|:------:|:------------------:|:---------------:|:------------------:|:-----------------:|
| 0.4796 | 1.0 | 133 | 0.4190 | 0.4518 | 0.6689 | 0.9049 | nan | 0.9277 | 0.8091 | 0.9381 | 0.0008 | 0.0 | 0.8866 | 0.6536 | 0.7179 | 0.0008 |
| 0.2665 | 2.0 | 266 | 0.3667 | 0.5096 | 0.7283 | 0.9001 | nan | 0.9111 | 0.8964 | 0.8731 | 0.2324 | 0.0 | 0.8802 | 0.6013 | 0.8467 | 0.2197 |
| 0.2158 | 3.0 | 399 | 0.3210 | 0.5505 | 0.7807 | 0.9142 | nan | 0.9250 | 0.8685 | 0.9414 | 0.3878 | 0.0 | 0.8952 | 0.6239 | 0.8901 | 0.3432 |
| 0.1737 | 4.0 | 532 | 0.3572 | 0.5370 | 0.7851 | 0.8905 | nan | 0.8905 | 0.9102 | 0.9121 | 0.4277 | 0.0 | 0.8671 | 0.5637 | 0.8777 | 0.3764 |
| 0.1602 | 5.0 | 665 | 0.6273 | 0.4086 | 0.7632 | 0.7743 | nan | 0.7333 | 0.9343 | 0.9685 | 0.4168 | 0.0 | 0.7198 | 0.4460 | 0.5324 | 0.3449 |
| 0.1707 | 6.0 | 798 | 0.3534 | 0.5442 | 0.7953 | 0.9025 | nan | 0.9056 | 0.9031 | 0.9234 | 0.4492 | 0.0 | 0.8812 | 0.5985 | 0.8629 | 0.3783 |
| 0.1376 | 7.0 | 931 | 0.3266 | 0.5513 | 0.7634 | 0.9262 | nan | 0.9434 | 0.8621 | 0.9288 | 0.3195 | 0.0 | 0.9109 | 0.6623 | 0.8866 | 0.2968 |
| 0.1346 | 8.0 | 1064 | 0.4976 | 0.4916 | 0.7900 | 0.8396 | nan | 0.8190 | 0.9133 | 0.9713 | 0.4565 | 0.0 | 0.8041 | 0.4662 | 0.7906 | 0.3970 |
| 0.1319 | 9.0 | 1197 | 0.3650 | 0.5652 | 0.8404 | 0.9043 | nan | 0.9053 | 0.8856 | 0.9593 | 0.6113 | 0.0 | 0.8829 | 0.5992 | 0.8734 | 0.4706 |
| 0.1229 | 10.0 | 1330 | 0.3201 | 0.5666 | 0.7963 | 0.9299 | nan | 0.9435 | 0.8764 | 0.9389 | 0.4265 | 0.0 | 0.9171 | 0.6896 | 0.8499 | 0.3763 |
| 0.1142 | 11.0 | 1463 | 0.3824 | 0.5576 | 0.8204 | 0.9020 | nan | 0.8988 | 0.9231 | 0.9456 | 0.5142 | 0.0 | 0.8795 | 0.6001 | 0.8711 | 0.4374 |
| 0.0983 | 12.0 | 1596 | 0.3133 | 0.5812 | 0.8297 | 0.9293 | nan | 0.9354 | 0.9046 | 0.9558 | 0.5229 | 0.0 | 0.9136 | 0.6969 | 0.8618 | 0.4335 |
| 0.1058 | 13.0 | 1729 | 0.2965 | 0.5860 | 0.8250 | 0.9364 | nan | 0.9528 | 0.8496 | 0.9598 | 0.5378 | 0.0 | 0.9253 | 0.7162 | 0.8502 | 0.4383 |
| 0.1052 | 14.0 | 1862 | 0.2839 | 0.6064 | 0.8275 | 0.9460 | nan | 0.9674 | 0.8517 | 0.9290 | 0.5621 | 0.0 | 0.9355 | 0.7492 | 0.8930 | 0.4540 |
| 0.0911 | 15.0 | 1995 | 0.3245 | 0.5853 | 0.8116 | 0.9368 | nan | 0.9565 | 0.8504 | 0.9298 | 0.5099 | 0.0 | 0.9243 | 0.7171 | 0.8534 | 0.4318 |
| 0.0889 | 16.0 | 2128 | 0.3094 | 0.5969 | 0.8225 | 0.9422 | nan | 0.9615 | 0.8559 | 0.9376 | 0.5351 | 0.0 | 0.9313 | 0.7353 | 0.8726 | 0.4451 |
| 0.0827 | 17.0 | 2261 | 0.4776 | 0.5187 | 0.8195 | 0.8547 | nan | 0.8390 | 0.9163 | 0.9440 | 0.5786 | 0.0 | 0.8207 | 0.4920 | 0.8216 | 0.4590 |
| 0.0939 | 18.0 | 2394 | 0.3923 | 0.5364 | 0.8375 | 0.8948 | nan | 0.8950 | 0.8831 | 0.9437 | 0.6282 | 0.0 | 0.8746 | 0.6268 | 0.7090 | 0.4717 |
| 0.0799 | 19.0 | 2527 | 0.3560 | 0.5776 | 0.8252 | 0.9254 | nan | 0.9337 | 0.8933 | 0.9409 | 0.5331 | 0.0 | 0.9096 | 0.6846 | 0.8519 | 0.4422 |
| 0.075 | 20.0 | 2660 | 0.3803 | 0.5796 | 0.8338 | 0.9194 | nan | 0.9249 | 0.9078 | 0.9238 | 0.5788 | 0.0 | 0.9032 | 0.6459 | 0.8821 | 0.4670 |
| 0.0844 | 21.0 | 2793 | 0.2885 | 0.6170 | 0.8334 | 0.9507 | nan | 0.9757 | 0.8296 | 0.9390 | 0.5892 | 0.0 | 0.9412 | 0.7654 | 0.8933 | 0.4852 |
| 0.0746 | 22.0 | 2926 | 0.3222 | 0.5831 | 0.8160 | 0.9331 | nan | 0.9481 | 0.8685 | 0.9370 | 0.5105 | 0.0 | 0.9193 | 0.7032 | 0.8716 | 0.4215 |
| 0.072 | 23.0 | 3059 | 0.3481 | 0.5878 | 0.8336 | 0.9266 | nan | 0.9357 | 0.8952 | 0.9271 | 0.5764 | 0.0 | 0.9123 | 0.6824 | 0.8720 | 0.4725 |
| 0.0735 | 24.0 | 3192 | 0.3196 | 0.5974 | 0.8403 | 0.9353 | nan | 0.9496 | 0.8666 | 0.9430 | 0.6018 | 0.0 | 0.9225 | 0.7165 | 0.8649 | 0.4832 |
| 0.0674 | 25.0 | 3325 | 0.3407 | 0.5927 | 0.8435 | 0.9282 | nan | 0.9401 | 0.8786 | 0.9246 | 0.6304 | 0.0 | 0.9141 | 0.6844 | 0.8696 | 0.4953 |
| 0.0712 | 26.0 | 3458 | 0.3356 | 0.5906 | 0.8420 | 0.9301 | nan | 0.9405 | 0.8895 | 0.9299 | 0.6080 | 0.0 | 0.9160 | 0.6905 | 0.8743 | 0.4722 |
| 0.072 | 27.0 | 3591 | 0.3491 | 0.5833 | 0.8372 | 0.9286 | nan | 0.9415 | 0.8636 | 0.9425 | 0.6012 | 0.0 | 0.9161 | 0.6966 | 0.8246 | 0.4790 |
| 0.0641 | 28.0 | 3724 | 0.3130 | 0.6087 | 0.8422 | 0.9473 | nan | 0.9697 | 0.8357 | 0.9427 | 0.6208 | 0.0 | 0.9386 | 0.7613 | 0.8599 | 0.4837 |
| 0.0597 | 29.0 | 3857 | 0.3828 | 0.5666 | 0.8394 | 0.9107 | nan | 0.9141 | 0.8934 | 0.9411 | 0.6092 | 0.0 | 0.8924 | 0.6327 | 0.8343 | 0.4735 |
| 0.0648 | 30.0 | 3990 | 0.3435 | 0.6001 | 0.8372 | 0.9403 | nan | 0.9569 | 0.8708 | 0.9276 | 0.5935 | 0.0 | 0.9292 | 0.7312 | 0.8779 | 0.4623 |
| 0.0618 | 31.0 | 4123 | 0.3531 | 0.5963 | 0.8521 | 0.9303 | nan | 0.9450 | 0.8621 | 0.9240 | 0.6773 | 0.0 | 0.9179 | 0.6842 | 0.8730 | 0.5063 |
| 0.0556 | 32.0 | 4256 | 0.3307 | 0.6037 | 0.8417 | 0.9401 | nan | 0.9576 | 0.8637 | 0.9271 | 0.6183 | 0.0 | 0.9298 | 0.7274 | 0.8637 | 0.4974 |
| 0.0616 | 33.0 | 4389 | 0.3510 | 0.5911 | 0.8347 | 0.9298 | nan | 0.9424 | 0.8714 | 0.9388 | 0.5863 | 0.0 | 0.9158 | 0.6914 | 0.8745 | 0.4740 |
| 0.0603 | 34.0 | 4522 | 0.3467 | 0.6022 | 0.8544 | 0.9334 | nan | 0.9487 | 0.8610 | 0.9274 | 0.6807 | 0.0 | 0.9211 | 0.7029 | 0.8738 | 0.5130 |
| 0.0587 | 35.0 | 4655 | 0.3574 | 0.6017 | 0.8407 | 0.9379 | nan | 0.9555 | 0.8541 | 0.9346 | 0.6187 | 0.0 | 0.9269 | 0.7228 | 0.8627 | 0.4962 |
| 0.0557 | 36.0 | 4788 | 0.3871 | 0.5720 | 0.8334 | 0.9178 | nan | 0.9317 | 0.8416 | 0.9374 | 0.6228 | 0.0 | 0.9051 | 0.6479 | 0.8160 | 0.4911 |
| 0.0567 | 37.0 | 4921 | 0.4425 | 0.5656 | 0.8282 | 0.9070 | nan | 0.9114 | 0.8922 | 0.9244 | 0.5848 | 0.0 | 0.8889 | 0.6100 | 0.8575 | 0.4718 |
| 0.0537 | 38.0 | 5054 | 0.3512 | 0.5946 | 0.8392 | 0.9317 | nan | 0.9463 | 0.8649 | 0.9314 | 0.6142 | 0.0 | 0.9187 | 0.6984 | 0.8637 | 0.4921 |
| 0.0559 | 39.0 | 5187 | 0.3676 | 0.5931 | 0.8437 | 0.9273 | nan | 0.9381 | 0.8798 | 0.9323 | 0.6247 | 0.0 | 0.9129 | 0.6779 | 0.8786 | 0.4959 |
| 0.0502 | 40.0 | 5320 | 0.4149 | 0.5518 | 0.8381 | 0.8984 | nan | 0.9011 | 0.8773 | 0.9368 | 0.6370 | 0.0 | 0.8793 | 0.6069 | 0.7741 | 0.4989 |
| 0.0559 | 41.0 | 5453 | 0.4042 | 0.5694 | 0.8342 | 0.9130 | nan | 0.9206 | 0.8721 | 0.9400 | 0.6041 | 0.0 | 0.8971 | 0.6319 | 0.8286 | 0.4896 |
| 0.0523 | 42.0 | 5586 | 0.3669 | 0.5903 | 0.8462 | 0.9286 | nan | 0.9414 | 0.8676 | 0.9337 | 0.6421 | 0.0 | 0.9162 | 0.6883 | 0.8370 | 0.5102 |
| 0.0525 | 43.0 | 5719 | 0.4140 | 0.5704 | 0.8531 | 0.9081 | nan | 0.9110 | 0.8867 | 0.9417 | 0.6729 | 0.0 | 0.8898 | 0.6220 | 0.8366 | 0.5035 |
| 0.0508 | 44.0 | 5852 | 0.3965 | 0.5714 | 0.8396 | 0.9141 | nan | 0.9227 | 0.8800 | 0.9147 | 0.6409 | 0.0 | 0.8989 | 0.6513 | 0.8007 | 0.5060 |
| 0.0507 | 45.0 | 5985 | 0.3793 | 0.5817 | 0.8392 | 0.9196 | nan | 0.9272 | 0.8932 | 0.9214 | 0.6148 | 0.0 | 0.9042 | 0.6627 | 0.8407 | 0.5011 |
| 0.0494 | 46.0 | 6118 | 0.3500 | 0.6020 | 0.8426 | 0.9363 | nan | 0.9524 | 0.8619 | 0.9322 | 0.6240 | 0.0 | 0.9247 | 0.7142 | 0.8653 | 0.5058 |
| 0.0462 | 47.0 | 6251 | 0.3524 | 0.6031 | 0.8435 | 0.9388 | nan | 0.9545 | 0.8668 | 0.9364 | 0.6163 | 0.0 | 0.9274 | 0.7269 | 0.8703 | 0.4909 |
| 0.0486 | 48.0 | 6384 | 0.3876 | 0.5902 | 0.8397 | 0.9308 | nan | 0.9479 | 0.8557 | 0.9161 | 0.6392 | 0.0 | 0.9203 | 0.6928 | 0.8334 | 0.5046 |
| 0.0461 | 49.0 | 6517 | 0.3674 | 0.5933 | 0.8409 | 0.9326 | nan | 0.9482 | 0.8622 | 0.9258 | 0.6274 | 0.0 | 0.9214 | 0.7053 | 0.8367 | 0.5030 |
| 0.0497 | 50.0 | 6650 | 0.4018 | 0.5838 | 0.8374 | 0.9246 | nan | 0.9390 | 0.8519 | 0.9341 | 0.6244 | 0.0 | 0.9102 | 0.6733 | 0.8361 | 0.4992 |
| 0.0491 | 51.0 | 6783 | 0.4036 | 0.5824 | 0.8513 | 0.9198 | nan | 0.9272 | 0.8805 | 0.9403 | 0.6573 | 0.0 | 0.9037 | 0.6712 | 0.8169 | 0.5203 |
| 0.046 | 52.0 | 6916 | 0.3913 | 0.5820 | 0.8395 | 0.9243 | nan | 0.9347 | 0.8771 | 0.9336 | 0.6126 | 0.0 | 0.9105 | 0.6792 | 0.8244 | 0.4960 |
| 0.0488 | 53.0 | 7049 | 0.3441 | 0.6010 | 0.8504 | 0.9362 | nan | 0.9523 | 0.8521 | 0.9457 | 0.6517 | 0.0 | 0.9250 | 0.7225 | 0.8496 | 0.5081 |
| 0.0458 | 54.0 | 7182 | 0.3784 | 0.5977 | 0.8382 | 0.9378 | nan | 0.9603 | 0.8212 | 0.9375 | 0.6337 | 0.0 | 0.9286 | 0.7157 | 0.8387 | 0.5053 |
| 0.0449 | 55.0 | 7315 | 0.3506 | 0.6068 | 0.8493 | 0.9404 | nan | 0.9579 | 0.8554 | 0.9385 | 0.6456 | 0.0 | 0.9300 | 0.7357 | 0.8549 | 0.5132 |
| 0.0482 | 56.0 | 7448 | 0.4005 | 0.5819 | 0.8414 | 0.9249 | nan | 0.9374 | 0.8642 | 0.9337 | 0.6303 | 0.0 | 0.9119 | 0.6831 | 0.8139 | 0.5006 |
| 0.0434 | 57.0 | 7581 | 0.3749 | 0.5914 | 0.8465 | 0.9294 | nan | 0.9423 | 0.8675 | 0.9339 | 0.6421 | 0.0 | 0.9171 | 0.6999 | 0.8265 | 0.5134 |
| 0.0435 | 58.0 | 7714 | 0.4195 | 0.5722 | 0.8400 | 0.9172 | nan | 0.9274 | 0.8700 | 0.9234 | 0.6392 | 0.0 | 0.9025 | 0.6588 | 0.7954 | 0.5044 |
| 0.0442 | 59.0 | 7847 | 0.3975 | 0.5828 | 0.8407 | 0.9257 | nan | 0.9398 | 0.8563 | 0.9312 | 0.6356 | 0.0 | 0.9134 | 0.6866 | 0.8103 | 0.5037 |
| 0.0442 | 60.0 | 7980 | 0.3845 | 0.5929 | 0.8457 | 0.9315 | nan | 0.9459 | 0.8603 | 0.9363 | 0.6404 | 0.0 | 0.9193 | 0.7041 | 0.8308 | 0.5103 |
| 0.0422 | 61.0 | 8113 | 0.3875 | 0.5963 | 0.8465 | 0.9338 | nan | 0.9489 | 0.8616 | 0.9340 | 0.6413 | 0.0 | 0.9226 | 0.7135 | 0.8381 | 0.5072 |
| 0.0436 | 62.0 | 8246 | 0.3859 | 0.6022 | 0.8497 | 0.9385 | nan | 0.9566 | 0.8477 | 0.9382 | 0.6562 | 0.0 | 0.9289 | 0.7300 | 0.8376 | 0.5147 |
| 0.0429 | 63.0 | 8379 | 0.3857 | 0.5956 | 0.8425 | 0.9357 | nan | 0.9534 | 0.8481 | 0.9357 | 0.6327 | 0.0 | 0.9249 | 0.7233 | 0.8283 | 0.5016 |
| 0.0446 | 64.0 | 8512 | 0.3778 | 0.5976 | 0.8495 | 0.9343 | nan | 0.9492 | 0.8602 | 0.9399 | 0.6489 | 0.0 | 0.9232 | 0.7191 | 0.8305 | 0.5153 |
| 0.0429 | 65.0 | 8645 | 0.3889 | 0.5948 | 0.8478 | 0.9330 | nan | 0.9490 | 0.8548 | 0.9325 | 0.6549 | 0.0 | 0.9225 | 0.7075 | 0.8271 | 0.5167 |
| 0.0454 | 66.0 | 8778 | 0.3915 | 0.5941 | 0.8470 | 0.9329 | nan | 0.9490 | 0.8571 | 0.9271 | 0.6547 | 0.0 | 0.9221 | 0.7087 | 0.8278 | 0.5117 |
| 0.0427 | 67.0 | 8911 | 0.3924 | 0.5967 | 0.8455 | 0.9349 | nan | 0.9518 | 0.8520 | 0.9350 | 0.6433 | 0.0 | 0.9247 | 0.7167 | 0.8290 | 0.5133 |
| 0.0425 | 68.0 | 9044 | 0.3990 | 0.5992 | 0.8491 | 0.9358 | nan | 0.9524 | 0.8545 | 0.9355 | 0.6541 | 0.0 | 0.9250 | 0.7187 | 0.8387 | 0.5136 |
| 0.0429 | 69.0 | 9177 | 0.3911 | 0.5909 | 0.8499 | 0.9303 | nan | 0.9451 | 0.8532 | 0.9394 | 0.6619 | 0.0 | 0.9192 | 0.7029 | 0.8178 | 0.5146 |
| 0.0465 | 70.0 | 9310 | 0.3840 | 0.5977 | 0.8481 | 0.9332 | nan | 0.9473 | 0.8700 | 0.9278 | 0.6473 | 0.0 | 0.9215 | 0.7079 | 0.8480 | 0.5110 |
| 0.0436 | 71.0 | 9443 | 0.3862 | 0.5974 | 0.8456 | 0.9351 | nan | 0.9518 | 0.8534 | 0.9359 | 0.6413 | 0.0 | 0.9248 | 0.7162 | 0.8338 | 0.5124 |
| 0.0435 | 72.0 | 9576 | 0.3926 | 0.5952 | 0.8448 | 0.9328 | nan | 0.9484 | 0.8585 | 0.9318 | 0.6405 | 0.0 | 0.9217 | 0.7073 | 0.8386 | 0.5084 |
| 0.0421 | 73.0 | 9709 | 0.3961 | 0.5984 | 0.8467 | 0.9348 | nan | 0.9513 | 0.8564 | 0.9309 | 0.6482 | 0.0 | 0.9243 | 0.7119 | 0.8414 | 0.5143 |
| 0.0409 | 74.0 | 9842 | 0.3973 | 0.5982 | 0.8494 | 0.9341 | nan | 0.9498 | 0.8596 | 0.9306 | 0.6578 | 0.0 | 0.9233 | 0.7094 | 0.8401 | 0.5181 |
| 0.041 | 75.0 | 9975 | 0.3898 | 0.5963 | 0.8476 | 0.9335 | nan | 0.9493 | 0.8561 | 0.9354 | 0.6498 | 0.0 | 0.9227 | 0.7108 | 0.8329 | 0.5153 |
| 0.0436 | 75.19 | 10000 | 0.3966 | 0.5967 | 0.8460 | 0.9344 | nan | 0.9510 | 0.8524 | 0.9362 | 0.6444 | 0.0 | 0.9239 | 0.7125 | 0.8335 | 0.5139 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cpu
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Masik001/combined-GI-RVC-models
|
Masik001
| 2023-07-10T19:43:01Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-07-10T19:42:16Z |
===== Application Startup at 2023-07-10 13:56:11 =====
2023-07-10 17:36:30 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-07-10 17:36:30 | INFO | faiss.loader | Successfully loaded faiss with AVX2 support.
No supported NVIDIA GPU found; running inference on CPU
2023-07-10 17:36:31 | INFO | fairseq.tasks.hubert_pretraining | current directory is /home/user/app
2023-07-10 17:36:31 | INFO | fairseq.tasks.hubert_pretraining | HubertPretrainingTask Config {'_name': 'hubert_pretraining', 'data': 'metadata', 'fine_tuning': False, 'labels': ['km'], 'label_dir': 'label', 'label_rate': 50.0, 'sample_rate': 16000, 'normalize': False, 'enable_padding': False, 'max_keep_size': None, 'max_sample_size': 250000, 'min_sample_size': 32000, 'single_target': False, 'random_crop': True, 'pad_audio': False}
2023-07-10 17:36:31 | INFO | fairseq.models.hubert.hubert | HubertModel Config: {'_name': 'hubert', 'label_rate': 50.0, 'extractor_mode': default, 'encoder_layers': 12, 'encoder_embed_dim': 768, 'encoder_ffn_embed_dim': 3072, 'encoder_attention_heads': 12, 'activation_fn': gelu, 'layer_type': transformer, 'dropout': 0.1, 'attention_dropout': 0.1, 'activation_dropout': 0.0, 'encoder_layerdrop': 0.05, 'dropout_input': 0.1, 'dropout_features': 0.1, 'final_dim': 256, 'untie_final_proj': True, 'layer_norm_first': False, 'conv_feature_layers': '[(512,10,5)] + [(512,3,2)] * 4 + [(512,2,2)] * 2', 'conv_bias': False, 'logit_temp': 0.1, 'target_glu': False, 'feature_grad_mult': 0.1, 'mask_length': 10, 'mask_prob': 0.8, 'mask_selection': static, 'mask_other': 0.0, 'no_mask_overlap': False, 'mask_min_space': 1, 'mask_channel_length': 10, 'mask_channel_prob': 0.0, 'mask_channel_selection': static, 'mask_channel_other': 0.0, 'no_mask_channel_overlap': False, 'mask_channel_min_space': 1, 'conv_pos': 128, 'conv_pos_groups': 16, 'latent_temp': [2.0, 0.5, 0.999995], 'skip_masked': False, 'skip_nomask': False, 'checkpoint_activations': False, 'required_seq_len_multiple': 2, 'depthwise_conv_kernel_size': 31, 'attn_type': '', 'pos_enc_type': 'abs', 'fp16': False}
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: aether-jp / added_IVF865_Flat_nprobe_1_aether-jp_v2.index | (V2)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: albedo-jp / added_IVF641_Flat_nprobe_1_albedo-jp_v1.index | (V1)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: alhaitham-jp / added_IVF519_Flat_nprobe_1.index | (V1)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: ayaka-jp / added_IVF1018_Flat_nprobe_1_ayaka_v2.index | (V2)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: ayato-jp / added_IVF1304_Flat_nprobe_1.index | (V1)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: barbara-jp / added_IVF548_Flat_nprobe_1.index | (V1)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: charlotte-jp / added_IVF1318_Flat_nprobe_1_charlotte-jp_v2_400.index | (V2)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: childe-jp / added_IVF684_Flat_nprobe_1_childe-v2_v2.index | (V2)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: chongyun-jp / added_IVF545_Flat_nprobe_1.index | (V1)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: cyno-jp / added_IVF380_Flat_nprobe_1_cyno-jp_v1.index | (V1)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: diluc-jp / added_IVF1511_Flat_nprobe_1.index | (V1)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: eula-jp / added_IVF2219_Flat_nprobe_1.index | (V1)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: faruzan-jp / added_IVF256_Flat_nprobe_1_faruzan-jp_v2.index | (V2)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: fischl-jp / added_IVF1225_Flat_nprobe_1.index | (V1)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: ganyu-jp / added_IVF1636_Flat_nprobe_1.index | (V1)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: heizou-jp / added_IVF466_Flat_nprobe_1_heizou-jp_v1.index | (V1)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: hutao-jp / added_IVF265_Flat_nprobe_5.index | (V1)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: itto-jp / added_IVF4454_Flat_nprobe_1_itto-jp_v2.index | (V2)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: kaeya-jp / added_IVF1655_Flat_nprobe_1.index | (V1)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: kaveh-jp / added_IVF613_Flat_nprobe_1_kaveh_v2_v2.index | (V2)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: kazuha-jp / added_IVF860_Flat_nprobe_1_kazuha_v2.index | (V2)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: keqing-jp / added_IVF1634_Flat_nprobe_1.index | (V1)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: kirara-jp / added_IVF672_Flat_nprobe_1.index | (V1)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: klee-jp / added_IVF282_Flat_nprobe_5.index | (V1)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: kokomi-jp / added_IVF934_Flat_nprobe_1_kokomi_v2.index | (V2)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: lumine-jp / added_IVF938_Flat_nprobe_1_lumine-jp_v2.index | (V2)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: mona-jp / added_IVF2165_Flat_nprobe_1.index | (V1)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: nahida-jp / added_IVF1062_Flat_nprobe_1_nahida-v2_v2.index | (V2)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: nilou-jp / added_IVF218_Flat_nprobe_1.index | (V1)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: paimon-jp / added_IVF3904_Flat_nprobe_1_paimon-jp_v2.index | (V2)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: raiden-jp / added_IVF4256_Flat_nprobe_1_raiden-jp_v2.index | (V2)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: signora-jp / added_IVF478_Flat_nprobe_1_signora-jp_v2.index | (V2)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: sucrose-jp / added_IVF884_Flat_nprobe_1.index | (V1)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: thoma-jp / added_IVF366_Flat_nprobe_1.index | (V1)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: tighnari-jp / added_IVF446_Flat_nprobe_1_tignari-jp_v1.index | (V1)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: venti-jp / added_IVF3591_Flat_nprobe_1_venti-jp_v2.index | (V2)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: wanderer-jp / added_IVF953_Flat_nprobe_1_wanderer-v2_v2.index | (V2)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: xiao-jp / added_IVF3205_Flat_nprobe_1_xiao-jp_v2.index | (V2)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: yae-jp / added_IVF1097_Flat_nprobe_1_yae-v2_v2.index | (V2)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: yanfei-jp / added_IVF1271_Flat_nprobe_1_yanfei-v2_v2.index | (V2)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: yelan-jp / added_IVF2051_Flat_nprobe_1.index | (V1)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: yoimiya-jp / added_IVF2034_Flat_nprobe_1.index | (V1)
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Model loaded: zhongli-jp / added_IVF1672_Flat_nprobe_1.index | (V1)
Running on local URL: http://0.0.0.0:7860
To create a public link, set `share=True` in `launch()`.
[2023-07-10 17:37]: npy: 2.0945026874542236, f0: 0.05994224548339844s, infer: 17.599822521209717s
[2023-07-10 17:38]: npy: 3.1487624645233154, f0: 0.022048234939575195s, infer: 25.596487760543823s
[2023-07-10 17:39]: npy: 3.693798780441284, f0: 0.017490386962890625s, infer: 32.087180376052856s
[2023-07-10 17:39]: npy: 2.5506346225738525, f0: 0.013794660568237305s, infer: 26.60752511024475s
[2023-07-10 17:40]: npy: 2.6092371940612793, f0: 0.03858685493469238s, infer: 26.312453031539917s
[2023-07-10 17:41]: npy: 2.615102767944336, f0: 0.03931307792663574s, infer: 26.40330672264099s
[2023-07-10 17:43]: npy: 3.1028923988342285, f0: 0.05546903610229492s, infer: 32.91775321960449s
[2023-07-10 17:44]: npy: 2.839845657348633, f0: 0.046269893646240234s, infer: 27.98230767250061s
[2023-07-10 17:44]: npy: 3.3039710521698, f0: 0.020084142684936523s, infer: 29.59837293624878s
[2023-07-10 17:45]: npy: 3.30319881439209, f0: 0.03941464424133301s, infer: 32.42077875137329s
[2023-07-10 17:46]: npy: 2.90372371673584, f0: 0.0513463020324707s, infer: 28.517998695373535s
[2023-07-10 17:47]: npy: 3.4118876457214355, f0: 0.10508394241333008s, infer: 31.312357664108276s
[2023-07-10 17:47]: npy: 4.102552890777588, f0: 0.02527928352355957s, infer: 33.81402325630188s
[2023-07-10 17:48]: npy: 2.4004595279693604, f0: 0.09933662414550781s, infer: 29.89732074737549s
[2023-07-10 17:49]: npy: 3.2991466522216797, f0: 0.03225088119506836s, infer: 29.510783195495605s
[2023-07-10 17:49]: npy: 3.4149115085601807, f0: 0.04070758819580078s, infer: 30.8032488822937s
|
NasimB/gpt2-cocnat-aochildes-mod-sub-length-10k
|
NasimB
| 2023-07-10T19:27:45Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-10T17:32:01Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-cocnat-aochildes-mod-sub-length-10k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-cocnat-aochildes-mod-sub-length-10k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3425
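
A minimal generation sketch with the standard text-generation pipeline; the prompt and sampling settings are illustrative.
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="NasimB/gpt2-cocnat-aochildes-mod-sub-length-10k",
)
out = generator("Once upon a time", max_new_tokens=40, do_sample=True, top_p=0.95)
print(out[0]["generated_text"])
```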
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
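For illustration, the settings above correspond roughly to the following `TrainingArguments`; this is a sketch only, and the `output_dir` name is an assumption.
```python
from transformers import TrainingArguments

# Rough reconstruction of the hyperparameters listed above (sketch only)
training_args = TrainingArguments(
    output_dir="gpt2-cocnat-aochildes-mod-sub-length-10k",  # assumed output path
    learning_rate=5e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=6,
    fp16=True,  # "Native AMP" mixed precision
)
```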
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6933 | 0.29 | 500 | 5.6341 |
| 5.3469 | 0.59 | 1000 | 5.1996 |
| 4.9864 | 0.88 | 1500 | 4.9580 |
| 4.7189 | 1.18 | 2000 | 4.8083 |
| 4.5609 | 1.47 | 2500 | 4.6850 |
| 4.4523 | 1.77 | 3000 | 4.5821 |
| 4.317 | 2.06 | 3500 | 4.5146 |
| 4.1329 | 2.35 | 4000 | 4.4652 |
| 4.1086 | 2.65 | 4500 | 4.4071 |
| 4.0635 | 2.94 | 5000 | 4.3601 |
| 3.8482 | 3.24 | 5500 | 4.3553 |
| 3.8055 | 3.53 | 6000 | 4.3282 |
| 3.7859 | 3.83 | 6500 | 4.2926 |
| 3.6619 | 4.12 | 7000 | 4.2970 |
| 3.5196 | 4.41 | 7500 | 4.2933 |
| 3.5139 | 4.71 | 8000 | 4.2857 |
| 3.4905 | 5.0 | 8500 | 4.2710 |
| 3.3203 | 5.3 | 9000 | 4.2871 |
| 3.322 | 5.59 | 9500 | 4.2867 |
| 3.3172 | 5.89 | 10000 | 4.2863 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
aphi/ppo-SnowballTarget
|
aphi
| 2023-07-10T19:08:24Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-07-10T19:08:17Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: aphi/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
darthPanda/ppo-Huggy-v0
|
darthPanda
| 2023-07-10T18:57:02Z | 14 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-10T18:55:55Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: darthPanda/ppo-Huggy-v0
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
43m1m4n/jpbrinx
|
43m1m4n
| 2023-07-10T18:53:46Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-04T20:40:20Z |
---
license: creativeml-openrail-m
---
|
PraveenJesu/openai-whisper-medium-peft-lora-v2.2.5
|
PraveenJesu
| 2023-07-10T18:28:05Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-10T18:28:04Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
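As a rough illustration of how such an adapter is usually loaded on top of an 8-bit base model, here is a minimal sketch; the base checkpoint name is inferred from the repo name and should be treated as an assumption.
```python
from transformers import WhisperForConditionalGeneration
from peft import PeftModel

base_model = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-medium",  # assumed base checkpoint, inferred from the repo name
    load_in_8bit=True,        # matches the quantization config above
    device_map="auto",
)
# Attach the LoRA adapter weights from this repo
model = PeftModel.from_pretrained(base_model, "PraveenJesu/openai-whisper-medium-peft-lora-v2.2.5")
```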
### Framework versions
- PEFT 0.4.0.dev0
|
MaitreHibou/dqn-SpaceInvadersNoFrameskip-v4
|
MaitreHibou
| 2023-07-10T18:21:47Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T18:21:06Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 656.50 +/- 140.98
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MaitreHibou -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MaitreHibou -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga MaitreHibou
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0002),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
skrl/IsaacGymEnvs-AnymalTerrain-PPO
|
skrl
| 2023-07-10T18:15:29Z | 0 | 0 |
skrl
|
[
"skrl",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-24T20:41:55Z |
---
library_name: skrl
tags:
- deep-reinforcement-learning
- reinforcement-learning
- skrl
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 19.88 +/- 0.5
name: Total reward (mean)
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: IsaacGymEnvs-AnymalTerrain
type: IsaacGymEnvs-AnymalTerrain
---
<!-- ---
torch: 19.88 +/- 0.5
jax: 17.24 +/- 0.62
numpy: 17.8 +/- 0.29
--- -->
# IsaacGymEnvs-AnymalTerrain-PPO
Trained agent for [NVIDIA Isaac Gym Preview](https://github.com/NVIDIA-Omniverse/IsaacGymEnvs) environments.
- **Task:** AnymalTerrain
- **Agent:** [PPO](https://skrl.readthedocs.io/en/latest/api/agents/ppo.html)
# Usage (with skrl)
Note: Visit the skrl [Examples](https://skrl.readthedocs.io/en/latest/intro/examples.html) section to access the scripts.
* PyTorch
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-AnymalTerrain-PPO", filename="agent.pt")
agent.load(path)
```
* JAX
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-AnymalTerrain-PPO", filename="agent.pickle")
agent.load(path)
```
# Hyperparameters
Note: Undefined parameters keep their values by default.
```python
# https://skrl.readthedocs.io/en/latest/api/agents/ppo.html#configuration-and-hyperparameters
cfg = PPO_DEFAULT_CONFIG.copy()
cfg["rollouts"] = 24 # memory_size
cfg["learning_epochs"] = 5
cfg["mini_batches"] = 6 # 24 * 4096 / 16384
cfg["discount_factor"] = 0.99
cfg["lambda"] = 0.95
cfg["learning_rate"] = 3e-4
cfg["learning_rate_scheduler"] = KLAdaptiveRL
cfg["learning_rate_scheduler_kwargs"] = {"kl_threshold": 0.008}
cfg["random_timesteps"] = 0
cfg["learning_starts"] = 0
cfg["grad_norm_clip"] = 1.0
cfg["ratio_clip"] = 0.2
cfg["value_clip"] = 0.2
cfg["clip_predicted_values"] = True
cfg["entropy_loss_scale"] = 0.001
cfg["value_loss_scale"] = 1.0
cfg["kl_threshold"] = 0
cfg["rewards_shaper"] = None
cfg["state_preprocessor"] = RunningStandardScaler
cfg["state_preprocessor_kwargs"] = {"size": env.observation_space, "device": device}
cfg["value_preprocessor"] = RunningStandardScaler
cfg["value_preprocessor_kwargs"] = {"size": 1, "device": device}
```
|
FerhatDk/wav2vec2-base-finetuned-ks
|
FerhatDk
| 2023-07-10T18:08:04Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-09-22T08:59:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3550
- Accuracy: 0.8727
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 500
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 8 | 0.6840 | 0.6 |
| 0.6867 | 2.0 | 16 | 0.6780 | 0.6364 |
| 0.6742 | 3.0 | 24 | 0.6601 | 0.6182 |
| 0.6446 | 4.0 | 32 | 0.6294 | 0.6364 |
| 0.6299 | 5.0 | 40 | 0.6002 | 0.6727 |
| 0.6299 | 6.0 | 48 | 0.5755 | 0.7091 |
| 0.6021 | 7.0 | 56 | 0.5530 | 0.7273 |
| 0.5678 | 8.0 | 64 | 0.5036 | 0.8182 |
| 0.5512 | 9.0 | 72 | 0.4753 | 0.8545 |
| 0.4784 | 10.0 | 80 | 0.4184 | 0.9273 |
| 0.4784 | 11.0 | 88 | 0.4102 | 0.8909 |
| 0.4515 | 12.0 | 96 | 0.4444 | 0.8182 |
| 0.4878 | 13.0 | 104 | 0.3780 | 0.9091 |
| 0.4418 | 14.0 | 112 | 0.4570 | 0.8 |
| 0.4746 | 15.0 | 120 | 0.3870 | 0.8545 |
| 0.4746 | 16.0 | 128 | 0.3932 | 0.8364 |
| 0.4226 | 17.0 | 136 | 0.2779 | 0.9636 |
| 0.4301 | 18.0 | 144 | 0.3125 | 0.9455 |
| 0.3482 | 19.0 | 152 | 0.3212 | 0.9091 |
| 0.3611 | 20.0 | 160 | 0.3925 | 0.8364 |
| 0.3611 | 21.0 | 168 | 0.3389 | 0.8909 |
| 0.3507 | 22.0 | 176 | 0.3099 | 0.8727 |
| 0.3241 | 23.0 | 184 | 0.3120 | 0.8727 |
| 0.2533 | 24.0 | 192 | 0.2313 | 0.9455 |
| 0.2466 | 25.0 | 200 | 0.3550 | 0.8727 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Bellaaazzzzz/model_archive
|
Bellaaazzzzz
| 2023-07-10T18:00:43Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-10T17:41:57Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-Bellaaazzzzz/model_archive
These are controlnet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning.
You can find some example images below.
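A minimal inference sketch with diffusers is shown below; note that the conditioning image you pass must match the conditioning type these weights were trained on, which this card does not specify, so the file name used here is a placeholder.
```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained("Bellaaazzzzz/model_archive", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

conditioning = load_image("conditioning.png")  # placeholder: an image of the expected conditioning type
result = pipe("a high quality photo", image=conditioning).images[0]
```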
Validation result of round 1.

Validation result of round 2.

|
arpan-das-astrophysics/ppo-LunarLander-v2
|
arpan-das-astrophysics
| 2023-07-10T17:42:15Z | 6 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T17:41:55Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.82 +/- 21.71
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
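Until then, a minimal loading sketch; the checkpoint filename is an assumption, so check the files actually stored in this repo.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="arpan-das-astrophysics/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename; check the repo's files
)
model = PPO.load(checkpoint)
```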
|
RamonGOD/distilbert-base-uncased-finetuned-cola
|
RamonGOD
| 2023-07-10T17:32:17Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-10T17:00:10Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: RamonGOD/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# RamonGOD/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1891
- Validation Loss: 0.5654
- Train Matthews Correlation: 0.5209
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
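For reference, the optimizer configuration above can be reproduced roughly with plain Keras as sketched below.
```python
import tensorflow as tf

# Mirrors the PolynomialDecay + Adam settings listed above (sketch only)
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05, decay_steps=1602, end_learning_rate=0.0, power=1.0
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08
)
```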
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5243 | 0.4596 | 0.4917 | 0 |
| 0.3246 | 0.5117 | 0.4896 | 1 |
| 0.1891 | 0.5654 | 0.5209 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
AmbarB12/my_awesome_model
|
AmbarB12
| 2023-07-10T17:30:33Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-09T18:03:55Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: AmbarB12/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# AmbarB12/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0631
- Validation Loss: 0.2229
- Train Accuracy: 0.9306
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2523 | 0.1891 | 0.927 | 0 |
| 0.1327 | 0.2007 | 0.9298 | 1 |
| 0.0631 | 0.2229 | 0.9306 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
banden/ppo-Huggy
|
banden
| 2023-07-10T17:25:40Z | 11 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-10T17:25:35Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: banden/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
cagarraz/rl_course_vizdoom_health_gathering_supreme
|
cagarraz
| 2023-07-10T17:23:21Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T17:23:08Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 3.94 +/- 0.20
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r cagarraz/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
jordyvl/vit-small_tobacco3482_kd_CEKD_t1.5_a0.5
|
jordyvl
| 2023-07-10T17:17:44Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-10T16:39:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-small_tobacco3482_kd_CEKD_t1.5_a0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-small_tobacco3482_kd_CEKD_t1.5_a0.5
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4258
- Accuracy: 0.825
- Brier Loss: 0.2707
- Nll: 0.8867
- F1 Micro: 0.825
- F1 Macro: 0.8116
- Ece: 0.2129
- Aurc: 0.0681
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 1.7307 | 0.22 | 0.8748 | 5.3766 | 0.22 | 0.1294 | 0.2444 | 0.6913 |
| No log | 2.0 | 14 | 1.3514 | 0.405 | 0.7426 | 3.5573 | 0.405 | 0.2280 | 0.2900 | 0.4026 |
| No log | 3.0 | 21 | 0.9121 | 0.62 | 0.5647 | 1.9398 | 0.62 | 0.5595 | 0.2879 | 0.2015 |
| No log | 4.0 | 28 | 0.7084 | 0.695 | 0.4179 | 1.7042 | 0.695 | 0.6379 | 0.2305 | 0.1177 |
| No log | 5.0 | 35 | 0.7167 | 0.735 | 0.3862 | 1.7929 | 0.735 | 0.7392 | 0.2380 | 0.1046 |
| No log | 6.0 | 42 | 0.6442 | 0.765 | 0.3625 | 1.5688 | 0.765 | 0.7549 | 0.2371 | 0.1034 |
| No log | 7.0 | 49 | 0.6147 | 0.805 | 0.3410 | 1.5975 | 0.805 | 0.7789 | 0.2438 | 0.1042 |
| No log | 8.0 | 56 | 0.6444 | 0.775 | 0.3446 | 1.2309 | 0.775 | 0.7725 | 0.2305 | 0.0911 |
| No log | 9.0 | 63 | 0.5964 | 0.8 | 0.3219 | 1.3613 | 0.8000 | 0.7784 | 0.2446 | 0.0734 |
| No log | 10.0 | 70 | 0.5700 | 0.82 | 0.3160 | 1.2605 | 0.82 | 0.7860 | 0.2301 | 0.0632 |
| No log | 11.0 | 77 | 0.5663 | 0.79 | 0.3176 | 1.2939 | 0.79 | 0.7643 | 0.2315 | 0.0666 |
| No log | 12.0 | 84 | 0.5111 | 0.825 | 0.3143 | 1.1082 | 0.825 | 0.8082 | 0.2519 | 0.0844 |
| No log | 13.0 | 91 | 0.5228 | 0.78 | 0.3156 | 0.9444 | 0.78 | 0.7773 | 0.1941 | 0.0650 |
| No log | 14.0 | 98 | 0.5792 | 0.78 | 0.3409 | 1.5054 | 0.78 | 0.7725 | 0.2061 | 0.1019 |
| No log | 15.0 | 105 | 0.4905 | 0.83 | 0.2912 | 1.0068 | 0.83 | 0.8266 | 0.2324 | 0.0545 |
| No log | 16.0 | 112 | 0.4990 | 0.825 | 0.2961 | 1.1452 | 0.825 | 0.8140 | 0.2188 | 0.0632 |
| No log | 17.0 | 119 | 0.4900 | 0.805 | 0.2940 | 1.2027 | 0.805 | 0.8018 | 0.2188 | 0.0860 |
| No log | 18.0 | 126 | 0.4755 | 0.805 | 0.2988 | 1.0223 | 0.805 | 0.7789 | 0.2229 | 0.0792 |
| No log | 19.0 | 133 | 0.4398 | 0.81 | 0.2679 | 0.9732 | 0.81 | 0.7830 | 0.2085 | 0.0585 |
| No log | 20.0 | 140 | 0.4766 | 0.805 | 0.2992 | 0.9730 | 0.805 | 0.7934 | 0.2141 | 0.0662 |
| No log | 21.0 | 147 | 0.4615 | 0.835 | 0.2867 | 0.9343 | 0.835 | 0.8219 | 0.1999 | 0.0751 |
| No log | 22.0 | 154 | 0.4343 | 0.825 | 0.2641 | 1.1353 | 0.825 | 0.8070 | 0.2095 | 0.0603 |
| No log | 23.0 | 161 | 0.4291 | 0.85 | 0.2660 | 1.0109 | 0.85 | 0.8365 | 0.2435 | 0.0615 |
| No log | 24.0 | 168 | 0.4263 | 0.855 | 0.2653 | 0.9395 | 0.855 | 0.8440 | 0.2445 | 0.0623 |
| No log | 25.0 | 175 | 0.4338 | 0.845 | 0.2700 | 0.8794 | 0.845 | 0.8349 | 0.2254 | 0.0584 |
| No log | 26.0 | 182 | 0.4305 | 0.835 | 0.2648 | 0.9062 | 0.835 | 0.8322 | 0.2113 | 0.0658 |
| No log | 27.0 | 189 | 0.4262 | 0.84 | 0.2683 | 0.9967 | 0.8400 | 0.8291 | 0.2240 | 0.0670 |
| No log | 28.0 | 196 | 0.4329 | 0.83 | 0.2724 | 0.9016 | 0.83 | 0.8239 | 0.2016 | 0.0685 |
| No log | 29.0 | 203 | 0.4233 | 0.845 | 0.2653 | 0.9115 | 0.845 | 0.8375 | 0.2005 | 0.0634 |
| No log | 30.0 | 210 | 0.4204 | 0.84 | 0.2638 | 0.8892 | 0.8400 | 0.8348 | 0.2175 | 0.0633 |
| No log | 31.0 | 217 | 0.4240 | 0.83 | 0.2684 | 0.8871 | 0.83 | 0.8217 | 0.2128 | 0.0660 |
| No log | 32.0 | 224 | 0.4246 | 0.84 | 0.2677 | 0.8867 | 0.8400 | 0.8307 | 0.2117 | 0.0670 |
| No log | 33.0 | 231 | 0.4247 | 0.83 | 0.2690 | 0.8917 | 0.83 | 0.8202 | 0.2084 | 0.0679 |
| No log | 34.0 | 238 | 0.4218 | 0.84 | 0.2660 | 0.8848 | 0.8400 | 0.8326 | 0.2138 | 0.0663 |
| No log | 35.0 | 245 | 0.4220 | 0.845 | 0.2667 | 0.8926 | 0.845 | 0.8354 | 0.2109 | 0.0655 |
| No log | 36.0 | 252 | 0.4247 | 0.83 | 0.2694 | 0.8854 | 0.83 | 0.8202 | 0.2213 | 0.0683 |
| No log | 37.0 | 259 | 0.4239 | 0.84 | 0.2683 | 0.8849 | 0.8400 | 0.8326 | 0.2163 | 0.0670 |
| No log | 38.0 | 266 | 0.4239 | 0.835 | 0.2689 | 0.8876 | 0.835 | 0.8208 | 0.2118 | 0.0672 |
| No log | 39.0 | 273 | 0.4252 | 0.83 | 0.2696 | 0.8885 | 0.83 | 0.8180 | 0.2064 | 0.0682 |
| No log | 40.0 | 280 | 0.4237 | 0.835 | 0.2686 | 0.8867 | 0.835 | 0.8208 | 0.2211 | 0.0675 |
| No log | 41.0 | 287 | 0.4256 | 0.83 | 0.2700 | 0.8847 | 0.83 | 0.8180 | 0.2253 | 0.0682 |
| No log | 42.0 | 294 | 0.4243 | 0.835 | 0.2692 | 0.8839 | 0.835 | 0.8208 | 0.2130 | 0.0675 |
| No log | 43.0 | 301 | 0.4248 | 0.83 | 0.2695 | 0.8850 | 0.83 | 0.8180 | 0.2237 | 0.0682 |
| No log | 44.0 | 308 | 0.4246 | 0.83 | 0.2694 | 0.8847 | 0.83 | 0.8180 | 0.2383 | 0.0680 |
| No log | 45.0 | 315 | 0.4253 | 0.83 | 0.2699 | 0.8858 | 0.83 | 0.8180 | 0.2200 | 0.0681 |
| No log | 46.0 | 322 | 0.4246 | 0.83 | 0.2694 | 0.8857 | 0.83 | 0.8180 | 0.2311 | 0.0679 |
| No log | 47.0 | 329 | 0.4253 | 0.83 | 0.2700 | 0.8843 | 0.83 | 0.8180 | 0.2312 | 0.0682 |
| No log | 48.0 | 336 | 0.4252 | 0.83 | 0.2698 | 0.8830 | 0.83 | 0.8180 | 0.2177 | 0.0682 |
| No log | 49.0 | 343 | 0.4257 | 0.83 | 0.2703 | 0.8848 | 0.83 | 0.8180 | 0.2315 | 0.0683 |
| No log | 50.0 | 350 | 0.4256 | 0.83 | 0.2703 | 0.8833 | 0.83 | 0.8180 | 0.2331 | 0.0684 |
| No log | 51.0 | 357 | 0.4254 | 0.83 | 0.2703 | 0.8863 | 0.83 | 0.8180 | 0.2422 | 0.0681 |
| No log | 52.0 | 364 | 0.4261 | 0.83 | 0.2707 | 0.8864 | 0.83 | 0.8180 | 0.2424 | 0.0683 |
| No log | 53.0 | 371 | 0.4249 | 0.83 | 0.2700 | 0.8855 | 0.83 | 0.8180 | 0.2195 | 0.0679 |
| No log | 54.0 | 378 | 0.4255 | 0.83 | 0.2704 | 0.8846 | 0.83 | 0.8180 | 0.2342 | 0.0682 |
| No log | 55.0 | 385 | 0.4256 | 0.825 | 0.2704 | 0.8861 | 0.825 | 0.8116 | 0.2357 | 0.0682 |
| No log | 56.0 | 392 | 0.4264 | 0.83 | 0.2708 | 0.8853 | 0.83 | 0.8180 | 0.2345 | 0.0682 |
| No log | 57.0 | 399 | 0.4257 | 0.825 | 0.2706 | 0.8864 | 0.825 | 0.8116 | 0.2353 | 0.0682 |
| No log | 58.0 | 406 | 0.4258 | 0.825 | 0.2704 | 0.8841 | 0.825 | 0.8116 | 0.2271 | 0.0681 |
| No log | 59.0 | 413 | 0.4255 | 0.825 | 0.2703 | 0.8856 | 0.825 | 0.8116 | 0.2267 | 0.0680 |
| No log | 60.0 | 420 | 0.4259 | 0.825 | 0.2709 | 0.8842 | 0.825 | 0.8116 | 0.2269 | 0.0683 |
| No log | 61.0 | 427 | 0.4254 | 0.83 | 0.2702 | 0.8852 | 0.83 | 0.8180 | 0.2265 | 0.0680 |
| No log | 62.0 | 434 | 0.4261 | 0.83 | 0.2707 | 0.8851 | 0.83 | 0.8180 | 0.2346 | 0.0682 |
| No log | 63.0 | 441 | 0.4257 | 0.825 | 0.2704 | 0.8854 | 0.825 | 0.8116 | 0.2232 | 0.0682 |
| No log | 64.0 | 448 | 0.4261 | 0.825 | 0.2708 | 0.8845 | 0.825 | 0.8116 | 0.2264 | 0.0683 |
| No log | 65.0 | 455 | 0.4259 | 0.825 | 0.2706 | 0.8862 | 0.825 | 0.8116 | 0.2204 | 0.0682 |
| No log | 66.0 | 462 | 0.4258 | 0.825 | 0.2707 | 0.8856 | 0.825 | 0.8116 | 0.2193 | 0.0682 |
| No log | 67.0 | 469 | 0.4255 | 0.83 | 0.2703 | 0.8852 | 0.83 | 0.8180 | 0.2190 | 0.0681 |
| No log | 68.0 | 476 | 0.4260 | 0.825 | 0.2708 | 0.8860 | 0.825 | 0.8116 | 0.2196 | 0.0682 |
| No log | 69.0 | 483 | 0.4259 | 0.825 | 0.2708 | 0.8858 | 0.825 | 0.8116 | 0.2195 | 0.0682 |
| No log | 70.0 | 490 | 0.4255 | 0.825 | 0.2703 | 0.8857 | 0.825 | 0.8116 | 0.2135 | 0.0682 |
| No log | 71.0 | 497 | 0.4258 | 0.825 | 0.2707 | 0.8857 | 0.825 | 0.8116 | 0.2205 | 0.0681 |
| 0.1816 | 72.0 | 504 | 0.4261 | 0.825 | 0.2708 | 0.8857 | 0.825 | 0.8116 | 0.2198 | 0.0682 |
| 0.1816 | 73.0 | 511 | 0.4259 | 0.825 | 0.2706 | 0.8852 | 0.825 | 0.8116 | 0.2192 | 0.0682 |
| 0.1816 | 74.0 | 518 | 0.4259 | 0.825 | 0.2707 | 0.8856 | 0.825 | 0.8116 | 0.2290 | 0.0681 |
| 0.1816 | 75.0 | 525 | 0.4257 | 0.825 | 0.2706 | 0.8864 | 0.825 | 0.8116 | 0.2337 | 0.0681 |
| 0.1816 | 76.0 | 532 | 0.4259 | 0.825 | 0.2707 | 0.8855 | 0.825 | 0.8116 | 0.2211 | 0.0681 |
| 0.1816 | 77.0 | 539 | 0.4255 | 0.825 | 0.2704 | 0.8860 | 0.825 | 0.8116 | 0.2137 | 0.0680 |
| 0.1816 | 78.0 | 546 | 0.4258 | 0.825 | 0.2707 | 0.8868 | 0.825 | 0.8116 | 0.2274 | 0.0682 |
| 0.1816 | 79.0 | 553 | 0.4260 | 0.825 | 0.2708 | 0.8859 | 0.825 | 0.8116 | 0.2209 | 0.0682 |
| 0.1816 | 80.0 | 560 | 0.4260 | 0.825 | 0.2708 | 0.8864 | 0.825 | 0.8116 | 0.2135 | 0.0681 |
| 0.1816 | 81.0 | 567 | 0.4259 | 0.825 | 0.2707 | 0.8859 | 0.825 | 0.8116 | 0.2134 | 0.0682 |
| 0.1816 | 82.0 | 574 | 0.4258 | 0.825 | 0.2706 | 0.8862 | 0.825 | 0.8116 | 0.2062 | 0.0681 |
| 0.1816 | 83.0 | 581 | 0.4259 | 0.825 | 0.2707 | 0.8866 | 0.825 | 0.8116 | 0.2204 | 0.0681 |
| 0.1816 | 84.0 | 588 | 0.4259 | 0.825 | 0.2707 | 0.8868 | 0.825 | 0.8116 | 0.2204 | 0.0681 |
| 0.1816 | 85.0 | 595 | 0.4257 | 0.825 | 0.2706 | 0.8861 | 0.825 | 0.8116 | 0.2141 | 0.0682 |
| 0.1816 | 86.0 | 602 | 0.4258 | 0.825 | 0.2707 | 0.8861 | 0.825 | 0.8116 | 0.2140 | 0.0682 |
| 0.1816 | 87.0 | 609 | 0.4258 | 0.825 | 0.2707 | 0.8867 | 0.825 | 0.8116 | 0.2137 | 0.0680 |
| 0.1816 | 88.0 | 616 | 0.4259 | 0.825 | 0.2707 | 0.8866 | 0.825 | 0.8116 | 0.2129 | 0.0681 |
| 0.1816 | 89.0 | 623 | 0.4258 | 0.825 | 0.2707 | 0.8866 | 0.825 | 0.8116 | 0.2205 | 0.0681 |
| 0.1816 | 90.0 | 630 | 0.4259 | 0.825 | 0.2707 | 0.8865 | 0.825 | 0.8116 | 0.2053 | 0.0680 |
| 0.1816 | 91.0 | 637 | 0.4258 | 0.825 | 0.2706 | 0.8868 | 0.825 | 0.8116 | 0.2130 | 0.0681 |
| 0.1816 | 92.0 | 644 | 0.4258 | 0.825 | 0.2706 | 0.8870 | 0.825 | 0.8116 | 0.2129 | 0.0680 |
| 0.1816 | 93.0 | 651 | 0.4258 | 0.825 | 0.2706 | 0.8868 | 0.825 | 0.8116 | 0.2129 | 0.0681 |
| 0.1816 | 94.0 | 658 | 0.4258 | 0.825 | 0.2707 | 0.8867 | 0.825 | 0.8116 | 0.2129 | 0.0681 |
| 0.1816 | 95.0 | 665 | 0.4258 | 0.825 | 0.2707 | 0.8867 | 0.825 | 0.8116 | 0.2053 | 0.0680 |
| 0.1816 | 96.0 | 672 | 0.4259 | 0.825 | 0.2707 | 0.8866 | 0.825 | 0.8116 | 0.2053 | 0.0681 |
| 0.1816 | 97.0 | 679 | 0.4258 | 0.825 | 0.2707 | 0.8868 | 0.825 | 0.8116 | 0.2129 | 0.0681 |
| 0.1816 | 98.0 | 686 | 0.4258 | 0.825 | 0.2707 | 0.8868 | 0.825 | 0.8116 | 0.2129 | 0.0680 |
| 0.1816 | 99.0 | 693 | 0.4258 | 0.825 | 0.2707 | 0.8868 | 0.825 | 0.8116 | 0.2129 | 0.0681 |
| 0.1816 | 100.0 | 700 | 0.4258 | 0.825 | 0.2707 | 0.8867 | 0.825 | 0.8116 | 0.2129 | 0.0681 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
NasimB/gpt2-concat-all-new-mod-datasets-rarity-all-iorder-13k-2p6k
|
NasimB
| 2023-07-10T17:09:06Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-10T15:25:56Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-all-new-mod-datasets-rarity-all-iorder-13k-2p6k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-all-new-mod-datasets-rarity-all-iorder-13k-2p6k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4013
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.7847 | 0.32 | 500 | 5.6658 |
| 5.4409 | 0.63 | 1000 | 5.2342 |
| 5.088 | 0.95 | 1500 | 4.9831 |
| 4.8091 | 1.27 | 2000 | 4.8440 |
| 4.6774 | 1.59 | 2500 | 4.7254 |
| 4.5641 | 1.9 | 3000 | 4.6255 |
| 4.3493 | 2.22 | 3500 | 4.5674 |
| 4.2735 | 2.54 | 4000 | 4.5081 |
| 4.2294 | 2.86 | 4500 | 4.4480 |
| 4.0526 | 3.17 | 5000 | 4.4279 |
| 3.9479 | 3.49 | 5500 | 4.4002 |
| 3.9223 | 3.81 | 6000 | 4.3596 |
| 3.8021 | 4.13 | 6500 | 4.3586 |
| 3.6504 | 4.44 | 7000 | 4.3495 |
| 3.6428 | 4.76 | 7500 | 4.3416 |
| 3.58 | 5.08 | 8000 | 4.3470 |
| 3.4494 | 5.4 | 8500 | 4.3484 |
| 3.4443 | 5.71 | 9000 | 4.3455 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Buth/fatuh
|
Buth
| 2023-07-10T16:50:46Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"en",
"dataset:Open-Orca/OpenOrca",
"license:openrail",
"region:us"
] | null | 2023-07-10T16:48:59Z |
---
license: openrail
datasets:
- Open-Orca/OpenOrca
language:
- en
metrics:
- accuracy
library_name: adapter-transformers
---
|
svalcin/q-FrozenLake-v1-4x4-noSlippery
|
svalcin
| 2023-07-10T16:39:14Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T16:39:10Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the small download helper from the Deep RL course (a minimal sketch is included below)
model = load_from_hub(repo_id="svalcin/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
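For reference, a minimal version of that `load_from_hub` helper, assuming the pickle file stores the model dictionary used in the course:
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled Q-learning model from the Hub and unpickle it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```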
|
Umer1542/task-b-classification
|
Umer1542
| 2023-07-10T16:35:37Z | 160 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-classification",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-10T15:47:23Z |
---
license: other
language:
- en
metrics:
- accuracy
- f1
- recall
pipeline_tag: text-classification
---
|
TheBloke/MPT-30B-Dolphin-v2-GGML
|
TheBloke
| 2023-07-10T16:32:10Z | 0 | 9 | null |
[
"license:other",
"region:us"
] | null | 2023-07-10T15:13:07Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Manoj Preveen's MPT 30B Dolphin v2 GGML
These files are MPT GGML format model files for [Manoj Preveen's MPT 30B Dolphin v2](https://huggingface.co/manojpreveen/mpt-30b-dolphin-v2).
Please note that these GGMLs are **not compatible with llama.cpp, or currently with text-generation-webui**. Please see below for a list of tools that work with this GGML model.
## Repositories available
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/MPT-30B-Dolphin-v2-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/manojpreveen/mpt-30b-dolphin-v2)
## Prompt template: custom
```
<system>: You are a helpful assistant
<human>: {prompt}
<bot>:
```
<!-- compatibility_ggml start -->
## Compatibility
These files are **not** compatible with llama.cpp or text-generation-webui.
They can be used with:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful inference engine based on llama.cpp with full GPU acceleration and good UI.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI for GGML inference on Windows and macOS.
* [LoLLMs-WebUI](https://github.com/ParisNeo/LoLLMs-WebUI) a web UI which supports nearly every backend out there. Use ctransformers backend for support for this model.
* [ctransformers](https://github.com/marella/ctransformers): for use in Python code, including LangChain support (a short usage sketch follows below).
* [rustformers' llm](https://github.com/rustformers/llm)
* The example `mpt` binary provided with [ggml](https://github.com/ggerganov/ggml)
As other options become available I will endeavour to update them here (do let me know in the Community tab if I've missed something!)
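For example, a minimal Python sketch with ctransformers; pick whichever quantised file from the table below fits your RAM.
```python
from ctransformers import AutoModelForCausalLM

# Load one of the GGML files from this repo (q4_0 shown; any of the provided files works)
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/MPT-30B-Dolphin-v2-GGML",
    model_file="mpt-30b-dolphin-v2.ggmlv1.q4_0.bin",
    model_type="mpt",
)

prompt = "<system>: You are a helpful assistant\n<human>: What is the capital of France?\n<bot>:"
print(llm(prompt, max_new_tokens=64))
```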
## Tutorial for using LoLLMs-WebUI:
* [Video tutorial, by LoLLMs-WebUI's author **ParisNeo**](https://youtu.be/vBU1b5n0GMU)
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| mpt-30b-dolphin-v2.ggmlv1.q4_0.bin | q4_0 | 4 | 16.85 GB| 19.35 GB | 4-bit. |
| mpt-30b-dolphin-v2.ggmlv1.q4_1.bin | q4_1 | 4 | 18.73 GB| 21.23 GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
| mpt-30b-dolphin-v2.ggmlv1.q5_0.bin | q5_0 | 5 | 20.60 GB| 23.10 GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
| mpt-30b-dolphin-v2.ggmlv1.q5_1.bin | q5_1 | 5 | 22.47 GB| 24.97 GB | 5-bit. Even higher accuracy, resource usage and slower inference. |
| mpt-30b-dolphin-v2.ggmlv1.q8_0.bin | q8_0 | 8 | 31.83 GB| 34.33 GB | 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Manoj Preveen's MPT 30B Dolphin v2
**Base Model :** mosaicml/mpt-30b
**Tool :** MosaicML's llm-foundry (https://github.com/mosaicml/llm-foundry)
**Dataset :** Entire flan3m-GPT3.5 dataset.
**Config yaml with Model Params :** https://huggingface.co/manojpreveen/mpt-30b-orca-v2/blob/main/mpt-30b_orca.yaml
**Prompt Format :**
```
<system>: [system prompt]
<human>: [question]
<bot>:
```
|
yhyhy3/open_llama_7b_v2_med_instruct
|
yhyhy3
| 2023-07-10T16:22:39Z | 1,461 | 8 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"medical",
"code",
"en",
"dataset:ehartford/dolphin",
"dataset:LinhDuong/chatdoctor-200k",
"dataset:sahil2801/code_instructions_120k",
"dataset:medalpaca/medical_meadow_mediqa",
"dataset:kaiokendev/SuperCOT-dataset",
"dataset:tiiuae/falcon-refinedweb",
"dataset:bigcode/starcoderdata",
"dataset:togethercomputer/RedPajama-Data-1T",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-09T17:19:43Z |
---
license: apache-2.0
datasets:
- ehartford/dolphin
- LinhDuong/chatdoctor-200k
- sahil2801/code_instructions_120k
- medalpaca/medical_meadow_mediqa
- kaiokendev/SuperCOT-dataset
- tiiuae/falcon-refinedweb
- bigcode/starcoderdata
- togethercomputer/RedPajama-Data-1T
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- medical
- code
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model is an instruction-tuned Open LLaMa model with 7B parameters, with specialities in medical QA and code instruction.
## Model Details
<!-- Provide a longer summary of what this model is. -->
- **Model type:** LlamaForCausalLM
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model (QLoRA):** [openlm-research/open_llama_7b_v2](https://huggingface.co/openlm-research/open_llama_7b_v2)
## How to Get Started with the Model
Use the code below to get started with the model.
```py
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
model_path = 'yhyhy3/open_llama_7b_v2_med_dolphin_qlora_merged'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
prompt = '''### Instruction: Answer the following question.
### Input: What is the capital of New Jersey?
### Response:'''
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=32
)
print(tokenizer.decode(generation_output[0]))
```
## Training Details
### Training Data
Converted the following datasets to alpaca:instruction format.
1. [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin)
- ORCA style dataset generously created by [Eric Hartford](https://huggingface.co/ehartford)
- Only used the 1 million GPT4 generated instructions file [flan1m-alpaca-uncensored.jsonl](https://huggingface.co/datasets/ehartford/dolphin/blob/main/flan1m-alpaca-uncensored.jsonl).
2. [LinhDuong/chatdoctor-200k](https://huggingface.co/datasets/LinhDuong/chatdoctor-200k)
- Refined dataset sourced from icliniq medical QA forum
3. [sahil2801/code_instructions_120k](https://huggingface.co/datasets/sahil2801/code_instructions_120k)
- Code instruction dataset generously created by Sahil Chaudhary from ThreeSixty AI
4. [medalpaca/medical_meadow_mediqa](https://huggingface.co/datasets/medalpaca/medical_meadow_mediqa)
- MEDIQA is a dataset of manually generated, question-driven summaries of multi- and single-document answers to consumer health questions, from the medalpaca group.
5. [kaiokendev/SuperCOT-dataset](https://huggingface.co/datasets/kaiokendev/SuperCOT-dataset)
- Code instruction dataset generously created by Kaio Ken
### Training Procedure
Trained using [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) QLoRa on [RunPod](https://www.runpod.io/console/gpu-cloud) 8x A6000 on Community Cloud for 3 epochs (~14 hours - ~$70).
<details>
<summary>axolotl training config:</summary>
```yaml
base_model: openlm-research/open_llama_7b_v2
base_model_config: openlm-research/open_llama_7b_v2
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
push_dataset_to_hub:
hub_model_id:
hf_use_auth_token:
datasets:
- path: json
type: alpaca
data_files: /disk/flan1m-alpaca-uncensored.jsonl
shards: 8
- path: sahil2801/code_instructions_120k
type: alpaca
- path: LinhDuong/chatdoctor-200k
type: alpaca
shards: 2
- path: kaiokendev/SuperCOT-dataset
type: alpaca
- path: medalpaca/medical_meadow_mediqa
type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.01
adapter: qlora
lora_model_dir:
sequence_len: 2048
max_packed_sequence_len: 2048
lora_r: 8
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_mode: true
wandb_project:
wandb_watch:
wandb_run_id:
wandb_log_model: 'openllama_checkpoint'
output_dir: /disk/open_llama_7b_v2_dolphin_qlora
gradient_accumulation_steps: 2
micro_batch_size: 16
num_epochs: 3
optimizer: paged_adamw_32bit
torchdistx_path:
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: true
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention: true
flash_attention:
gptq_groupsize:
gptq_model_v1:
warmup_steps: 1000
eval_steps: 5000
save_steps:
debug:
deepspeed:
weight_decay: 0.0000001
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
</details>
|
aburnazy/opt125m_alpaca
|
aburnazy
| 2023-07-10T16:20:54Z | 136 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"opt",
"text-generation",
"generated_from_trainer",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-10T15:40:41Z |
---
license: other
tags:
- generated_from_trainer
model-index:
- name: opt125m_alpaca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt125m_alpaca
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Tokenizers 0.13.3
|
tyavika/Distilbert-QA-Pytorch-seed
|
tyavika
| 2023-07-10T16:10:13Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-10T12:52:40Z |
---
tags:
- generated_from_trainer
model-index:
- name: Distilbert-QA-Pytorch-seed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Distilbert-QA-Pytorch-seed
This model is a fine-tuned version of [tyavika/Distilbert-QA-Pytorch-seed](https://huggingface.co/tyavika/Distilbert-QA-Pytorch-seed) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
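Pending further details, a minimal inference sketch with the `question-answering` pipeline; the question and context strings are placeholders.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="tyavika/Distilbert-QA-Pytorch-seed")
result = qa(
    question="What architecture is this model based on?",
    context="This checkpoint is a DistilBERT model fine-tuned for extractive question answering.",
)
print(result["answer"], result["score"])
```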
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
madoe001/ppo-LunarLander-v2
|
madoe001
| 2023-07-10T16:00:09Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-19T17:22:29Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 271.45 +/- 25.19
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
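In the meantime, a minimal load-and-evaluate sketch; the checkpoint filename is an assumption, so check the files stored in this repo.
```python
import gymnasium as gym  # use `import gym` instead on older SB3 versions
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub("madoe001/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")  # assumed filename
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```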
|
sianadouglas/ensembletest
|
sianadouglas
| 2023-07-10T15:48:14Z | 0 | 0 | null |
[
"en",
"license:other",
"region:us"
] | null | 2023-07-10T15:47:23Z |
---
license: other
language:
- en
---
|
Khushnur/t5-base-end2end-questions-generation_squad
|
Khushnur
| 2023-07-10T15:47:50Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-10T15:02:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-base-end2end-questions-generation_squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-end2end-questions-generation_squad
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6560
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5658 | 0.67 | 100 | 1.8866 |
| 1.958 | 1.35 | 200 | 1.7150 |
| 1.8516 | 2.02 | 300 | 1.6701 |
| 1.7965 | 2.69 | 400 | 1.6560 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
mgmeskill/Pixelcopter-PLE-v0
|
mgmeskill
| 2023-07-10T15:38:32Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T15:26:11Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 42.50 +/- 37.13
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Suryabhan/openai-whisper-large-v2-LORA-colab
|
Suryabhan
| 2023-07-10T15:32:46Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-10T15:32:41Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
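As a rough loading sketch that mirrors the 8-bit config above (the base model name is an assumption inferred from the repo name; adjust it to your setup):
```python
from transformers import BitsAndBytesConfig, WhisperForConditionalGeneration
from peft import PeftModel

# 8-bit loading, matching the quantization values listed above.
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v2",  # assumed base model
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "Suryabhan/openai-whisper-large-v2-LORA-colab")
```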
|
tyavika/LR1E4-BS16-Bert_CNN512LSTM256NoBid
|
tyavika
| 2023-07-10T15:31:42Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-09T20:06:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: LR1E4-BS16-Bert_CNN512LSTM256NoBid
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LR1E4-BS16-Bert_CNN512LSTM256NoBid
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.7267 | 1.0 | 3290 | 1.5092 |
| 1.2394 | 2.0 | 6580 | 1.3933 |
| 0.8348 | 3.0 | 9870 | 1.5591 |
| 0.542 | 4.0 | 13160 | 1.6667 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
MnLgt/textual_inversion_muir_1_5
|
MnLgt
| 2023-07-10T15:31:36Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-10T14:16:45Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - jordandavis/textual_inversion_muir_1_5
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
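A minimal inference sketch (the trigger token `<muir>` is a hypothetical placeholder; check the repo's learned embedding for the actual token):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the learned textual-inversion embedding from this repo.
pipe.load_textual_inversion("MnLgt/textual_inversion_muir_1_5")

image = pipe("a landscape painting in the style of <muir>").images[0]
image.save("muir_example.png")
```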
|
agercas/speecht5_finetuned_voxpopuli_nl
|
agercas
| 2023-07-10T15:27:22Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-10T09:21:57Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4572
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5221 | 4.3 | 1000 | 0.4774 |
| 0.505 | 8.61 | 2000 | 0.4648 |
| 0.4929 | 12.91 | 3000 | 0.4583 |
| 0.4901 | 17.21 | 4000 | 0.4572 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
grace-pro/afriberta-finetuned-hausa
|
grace-pro
| 2023-07-10T15:26:48Z | 126 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-10T14:49:51Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: afriberta-finetuned-hausa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afriberta-finetuned-hausa
This model is a fine-tuned version of [castorini/afriberta_large](https://huggingface.co/castorini/afriberta_large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1242
- Precision: 0.7104
- Recall: 0.5095
- F1: 0.5934
- Accuracy: 0.9647
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1369 | 1.0 | 2624 | 0.1256 | 0.6856 | 0.4541 | 0.5463 | 0.9614 |
| 0.1103 | 2.0 | 5248 | 0.1195 | 0.7014 | 0.4947 | 0.5802 | 0.9637 |
| 0.0868 | 3.0 | 7872 | 0.1242 | 0.7104 | 0.5095 | 0.5934 | 0.9647 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Birchlabs/llama-13b-stepwise-embeddings
|
Birchlabs
| 2023-07-10T15:17:11Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-07-10T13:55:53Z |
---
license: apache-2.0
---
Fine-tuned input (`embed_tokens: Embedding`) and output (`lm_head: Linear`) embedding layers, for use with [`Birchlabs/llama-13b-stepwise-adapter`](https://huggingface.co/Birchlabs/llama-13b-stepwise-adapter).
Prior to finetuning, we grew the vocabulary of the tokenizer and the embedding layers. The new embedding rows were average-initialized and needed training, so we trained them; these are the weights from that training.
Ordinarily a QLoRA finetune of an LLM would not finetune `embed_tokens: Embedding`: you'd need to get a bit creative, because not only have the dimensions changed, but I don't believe any established way exists to train _adapters_ over `Embedding`s.
Nor would it ordinarily finetune `lm_head: Linear`. This is harder than it sounds (you can't handle it the same way you adapt the other Linear layers) because the dimensions have grown.
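As a sketch of how these layers might be grafted onto a resized base model (the base checkpoint, tokenizer source, and file names below are assumptions for illustration, not the repo's documented layout):
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

model = LlamaForCausalLM.from_pretrained("huggyllama/llama-13b", torch_dtype=torch.float16)  # assumed base
tokenizer = LlamaTokenizer.from_pretrained("Birchlabs/llama-13b-stepwise-adapter")           # assumed grown vocabulary

# Grow the embedding matrices to the new vocabulary size, then load the finetuned weights.
model.resize_token_embeddings(len(tokenizer))
model.get_input_embeddings().load_state_dict(torch.load("embed_tokens.pt"))   # hypothetical file name
model.get_output_embeddings().load_state_dict(torch.load("lm_head.pt"))       # hypothetical file name
```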
|
S1X3L4/Taxi-v3
|
S1X3L4
| 2023-07-10T15:04:55Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T15:04:50Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # or gymnasium, depending on your setup

# `load_from_hub` here is the small pickle-downloading helper defined in the course notebook.
model = load_from_hub(repo_id="S1X3L4/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
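A greedy rollout sketch (this assumes the pickled dictionary stores the Q-table under a `"qtable"` key, as in the course notebooks, and that `env` follows the gymnasium reset/step API):
```python
import numpy as np

state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the learned Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode reward: {total_reward}")
```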
|
dariowsz/wav2vec2-base-finetuned-gtzan
|
dariowsz
| 2023-07-10T15:03:27Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-04T13:47:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-gtzan
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5537
- Accuracy: 0.88
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7898 | 1.0 | 113 | 1.8052 | 0.45 |
| 1.4297 | 2.0 | 226 | 1.2229 | 0.62 |
| 1.041 | 3.0 | 339 | 0.9934 | 0.65 |
| 1.3882 | 4.0 | 452 | 1.1735 | 0.62 |
| 0.7248 | 5.0 | 565 | 0.8461 | 0.69 |
| 0.6128 | 6.0 | 678 | 0.7391 | 0.75 |
| 0.3225 | 7.0 | 791 | 0.8754 | 0.74 |
| 0.6483 | 8.0 | 904 | 0.8341 | 0.79 |
| 0.2755 | 9.0 | 1017 | 0.5537 | 0.88 |
| 0.4398 | 10.0 | 1130 | 0.6076 | 0.85 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
NICFRU/bart-base-paraphrasing-news
|
NICFRU
| 2023-07-10T15:02:02Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-10T14:46:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-base-paraphrasing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-paraphrasing
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6617
- Rouge1: 57.7088
- Rouge2: 51.0096
- Rougel: 54.7514
- Rougelsum: 56.3943
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 0.2 | 10 | 0.5263 | 58.2676 | 51.5842 | 55.5057 | 57.1584 | 19.94 |
| No log | 0.4 | 20 | 0.5050 | 56.1604 | 48.7383 | 54.0373 | 55.372 | 20.0 |
| No log | 0.6 | 30 | 0.4674 | 58.0617 | 51.4993 | 56.0368 | 56.9665 | 20.0 |
| No log | 0.8 | 40 | 0.4545 | 57.5375 | 51.0203 | 55.5247 | 56.5761 | 19.94 |
| No log | 1.0 | 50 | 0.4373 | 57.7263 | 50.8021 | 55.0549 | 56.35 | 19.98 |
| No log | 1.2 | 60 | 0.4313 | 57.87 | 50.9904 | 54.9727 | 56.5379 | 19.97 |
| No log | 1.4 | 70 | 0.4855 | 56.5101 | 49.3124 | 54.1572 | 55.0671 | 20.0 |
| No log | 1.6 | 80 | 0.4202 | 56.6535 | 50.0302 | 53.6891 | 55.1016 | 19.96 |
| No log | 1.8 | 90 | 0.4544 | 57.315 | 50.6289 | 54.642 | 55.7326 | 19.95 |
| 0.5858 | 2.0 | 100 | 0.4157 | 56.4569 | 48.8105 | 53.937 | 55.3515 | 20.0 |
| 0.5858 | 2.2 | 110 | 0.4555 | 57.8424 | 51.5966 | 55.6655 | 56.6862 | 20.0 |
| 0.5858 | 2.4 | 120 | 0.4196 | 58.2562 | 51.7596 | 55.5085 | 57.1823 | 19.97 |
| 0.5858 | 2.6 | 130 | 0.4334 | 58.6906 | 51.6106 | 55.6631 | 57.5254 | 19.89 |
| 0.5858 | 2.8 | 140 | 0.4710 | 56.5401 | 49.33 | 53.8792 | 55.0282 | 20.0 |
| 0.5858 | 3.0 | 150 | 0.4357 | 58.2083 | 52.0049 | 55.9938 | 57.1928 | 20.0 |
| 0.5858 | 3.2 | 160 | 0.4735 | 58.8112 | 52.2196 | 56.5004 | 57.7703 | 19.94 |
| 0.5858 | 3.4 | 170 | 0.4428 | 57.6778 | 50.6377 | 54.8752 | 56.4778 | 20.0 |
| 0.5858 | 3.6 | 180 | 0.4983 | 57.4124 | 50.4244 | 54.6163 | 56.0992 | 20.0 |
| 0.5858 | 3.8 | 190 | 0.4620 | 58.0701 | 51.5021 | 55.7222 | 56.8737 | 20.0 |
| 0.2865 | 4.0 | 200 | 0.4502 | 59.1191 | 52.7516 | 56.4389 | 57.7153 | 20.0 |
| 0.2865 | 4.2 | 210 | 0.4805 | 58.9064 | 52.7148 | 56.1058 | 57.6709 | 20.0 |
| 0.2865 | 4.4 | 220 | 0.4755 | 58.6883 | 52.1464 | 55.9164 | 57.3825 | 20.0 |
| 0.2865 | 4.6 | 230 | 0.4524 | 58.9916 | 52.1101 | 56.4116 | 57.9468 | 19.9 |
| 0.2865 | 4.8 | 240 | 0.4726 | 58.9953 | 52.8173 | 56.5846 | 58.0805 | 20.0 |
| 0.2865 | 5.0 | 250 | 0.4841 | 58.1058 | 51.614 | 55.3374 | 56.7617 | 20.0 |
| 0.2865 | 5.2 | 260 | 0.5047 | 58.2785 | 51.1874 | 55.5336 | 56.8795 | 20.0 |
| 0.2865 | 5.4 | 270 | 0.4658 | 57.2753 | 49.6038 | 53.9588 | 55.6038 | 19.91 |
| 0.2865 | 5.6 | 280 | 0.5261 | 58.1691 | 51.5254 | 55.2685 | 56.7787 | 20.0 |
| 0.2865 | 5.8 | 290 | 0.4833 | 57.8088 | 51.2838 | 54.8739 | 56.4374 | 20.0 |
| 0.1668 | 6.0 | 300 | 0.5067 | 58.2021 | 51.3629 | 55.3548 | 56.9093 | 19.99 |
| 0.1668 | 6.2 | 310 | 0.5461 | 58.0327 | 51.4051 | 55.3426 | 56.7923 | 20.0 |
| 0.1668 | 6.4 | 320 | 0.5463 | 58.1027 | 51.3706 | 55.1733 | 56.7923 | 19.9 |
| 0.1668 | 6.6 | 330 | 0.5837 | 57.6284 | 50.8245 | 54.6253 | 56.2127 | 20.0 |
| 0.1668 | 6.8 | 340 | 0.5221 | 58.0869 | 51.5448 | 55.4226 | 56.7532 | 20.0 |
| 0.1668 | 7.0 | 350 | 0.5433 | 58.7676 | 52.0403 | 56.2634 | 57.6441 | 20.0 |
| 0.1668 | 7.2 | 360 | 0.5498 | 57.9172 | 50.9727 | 55.1006 | 56.6018 | 20.0 |
| 0.1668 | 7.4 | 370 | 0.5581 | 57.4669 | 50.698 | 54.6448 | 56.1325 | 20.0 |
| 0.1668 | 7.6 | 380 | 0.5526 | 57.0821 | 50.298 | 54.1635 | 55.8059 | 20.0 |
| 0.1668 | 7.8 | 390 | 0.5548 | 57.5422 | 50.2734 | 54.2446 | 56.1223 | 20.0 |
| 0.1071 | 8.0 | 400 | 0.5620 | 57.4548 | 50.2657 | 54.5094 | 55.9422 | 20.0 |
| 0.1071 | 8.2 | 410 | 0.5772 | 57.4144 | 50.2443 | 54.5173 | 55.9331 | 20.0 |
| 0.1071 | 8.4 | 420 | 0.5857 | 57.2975 | 50.2116 | 54.5918 | 55.9931 | 20.0 |
| 0.1071 | 8.6 | 430 | 0.5827 | 58.4767 | 51.4318 | 55.4792 | 57.1284 | 20.0 |
| 0.1071 | 8.8 | 440 | 0.5728 | 58.4414 | 51.3523 | 55.2838 | 57.202 | 20.0 |
| 0.1071 | 9.0 | 450 | 0.5919 | 58.0499 | 51.3783 | 55.0748 | 56.6939 | 20.0 |
| 0.1071 | 9.2 | 460 | 0.5937 | 57.7604 | 50.845 | 54.8941 | 56.351 | 20.0 |
| 0.1071 | 9.4 | 470 | 0.5970 | 57.3655 | 50.4126 | 54.4522 | 55.7815 | 20.0 |
| 0.1071 | 9.6 | 480 | 0.5911 | 58.203 | 51.0367 | 55.3215 | 56.8485 | 20.0 |
| 0.1071 | 9.8 | 490 | 0.6121 | 58.2898 | 51.2749 | 55.4292 | 57.0241 | 20.0 |
| 0.0718 | 10.0 | 500 | 0.5903 | 58.2487 | 51.3838 | 55.4237 | 56.8863 | 20.0 |
| 0.0718 | 10.2 | 510 | 0.5983 | 58.2681 | 51.0925 | 55.2887 | 56.9562 | 20.0 |
| 0.0718 | 10.4 | 520 | 0.6308 | 57.9797 | 50.7386 | 54.995 | 56.5939 | 20.0 |
| 0.0718 | 10.6 | 530 | 0.6307 | 57.6269 | 50.5515 | 54.446 | 56.1544 | 20.0 |
| 0.0718 | 10.8 | 540 | 0.6173 | 57.9545 | 51.1005 | 54.9406 | 56.5732 | 20.0 |
| 0.0718 | 11.0 | 550 | 0.6322 | 58.3718 | 51.4321 | 55.4241 | 57.1879 | 20.0 |
| 0.0718 | 11.2 | 560 | 0.6027 | 58.6581 | 51.8607 | 55.6436 | 57.32 | 20.0 |
| 0.0718 | 11.4 | 570 | 0.6140 | 58.6476 | 51.7822 | 55.5845 | 57.3018 | 20.0 |
| 0.0718 | 11.6 | 580 | 0.6184 | 59.2454 | 52.4204 | 56.2174 | 57.9278 | 20.0 |
| 0.0718 | 11.8 | 590 | 0.6281 | 59.2945 | 52.8165 | 56.547 | 58.0674 | 20.0 |
| 0.0512 | 12.0 | 600 | 0.6128 | 58.2165 | 51.3689 | 55.37 | 56.8342 | 20.0 |
| 0.0512 | 12.2 | 610 | 0.6482 | 57.9196 | 50.9793 | 55.0883 | 56.6986 | 20.0 |
| 0.0512 | 12.4 | 620 | 0.6267 | 57.4782 | 50.1118 | 54.2802 | 55.8872 | 20.0 |
| 0.0512 | 12.6 | 630 | 0.6198 | 57.457 | 50.4079 | 54.2449 | 55.8118 | 20.0 |
| 0.0512 | 12.8 | 640 | 0.6500 | 57.6903 | 51.0627 | 55.0743 | 56.3025 | 20.0 |
| 0.0512 | 13.0 | 650 | 0.6265 | 57.4394 | 50.9013 | 54.7936 | 56.1688 | 20.0 |
| 0.0512 | 13.2 | 660 | 0.6817 | 58.4345 | 51.7087 | 55.291 | 57.0057 | 20.0 |
| 0.0512 | 13.4 | 670 | 0.6322 | 57.869 | 50.9503 | 54.8937 | 56.5178 | 20.0 |
| 0.0512 | 13.6 | 680 | 0.6424 | 57.8285 | 51.1014 | 55.0072 | 56.5022 | 20.0 |
| 0.0512 | 13.8 | 690 | 0.6668 | 58.7067 | 51.9929 | 55.5044 | 57.1517 | 20.0 |
| 0.0397 | 14.0 | 700 | 0.6537 | 58.8717 | 52.4036 | 55.6521 | 57.4855 | 20.0 |
| 0.0397 | 14.2 | 710 | 0.6463 | 58.9623 | 52.4749 | 55.8145 | 57.8095 | 20.0 |
| 0.0397 | 14.4 | 720 | 0.6630 | 58.8097 | 52.1997 | 55.8204 | 57.6325 | 20.0 |
| 0.0397 | 14.6 | 730 | 0.6839 | 59.0479 | 52.6573 | 56.0439 | 57.7322 | 20.0 |
| 0.0397 | 14.8 | 740 | 0.6541 | 59.2854 | 52.6109 | 56.1891 | 57.9446 | 20.0 |
| 0.0397 | 15.0 | 750 | 0.6486 | 58.8419 | 52.2004 | 55.8071 | 57.49 | 20.0 |
| 0.0397 | 15.2 | 760 | 0.6578 | 57.6161 | 50.7276 | 54.5514 | 56.2359 | 20.0 |
| 0.0397 | 15.4 | 770 | 0.6673 | 57.5458 | 50.8286 | 54.4597 | 56.1513 | 20.0 |
| 0.0397 | 15.6 | 780 | 0.6624 | 57.6634 | 51.0017 | 54.6769 | 56.3837 | 20.0 |
| 0.0397 | 15.8 | 790 | 0.6469 | 57.9037 | 51.137 | 54.8939 | 56.6427 | 20.0 |
| 0.0301 | 16.0 | 800 | 0.6373 | 57.8696 | 51.0899 | 54.8543 | 56.4596 | 20.0 |
| 0.0301 | 16.2 | 810 | 0.6712 | 58.614 | 52.0052 | 55.6436 | 57.3211 | 20.0 |
| 0.0301 | 16.4 | 820 | 0.6812 | 58.5214 | 51.8911 | 55.7447 | 57.2663 | 20.0 |
| 0.0301 | 16.6 | 830 | 0.6716 | 58.5818 | 51.929 | 55.7993 | 57.4064 | 20.0 |
| 0.0301 | 16.8 | 840 | 0.6590 | 57.745 | 51.0481 | 54.8545 | 56.4781 | 20.0 |
| 0.0301 | 17.0 | 850 | 0.6695 | 57.6663 | 50.9646 | 54.7863 | 56.3687 | 20.0 |
| 0.0301 | 17.2 | 860 | 0.6858 | 57.5552 | 51.0436 | 54.7092 | 56.3079 | 20.0 |
| 0.0301 | 17.4 | 870 | 0.6840 | 57.9091 | 51.3823 | 54.8309 | 56.6186 | 20.0 |
| 0.0301 | 17.6 | 880 | 0.6751 | 57.8223 | 51.1688 | 54.7562 | 56.5558 | 20.0 |
| 0.0301 | 17.8 | 890 | 0.6589 | 57.9956 | 51.1425 | 54.9509 | 56.6868 | 20.0 |
| 0.0482 | 18.0 | 900 | 0.6634 | 58.0392 | 51.3121 | 55.0726 | 56.7878 | 20.0 |
| 0.0482 | 18.2 | 910 | 0.6907 | 58.2021 | 51.4548 | 55.1874 | 56.91 | 20.0 |
| 0.0482 | 18.4 | 920 | 0.6977 | 58.1124 | 51.4254 | 55.062 | 56.8412 | 20.0 |
| 0.0482 | 18.6 | 930 | 0.6832 | 58.0776 | 51.3168 | 55.0849 | 56.8226 | 20.0 |
| 0.0482 | 18.8 | 940 | 0.6672 | 57.925 | 51.2475 | 54.9661 | 56.655 | 20.0 |
| 0.0482 | 19.0 | 950 | 0.6582 | 57.9285 | 51.2483 | 54.9744 | 56.6609 | 20.0 |
| 0.0482 | 19.2 | 960 | 0.6575 | 57.9285 | 51.2483 | 54.9744 | 56.6609 | 20.0 |
| 0.0482 | 19.4 | 970 | 0.6619 | 57.8961 | 51.2097 | 54.9475 | 56.6344 | 20.0 |
| 0.0482 | 19.6 | 980 | 0.6658 | 57.8961 | 51.2097 | 54.9475 | 56.6344 | 20.0 |
| 0.0482 | 19.8 | 990 | 0.6635 | 57.7222 | 51.0096 | 54.8166 | 56.4623 | 20.0 |
| 0.0201 | 20.0 | 1000 | 0.6617 | 57.7088 | 51.0096 | 54.7514 | 56.3943 | 20.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
SHENMU007/neunit_BASE_V10.19
|
SHENMU007
| 2023-07-10T15:01:47Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-07T08:50:51Z |
---
language:
- zh
license: mit
base_model: microsoft/speecht5_tts
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
NasimB/gpt2-cocnat-mod-datasets-txt-processing
|
NasimB
| 2023-07-10T15:01:23Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-10T12:29:02Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-cocnat-mod-datasets-txt-processing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-cocnat-mod-datasets-txt-processing
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3377
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6848 | 0.3 | 500 | 5.6500 |
| 5.3379 | 0.59 | 1000 | 5.2204 |
| 4.9909 | 0.89 | 1500 | 4.9703 |
| 4.7146 | 1.19 | 2000 | 4.8200 |
| 4.5695 | 1.49 | 2500 | 4.7076 |
| 4.4685 | 1.78 | 3000 | 4.5985 |
| 4.3237 | 2.08 | 3500 | 4.5311 |
| 4.1614 | 2.38 | 4000 | 4.4731 |
| 4.1267 | 2.68 | 4500 | 4.4151 |
| 4.082 | 2.97 | 5000 | 4.3593 |
| 3.8448 | 3.27 | 5500 | 4.3575 |
| 3.8261 | 3.57 | 6000 | 4.3240 |
| 3.8089 | 3.86 | 6500 | 4.2887 |
| 3.6462 | 4.16 | 7000 | 4.2921 |
| 3.5453 | 4.46 | 7500 | 4.2840 |
| 3.529 | 4.76 | 8000 | 4.2688 |
| 3.4926 | 5.05 | 8500 | 4.2683 |
| 3.3463 | 5.35 | 9000 | 4.2715 |
| 3.3453 | 5.65 | 9500 | 4.2702 |
| 3.3408 | 5.95 | 10000 | 4.2694 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
ericNguyen0132/DepRoBERTa-2ndStage
|
ericNguyen0132
| 2023-07-10T14:56:14Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-10T13:42:58Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: DepRoBERTa-2ndStage
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DepRoBERTa-2ndStage
This model is a fine-tuned version of [rafalposwiata/deproberta-large-v1](https://huggingface.co/rafalposwiata/deproberta-large-v1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6330
- Accuracy: 0.855
- F1: 0.9134
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 469 | 0.3572 | 0.8617 | 0.9224 |
| 0.4953 | 2.0 | 938 | 0.3593 | 0.8783 | 0.9315 |
| 0.3493 | 3.0 | 1407 | 0.4274 | 0.8483 | 0.9091 |
| 0.313 | 4.0 | 1876 | 0.5488 | 0.8617 | 0.9187 |
| 0.2622 | 5.0 | 2345 | 0.6330 | 0.855 | 0.9134 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Nyme/textual_inversion_cat
|
Nyme
| 2023-07-10T14:49:16Z | 5 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-10T09:17:48Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - Nyme/textual_inversion_cat
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
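A brief usage sketch (the trigger token `<cat-toy>` is a hypothetical placeholder; check the repo's embedding file for the actual token):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("Nyme/textual_inversion_cat")
pipe("a photo of <cat-toy> sitting on a windowsill").images[0].save("cat_example.png")
```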
|
Tasaloris13/falcon-7b-test
|
Tasaloris13
| 2023-07-10T14:31:30Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-10T14:31:29Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
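A loading sketch mirroring the 4-bit NF4 config above (the base model name is an assumption inferred from the repo name):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# 4-bit NF4 quantization with double quantization, matching the values listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float32,
)

base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",  # assumed base model
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "Tasaloris13/falcon-7b-test")
```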
|
medanis13/chatbot
|
medanis13
| 2023-07-10T14:25:42Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-10T14:22:19Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
jiuzhou/roop
|
jiuzhou
| 2023-07-10T14:14:43Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-07-10T14:11:09Z |
# Colab script for the Roop project
Use Google's free GPU to run one-click face swapping online. [Click to open](roop_v1.ipynb)!

# Updates

# Original project: [roop](https://github.com/s0md3v/roop/)


# How to use
Open the .ipynb file, then click "Open in Colab" to get started. For a detailed tutorial, click [here](https://www.tonyisstark.com/1240.html).
|
jordyvl/vit-_tobacco3482_kd_MSE_test_pretrain_student
|
jordyvl
| 2023-07-10T14:09:40Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-10T14:07:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: vit-_tobacco3482_kd_MSE_test_pretrain_student
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-_tobacco3482_kd_MSE_test_pretrain_student
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 25 | 0.8077 | 0.4 | 0.7439 | 5.4442 | 0.4000 | 0.2755 | 0.2844 | 0.3738 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.12.0
- Tokenizers 0.12.1
|