modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-02 12:32:32) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 534 distinct values) | tags (list, length 1–4.05k) | pipeline_tag (string, 55 distinct values) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-02 12:31:20) | card (string, length 11–1.01M) |
---|---|---|---|---|---|---|---|---|---|
casque/meichidarkMix_meichidarkMIX38
|
casque
| 2023-07-17T04:39:47Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-17T03:58:55Z |
---
license: creativeml-openrail-m
---
|
DracoHugging/flan-T5-base-sum
|
DracoHugging
| 2023-07-17T04:23:51Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-05T13:58:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: flan-T5-base-sum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: test
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 47.6617
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-T5-base-sum
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3721
- Rouge1: 47.6617
- Rouge2: 23.7647
- Rougel: 40.1155
- Rougelsum: 43.6943
- Gen Len: 17.2759
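As a quick check of the checkpoint, here is a minimal inference sketch assuming the standard `transformers` summarization pipeline; the example dialogue is made up for illustration.
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub
summarizer = pipeline("summarization", model="DracoHugging/flan-T5-base-sum")

# Illustrative SAMSum-style dialogue
dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you there!"
)
print(summarizer(dialogue, max_length=60)[0]["summary_text"])
```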
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.4403 | 1.0 | 1842 | 1.3822 | 47.2814 | 23.7835 | 39.7427 | 43.4897 | 17.0256 |
| 1.3572 | 2.0 | 3684 | 1.3747 | 47.553 | 23.5714 | 39.8212 | 43.6246 | 17.4420 |
| 1.2822 | 3.0 | 5526 | 1.3721 | 47.6617 | 23.7647 | 40.1155 | 43.6943 | 17.2759 |
| 1.2375 | 4.0 | 7368 | 1.3764 | 47.7453 | 24.1099 | 40.1684 | 43.8659 | 17.2943 |
| 1.1935 | 5.0 | 9210 | 1.3780 | 47.614 | 23.6643 | 39.8434 | 43.6558 | 17.3077 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
elvis-d/test_trainer
|
elvis-d
| 2023-07-17T04:12:07Z | 128 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-15T02:07:27Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: test_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.1491
- eval_runtime: 58.6469
- eval_samples_per_second: 34.102
- eval_steps_per_second: 4.263
- epoch: 5.0
- step: 5000
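The card does not say what the classes are, so the following is only a minimal inference sketch assuming the standard `transformers` text-classification pipeline; the input sentence and the meaning of the returned labels are assumptions.
```python
from transformers import pipeline

# Minimal inference sketch; label semantics depend on how the classifier head was trained
classifier = pipeline("text-classification", model="elvis-d/test_trainer")
print(classifier("This movie was surprisingly good!"))
```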
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
magicsword/wy-mt-en-zh
|
magicsword
| 2023-07-17T04:04:52Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"marian",
"text2text-generation",
"autotrain",
"translation",
"unk",
"dataset:magicsword/autotrain-data-wy-mt-en-zh",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-16T15:16:02Z |
---
tags:
- autotrain
- translation
language:
- unk
- unk
datasets:
- magicsword/autotrain-data-wy-mt-en-zh
co2_eq_emissions:
emissions: 93.22001955321743
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 74981139788
- CO2 Emissions (in grams): 93.2200
## Validation Metrics
- Loss: 2.249
- SacreBLEU: 12.950
- Gen len: 16.555
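A minimal inference sketch, assuming the standard `transformers` translation pipeline; the English-to-Chinese direction is inferred from the repository name rather than stated on the card.
```python
from transformers import pipeline

# Direction (en -> zh) is inferred from the repo name, not declared in the card metadata
translator = pipeline("translation", model="magicsword/wy-mt-en-zh")
print(translator("The weather is nice today.")[0]["translation_text"])
```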
|
AnySue/Learning
|
AnySue
| 2023-07-17T03:50:48Z | 0 | 0 | null |
[
"dataset:fka/awesome-chatgpt-prompts",
"doi:10.57967/hf/0900",
"license:openrail",
"region:us"
] | null | 2022-11-06T15:36:44Z |
---
license: openrail
datasets:
- fka/awesome-chatgpt-prompts
---
|
casque/queratograySketch_v10
|
casque
| 2023-07-17T03:42:57Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-17T03:27:07Z |
---
license: creativeml-openrail-m
---
|
uzenhuang/distilgpt2-finetuned-wikitext2-test
|
uzenhuang
| 2023-07-17T03:22:43Z | 213 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-17T03:03:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2-test
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8267
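A minimal generation sketch, assuming the standard `transformers` text-generation pipeline; the prompt and generation settings are illustrative only.
```python
from transformers import pipeline

# Prompt and max_new_tokens are illustrative
generator = pipeline("text-generation", model="uzenhuang/distilgpt2-finetuned-wikitext2-test")
print(generator("The history of natural language processing", max_new_tokens=40)[0]["generated_text"])
```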
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 277 | 3.8379 |
| 3.8669 | 2.0 | 554 | 3.8250 |
| 3.8669 | 3.0 | 831 | 3.8267 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
gyuri2020/kw-classification-setfit-model
|
gyuri2020
| 2023-07-17T03:17:50Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-07-14T14:50:06Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# gyuri2020/kw-classification-setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("gyuri2020/kw-classification-setfit-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
dariowsz/whisper-tiny-finetuned-minds-14
|
dariowsz
| 2023-07-17T02:53:30Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:PolyAI/minds14",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-11T13:13:49Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-finetuned-minds-14
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: MInDS 14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.35465116279070
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-finetuned-minds-14
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the MInDS 14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7154
- Wer Ortho: 0.3540
- Wer: 0.3547
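A minimal transcription sketch, assuming the standard `transformers` automatic-speech-recognition pipeline; the audio file path is a placeholder.
```python
from transformers import pipeline

# Transcribe a local recording; decoding from a file path requires ffmpeg
asr = pipeline("automatic-speech-recognition", model="dariowsz/whisper-tiny-finetuned-minds-14")
print(asr("sample.wav")["text"])
```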
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.0007 | 17.86 | 500 | 0.7154 | 0.3540 | 0.3547 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
DAMO-NLP-MT/polylm-13b-fine-grained-shards
|
DAMO-NLP-MT
| 2023-07-17T02:36:30Z | 11 | 2 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"zh",
"en",
"es",
"fr",
"pt",
"ru",
"de",
"it",
"ar",
"ja",
"ko",
"th",
"vi",
"id",
"nl",
"pl",
"tr",
"he",
"arxiv:2307.06018",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-17T02:03:12Z |
---
language:
- zh
- en
- es
- fr
- pt
- ru
- de
- it
- ar
- ja
- ko
- th
- vi
- id
- nl
- pl
- tr
- he
tags:
- text-generation
license: apache-2.0
---
# Model Details
## Abstract
> Large language models (LLMs) demonstrate remarkable ability to comprehend, reason, and generate following natural language instructions. However, the development of LLMs has been primarily focused on high-resource languages, such as English, thereby limiting their applicability and research in other languages. Consequently, we present PolyLM, a multilingual LLM trained on 640 billion (B) tokens, available in two model sizes: 1.7B and 13B. To enhance its multilingual capabilities, we 1) integrate bilingual data into training data; and 2) adopt a curriculum learning strategy that increases the proportion of non-English data from 30% in the first stage to 60% in the final stage during pre-training. Further, we propose a multilingual self-instruct method which automatically generates 132.7K diverse multilingual instructions for model fine-tuning. To assess the model's performance, we collect several existing multilingual tasks, including multilingual understanding, question answering, generation, and translation. Extensive experiments show that PolyLM surpasses other open-source models such as LLaMA and BLOOM on multilingual tasks while maintaining comparable performance in English.
## Model Description
> The only difference between this repository and [polylm-13B](https://huggingface.co/DAMO-NLP-MT/polylm-13b) is that the model weights here are split into finer-grained shards.
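A minimal loading and generation sketch, assuming the usual `transformers` causal-LM API; tokenizer options, device placement, and generation settings may need adjusting as described on the main [polylm-13b](https://huggingface.co/DAMO-NLP-MT/polylm-13b) card.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DAMO-NLP-MT/polylm-13b-fine-grained-shards"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # ~13B parameters; expect a large download

inputs = tokenizer("Beijing is the capital of", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```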
# Citation
**BibTeX:**
```bibtex
@misc{wei2023polylm,
title={PolyLM: An Open Source Polyglot Large Language Model},
author={Xiangpeng Wei and Haoran Wei and Huan Lin and Tianhao Li and Pei Zhang and Xingzhang Ren and Mei Li and Yu Wan and Zhiwei Cao and Binbin Xie and Tianxiang Hu and Shangjie Li and Binyuan Hui and Bowen Yu and Dayiheng Liu and Baosong Yang and Fei Huang and Jun Xie},
year={2023},
eprint={2307.06018},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
lucostiguy11/dreambooth_if_1
|
lucostiguy11
| 2023-07-17T02:26:09Z | 3 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"if",
"if-diffusers",
"text-to-image",
"dreambooth",
"base_model:DeepFloyd/IF-I-XL-v1.0",
"base_model:finetune:DeepFloyd/IF-I-XL-v1.0",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:IFPipeline",
"region:us"
] |
text-to-image
| 2023-07-17T01:37:40Z |
---
license: creativeml-openrail-m
base_model: DeepFloyd/IF-I-XL-v1.0
instance_prompt: A photo of sks dog in a bucket
tags:
- if
- if-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - lucostiguy11/dreambooth_if_1
This is a DreamBooth model derived from DeepFloyd/IF-I-XL-v1.0. The weights were trained on the prompt "A photo of sks dog in a bucket" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.




DreamBooth for the text encoder was enabled: False.
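A minimal sketch for sampling from the stage-1 pipeline with `diffusers`, assuming GPU inference in fp16; DeepFloyd IF normally chains super-resolution stages, which are omitted here, so the output is low resolution.
```python
import torch
from diffusers import DiffusionPipeline

# Stage-1 IF pipeline only; the super-resolution stages of DeepFloyd IF are omitted
pipe = DiffusionPipeline.from_pretrained("lucostiguy11/dreambooth_if_1", torch_dtype=torch.float16)
pipe.to("cuda")

image = pipe("A photo of sks dog in a bucket", num_inference_steps=50).images[0]
image.save("sks_dog_in_bucket.png")
```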
|
samiul25/ppo-LunarLander-v2
|
samiul25
| 2023-07-17T02:25:41Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-17T02:25:07Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 248.09 +/- 22.88
name: mean_reward
verified: false
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed; adjust it to the actual .zip stored in the repo
checkpoint = load_from_hub(repo_id="samiul25/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
hansanguw/HSCho_test
|
hansanguw
| 2023-07-17T01:26:47Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T01:26:41Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
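For reference, a minimal sketch of how the settings above map onto `transformers`' `BitsAndBytesConfig` when loading a base model for this adapter; the base model name is a placeholder, since the card does not state it.
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the 8-bit settings listed above; unlisted fields keep their defaults
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)

# "base-model-name" is a placeholder; the card does not name the adapter's base model
base_model = AutoModelForCausalLM.from_pretrained("base-model-name", quantization_config=bnb_config)
```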
### Framework versions
- PEFT 0.4.0.dev0
|
RajanGo/TEST-2
|
RajanGo
| 2023-07-17T01:13:17Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T01:13:11Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e6_s6789_v3
|
KingKazma
| 2023-07-17T01:05:09Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T01:05:08Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e4_s6789_v3
|
KingKazma
| 2023-07-17T00:51:11Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:51:10Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
peterdamn/distilhubert-finetuned-gtzan
|
peterdamn
| 2023-07-17T00:37:21Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-15T15:29:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2454
- Accuracy: 0.82
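A minimal inference sketch, assuming the standard `transformers` audio-classification pipeline; the audio file path is a placeholder.
```python
from transformers import pipeline

# Classify the genre of a local clip; decoding from a file path requires ffmpeg
classifier = pipeline("audio-classification", model="peterdamn/distilhubert-finetuned-gtzan")
print(classifier("music_clip.wav"))
```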
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2107 | 1.0 | 112 | 2.2411 | 0.31 |
| 2.0193 | 2.0 | 225 | 1.9900 | 0.53 |
| 1.7491 | 3.0 | 337 | 1.6436 | 0.59 |
| 1.5096 | 4.0 | 450 | 1.3625 | 0.63 |
| 0.9801 | 5.0 | 562 | 1.0769 | 0.75 |
| 0.8603 | 6.0 | 675 | 0.9399 | 0.78 |
| 0.5573 | 7.0 | 787 | 0.8290 | 0.77 |
| 0.5776 | 8.0 | 900 | 0.6834 | 0.82 |
| 0.4687 | 9.0 | 1012 | 0.6522 | 0.82 |
| 0.3513 | 10.0 | 1125 | 0.6564 | 0.82 |
| 0.1691 | 11.0 | 1237 | 0.6628 | 0.84 |
| 0.0384 | 12.0 | 1350 | 0.8602 | 0.81 |
| 0.0218 | 13.0 | 1462 | 0.8367 | 0.85 |
| 0.0057 | 14.0 | 1575 | 0.9951 | 0.83 |
| 0.0041 | 15.0 | 1687 | 1.0021 | 0.84 |
| 0.0027 | 16.0 | 1800 | 1.0215 | 0.82 |
| 0.0021 | 17.0 | 1912 | 0.9737 | 0.83 |
| 0.0017 | 18.0 | 2025 | 1.0321 | 0.85 |
| 0.0015 | 19.0 | 2137 | 0.9519 | 0.81 |
| 0.0013 | 20.0 | 2250 | 0.9298 | 0.82 |
| 0.0011 | 21.0 | 2362 | 0.9627 | 0.83 |
| 0.001 | 22.0 | 2475 | 1.1373 | 0.82 |
| 0.0009 | 23.0 | 2587 | 1.0855 | 0.83 |
| 0.0008 | 24.0 | 2700 | 0.9979 | 0.81 |
| 0.0008 | 25.0 | 2812 | 1.0956 | 0.82 |
| 0.0009 | 26.0 | 2925 | 0.9861 | 0.82 |
| 0.0007 | 27.0 | 3037 | 1.1387 | 0.83 |
| 0.0006 | 28.0 | 3150 | 1.1965 | 0.83 |
| 0.0006 | 29.0 | 3262 | 1.1527 | 0.81 |
| 0.0007 | 30.0 | 3375 | 1.0609 | 0.82 |
| 0.0006 | 31.0 | 3487 | 1.1770 | 0.81 |
| 0.0801 | 32.0 | 3600 | 1.2290 | 0.82 |
| 0.0005 | 33.0 | 3712 | 1.1785 | 0.83 |
| 0.0005 | 34.0 | 3825 | 1.2154 | 0.83 |
| 0.0004 | 35.0 | 3937 | 1.2250 | 0.83 |
| 0.0004 | 36.0 | 4050 | 1.2280 | 0.82 |
| 0.0004 | 37.0 | 4162 | 1.2364 | 0.83 |
| 0.0004 | 38.0 | 4275 | 1.2379 | 0.82 |
| 0.0004 | 39.0 | 4387 | 1.2483 | 0.83 |
| 0.0004 | 39.82 | 4480 | 1.2454 | 0.82 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e2_s6789_v3
|
KingKazma
| 2023-07-17T00:37:12Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:37:12Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e1_s6789_v3
|
KingKazma
| 2023-07-17T00:30:14Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:30:13Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e9_s6789_v3
|
KingKazma
| 2023-07-17T00:24:11Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:24:10Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e8_s6789_v3
|
KingKazma
| 2023-07-17T00:16:36Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:16:35Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
dsmonk/xgen-7b-tuned-alpaca
|
dsmonk
| 2023-07-17T00:04:40Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"base_model:Salesforce/xgen-7b-8k-base",
"base_model:finetune:Salesforce/xgen-7b-8k-base",
"license:apache-2.0",
"region:us"
] | null | 2023-07-16T21:52:46Z |
---
license: apache-2.0
base_model: Salesforce/xgen-7b-8k-base
tags:
- generated_from_trainer
model-index:
- name: xgen-7b-tuned-alpaca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xgen-7b-tuned-alpaca
This model is a fine-tuned version of [Salesforce/xgen-7b-8k-base](https://huggingface.co/Salesforce/xgen-7b-8k-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ByteExplorer/Reinforce-CartPole-8
|
ByteExplorer
| 2023-07-17T00:04:03Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-17T00:03:54Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-8
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e8_s55555_v3
|
KingKazma
| 2023-07-17T00:02:03Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:02:02Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e6_s6789_v3
|
KingKazma
| 2023-07-17T00:01:27Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:01:26Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e5_s6789_v3
|
KingKazma
| 2023-07-16T23:53:53Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T23:53:52Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e4_s6789_v3
|
KingKazma
| 2023-07-16T23:46:20Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T23:46:18Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e5_s55555_v3
|
KingKazma
| 2023-07-16T23:41:02Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T23:41:01Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
yzzhong/dqn-SpaceInvadersNoFrameskip
|
yzzhong
| 2023-07-16T23:27:41Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-16T23:27:01Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 699.50 +/- 220.35
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga yzzhong -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga yzzhong -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga yzzhong
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e3_s55555_v3
|
KingKazma
| 2023-07-16T23:27:02Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T23:27:01Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e1_s6789_v3
|
KingKazma
| 2023-07-16T23:23:37Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T23:23:36Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
boostcamp-5th-nlp07/kullm-polyglot-5.8b-finetuning_0717
|
boostcamp-5th-nlp07
| 2023-07-16T23:19:30Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T23:19:26Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
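For reference, a minimal loading sketch under the 4-bit NF4 settings above, using `BitsAndBytesConfig` and `PeftModel`; the base model name is a placeholder, since the card does not state it.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the 4-bit NF4 settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# "base-model-name" is a placeholder; the card does not name the adapter's base model
base = AutoModelForCausalLM.from_pretrained("base-model-name", quantization_config=bnb_config)
model = PeftModel.from_pretrained(base, "boostcamp-5th-nlp07/kullm-polyglot-5.8b-finetuning_0717")
```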
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e1_s55555_v3
|
KingKazma
| 2023-07-16T23:13:02Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T23:13:01Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
ailabturkiye/wtcn
|
ailabturkiye
| 2023-07-16T23:06:15Z | 0 | 0 | null |
[
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-16T23:04:16Z |
---
license: openrail
language:
- tr
tags:
- music
---
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e0_s55555_v3
|
KingKazma
| 2023-07-16T23:06:01Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T23:05:59Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e-1_s55555_v3
|
KingKazma
| 2023-07-16T22:58:57Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T22:58:56Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e8_s108_v3
|
KingKazma
| 2023-07-16T22:42:00Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T22:41:59Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e7_s108_v3
|
KingKazma
| 2023-07-16T22:35:00Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T22:34:59Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
Chickenfish/Jennie
|
Chickenfish
| 2023-07-16T22:30:43Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-15T01:54:48Z |
---
license: creativeml-openrail-m
---
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e6_s108_v3
|
KingKazma
| 2023-07-16T22:28:01Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T22:28:00Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e4_s108_v3
|
KingKazma
| 2023-07-16T22:13:58Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T22:13:57Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
SushantGautam/videomae-small-finetuned-kinetics-finetuned-SoccerNetChunks-NoInference
|
SushantGautam
| 2023-07-16T22:11:23Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"videomae",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2023-07-15T14:30:20Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
- matthews_correlation
model-index:
- name: videomae-small-finetuned-kinetics-finetuned-SoccerNetChunks-NoInference
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-small-finetuned-kinetics-finetuned-SoccerNetChunks-NoInference
This model is a fine-tuned version of [MCG-NJU/videomae-small-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-small-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9787
- Accuracy: 0.6333
- Balanced Accuracy: 0.6333
- Matthews Correlation: 0.5649
- Confusion Matrix: [[1007 111 66 107 22 59]
[ 222 935 74 50 19 71]
[ 114 27 969 172 77 11]
[ 240 50 259 686 103 32]
[ 154 59 299 489 343 27]
[ 72 20 6 2 2 1268]]
- 0 Ball out of play: {'precision': 0.556661138750691, 'recall': 0.7339650145772595, 'f1-score': 0.6331342345174474, 'support': 1372.0}
- Precision 0: 0.5567
- Recall 0: 0.7340
- F1-score 0: 0.6331
- Support 0: 1372.0
- 1 Foul: {'precision': 0.7778702163061564, 'recall': 0.6819839533187454, 'f1-score': 0.7267780800621843, 'support': 1371.0}
- Precision 1: 0.7779
- Recall 1: 0.6820
- F1-score 1: 0.7268
- Support 1: 1371.0
- 2 Goal: {'precision': 0.5791990436341901, 'recall': 0.7072992700729926, 'f1-score': 0.6368715083798882, 'support': 1370.0}
- Precision 2: 0.5792
- Recall 2: 0.7073
- F1-score 2: 0.6369
- Support 2: 1370.0
- 3 Shots off target: {'precision': 0.4555112881806109, 'recall': 0.5007299270072992, 'f1-score': 0.4770514603616134, 'support': 1370.0}
- Precision 3: 0.4555
- Recall 3: 0.5007
- F1-score 3: 0.4771
- Support 3: 1370.0
- 4 Shots on target: {'precision': 0.6060070671378092, 'recall': 0.25018234865062, 'f1-score': 0.3541559112028911, 'support': 1371.0}
- Precision 4: 0.6060
- Recall 4: 0.2502
- F1-score 4: 0.3542
- Support 4: 1371.0
- 5 Throw-in: {'precision': 0.8637602179836512, 'recall': 0.9255474452554745, 'f1-score': 0.8935870331219168, 'support': 1370.0}
- Precision 5: 0.8638
- Recall 5: 0.9255
- F1-score 5: 0.8936
- Support 5: 1370.0
- Precision Macro avg: 0.6398
- Recall Macro avg: 0.6333
- F1-score Macro avg: 0.6203
- Support Macro avg: 8224.0
- Precision Weighted avg: 0.6398
- Recall Weighted avg: 0.6333
- F1-score Weighted avg: 0.6202
- Support Weighted avg: 8224.0
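A minimal inference sketch, assuming the `transformers` video-classification pipeline (which needs a video decoding backend such as `decord`); the clip path is a placeholder.
```python
from transformers import pipeline

# Classify a short soccer clip into one of the event classes listed above
clf = pipeline(
    "video-classification",
    model="SushantGautam/videomae-small-finetuned-kinetics-finetuned-SoccerNetChunks-NoInference",
)
print(clf("soccer_chunk.mp4"))
```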
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 20620
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Balanced Accuracy | Matthews Correlation | Confusion Matrix | 0 Ball out of play | Precision 0 | Recall 0 | F1-score 0 | Support 0 | 1 Foul | Precision 1 | Recall 1 | F1-score 1 | Support 1 | 2 Goal | Precision 2 | Recall 2 | F1-score 2 | Support 2 | 3 Shots off target | Precision 3 | Recall 3 | F1-score 3 | Support 3 | 4 Shots on target | Precision 4 | Recall 4 | F1-score 4 | Support 4 | 5 Throw-in | Precision 5 | Recall 5 | F1-score 5 | Support 5 | Precision Macro avg | Recall Macro avg | F1-score Macro avg | Support Macro avg | Precision Weighted avg | Recall Weighted avg | F1-score Weighted avg | Support Weighted avg |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-----------------:|:--------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:-----------:|:--------:|:----------:|:---------:|:-------------------------------------------------------------------------------------------------------------------:|:-----------:|:--------:|:----------:|:---------:|:-------------------------------------------------------------------------------------------------------------------:|:-----------:|:--------:|:----------:|:---------:|:--------------------------------------------------------------------------------------------------------------------:|:-----------:|:--------:|:----------:|:---------:|:---------------------------------------------------------------------------------------------------------------------:|:-----------:|:--------:|:----------:|:---------:|:------------------------------------------------------------------------------------------------------------------:|:-----------:|:--------:|:----------:|:---------:|:-------------------:|:----------------:|:------------------:|:-----------------:|:----------------------:|:-------------------:|:---------------------:|:--------------------:|
| 1.5371 | 0.05 | 1031 | 1.2696 | 0.4884 | 0.4885 | 0.3949 | [[ 214 227 131 266 173 361]
[ 24 763 108 72 97 307]
[ 20 29 893 202 140 86]
[ 34 32 436 460 320 88]
[ 18 21 459 363 403 107]
[ 3 22 24 14 23 1284]] | {'precision': 0.6837060702875399, 'recall': 0.15597667638483964, 'f1-score': 0.2540059347181009, 'support': 1372.0} | 0.6837 | 0.1560 | 0.2540 | 1372.0 | {'precision': 0.6974405850091407, 'recall': 0.5565280816921955, 'f1-score': 0.6190669371196754, 'support': 1371.0} | 0.6974 | 0.5565 | 0.6191 | 1371.0 | {'precision': 0.4353973671379815, 'recall': 0.6518248175182482, 'f1-score': 0.5220695703010816, 'support': 1370.0} | 0.4354 | 0.6518 | 0.5221 | 1370.0 | {'precision': 0.33405954974582425, 'recall': 0.3357664233576642, 'f1-score': 0.3349108117946851, 'support': 1370.0} | 0.3341 | 0.3358 | 0.3349 | 1370.0 | {'precision': 0.3486159169550173, 'recall': 0.2939460247994165, 'f1-score': 0.3189552829442026, 'support': 1371.0} | 0.3486 | 0.2939 | 0.3190 | 1371.0 | {'precision': 0.5750111957008509, 'recall': 0.9372262773722628, 'f1-score': 0.7127393838467944, 'support': 1370.0} | 0.5750 | 0.9372 | 0.7127 | 1370.0 | 0.5124 | 0.4885 | 0.4603 | 8224.0 | 0.5124 | 0.4884 | 0.4602 | 8224.0 |
| 0.946 | 0.1 | 2062 | 1.1950 | 0.4993 | 0.4993 | 0.4176 | [[1020 44 64 224 10 10]
[ 510 602 79 135 24 21]
[ 117 25 758 434 30 6]
[ 206 32 217 883 25 7]
[ 156 21 238 889 61 6]
[ 394 48 39 102 5 782]] | {'precision': 0.42446941323345816, 'recall': 0.7434402332361516, 'f1-score': 0.5403973509933775, 'support': 1372.0} | 0.4245 | 0.7434 | 0.5404 | 1372.0 | {'precision': 0.7797927461139896, 'recall': 0.4390955506929249, 'f1-score': 0.5618292113859076, 'support': 1371.0} | 0.7798 | 0.4391 | 0.5618 | 1371.0 | {'precision': 0.5433691756272402, 'recall': 0.5532846715328467, 'f1-score': 0.5482820976491862, 'support': 1370.0} | 0.5434 | 0.5533 | 0.5483 | 1370.0 | {'precision': 0.33108361454818147, 'recall': 0.6445255474452555, 'f1-score': 0.43745355461976715, 'support': 1370.0} | 0.3311 | 0.6445 | 0.4375 | 1370.0 | {'precision': 0.3935483870967742, 'recall': 0.04449307075127644, 'f1-score': 0.0799475753604194, 'support': 1371.0} | 0.3935 | 0.0445 | 0.0799 | 1371.0 | {'precision': 0.9399038461538461, 'recall': 0.5708029197080292, 'f1-score': 0.7102633969118983, 'support': 1370.0} | 0.9399 | 0.5708 | 0.7103 | 1370.0 | 0.5687 | 0.4993 | 0.4797 | 8224.0 | 0.5687 | 0.4993 | 0.4797 | 8224.0 |
| 1.6051 | 0.15 | 3093 | 1.1348 | 0.5418 | 0.5419 | 0.4626 | [[ 849 48 194 135 31 115]
[ 408 534 225 27 63 114]
[ 71 28 1101 103 49 18]
[ 165 21 516 509 127 32]
[ 116 15 563 379 262 36]
[ 87 9 44 13 16 1201]] | {'precision': 0.5005896226415094, 'recall': 0.6188046647230321, 'f1-score': 0.5534550195567145, 'support': 1372.0} | 0.5006 | 0.6188 | 0.5535 | 1372.0 | {'precision': 0.815267175572519, 'recall': 0.38949671772428884, 'f1-score': 0.5271470878578479, 'support': 1371.0} | 0.8153 | 0.3895 | 0.5271 | 1371.0 | {'precision': 0.41657207718501704, 'recall': 0.8036496350364963, 'f1-score': 0.5487166708198357, 'support': 1370.0} | 0.4166 | 0.8036 | 0.5487 | 1370.0 | {'precision': 0.4365351629502573, 'recall': 0.3715328467153285, 'f1-score': 0.40141955835962145, 'support': 1370.0} | 0.4365 | 0.3715 | 0.4014 | 1370.0 | {'precision': 0.4781021897810219, 'recall': 0.1911013858497447, 'f1-score': 0.273058884835852, 'support': 1371.0} | 0.4781 | 0.1911 | 0.2731 | 1371.0 | {'precision': 0.7922163588390502, 'recall': 0.8766423357664234, 'f1-score': 0.8322938322938324, 'support': 1370.0} | 0.7922 | 0.8766 | 0.8323 | 1370.0 | 0.5732 | 0.5419 | 0.5227 | 8224.0 | 0.5732 | 0.5418 | 0.5227 | 8224.0 |
| 1.2631 | 1.0 | 4124 | 0.9987 | 0.6069 | 0.6069 | 0.5309 | [[ 692 217 105 187 53 118]
[ 127 995 63 42 38 106]
[ 40 52 996 142 127 13]
[ 80 84 360 541 273 32]
[ 41 71 368 321 546 24]
[ 58 38 30 8 15 1221]] | {'precision': 0.6666666666666666, 'recall': 0.5043731778425656, 'f1-score': 0.5742738589211619, 'support': 1372.0} | 0.6667 | 0.5044 | 0.5743 | 1372.0 | {'precision': 0.6829100892244337, 'recall': 0.7257476294675419, 'f1-score': 0.7036775106082037, 'support': 1371.0} | 0.6829 | 0.7257 | 0.7037 | 1371.0 | {'precision': 0.518210197710718, 'recall': 0.727007299270073, 'f1-score': 0.6051032806804374, 'support': 1370.0} | 0.5182 | 0.7270 | 0.6051 | 1370.0 | {'precision': 0.43593875906526997, 'recall': 0.3948905109489051, 'f1-score': 0.4144006127920337, 'support': 1370.0} | 0.4359 | 0.3949 | 0.4144 | 1370.0 | {'precision': 0.5190114068441065, 'recall': 0.3982494529540481, 'f1-score': 0.4506809739991746, 'support': 1371.0} | 0.5190 | 0.3982 | 0.4507 | 1371.0 | {'precision': 0.8064729194187582, 'recall': 0.8912408759124087, 'f1-score': 0.8467406380027739, 'support': 1370.0} | 0.8065 | 0.8912 | 0.8467 | 1370.0 | 0.6049 | 0.6069 | 0.5991 | 8224.0 | 0.6049 | 0.6069 | 0.5991 | 8224.0 |
| 1.2292 | 1.05 | 5155 | 1.1215 | 0.5412 | 0.5412 | 0.4641 | [[1041 41 100 167 7 16]
[ 456 628 83 139 34 31]
[ 112 13 898 322 20 5]
[ 276 19 261 768 33 13]
[ 213 27 340 691 87 13]
[ 249 16 56 17 3 1029]] | {'precision': 0.4435449510012782, 'recall': 0.7587463556851312, 'f1-score': 0.5598279107286904, 'support': 1372.0} | 0.4435 | 0.7587 | 0.5598 | 1372.0 | {'precision': 0.8440860215053764, 'recall': 0.45805981035740334, 'f1-score': 0.5938534278959811, 'support': 1371.0} | 0.8441 | 0.4581 | 0.5939 | 1371.0 | {'precision': 0.5166858457997698, 'recall': 0.6554744525547446, 'f1-score': 0.5778635778635779, 'support': 1370.0} | 0.5167 | 0.6555 | 0.5779 | 1370.0 | {'precision': 0.3650190114068441, 'recall': 0.5605839416058395, 'f1-score': 0.4421416234887737, 'support': 1370.0} | 0.3650 | 0.5606 | 0.4421 | 1370.0 | {'precision': 0.47282608695652173, 'recall': 0.06345733041575492, 'f1-score': 0.11189710610932474, 'support': 1371.0} | 0.4728 | 0.0635 | 0.1119 | 1371.0 | {'precision': 0.9295392953929539, 'recall': 0.7510948905109489, 'f1-score': 0.8308437626160677, 'support': 1370.0} | 0.9295 | 0.7511 | 0.8308 | 1370.0 | 0.5953 | 0.5412 | 0.5194 | 8224.0 | 0.5953 | 0.5412 | 0.5194 | 8224.0 |
| 0.733 | 1.1 | 6186 | 1.0294 | 0.5803 | 0.5803 | 0.5073 | [[ 861 72 61 229 20 129]
[ 225 782 71 135 33 125]
[ 93 21 806 389 43 18]
[ 141 26 224 873 71 35]
[ 90 24 275 780 174 28]
[ 47 17 11 15 4 1276]] | {'precision': 0.5909402882635553, 'recall': 0.6275510204081632, 'f1-score': 0.608695652173913, 'support': 1372.0} | 0.5909 | 0.6276 | 0.6087 | 1372.0 | {'precision': 0.8301486199575372, 'recall': 0.5703865791393143, 'f1-score': 0.6761781236489407, 'support': 1371.0} | 0.8301 | 0.5704 | 0.6762 | 1371.0 | {'precision': 0.5566298342541437, 'recall': 0.5883211678832116, 'f1-score': 0.5720369056068133, 'support': 1370.0} | 0.5566 | 0.5883 | 0.5720 | 1370.0 | {'precision': 0.36059479553903345, 'recall': 0.6372262773722628, 'f1-score': 0.4605644948562385, 'support': 1370.0} | 0.3606 | 0.6372 | 0.4606 | 1370.0 | {'precision': 0.5043478260869565, 'recall': 0.12691466083150985, 'f1-score': 0.2027972027972028, 'support': 1371.0} | 0.5043 | 0.1269 | 0.2028 | 1371.0 | {'precision': 0.7920546244568591, 'recall': 0.9313868613138686, 'f1-score': 0.8560885608856088, 'support': 1370.0} | 0.7921 | 0.9314 | 0.8561 | 1370.0 | 0.6058 | 0.5803 | 0.5627 | 8224.0 | 0.6058 | 0.5803 | 0.5627 | 8224.0 |
| 1.0566 | 1.15 | 7217 | 1.0046 | 0.6037 | 0.6037 | 0.5314 | [[ 941 83 42 200 15 91]
[ 273 859 43 67 12 117]
[ 106 41 763 348 92 20]
[ 156 61 180 826 93 54]
[ 93 68 192 657 305 56]
[ 64 20 6 5 4 1271]] | {'precision': 0.5762400489895897, 'recall': 0.6858600583090378, 'f1-score': 0.6262895174708818, 'support': 1372.0} | 0.5762 | 0.6859 | 0.6263 | 1372.0 | {'precision': 0.758833922261484, 'recall': 0.6265499635302699, 'f1-score': 0.6863763483819417, 'support': 1371.0} | 0.7588 | 0.6265 | 0.6864 | 1371.0 | {'precision': 0.6223491027732463, 'recall': 0.5569343065693431, 'f1-score': 0.5878274268104776, 'support': 1370.0} | 0.6223 | 0.5569 | 0.5878 | 1370.0 | {'precision': 0.3927722301474085, 'recall': 0.602919708029197, 'f1-score': 0.47566945004319033, 'support': 1370.0} | 0.3928 | 0.6029 | 0.4757 | 1370.0 | {'precision': 0.5854126679462572, 'recall': 0.2224653537563822, 'f1-score': 0.3224101479915433, 'support': 1371.0} | 0.5854 | 0.2225 | 0.3224 | 1371.0 | {'precision': 0.7899316345556247, 'recall': 0.9277372262773723, 'f1-score': 0.8533064786841222, 'support': 1370.0} | 0.7899 | 0.9277 | 0.8533 | 1370.0 | 0.6209 | 0.6037 | 0.5920 | 8224.0 | 0.6209 | 0.6037 | 0.5920 | 8224.0 |
| 1.2033 | 2.0 | 8248 | 1.1187 | 0.5755 | 0.5755 | 0.4993 | [[1013 54 78 81 24 122]
[ 365 704 80 46 59 117]
[ 160 27 982 126 56 19]
[ 299 39 335 516 115 66]
[ 257 43 368 366 270 67]
[ 67 15 31 4 5 1248]] | {'precision': 0.46876446089773255, 'recall': 0.7383381924198251, 'f1-score': 0.5734503255024059, 'support': 1372.0} | 0.4688 | 0.7383 | 0.5735 | 1372.0 | {'precision': 0.7981859410430839, 'recall': 0.513493800145879, 'f1-score': 0.6249445184198846, 'support': 1371.0} | 0.7982 | 0.5135 | 0.6249 | 1371.0 | {'precision': 0.5240128068303095, 'recall': 0.7167883211678832, 'f1-score': 0.6054254007398273, 'support': 1370.0} | 0.5240 | 0.7168 | 0.6054 | 1370.0 | {'precision': 0.45302897278314314, 'recall': 0.37664233576642336, 'f1-score': 0.4113192506974891, 'support': 1370.0} | 0.4530 | 0.3766 | 0.4113 | 1370.0 | {'precision': 0.5103969754253308, 'recall': 0.19693654266958424, 'f1-score': 0.28421052631578947, 'support': 1371.0} | 0.5104 | 0.1969 | 0.2842 | 1371.0 | {'precision': 0.7614399023794997, 'recall': 0.910948905109489, 'f1-score': 0.8295114656031903, 'support': 1370.0} | 0.7614 | 0.9109 | 0.8295 | 1370.0 | 0.5860 | 0.5755 | 0.5548 | 8224.0 | 0.5860 | 0.5755 | 0.5548 | 8224.0 |
| 0.9223 | 2.05 | 9279 | 1.0713 | 0.5793 | 0.5793 | 0.5049 | [[1039 51 64 88 20 110]
[ 357 747 78 42 18 129]
[ 173 25 919 194 47 12]
[ 343 32 273 582 104 36]
[ 307 29 301 473 203 58]
[ 67 10 14 4 1 1274]] | {'precision': 0.4545056867891514, 'recall': 0.7572886297376094, 'f1-score': 0.5680699835975944, 'support': 1372.0} | 0.4545 | 0.7573 | 0.5681 | 1372.0 | {'precision': 0.8355704697986577, 'recall': 0.5448577680525164, 'f1-score': 0.6596026490066225, 'support': 1371.0} | 0.8356 | 0.5449 | 0.6596 | 1371.0 | {'precision': 0.5573074590661007, 'recall': 0.6708029197080292, 'f1-score': 0.608810864524677, 'support': 1370.0} | 0.5573 | 0.6708 | 0.6088 | 1370.0 | {'precision': 0.420824295010846, 'recall': 0.4248175182481752, 'f1-score': 0.42281147838721395, 'support': 1370.0} | 0.4208 | 0.4248 | 0.4228 | 1370.0 | {'precision': 0.5165394402035624, 'recall': 0.14806710430342815, 'f1-score': 0.23015873015873015, 'support': 1371.0} | 0.5165 | 0.1481 | 0.2302 | 1371.0 | {'precision': 0.7869054972205065, 'recall': 0.92992700729927, 'f1-score': 0.8524590163934427, 'support': 1370.0} | 0.7869 | 0.9299 | 0.8525 | 1370.0 | 0.5953 | 0.5793 | 0.5570 | 8224.0 | 0.5953 | 0.5793 | 0.5570 | 8224.0 |
| 0.6639 | 2.1 | 10310 | 0.9879 | 0.6091 | 0.6091 | 0.5358 | [[ 988 65 71 104 26 118]
[ 262 816 85 62 40 106]
[ 127 18 870 231 105 19]
[ 236 27 243 692 135 37]
[ 169 24 252 534 355 37]
[ 54 13 10 4 1 1288]] | {'precision': 0.5381263616557734, 'recall': 0.7201166180758017, 'f1-score': 0.6159600997506235, 'support': 1372.0} | 0.5381 | 0.7201 | 0.6160 | 1372.0 | {'precision': 0.8473520249221184, 'recall': 0.5951859956236324, 'f1-score': 0.6992287917737788, 'support': 1371.0} | 0.8474 | 0.5952 | 0.6992 | 1371.0 | {'precision': 0.5682560418027433, 'recall': 0.635036496350365, 'f1-score': 0.5997931747673216, 'support': 1370.0} | 0.5683 | 0.6350 | 0.5998 | 1370.0 | {'precision': 0.4253226797787339, 'recall': 0.5051094890510949, 'f1-score': 0.46179512846179516, 'support': 1370.0} | 0.4253 | 0.5051 | 0.4618 | 1370.0 | {'precision': 0.5362537764350453, 'recall': 0.2589350838803793, 'f1-score': 0.3492375799311362, 'support': 1371.0} | 0.5363 | 0.2589 | 0.3492 | 1371.0 | {'precision': 0.8024922118380062, 'recall': 0.9401459854014599, 'f1-score': 0.8658823529411765, 'support': 1370.0} | 0.8025 | 0.9401 | 0.8659 | 1370.0 | 0.6196 | 0.6091 | 0.5986 | 8224.0 | 0.6196 | 0.6091 | 0.5986 | 8224.0 |
| 1.1311 | 2.15 | 11341 | 0.9851 | 0.6051 | 0.6051 | 0.5337 | [[ 995 77 93 145 20 42]
[ 241 847 120 67 36 60]
[ 95 15 999 192 59 10]
[ 176 27 345 717 89 16]
[ 120 23 358 612 242 16]
[ 115 30 36 11 2 1176]] | {'precision': 0.571182548794489, 'recall': 0.7252186588921283, 'f1-score': 0.6390494540783558, 'support': 1372.0} | 0.5712 | 0.7252 | 0.6390 | 1372.0 | {'precision': 0.831207065750736, 'recall': 0.6177972283005105, 'f1-score': 0.708786610878661, 'support': 1371.0} | 0.8312 | 0.6178 | 0.7088 | 1371.0 | {'precision': 0.5120451050743209, 'recall': 0.7291970802919708, 'f1-score': 0.6016260162601627, 'support': 1370.0} | 0.5120 | 0.7292 | 0.6016 | 1370.0 | {'precision': 0.4111238532110092, 'recall': 0.5233576642335767, 'f1-score': 0.46050096339113683, 'support': 1370.0} | 0.4111 | 0.5234 | 0.4605 | 1370.0 | {'precision': 0.5401785714285714, 'recall': 0.1765134938001459, 'f1-score': 0.26608026388125344, 'support': 1371.0} | 0.5402 | 0.1765 | 0.2661 | 1371.0 | {'precision': 0.8909090909090909, 'recall': 0.8583941605839416, 'f1-score': 0.8743494423791821, 'support': 1370.0} | 0.8909 | 0.8584 | 0.8743 | 1370.0 | 0.6261 | 0.6051 | 0.5917 | 8224.0 | 0.6261 | 0.6051 | 0.5917 | 8224.0 |
| 0.4786 | 3.0 | 12372 | 0.9868 | 0.6189 | 0.6189 | 0.5473 | [[ 960 111 60 139 25 77]
[ 239 916 71 49 12 84]
[ 141 34 962 151 69 13]
[ 211 51 315 629 138 26]
[ 145 57 340 446 357 26]
[ 59 23 12 7 3 1266]] | {'precision': 0.5470085470085471, 'recall': 0.6997084548104956, 'f1-score': 0.6140070354972819, 'support': 1372.0} | 0.5470 | 0.6997 | 0.6140 | 1372.0 | {'precision': 0.7684563758389261, 'recall': 0.6681254558716265, 'f1-score': 0.7147873585641824, 'support': 1371.0} | 0.7685 | 0.6681 | 0.7148 | 1371.0 | {'precision': 0.5465909090909091, 'recall': 0.7021897810218978, 'f1-score': 0.6146964856230032, 'support': 1370.0} | 0.5466 | 0.7022 | 0.6147 | 1370.0 | {'precision': 0.4426460239268121, 'recall': 0.4591240875912409, 'f1-score': 0.4507345037620925, 'support': 1370.0} | 0.4426 | 0.4591 | 0.4507 | 1370.0 | {'precision': 0.5910596026490066, 'recall': 0.2603938730853392, 'f1-score': 0.3615189873417722, 'support': 1371.0} | 0.5911 | 0.2604 | 0.3615 | 1371.0 | {'precision': 0.8485254691689008, 'recall': 0.9240875912408759, 'f1-score': 0.8846960167714885, 'support': 1370.0} | 0.8485 | 0.9241 | 0.8847 | 1370.0 | 0.6240 | 0.6189 | 0.6067 | 8224.0 | 0.6240 | 0.6189 | 0.6067 | 8224.0 |
| 0.6052 | 3.05 | 13403 | 0.9818 | 0.6126 | 0.6126 | 0.5421 | [[ 935 141 90 111 18 77]
[ 196 953 94 44 17 67]
[ 104 30 1044 123 56 13]
[ 236 37 367 612 89 29]
[ 155 43 417 474 259 23]
[ 68 30 31 4 2 1235]] | {'precision': 0.551948051948052, 'recall': 0.6814868804664723, 'f1-score': 0.609915198956295, 'support': 1372.0} | 0.5519 | 0.6815 | 0.6099 | 1372.0 | {'precision': 0.7722852512155591, 'recall': 0.6951130561633844, 'f1-score': 0.7316698656429943, 'support': 1371.0} | 0.7723 | 0.6951 | 0.7317 | 1371.0 | {'precision': 0.5110132158590308, 'recall': 0.762043795620438, 'f1-score': 0.6117784939935541, 'support': 1370.0} | 0.5110 | 0.7620 | 0.6118 | 1370.0 | {'precision': 0.4473684210526316, 'recall': 0.4467153284671533, 'f1-score': 0.44704163623082543, 'support': 1370.0} | 0.4474 | 0.4467 | 0.4470 | 1370.0 | {'precision': 0.5873015873015873, 'recall': 0.18891320204230488, 'f1-score': 0.28587196467991166, 'support': 1371.0} | 0.5873 | 0.1889 | 0.2859 | 1371.0 | {'precision': 0.8552631578947368, 'recall': 0.9014598540145985, 'f1-score': 0.8777540867093105, 'support': 1370.0} | 0.8553 | 0.9015 | 0.8778 | 1370.0 | 0.6209 | 0.6126 | 0.5940 | 8224.0 | 0.6209 | 0.6126 | 0.5940 | 8224.0 |
| 0.2743 | 3.1 | 14434 | 0.9548 | 0.6301 | 0.6301 | 0.5604 | [[1003 99 56 137 26 51]
[ 225 932 67 71 22 54]
[ 129 23 930 204 79 5]
[ 186 39 278 713 135 19]
[ 138 45 306 486 384 12]
[ 77 35 21 9 8 1220]] | {'precision': 0.5705346985210467, 'recall': 0.7310495626822158, 'f1-score': 0.6408945686900959, 'support': 1372.0} | 0.5705 | 0.7310 | 0.6409 | 1372.0 | {'precision': 0.7945439045183291, 'recall': 0.6797957695113056, 'f1-score': 0.7327044025157232, 'support': 1371.0} | 0.7945 | 0.6798 | 0.7327 | 1371.0 | {'precision': 0.5609167671893848, 'recall': 0.6788321167883211, 'f1-score': 0.6142668428005283, 'support': 1370.0} | 0.5609 | 0.6788 | 0.6143 | 1370.0 | {'precision': 0.44012345679012344, 'recall': 0.5204379562043796, 'f1-score': 0.4769230769230769, 'support': 1370.0} | 0.4401 | 0.5204 | 0.4769 | 1370.0 | {'precision': 0.5871559633027523, 'recall': 0.2800875273522976, 'f1-score': 0.3792592592592593, 'support': 1371.0} | 0.5872 | 0.2801 | 0.3793 | 1371.0 | {'precision': 0.896399706098457, 'recall': 0.8905109489051095, 'f1-score': 0.8934456243134383, 'support': 1370.0} | 0.8964 | 0.8905 | 0.8934 | 1370.0 | 0.6416 | 0.6301 | 0.6229 | 8224.0 | 0.6416 | 0.6301 | 0.6229 | 8224.0 |
| 0.9667 | 3.15 | 15465 | 0.9949 | 0.6158 | 0.6158 | 0.5479 | [[1078 50 70 95 20 59]
[ 351 792 80 56 17 75]
[ 107 24 1008 182 38 11]
[ 253 28 286 690 86 27]
[ 206 22 361 476 280 26]
[ 119 11 18 4 2 1216]] | {'precision': 0.5099337748344371, 'recall': 0.7857142857142857, 'f1-score': 0.6184738955823293, 'support': 1372.0} | 0.5099 | 0.7857 | 0.6185 | 1372.0 | {'precision': 0.8543689320388349, 'recall': 0.5776805251641138, 'f1-score': 0.6892950391644909, 'support': 1371.0} | 0.8544 | 0.5777 | 0.6893 | 1371.0 | {'precision': 0.5529347229840922, 'recall': 0.7357664233576642, 'f1-score': 0.6313811462574381, 'support': 1370.0} | 0.5529 | 0.7358 | 0.6314 | 1370.0 | {'precision': 0.4590818363273453, 'recall': 0.5036496350364964, 'f1-score': 0.4803341454925165, 'support': 1370.0} | 0.4591 | 0.5036 | 0.4803 | 1370.0 | {'precision': 0.6320541760722348, 'recall': 0.20423048869438365, 'f1-score': 0.308710033076075, 'support': 1371.0} | 0.6321 | 0.2042 | 0.3087 | 1371.0 | {'precision': 0.85997171145686, 'recall': 0.8875912408759125, 'f1-score': 0.8735632183908046, 'support': 1370.0} | 0.8600 | 0.8876 | 0.8736 | 1370.0 | 0.6447 | 0.6158 | 0.6003 | 8224.0 | 0.6447 | 0.6158 | 0.6003 | 8224.0 |
| 0.906 | 4.0 | 16496 | 0.9465 | 0.6312 | 0.6312 | 0.5612 | [[ 921 147 51 171 30 52]
[ 184 965 64 64 35 59]
[ 80 26 906 240 108 10]
[ 170 41 224 786 131 18]
[ 124 36 245 564 385 17]
[ 74 40 15 10 3 1228]] | {'precision': 0.5930457179652285, 'recall': 0.6712827988338192, 'f1-score': 0.6297435897435897, 'support': 1372.0} | 0.5930 | 0.6713 | 0.6297 | 1372.0 | {'precision': 0.7689243027888446, 'recall': 0.7038657913931436, 'f1-score': 0.734958111195735, 'support': 1371.0} | 0.7689 | 0.7039 | 0.7350 | 1371.0 | {'precision': 0.6019933554817276, 'recall': 0.6613138686131387, 'f1-score': 0.6302608695652173, 'support': 1370.0} | 0.6020 | 0.6613 | 0.6303 | 1370.0 | {'precision': 0.42833787465940054, 'recall': 0.5737226277372263, 'f1-score': 0.49048361934477386, 'support': 1370.0} | 0.4283 | 0.5737 | 0.4905 | 1370.0 | {'precision': 0.5563583815028902, 'recall': 0.28081692195477753, 'f1-score': 0.373242850218129, 'support': 1371.0} | 0.5564 | 0.2808 | 0.3732 | 1371.0 | {'precision': 0.8872832369942196, 'recall': 0.8963503649635036, 'f1-score': 0.8917937545388526, 'support': 1370.0} | 0.8873 | 0.8964 | 0.8918 | 1370.0 | 0.6393 | 0.6312 | 0.6251 | 8224.0 | 0.6393 | 0.6312 | 0.6251 | 8224.0 |
| 0.8828 | 4.05 | 17527 | 0.9787 | 0.6333 | 0.6333 | 0.5649 | [[1007 111 66 107 22 59]
[ 222 935 74 50 19 71]
[ 114 27 969 172 77 11]
[ 240 50 259 686 103 32]
[ 154 59 299 489 343 27]
[ 72 20 6 2 2 1268]] | {'precision': 0.556661138750691, 'recall': 0.7339650145772595, 'f1-score': 0.6331342345174474, 'support': 1372.0} | 0.5567 | 0.7340 | 0.6331 | 1372.0 | {'precision': 0.7778702163061564, 'recall': 0.6819839533187454, 'f1-score': 0.7267780800621843, 'support': 1371.0} | 0.7779 | 0.6820 | 0.7268 | 1371.0 | {'precision': 0.5791990436341901, 'recall': 0.7072992700729926, 'f1-score': 0.6368715083798882, 'support': 1370.0} | 0.5792 | 0.7073 | 0.6369 | 1370.0 | {'precision': 0.4555112881806109, 'recall': 0.5007299270072992, 'f1-score': 0.4770514603616134, 'support': 1370.0} | 0.4555 | 0.5007 | 0.4771 | 1370.0 | {'precision': 0.6060070671378092, 'recall': 0.25018234865062, 'f1-score': 0.3541559112028911, 'support': 1371.0} | 0.6060 | 0.2502 | 0.3542 | 1371.0 | {'precision': 0.8637602179836512, 'recall': 0.9255474452554745, 'f1-score': 0.8935870331219168, 'support': 1370.0} | 0.8638 | 0.9255 | 0.8936 | 1370.0 | 0.6398 | 0.6333 | 0.6203 | 8224.0 | 0.6398 | 0.6333 | 0.6202 | 8224.0 |
| 0.744 | 4.1 | 18558 | 1.0063 | 0.6246 | 0.6246 | 0.5570 | [[1072 72 55 92 17 64]
[ 283 876 67 54 17 74]
[ 166 20 921 195 57 11]
[ 314 32 223 672 94 35]
[ 227 37 268 485 320 34]
[ 72 12 6 1 3 1276]] | {'precision': 0.5023430178069354, 'recall': 0.7813411078717201, 'f1-score': 0.6115231032515687, 'support': 1372.0} | 0.5023 | 0.7813 | 0.6115 | 1372.0 | {'precision': 0.8350810295519543, 'recall': 0.6389496717724289, 'f1-score': 0.7239669421487603, 'support': 1371.0} | 0.8351 | 0.6389 | 0.7240 | 1371.0 | {'precision': 0.5980519480519481, 'recall': 0.6722627737226278, 'f1-score': 0.6329896907216496, 'support': 1370.0} | 0.5981 | 0.6723 | 0.6330 | 1370.0 | {'precision': 0.4482988659106071, 'recall': 0.4905109489051095, 'f1-score': 0.4684559079818752, 'support': 1370.0} | 0.4483 | 0.4905 | 0.4685 | 1370.0 | {'precision': 0.6299212598425197, 'recall': 0.23340627279358134, 'f1-score': 0.3406067056945184, 'support': 1371.0} | 0.6299 | 0.2334 | 0.3406 | 1371.0 | {'precision': 0.8540829986613119, 'recall': 0.9313868613138686, 'f1-score': 0.8910614525139665, 'support': 1370.0} | 0.8541 | 0.9314 | 0.8911 | 1370.0 | 0.6446 | 0.6246 | 0.6114 | 8224.0 | 0.6446 | 0.6246 | 0.6114 | 8224.0 |
| 0.4786 | 4.15 | 19589 | 0.9796 | 0.6288 | 0.6288 | 0.5618 | [[1061 70 61 107 14 59]
[ 283 866 81 55 13 73]
[ 128 17 958 199 54 14]
[ 258 31 245 717 89 30]
[ 188 25 290 534 303 31]
[ 80 14 5 3 2 1266]] | {'precision': 0.531031031031031, 'recall': 0.7733236151603499, 'f1-score': 0.6296735905044509, 'support': 1372.0} | 0.5310 | 0.7733 | 0.6297 | 1372.0 | {'precision': 0.8465298142717498, 'recall': 0.6316557257476295, 'f1-score': 0.7234753550543024, 'support': 1371.0} | 0.8465 | 0.6317 | 0.7235 | 1371.0 | {'precision': 0.5841463414634146, 'recall': 0.6992700729927007, 'f1-score': 0.6365448504983389, 'support': 1370.0} | 0.5841 | 0.6993 | 0.6365 | 1370.0 | {'precision': 0.4439628482972136, 'recall': 0.5233576642335767, 'f1-score': 0.4804020100502513, 'support': 1370.0} | 0.4440 | 0.5234 | 0.4804 | 1370.0 | {'precision': 0.6378947368421053, 'recall': 0.2210065645514223, 'f1-score': 0.32827735644637057, 'support': 1371.0} | 0.6379 | 0.2210 | 0.3283 | 1371.0 | {'precision': 0.8594704684317719, 'recall': 0.9240875912408759, 'f1-score': 0.8906085121350685, 'support': 1370.0} | 0.8595 | 0.9241 | 0.8906 | 1370.0 | 0.6505 | 0.6288 | 0.6148 | 8224.0 | 0.6505 | 0.6288 | 0.6148 | 8224.0 |
| 0.5705 | 5.0 | 20620 | 0.9751 | 0.6299 | 0.6299 | 0.5628 | [[1059 76 57 110 18 52]
[ 276 886 74 50 16 69]
[ 128 19 948 200 64 11]
[ 267 33 232 718 91 29]
[ 196 31 269 536 314 25]
[ 91 15 5 3 1 1255]] | {'precision': 0.5250371839365394, 'recall': 0.771865889212828, 'f1-score': 0.624963115963411, 'support': 1372.0} | 0.5250 | 0.7719 | 0.6250 | 1372.0 | {'precision': 0.8358490566037736, 'recall': 0.6462436177972283, 'f1-score': 0.7289181406828465, 'support': 1371.0} | 0.8358 | 0.6462 | 0.7289 | 1371.0 | {'precision': 0.5981072555205047, 'recall': 0.691970802919708, 'f1-score': 0.6416243654822336, 'support': 1370.0} | 0.5981 | 0.6920 | 0.6416 | 1370.0 | {'precision': 0.4440321583178726, 'recall': 0.5240875912408759, 'f1-score': 0.4807499163039839, 'support': 1370.0} | 0.4440 | 0.5241 | 0.4807 | 1370.0 | {'precision': 0.623015873015873, 'recall': 0.22902990517870167, 'f1-score': 0.3349333333333333, 'support': 1371.0} | 0.6230 | 0.2290 | 0.3349 | 1371.0 | {'precision': 0.8709229701596114, 'recall': 0.916058394160584, 'f1-score': 0.8929206688011384, 'support': 1370.0} | 0.8709 | 0.9161 | 0.8929 | 1370.0 | 0.6495 | 0.6299 | 0.6174 | 8224.0 | 0.6495 | 0.6299 | 0.6173 | 8224.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
LuisAVasquez/simple-latin-bert-uncased
|
LuisAVasquez
| 2023-07-16T22:07:37Z | 118 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"latin",
"masked language modelling",
"la",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-07T13:38:46Z |
---
license: mit
language:
- la
pipeline_tag: fill-mask
tags:
- latin
- masked language modelling
widget:
- text: "Gallia est omnis divisa in [MASK] tres ."
example_title: "Commentary on Gallic Wars"
- text: "[MASK] sum Caesar ."
example_title: "Who is Caesar?"
- text: "[MASK] it ad forum ."
example_title: "Who is going to the forum?"
- text: "Ovidius paratus est ad [MASK] ."
example_title: "What is Ovidius up to?"
- text: "[MASK], veni!"
example_title: "Calling someone to come closer"
- text: "Roma in Italia [MASK] ."
example_title: "Ubi est Roma?"
---
# Model Card for Simple Latin BERT
<!-- Provide a quick summary of what the model is/does. [Optional] -->
A simple BERT Masked Language Model for Latin, built for my portfolio and trained on Latin corpora from the [Classical Language Toolkit](http://cltk.org/).
**NOT** suitable for production or commercial use.
This model's performance is really poor, and it has not been evaluated.
This model comes with its own tokenizer! It will automatically use **lowercase**.
Check the `training notebooks` folder for the preprocessing and training scripts.
Inspired by
- [This repo](https://github.com/dbamman/latin-bert), which has a BERT model for Latin that is actually useful!
- [This tutorial](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples)
- [This tutorial](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/01_how_to_train.ipynb#scrollTo=VNZZs-r6iKAV)
- [This tutorial](https://huggingface.co/blog/how-to-train)
# Table of Contents
- [Model Card for Simple Latin BERT ](#model-card-for--model_id-)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [Model Description](#model-description)
- [Uses](#uses)
- [Direct Use](#direct-use)
- [Downstream Use [Optional]](#downstream-use-optional)
- [Training Details](#training-details)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Speeds, Sizes, Times](#speeds-sizes-times)
- [Evaluation](#evaluation)
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is/does. -->
A simple BERT Masked Language Model for Latin, built for my portfolio and trained on Latin corpora from the [Classical Language Toolkit](http://cltk.org/).
**NOT** suitable for production or commercial use.
This model's performance is really poor, and it has not been evaluated.
This model comes with its own tokenizer!
Check the `notebooks` folder for the preprocessing and training scripts.
- **Developed by:** Luis Antonio VASQUEZ
- **Model type:** Language model
- **Language(s) (NLP):** la
- **License:** mit
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
This model can be used directly for Masked Language Modelling.
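For instance, a minimal fill-mask sketch with the `transformers` pipeline, using one of the widget examples from this card (output quality will reflect the caveats above):

```python
from transformers import pipeline

# The checkpoint ships its own tokenizer, which lowercases input automatically.
fill_mask = pipeline("fill-mask", model="LuisAVasquez/simple-latin-bert-uncased")

for prediction in fill_mask("Gallia est omnis divisa in [MASK] tres ."):
    print(prediction["token_str"], round(prediction["score"], 3))
```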
## Downstream Use
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
This model could be used as a base model for other NLP tasks, for example, Text Classification (that is, using transformers' `BertForSequenceClassification`)
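A minimal sketch of what that could look like (the number of labels is a placeholder, and the classification head is randomly initialised until you fine-tune it):

```python
from transformers import AutoTokenizer, BertForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("LuisAVasquez/simple-latin-bert-uncased")
model = BertForSequenceClassification.from_pretrained(
    "LuisAVasquez/simple-latin-bert-uncased",
    num_labels=2,  # placeholder: set to the number of classes in your task
)

inputs = tokenizer("gallia est omnis divisa in partes tres .", return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 2])
```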
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The training data comes from the corpora freely available from the [Classical Language Toolkit](http://cltk.org/)
- [The Latin Library](https://www.thelatinlibrary.com/)
- Latin section of the [Perseus Digital Library](http://www.perseus.tufts.edu/hopper/)
- Latin section of the [Tesserae Project](https://tesserae.caset.buffalo.edu/)
- [Corpus Grammaticorum Latinorum](https://cgl.hypotheses.org/)
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
For preprocessing, the raw text from each of the corpora was extracted by parsing. It was then **lowercased** and written to `txt` files, ideally with one sentence per line.
Other data from the corpora, like Entity Tags, POS Tags, etc., were discarded.
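A rough sketch of that preprocessing step (file name and sentence handling are simplified here):

```python
from pathlib import Path

def write_corpus(sentences, out_path="latin_corpus.txt"):
    """Lowercase the extracted sentences and write one per line, as described above."""
    with Path(out_path).open("w", encoding="utf-8") as f:
        for sentence in sentences:
            f.write(sentence.lower().strip() + "\n")

write_corpus(["Gallia est omnis divisa in partes tres ."])
```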
Training hyperparameters (see the configuration sketch after this list):
- epochs: 1
- Batch size: 64
- Attention heads: 12
- Hidden Layers: 12
- Max input size: 512 tokens
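A sketch of the kind of `transformers` configuration these settings correspond to (hidden size and vocabulary size are assumptions; the card does not state them):

```python
from transformers import BertConfig, BertForMaskedLM

config = BertConfig(
    num_hidden_layers=12,          # "Hidden Layers" above
    num_attention_heads=12,        # "Attention heads" above
    max_position_embeddings=512,   # "Max input size" above
    hidden_size=768,               # assumption: standard BERT-base width
    vocab_size=30_000,             # assumption: depends on the custom tokenizer
)
model = BertForMaskedLM(config)
print(model.num_parameters())
```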
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
After having the dataset ready, training this model on a 16 GB NVIDIA GPU took around 10 hours.
# Evaluation
No evaluation was performed on this model.
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e3_s108_v3
|
KingKazma
| 2023-07-16T22:06:58Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T22:06:55Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
mgeller/opt-6.7b-lora
|
mgeller
| 2023-07-16T22:06:40Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-12T22:58:35Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a loading sketch follows the list):
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
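A minimal loading sketch for this adapter; the base checkpoint `facebook/opt-6.7b` is an assumption inferred from the repo name, not stated in the card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Recreate the 8-bit config listed above.
bnb_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=6.0)

base = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-6.7b",            # assumption: base model inferred from the repo name
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "mgeller/opt-6.7b-lora")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b")
```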
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e2_s108_v3
|
KingKazma
| 2023-07-16T21:59:56Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T21:59:55Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e1_s108_v3
|
KingKazma
| 2023-07-16T21:52:56Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T21:52:55Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
SinanAkkoyun/orca_mini_3b_gptq_badtest
|
SinanAkkoyun
| 2023-07-16T21:49:31Z | 5 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-16T21:27:48Z |
This is a very bad attempt at 4-bit, group-size-128 GPTQ quantization calibrated with Alpaca data (in Orca-style prompt format).
```sh
python quantize_alpaca.py --pretrained_model_dir orca_mini_3b/ --bits 4 --group_size 128 --quantized_model_dir orca_mini_3b_gptq/ --save_and_reload
```
Download the cleaned dataset first: https://github.com/gururise/AlpacaDataCleaned
|
LarryAIDraw/chara_FateLordElMelloi_LuviagelitaEdelfelt_v1
|
LarryAIDraw
| 2023-07-16T21:46:20Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-16T21:42:58Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/109052/luviagelita-edelfelt-or-fate-series-lord-el-melloi-ii-sei-no-jikenbo
|
LarryAIDraw/roxy-08
|
LarryAIDraw
| 2023-07-16T21:46:08Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-16T21:42:37Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/109272/roxy-oror-mushoku-tensei
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e0_s108_v3
|
KingKazma
| 2023-07-16T21:45:55Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T21:45:54Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
LarryAIDraw/Predator
|
LarryAIDraw
| 2023-07-16T21:45:47Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-16T21:42:05Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/109356/predator-or-granblue-fantasy
|
quangnguyennn/pokemon-lora
|
quangnguyennn
| 2023-07-16T21:41:33Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-16T12:51:01Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - quangnguyennn/pokemon-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images below.




|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e-1_s108_v3
|
KingKazma
| 2023-07-16T21:38:46Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T21:38:45Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
Debayan990/my-pet-cat-jxl
|
Debayan990
| 2023-07-16T21:13:51Z | 13 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-16T21:01:07Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat-jxl Dreambooth model trained by Debayan990 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: BBIT47
Sample pictures of this concept:



|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e4_s6789_v3
|
KingKazma
| 2023-07-16T20:39:23Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-15T00:16:34Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
il18/PPO-LunarLander-v2
|
il18
| 2023-07-16T20:38:20Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-16T20:37:53Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.21 +/- 15.74
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The checkpoint filename is an assumption -- check this repo's files for the actual name.
checkpoint = load_from_hub(repo_id="il18/PPO-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Andres6087/Cte
|
Andres6087
| 2023-07-16T20:23:05Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"translation",
"ab",
"dataset:Open-Orca/OpenOrca",
"license:openrail",
"region:us"
] |
translation
| 2023-07-16T20:19:56Z |
---
license: openrail
datasets:
- Open-Orca/OpenOrca
language:
- ab
metrics:
- accuracy
library_name: adapter-transformers
pipeline_tag: translation
---
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e1_s6789_v3
|
KingKazma
| 2023-07-16T20:18:10Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-14T23:29:54Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
bskang/test_demo_ver
|
bskang
| 2023-07-16T20:17:48Z | 34 | 0 |
peft
|
[
"peft",
"text-generation",
"en",
"region:us"
] |
text-generation
| 2023-07-16T20:15:26Z |
---
library_name: peft
language:
- en
pipeline_tag: text-generation
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e0_s6789_v3
|
KingKazma
| 2023-07-16T20:11:04Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-14T23:14:20Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
NemesisAlm/q-FrozenLake-v1-4x4-noSlippery
|
NemesisAlm
| 2023-07-16T20:04:44Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-16T20:04:41Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gymnasium as gym  # or `import gym` in older course notebooks

# `load_from_hub` is the helper defined in the Deep RL Course (unit 2) notebook; it is not a packaged API.
model = load_from_hub(repo_id="NemesisAlm/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e-1_s6789_v3
|
KingKazma
| 2023-07-16T20:04:01Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-14T22:57:38Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
Meina/Alter_V3
|
Meina
| 2023-07-16T20:03:51Z | 27 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-16T20:01:49Z |
---
license: creativeml-openrail-m
---
|
Meina/Unreal_V4.1
|
Meina
| 2023-07-16T20:02:45Z | 118 | 5 |
diffusers
|
[
"diffusers",
"safetensors",
"art",
"anime",
"meina",
"unreal",
"semirealistic",
"2.5d",
"sexy",
"fantasy",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-16T19:59:21Z |
---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- art
- anime
- meina
- unreal
- semirealistic
- 2.5d
- sexy
- fantasy
---
MeinaUnreal's objective is to produce anime art with a 2.5D feeling.
(The VAE is already baked into the model.)
For examples and prompts, please check out: https://civitai.com/models/18798/meinaunreal
I have a Discord server where you can post images that you generated, discuss prompts and/or ask for help:
https://discord.gg/XC9nGZNDUd If you like one of my models and want to support their updates,
I've made a Ko-fi page: https://ko-fi.com/meina where you can buy me a coffee <3
And a Patreon page: https://www.patreon.com/MeinaMix where you can support me and get access to betas of my models!
You may also try this model using Sinkin.ai: https://sinkin.ai/m/PREaKGN
Recommendations of use: Enable Quantization in K samplers.
Hires.fix is needed for prompts where the character is far away in order to make decent images; it drastically improves the quality of faces and eyes!
Recommended parameters (a diffusers sketch follows this list):
Sampler: DPM++ 2M Karras: 20 to 40 steps.
Sampler: DPM++ SDE Karras: 20 to 30 steps.
CFG Scale: 7.
Resolutions: 512x768, 512x1024 for Portrait!
Resolutions: 768x512, 1024x512, 1536x512 for Landscape!
Hires.fix: R-ESRGAN 4x+Anime6b, with 15 steps at 0.3 denoising.
Clip Skip: 2.
Negatives: ' (worst quality, low quality:1.4), monochrome, zombie, (interlocked fingers), '
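These recommendations map roughly onto `diffusers` as follows; this is only a sketch (Clip Skip and Hires.fix are WebUI features and are not reproduced here, and the prompt is illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/Unreal_V4.1", torch_dtype=torch.float16
).to("cuda")

# Approximates "DPM++ 2M Karras" from the recommendations above.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe(
    "1girl, fantasy armor, detailed eyes, masterpiece",  # illustrative prompt
    negative_prompt="(worst quality, low quality:1.4), monochrome, zombie, (interlocked fingers)",
    num_inference_steps=30,
    guidance_scale=7,
    width=512,
    height=768,
).images[0]
image.save("meina_unreal.png")
```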
|
jthetzel/swin-tiny-patch4-window7-224-finetuned-eurosat
|
jthetzel
| 2023-07-16T20:01:23Z | 213 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-16T19:41:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9822222222222222
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0604
- Accuracy: 0.9822
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2326 | 1.0 | 190 | 0.1175 | 0.9604 |
| 0.1789 | 2.0 | 380 | 0.0765 | 0.9763 |
| 0.1414 | 3.0 | 570 | 0.0604 | 0.9822 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
anindya64/alpaca-bank-issue-summarization-20b-EthurAI
|
anindya64
| 2023-07-16T20:00:19Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T20:00:16Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
DarwinAnim8or/Something-V2.2-OpenVINO
|
DarwinAnim8or
| 2023-07-16T20:00:19Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-16T19:16:43Z |
---
license: creativeml-openrail-m
---
# Something V2.2 OpenVINO
This is a conversion of [NoCrypt's Something V2.2 model](https://huggingface.co/NoCrypt/SomethingV2_2) to OpenVINO format. The original model is a stable diffusion model that can generate realistic images from text input.
## What is OpenVINO?
OpenVINO (Open Visual Inference and Neural network Optimization) is a free toolkit that facilitates the optimization and deployment of deep learning models on Intel hardware. It supports models trained with popular frameworks like TensorFlow, PyTorch, and more. It also provides a common API to run inference on various devices, such as CPU, GPU, VPU, FPGA, etc.
## Why use OpenVINO?
OpenVINO can make it possible to run Stable Diffusion models (and others) on simply the CPU, rather than requiring a GPU, which can be expensive.
Generating a 512x512 image on HuggingFace's "CPU Upgrade" space takes about 21 seconds after warmup.
For more details, see [this blogpost](https://huggingface.co/blog/stable-diffusion-inference-intel)
## Usage example
TODO
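In the meantime, a minimal sketch with `optimum-intel` (assuming this repo follows the standard exported OpenVINO diffusers layout described in the blog post linked above):

```python
from optimum.intel import OVStableDiffusionPipeline

# Runs on CPU through the OpenVINO runtime; no GPU required.
pipe = OVStableDiffusionPipeline.from_pretrained("DarwinAnim8or/Something-V2.2-OpenVINO")
pipe.compile()

image = pipe("1girl, masterpiece, best quality, looking at viewer").images[0]  # illustrative prompt
image.save("something_v2_2.png")
```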
|
ailabturkiye/SamedGungor
|
ailabturkiye
| 2023-07-16T19:54:01Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-16T19:40:50Z |
[](discord.gg/ailab)


# Samed Güngör - RVC V2 250 Epoch
**This is the voice model of YouTuber Samed Güngör,
trained with RVC V2 for 250 epochs.**
_The dataset and training were done by me._
__Sharing this model outside the [Ai Lab Discord](discord.gg/ailab) server without permission is strictly forbidden; the model is under the openrail license.__
## Credits
**Please give credit if you share a cover made with this model on any platform.**
- Discord: eraymoruk54
- YouTube: Eray Tokaç (https://www.youtube.com/@ErayOyuncantas)
license: openrail

[](discord.gg/ailab)

|
Talha185/speecht5_finetuned_urdu_TTS
|
Talha185
| 2023-07-16T19:53:22Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:common_voice_13_0",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-14T10:59:46Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- common_voice_13_0
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the common_voice_13_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4799
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.558 | 8.61 | 1000 | 0.4964 |
| 0.5232 | 17.22 | 2000 | 0.4879 |
| 0.5114 | 25.83 | 3000 | 0.4811 |
| 0.5009 | 34.45 | 4000 | 0.4799 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
rshrott/falcon-7b-instruct-ft-adapters
|
rshrott
| 2023-07-16T19:48:46Z | 5 | 0 |
peft
|
[
"peft",
"pytorch",
"RefinedWebModel",
"custom_code",
"region:us"
] | null | 2023-07-16T13:37:16Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
ailabturkiye/CagriMertBakirci
|
ailabturkiye
| 2023-07-16T19:38:00Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-16T19:31:12Z |
---
license: openrail
language:
- tr
tags:
- music
---
Created with a 20-minute dataset, trained for 300 epochs.
|
uraskargi/Reinforce-CartPole-v1
|
uraskargi
| 2023-07-16T19:19:02Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-04T14:20:20Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
huarddk/finetuning-sentiment-model-3000-samples
|
huarddk
| 2023-07-16T19:18:10Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-11T15:47:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.87
- name: F1
type: f1
value: 0.8704318936877077
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3117
- Accuracy: 0.87
- F1: 0.8704
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
YojitShinde/ppo-Pyramids
|
YojitShinde
| 2023-07-16T19:13:01Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-07-16T19:11:49Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: YojitShinde/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
0sunfire0/poca-SoccerTwos_00
|
0sunfire0
| 2023-07-16T19:10:43Z | 433 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-07-16T19:08:00Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: 0sunfire0/poca-SoccerTwos_00
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
PhysHunter/codeparrot-ds
|
PhysHunter
| 2023-07-16T18:57:05Z | 142 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-15T08:41:52Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1771
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.3352 | 0.31 | 1000 | 2.9747 |
| 2.417 | 0.62 | 2000 | 2.3979 |
| 2.0098 | 0.93 | 3000 | 2.1771 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
oakal/fourthbrain_bloomz_marketing
|
oakal
| 2023-07-16T18:32:44Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T18:32:38Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
weekcircle/wav2vec2-large-mms-1b-korean-colab_v2
|
weekcircle
| 2023-07-16T18:27:07Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/mms-1b-l1107",
"base_model:finetune:facebook/mms-1b-l1107",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-16T05:19:24Z |
---
license: cc-by-nc-4.0
base_model: facebook/mms-1b-l1107
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-mms-1b-korean-colab_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-mms-1b-korean-colab_v2
This model is a fine-tuned version of [facebook/mms-1b-l1107](https://huggingface.co/facebook/mms-1b-l1107) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1650
- Wer: 0.3776
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.6667 | 0.18 | 100 | 0.8024 | 0.8379 |
| 0.5754 | 0.36 | 200 | 0.3907 | 0.6495 |
| 0.4658 | 0.53 | 300 | 0.3620 | 0.6224 |
| 0.4321 | 0.71 | 400 | 0.3184 | 0.5842 |
| 0.399 | 0.89 | 500 | 0.2930 | 0.5120 |
| 0.3538 | 1.07 | 600 | 0.2446 | 0.4698 |
| 0.3379 | 1.24 | 700 | 0.2341 | 0.4692 |
| 0.3333 | 1.42 | 800 | 0.2121 | 0.4488 |
| 0.31 | 1.6 | 900 | 0.2054 | 0.4297 |
| 0.3049 | 1.78 | 1000 | 0.1958 | 0.4180 |
| 0.2885 | 1.95 | 1100 | 0.1885 | 0.4143 |
| 0.2632 | 2.13 | 1200 | 0.1865 | 0.4094 |
| 0.2592 | 2.31 | 1300 | 0.1774 | 0.3853 |
| 0.2591 | 2.49 | 1400 | 0.1700 | 0.3924 |
| 0.2605 | 2.66 | 1500 | 0.1701 | 0.3789 |
| 0.2361 | 2.84 | 1600 | 0.1650 | 0.3776 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
YojitShinde/ppo-SnowballTarget
|
YojitShinde
| 2023-07-16T18:25:48Z | 7 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-07-16T18:25:46Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: YojitShinde/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
0sunfire0/rl_course_vizdoom_health_gathering_supreme_02
|
0sunfire0
| 2023-07-16T18:23:44Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-16T18:23:37Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.16 +/- 3.86
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r 0sunfire0/rl_course_vizdoom_health_gathering_supreme_02
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme_02
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme_02 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-lr-v4
|
hafidikhsan
| 2023-07-16T18:23:13Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-16T18:22:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: wav2vec2-large-xlsr-53-english-pronunciation-evaluation-lr-v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-english-pronunciation-evaluation-lr-v4
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7777
- Accuracy: 0.656
- F1: 0.6292
- Precision: 0.6618
- Recall: 0.656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.9582 | 1.0 | 500 | 0.9629 | 0.544 | 0.4585 | 0.5657 | 0.544 |
| 0.8052 | 2.0 | 1000 | 0.8512 | 0.624 | 0.5916 | 0.6247 | 0.624 |
| 0.8939 | 3.0 | 1500 | 0.8313 | 0.638 | 0.6071 | 0.6384 | 0.638 |
| 0.6153 | 4.0 | 2000 | 0.8035 | 0.67 | 0.6442 | 0.6833 | 0.67 |
| 0.5782 | 5.0 | 2500 | 0.8024 | 0.67 | 0.6458 | 0.6788 | 0.67 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
nastassja-bellisario/whisper-large-v2-15-07-2023
|
nastassja-bellisario
| 2023-07-16T18:13:57Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-15T14:45:57Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
rsml/bbert_qa
|
rsml
| 2023-07-16T17:59:30Z | 120 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-16T17:42:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bbert_qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bbert_qa
This model is a fine-tuned version of [bionlp/bluebert_pubmed_uncased_L-12_H-768_A-12](https://huggingface.co/bionlp/bluebert_pubmed_uncased_L-12_H-768_A-12) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6818
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.3490 |
| 2.7154 | 2.0 | 500 | 1.7686 |
| 2.7154 | 3.0 | 750 | 1.6818 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
sherif1311/flan-t5-base-imdb-text-classification
|
sherif1311
| 2023-07-16T17:50:43Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-16T14:44:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: flan-t5-base-imdb-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-imdb-text-classification
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0797
- F1: 95.072
- Gen Len: 2.5005
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1
|
kanu03/my-cat
|
kanu03
| 2023-07-16T17:44:02Z | 107 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-16T17:39:19Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-cat Dreambooth model trained by kanu03 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: OPJU101
Sample pictures of this concept:

|
balpreetspankaj/distilbert-base-uncased-finetuned-emotion
|
balpreetspankaj
| 2023-07-16T17:37:10Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-16T16:46:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2169
- Accuracy: 0.9285
- F1: 0.9283
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.827 | 1.0 | 250 | 0.3132 | 0.9085 | 0.9062 |
| 0.2411 | 2.0 | 500 | 0.2169 | 0.9285 | 0.9283 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
quangnguyennn/pokemon-lora-xformer
|
quangnguyennn
| 2023-07-16T17:29:24Z | 2 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-16T13:08:13Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - quangnguyennn/pokemon-lora-xformer
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images below.




|
magicsword/wy-mt-en-zh-2
|
magicsword
| 2023-07-16T17:27:39Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"marian",
"text2text-generation",
"autotrain",
"translation",
"unk",
"dataset:magicsword/autotrain-data-wy-mt-en-zh",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-16T15:15:50Z |
---
tags:
- autotrain
- translation
language:
- unk
- unk
datasets:
- magicsword/autotrain-data-wy-mt-en-zh
co2_eq_emissions:
emissions: 71.14399741050826
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 74981139786
- CO2 Emissions (in grams): 71.1440
## Validation Metrics
- Loss: 2.220
- SacreBLEU: 12.949
- Gen len: 16.386
|
magicsword/wy-mt-en-zh-3
|
magicsword
| 2023-07-16T17:21:53Z | 111 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"marian",
"text2text-generation",
"autotrain",
"translation",
"unk",
"dataset:magicsword/autotrain-data-wy-mt-en-zh",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-16T15:15:50Z |
---
tags:
- autotrain
- translation
language:
- unk
- unk
datasets:
- magicsword/autotrain-data-wy-mt-en-zh
co2_eq_emissions:
emissions: 61.92129308371724
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 74981139784
- CO2 Emissions (in grams): 61.9213
## Validation Metrics
- Loss: 2.222
- SacreBLEU: 12.575
- Gen len: 16.299
|
DanGalt/speecht5_finetuned_voxpopuli_fi
|
DanGalt
| 2023-07-16T17:11:18Z | 82 | 0 |
transformers
|
[
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"fi",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-07-16T17:07:04Z |
---
language:
- fi
license: mit
tags:
- generated_from_trainer
- text-to-speech
datasets:
- facebook/voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_fi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_fi
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4436
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 150
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.504 | 5.05 | 250 | 0.4645 |
| 0.4882 | 10.1 | 500 | 0.4499 |
| 0.467 | 15.15 | 750 | 0.4450 |
| 0.4651 | 20.2 | 1000 | 0.4436 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
KingKazma/xsum_t5-small_prompt_tuning_500_10_3000_8_e-1_s55555_v3_manual
|
KingKazma
| 2023-07-16T17:02:55Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T17:02:55Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
gioca91/ppo-Huggy
|
gioca91
| 2023-07-16T17:00:31Z | 10 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-16T17:00:21Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: gioca91/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
iworeushankaonce/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
|
iworeushankaonce
| 2023-07-16T16:35:53Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-16T15:19:49Z |
---
license: bsd-3-clause
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3882
- Accuracy: 0.9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4932 | 1.0 | 112 | 0.5325 | 0.86 |
| 0.3541 | 2.0 | 225 | 0.6068 | 0.77 |
| 0.5743 | 3.0 | 337 | 0.6356 | 0.83 |
| 0.6256 | 4.0 | 450 | 0.4878 | 0.86 |
| 0.0619 | 5.0 | 562 | 0.4262 | 0.88 |
| 0.0044 | 6.0 | 675 | 0.3266 | 0.91 |
| 0.0018 | 7.0 | 787 | 0.4827 | 0.87 |
| 0.001 | 8.0 | 900 | 0.9245 | 0.82 |
| 0.1854 | 9.0 | 1012 | 0.4256 | 0.89 |
| 0.0001 | 10.0 | 1125 | 0.3898 | 0.9 |
| 0.0001 | 11.0 | 1237 | 0.3873 | 0.9 |
| 0.0001 | 12.0 | 1350 | 0.4064 | 0.91 |
| 0.0 | 13.0 | 1462 | 0.3910 | 0.9 |
| 0.0 | 14.0 | 1575 | 0.3924 | 0.9 |
| 0.0001 | 15.0 | 1687 | 0.3917 | 0.91 |
| 0.0 | 16.0 | 1800 | 0.3903 | 0.9 |
| 0.0 | 17.0 | 1912 | 0.3900 | 0.89 |
| 0.0 | 18.0 | 2025 | 0.3894 | 0.89 |
| 0.0 | 19.0 | 2137 | 0.3886 | 0.9 |
| 0.0 | 19.91 | 2240 | 0.3882 | 0.9 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
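## Example inference (sketch)

A minimal inference sketch using the `audio-classification` pipeline; `song.wav` is a placeholder path to any local audio clip:

```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="iworeushankaonce/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan",
)
predictions = classifier("song.wav")  # placeholder: path to a local audio file
print(predictions[:3])  # top predicted genres with scores
```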
|
cassandraqs/shan_homework1
|
cassandraqs
| 2023-07-16T16:29:28Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T16:29:22Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
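For reference, a minimal sketch of how the quantization settings above map onto `transformers`' `BitsAndBytesConfig`; the base model ID is a placeholder, since the card does not state it:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "<base-model-id>",  # placeholder: the base model this adapter was trained on
    quantization_config=bnb_config,
    device_map="auto",
)
```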
### Framework versions
- PEFT 0.4.0.dev0
|
casque/LactationV.1.1
|
casque
| 2023-07-16T16:25:30Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-16T16:23:40Z |
---
license: creativeml-openrail-m
---
|
localmodels/LLaMA-65B-ggml
|
localmodels
| 2023-07-16T16:22:41Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-07-16T16:22:41Z |
---
duplicated_from: localmodels/LLM
---
# LLaMA 65B ggml
From Meta: https://ai.meta.com/blog/large-language-model-llama-meta-ai
---
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
Quantized using an older version of llama.cpp and compatible with llama.cpp from May 19, commit 2d5db48.
### k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
Quantization methods compatible with latest llama.cpp from June 6, commit 2d43387.
---
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| llama-65b.ggmlv3.q2_K.bin | q2_K | 2 | 27.33 GB| 29.83 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| llama-65b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 34.55 GB| 37.05 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| llama-65b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 31.40 GB| 33.90 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| llama-65b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 28.06 GB| 30.56 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| llama-65b.ggmlv3.q4_0.bin | q4_0 | 4 | 36.73 GB| 39.23 GB | Original quant method, 4-bit. |
| llama-65b.ggmlv3.q4_1.bin | q4_1 | 4 | 40.81 GB| 43.31 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0, with quicker inference than the q5 models. |
| llama-65b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 39.28 GB| 41.78 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| llama-65b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 36.73 GB| 39.23 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| llama-65b.ggmlv3.q5_0.bin | q5_0 | 5 | 44.89 GB| 47.39 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| llama-65b.ggmlv3.q5_1.bin | q5_1 | 5 | 48.97 GB| 51.47 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| llama-65b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 46.20 GB| 48.70 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| llama-65b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 44.89 GB| 47.39 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| llama-65b.ggmlv3.q6_K.bin | q6_K | 6 | 53.56 GB| 56.06 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| llama-65b.ggmlv3.q8_0.bin | q8_0 | 8 | 69.37 GB | 71.87 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
|
localmodels/LLaMA-7B-ggml
|
localmodels
| 2023-07-16T16:17:29Z | 0 | 2 | null |
[
"region:us"
] | null | 2023-07-16T16:17:29Z |
---
duplicated_from: localmodels/LLM
---
# LLaMA 7B ggml
From Meta: https://ai.meta.com/blog/large-language-model-llama-meta-ai
---
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
Quantized using an older version of llama.cpp and compatible with llama.cpp from May 19, commit 2d5db48.
### k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
Quantization methods compatible with latest llama.cpp from June 6, commit 2d43387.
---
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| llama-7b.ggmlv3.q2_K.bin | q2_K | 2 | 2.80 GB| 5.30 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| llama-7b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.55 GB| 6.05 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| llama-7b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.23 GB| 5.73 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| llama-7b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.90 GB| 5.40 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| llama-7b.ggmlv3.q4_0.bin | q4_0 | 4 | 3.79 GB| 6.29 GB | Original quant method, 4-bit. |
| llama-7b.ggmlv3.q4_1.bin | q4_1 | 4 | 4.21 GB| 6.71 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0, with quicker inference than the q5 models. |
| llama-7b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.05 GB| 6.55 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| llama-7b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.79 GB| 6.29 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| llama-7b.ggmlv3.q5_0.bin | q5_0 | 5 | 4.63 GB| 7.13 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| llama-7b.ggmlv3.q5_1.bin | q5_1 | 5 | 5.06 GB| 7.56 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| llama-7b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.77 GB| 7.27 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| llama-7b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.63 GB| 7.13 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| llama-7b.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB| 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
| llama-7b.ggmlv3.q8_0.bin | q8_0 | 8 | 7.16 GB| 9.66 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
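## Example usage (sketch)

One way to run these files from Python is the `llama-cpp-python` binding. This is a minimal sketch assuming a build old enough to read ggmlv3 files (recent releases expect GGUF); the file name and prompt are examples only:

```python
from llama_cpp import Llama

llm = Llama(model_path="./llama-7b.ggmlv3.q4_K_M.bin", n_ctx=512)  # example file from the table above
out = llm("Building a website can be done in 10 simple steps:", max_tokens=64)
print(out["choices"][0]["text"])
```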
|
ailabturkiye/Joker
|
ailabturkiye
| 2023-07-16T16:17:15Z | 0 | 1 | null |
[
"license:openrail",
"region:us"
] | null | 2023-07-16T15:22:06Z |
---
license: openrail
---
[](discord.gg/ailab)


# Joker - RVC V2 300 Epoch
**Voice model of the rapper Joker, trained with RVC V2 for 300 epochs.**
_The dataset and training were done by me._
__Sharing this model outside the [Ai Lab Discord](discord.gg/ailab) server without permission is strictly forbidden; the model is released under the openrail license.__
## Credits
**If you share a cover made with this model on any platform, you are kindly asked to give credits.**
- Discord: barisdark0
- YouTube: Barış (https://www.youtube.com/@barisdark)

[](discord.gg/ailab)
|
casque/Ultimate_ahegao
|
casque
| 2023-07-16T16:16:47Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-16T16:14:24Z |
---
license: creativeml-openrail-m
---
|