| modelId (string, length 5 to 139) | author (string, length 2 to 42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-31 18:27:20) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 530 classes) | tags (list, length 1 to 4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-31 18:27:03) | card (string, length 11 to 1.01M) |
---|---|---|---|---|---|---|---|---|---|
chandan9t8/ppo-SnowballTarget
|
chandan9t8
| 2023-07-17T15:51:08Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-07-17T15:49:56Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: chandan9t8/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e6_s6789_v3
|
KingKazma
| 2023-07-17T15:50:00Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T15:49:58Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
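The card does not include a loading snippet; below is a minimal sketch (an assumption, not from the original card) of loading this PEFT adapter on top of `gpt2`, which the repository name suggests is the base model.
```python
# Sketch only: assumes the adapter was trained on top of gpt2, as the repo name suggests.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = PeftModel.from_pretrained(base, "KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e6_s6789_v3")

inputs = tokenizer("Summarize: The quick brown fox jumped over the lazy dog.", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))
```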
|
peterdamn/speecht5_finetuned_voxpopuli_nl
|
peterdamn
| 2023-07-17T15:47:39Z | 84 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"text-to-speech",
"generated_from_trainer",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-07-17T15:04:46Z |
---
license: mit
tags:
- text-to-speech
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
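The card gives no inference snippet; the following is a minimal sketch of running the fine-tuned SpeechT5 checkpoint for Dutch text-to-speech. The HiFi-GAN vocoder and the cmu-arctic-xvectors speaker embedding are assumptions (standard choices for SpeechT5), not details taken from this card.
```python
# Minimal inference sketch (assumed usage): SpeechT5 needs a vocoder and a speaker x-vector.
import torch
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("peterdamn/speecht5_finetuned_voxpopuli_nl")
model = SpeechT5ForTextToSpeech.from_pretrained("peterdamn/speecht5_finetuned_voxpopuli_nl")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hallo, dit is een test.", return_tensors="pt")
# Arbitrary speaker embedding; any 512-dim x-vector works as a placeholder.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
```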
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e4_s108_v3
|
KingKazma
| 2023-07-17T15:43:17Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T01:10:21Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
ParallelnoMinded/distilbert-base-uncased-finetuned-squad
|
ParallelnoMinded
| 2023-07-17T15:36:00Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-16T14:22:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1562
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2273 | 1.0 | 5533 | 1.1657 |
| 0.9589 | 2.0 | 11066 | 1.1226 |
| 0.7485 | 3.0 | 16599 | 1.1562 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1+cu116
- Datasets 2.13.1
- Tokenizers 0.13.3
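A minimal usage sketch (assumed, not part of the original card) for extractive question answering with this checkpoint:
```python
# Sketch only: extractive QA via the transformers pipeline.
from transformers import pipeline

qa = pipeline("question-answering", model="ParallelnoMinded/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```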
|
roa7n/gpt2-human_nontata_promoters-last_layer_1
|
roa7n
| 2023-07-17T15:35:28Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T15:35:26Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
wanderer2k1/T5-LawsQA
|
wanderer2k1
| 2023-07-17T15:35:23Z | 103 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-30T14:49:29Z |
---
widget:
- text: "Trả lời câu hỏi: tại trụ sở tổ chức trợ giúp pháp lý thì có cần niêm yết lịch và nội quy không? Trong ngữ cảnh: 11/2017/qh14 điều 28. địa điểm tiếp người được trợ giúp pháp lý. 1. tổ chức thực hiện trợ giúp pháp lý bố trí nơi tiếp người được trợ giúp pháp lý tại trụ sở của tổ chức thực hiện trợ giúp pháp lý hoặc tại địa điểm khác ngoài trụ sở của tổ chức bảo đảm điều kiện để việc trình bày yêu cầu được dễ dàng, thuận lợi. 2. tại trụ sở của tổ chức thực hiện trợ giúp pháp lý phải niêm yết lịch tiếp, nội quy tiếp người được trợ giúp pháp lý. "
example_title: "Example #1"
inference:
parameters:
temperature: 0.0
min_length: 32
max_length: 256
- text: "Trả lời câu hỏi: chuyến bay công vụ được định nghĩa như thế nào? Trong ngữ cảnh: 194/2016/tt-btc điều 2. giải thích từ ngữ. trong thông tư này, các từ ngữ dưới đây được hiểu như sau: 1. chuyến bay công vụ: là chuyến bay của tàu bay quân sự, tàu bay chuyên dụng của lực lượng hải quan, công an và chuyến bay của tàu bay dân dụng sử dụng hoàn toàn cho mục đích công vụ nhà nước. 2. chuyến bay chuyên cơ: là chuyến bay được sử dụng hoàn toàn riêng biệt hoặc kết hợp vận chuyển thương mại và được cơ quan nhà nước có thẩm quyền xác nhận hoặc thông báo theo quy định tại nghị định số 03/2009/nđ-cp ngày 09 tháng 01 năm 2009 của chính phủ về công tác đảm bảo an toàn cho chuyến bay chuyên cơ. "
example_title: "Example #2"
inference:
parameters:
temperature: 0.0
min_length: 32
max_length: 256
- text: "Trả lời câu hỏi: có được cho thuê ô tô đang bị thế chấp cho ngân hàng không? Trong ngữ cảnh: 91/2015/qh13 điều 321. quyền của bên thế chấp. 1. khai thác công dụng, hưởng hoa lợi, lợi tức từ tài sản thế chấp, trừ trường hợp hoa lợi, lợi tức cũng là tài sản thế chấp theo thỏa thuận. 2. đầu tư để làm tăng giá trị của tài sản thế chấp. 3. nhận lại tài sản thế chấp do người thứ ba giữ và giấy tờ liên quan đến tài sản thế chấp do bên nhận thế chấp giữ khi nghĩa vụ được bảo đảm bằng thế chấp chấm dứt hoặc được thay thế bằng biện pháp bảo đảm khác. 4. được bán, thay thế, trao đổi tài sản thế chấp, nếu tài sản đó là hàng hóa luân chuyển trong quá trình sản xuất, kinh doanh. trong trường hợp này, quyền yêu cầu bên mua thanh toán tiền, số tiền thu được, tài sản hình thành từ số tiền thu được, tài sản được thay thế hoặc được trao đổi trở thành tài sản thế chấp. trường hợp tài sản thế chấp là kho hàng thì bên thế chấp được quyền thay thế hàng hóa trong kho, nhưng phải bảo đảm giá trị của hàng hóa trong kho đúng như thỏa thuận. 5. được bán, trao đổi, tặng cho tài sản thế chấp không phải là hàng hóa luân chuyển trong quá trình sản xuất, kinh doanh, nếu được bên nhận thế chấp đồng ý hoặc theo quy định của luật. 6. được cho thuê, cho mượn tài sản thế chấp nhưng phải thông báo cho bên thuê, bên mượn biết về việc tài sản cho thuê, cho mượn đang được dùng để thế chấp và phải thông báo cho bên nhận thế chấp biết. "
example_title: "Example #3"
inference:
parameters:
temperature: 0.0
min_length: 32
max_length: 256
---
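The card only defines widget examples; the sketch below (an assumption, not from the original card) queries the model with the same question-plus-context prompt format used by those examples.
```python
# Sketch only: reuses the "Trả lời câu hỏi: ... Trong ngữ cảnh: ..." prompt format from the widget.
from transformers import pipeline

qa = pipeline("text2text-generation", model="wanderer2k1/T5-LawsQA")
prompt = (
    "Trả lời câu hỏi: chuyến bay công vụ được định nghĩa như thế nào? "
    "Trong ngữ cảnh: 194/2016/tt-btc điều 2. giải thích từ ngữ. ..."
)
print(qa(prompt, max_length=256)[0]["generated_text"])
```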
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e4_s6789_v3
|
KingKazma
| 2023-07-17T15:35:18Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T15:35:15Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e2_s108_v3
|
KingKazma
| 2023-07-17T15:29:17Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:55:15Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e1_s108_v3
|
KingKazma
| 2023-07-17T15:22:19Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:47:41Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e0_s108_v3
|
KingKazma
| 2023-07-17T15:15:20Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:40:07Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e1_s6789_v3
|
KingKazma
| 2023-07-17T15:13:14Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T01:52:29Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e-1_s108_v3
|
KingKazma
| 2023-07-17T15:08:17Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:32:34Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
bhenrym14/llama-33b-lxctx-PI-16384-LoRA
|
bhenrym14
| 2023-07-17T15:08:06Z | 0 | 2 | null |
[
"region:us"
] | null | 2023-07-17T14:58:26Z |
Mostly untested!
# RoPE Scaled QLoRA Long Context Extension of Llama-33b (LoRA)
## Overview
This is base Llama-33b with minimal additional training to extend the useful context window.
- Context length extended to 16384 by RoPE Scaled Embeddings (Position Interpolation).
- Pretrained for an additional 100 steps on 8192-token sequences from the Pile dataset.
- The merged model is used as the starting point for training [bhenrym14/airoboros-33b-gpt4-1.4.1-lxctx-PI-16384-LoRA](https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-lxctx-PI-16384-LoRA)
**This is a QLoRA fine-tune**
Pretraining took 10 hours on 1x RTX 6000 Ada.
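The Position Interpolation mentioned above compresses position indices so that the extended window reuses the RoPE range the model saw during pretraining. The sketch below illustrates the idea; the 2048-to-16384 scale factor is inferred from the numbers in this card, not stated as an implementation detail.
```python
import torch

class ScaledRotaryEmbedding(torch.nn.Module):
    """Illustrative RoPE with Position Interpolation: positions are divided by `scale`
    so 16384 positions are squeezed into the original 2048-position range."""
    def __init__(self, dim, base=10000, scale=16384 / 2048):
        super().__init__()
        inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
        self.register_buffer("inv_freq", inv_freq)
        self.scale = scale

    def forward(self, seq_len, device=None):
        # Fractional (interpolated) positions instead of 0 .. seq_len-1
        t = torch.arange(seq_len, device=device, dtype=torch.float32) / self.scale
        freqs = torch.einsum("i,j->ij", t, self.inv_freq)
        emb = torch.cat((freqs, freqs), dim=-1)
        return emb.cos(), emb.sin()
```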
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e0_s6789_v3
|
KingKazma
| 2023-07-17T15:05:54Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T01:45:12Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
roa7n/gpt2-human_nontata_promoters-last_layer
|
roa7n
| 2023-07-17T15:01:45Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T15:01:43Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e-1_s6789_v3
|
KingKazma
| 2023-07-17T14:58:40Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T01:37:50Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
michaelee0407/path-to-save-model
|
michaelee0407
| 2023-07-17T14:36:19Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-17T14:07:28Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - michaelee0407/path-to-save-model
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
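A minimal inference sketch (assumed usage, not part of the original card) that loads these LoRA weights on top of the base checkpoint named above:
```python
# Sketch only: load the LoRA weights onto the runwayml/stable-diffusion-v1-5 base model.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("michaelee0407/path-to-save-model")

image = pipe("a photo of sks dog in a bucket", num_inference_steps=30).images[0]
image.save("sks_dog.png")
```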
|
Ruborobot/bert-base-cased-finetuned-TeacherMomentsConfusion
|
Ruborobot
| 2023-07-17T14:12:58Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-12T19:36:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert-base-cased-finetuned-TeacherMomentsConfusion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-TeacherMomentsConfusion
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7761
- Accuracy: 0.6607
- Precision: 0.1951
- Recall: 0.4872
- F1: 0.2786
- Balanced Accuracy: 0.5874
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Balanced Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-----------------:|
| No log | 1.0 | 295 | 0.6697 | 0.8655 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.6915 | 2.0 | 590 | 0.6861 | 0.6303 | 0.1765 | 0.4769 | 0.2576 | 0.5656 |
| 0.6915 | 3.0 | 885 | 0.7761 | 0.6607 | 0.1951 | 0.4872 | 0.2786 | 0.5874 |
| 0.5506 | 4.0 | 1180 | 1.2897 | 0.6828 | 0.1911 | 0.4205 | 0.2628 | 0.5720 |
| 0.5506 | 5.0 | 1475 | 1.9368 | 0.7938 | 0.1977 | 0.1744 | 0.1853 | 0.5322 |
| 0.2161 | 6.0 | 1770 | 2.3813 | 0.7738 | 0.1878 | 0.2051 | 0.1961 | 0.5336 |
| 0.0445 | 7.0 | 2065 | 3.0640 | 0.8241 | 0.1809 | 0.0872 | 0.1176 | 0.5129 |
| 0.0445 | 8.0 | 2360 | 3.4525 | 0.8255 | 0.1915 | 0.0923 | 0.1246 | 0.5159 |
| 0.0131 | 9.0 | 2655 | 3.5113 | 0.82 | 0.1827 | 0.0974 | 0.1271 | 0.5149 |
| 0.0131 | 10.0 | 2950 | 3.5255 | 0.8138 | 0.1849 | 0.1128 | 0.1401 | 0.5178 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
digiplay/PhotoSomnia_vFinal
|
digiplay
| 2023-07-17T14:12:27Z | 435 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-17T13:45:52Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/18637/photosomnia
Original author's demo image:

Sample image generated through Hugging Face's API:


|
nored355/finetuning-sentiment-model-6000-samples
|
nored355
| 2023-07-17T14:12:17Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-17T14:02:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-6000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9066666666666666
- name: F1
type: f1
value: 0.9060402684563759
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-6000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5302
- Accuracy: 0.9067
- F1: 0.9060
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
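A minimal usage sketch (assumed, not part of the original card) for running the sentiment classifier:
```python
# Sketch only: binary sentiment classification via the transformers pipeline.
from transformers import pipeline

clf = pipeline("text-classification", model="nored355/finetuning-sentiment-model-6000-samples")
print(clf("This movie was a wonderful surprise from start to finish."))
```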
|
hseokool/vicuna-7b-v1.3-230717-01
|
hseokool
| 2023-07-17T14:09:26Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T14:09:24Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
Oslaw/q-FrozenLake-v1-4x4-noSlippery
|
Oslaw
| 2023-07-17T13:44:53Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-17T13:44:51Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# `load_from_hub` is the helper defined in the Deep RL Course notebook; it downloads
# and unpickles the Q-table dictionary from the Hub.
model = load_from_hub(repo_id="Oslaw/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
NasimB/all-base-rarity-all-cbt-rarity-all-p8k-iorder-est-5p5k
|
NasimB
| 2023-07-17T13:31:00Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-17T11:32:22Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: all-base-rarity-all-cbt-rarity-all-p8k-iorder-est-5p5k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-base-rarity-all-cbt-rarity-all-p8k-iorder-est-5p5k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.7559 | 0.31 | 500 | 5.6511 |
| 5.4062 | 0.63 | 1000 | 5.2172 |
| 5.0687 | 0.94 | 1500 | 4.9678 |
| 4.7662 | 1.25 | 2000 | 4.8187 |
| 4.628 | 1.57 | 2500 | 4.6878 |
| 4.5225 | 1.88 | 3000 | 4.5768 |
| 4.3098 | 2.19 | 3500 | 4.5210 |
| 4.2125 | 2.51 | 4000 | 4.4508 |
| 4.1764 | 2.82 | 4500 | 4.3910 |
| 4.0275 | 3.13 | 5000 | 4.3703 |
| 3.8912 | 3.45 | 5500 | 4.3383 |
| 3.8735 | 3.76 | 6000 | 4.3003 |
| 3.7925 | 4.07 | 6500 | 4.2941 |
| 3.5917 | 4.39 | 7000 | 4.2879 |
| 3.5908 | 4.7 | 7500 | 4.2713 |
| 3.577 | 5.01 | 8000 | 4.2617 |
| 3.4004 | 5.33 | 8500 | 4.2710 |
| 3.3993 | 5.64 | 9000 | 4.2699 |
| 3.3898 | 5.95 | 9500 | 4.2692 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
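A minimal usage sketch (assumed, not part of the original card) for sampling from the fine-tuned GPT-2:
```python
# Sketch only: text generation via the transformers pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/all-base-rarity-all-cbt-rarity-all-p8k-iorder-est-5p5k")
print(generator("Once upon a time", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```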
|
quangnguyennn/pokemon-lora-sophia
|
quangnguyennn
| 2023-07-17T13:28:19Z | 4 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-17T06:53:24Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - quangnguyennn/pokemon-lora-sophia
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images below.




|
llm-toys/falcon-7b-paraphrase-tone-dialogue-summary-topic
|
llm-toys
| 2023-07-17T13:26:47Z | 15 | 5 |
peft
|
[
"peft",
"text-generation",
"en",
"license:wtfpl",
"region:us"
] |
text-generation
| 2023-07-17T09:29:41Z |
---
library_name: peft
license: wtfpl
language:
- en
pipeline_tag: text-generation
---
## Model description
The tiiuae/falcon-7b model fine-tuned for Paraphrasing, Changing the Tone of the input sentence (to casual/professional/witty), and
Summary and Topic generation from a dialogue. Data for Paraphrasing and Changing the Tone was generated using gpt-35-turbo, and a sample of roughly 1000 data points from the
[Dialogsum](https://github.com/cylnlp/dialogsum) dataset was used for Summary and Topic generation.
Look at the repo [llm-toys](https://github.com/kuutsav/llm-toys) for usage and other details.
Try in colab (you might need the pro version):
<a target="_blank" href="https://colab.research.google.com/drive/1hhANNzQkxhrPIIrxtvf0WT_Ste8KrFjh#scrollTo=d6-OJJq_q5Qr">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
## Installation
```bash
pip install llm-toys
```
```python
from llm_toys.tasks import GeneralTaskAssitant
from llm_toys.config import TaskType
gta = GeneralTaskAssitant()
gta.complete(TaskType.PARAPHRASE_TONE, "Hey, can yuo hepl me cancel my last order?")
# "Could you assist me in canceling my previous order?"
gta.complete(TaskType.PARAPHRASE_TONE, "Hey, can yuo hepl me cancel my last order?", tone="casual")
# "Hey, can you help me cancel my last order?"
gta.complete(TaskType.PARAPHRASE_TONE, "Hey, can yuo hepl me cancel my last order?", tone="professional")
# "I would appreciate if you could assist me in canceling my previous order."
gta.complete(TaskType.PARAPHRASE_TONE, "Hey, can yuo hepl me cancel my last order?", tone="witty")
# "Oops! Looks like I got a little carried away with my shopping spree. Can you help me cancel my last order?"
chat = """
#Person1#: I'm so excited for the premiere of the latest Studio Ghibli movie!
#Person2#: What's got you so hyped?
#Person1#: Studio Ghibli movies are pure magic! The animation, storytelling, everything is incredible.
#Person2#: Which movie is it?
#Person1#: It's called "Whisper of the Wind." It's about a girl on a magical journey to save her village.
#Person2#: Sounds amazing! I'm in for the premiere.
#Person1#: Great! We're in for a visual masterpiece and a heartfelt story.
#Person2#: Can't wait to be transported to their world.
#Person1#: It'll be an unforgettable experience, for sure!
""".strip()
gta.complete(TaskType.DIALOGUE_SUMMARY_TOPIC, chat)
# {"summary": "#Person1# tells #Person2# about the upcoming Studio Ghibli movie.
# #Person1# thinks it's magical and #Person2#'s excited to watch it.",
# "topic": "Movie premiere"}
```
## Sample training data
```json
[
{
"original": "If you have any further questions, feel free to ask.",
"casual": "Got more questions? Feel free to ask away. I'm here to help!",
"professional": "Should you have any additional inquiries, please don't hesitate to ask.",
"witty": "Curiosity is always in style! If you have more mysteries to solve, I'm all ears!",
"paraphrase": "Don't hesitate to ask if you have any more questions."
},
{
"fname": "dev_473",
"dialogue": "#Person1#: Did you enjoy your weekend at the highland hotel? I heard it's and excellent place to stay and has good facilities.\n#Person2#: I had a wonderful time. The rooms are not very big, but they are well furnished. The restaurant is excellent and reasonably priced. There's a sauna and a Jacuzzi.\n#Person1#: Do they have a swimming pool?\n#Person2#: No, they don't. they have a beauty parlor, but I didn't go there.\n#Person1#: What's the service like?\n#Person2#: It's very good. Check in and check out at the reception only took a few minutes. The wait staff is very good. A waiter recommended their baked fish, which tasted wonderful. The hotel was quite full, so I'd suggest making a reservation if you intend to go there. The hotel offers a discount at the weekends.\n#Person1#: It sounds perfect. Did you have any complaints at all?\n#Person2#: There was a problem with the internet access, so I couldn't check my email, but I didn't complain about it to the management.\n#Person1#: I suppose you were happy to forget about the outside world.\n#Person2#: Yes, I was. Here's their business card.\n#Person1#: Thanks. Was there a mina bar in the room?\n#Person2#: No, there wasn't. There is a bar on the ground floor and of course you can buy drinks in the restaurant to go with your meal.\n#Person1#: One of the things I dislike about hotels is that everyone expects tips.\n#Person2#: I know. At the inland hotel, they have an interesting policy. When you check out, you put some money in a special box at reception. Each evening, the money in the box is shared equally by the hotel staff.",
"summary": "#Person2# enjoys #Person2#'s weekend at the highland hotel because of the hotel's excellent and reasonably priced restaurant and good service. #Person2# introduces the hotel's facilities, weekend discount, and its interesting tip policy and suggests #Person1# make a reservation in advance.",
"topic": "Experience in hotel"
}
]
```
## Training params
```json
{
"batch_size": 1,
"eval_ratio": 0.05,
"eval_steps": 100,
"gradient_accumulation_steps": 4,
"learning_rate": 0.0001,
"logging_steps": 100,
"lora_alpha": 32,
"lora_dropout": 0.05,
"lora_r": 16,
"max_length": 1024,
"model_name": "tiiuae/falcon-7b",
"num_train_epochs": 3,
"seed": 10,
"task_type": "paraphrase_tone,dialogue_summary_topic",
"use_aim": True
}
```
## Training curve

## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
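The fields above mirror the 4-bit settings of `transformers.BitsAndBytesConfig`. A minimal sketch (an assumption, not from the original card) of expressing the same configuration when loading the tiiuae/falcon-7b base model listed in the training params:
```python
# Sketch only: the 4-bit quantization settings above as a BitsAndBytesConfig.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b", quantization_config=bnb_config, device_map="auto", trust_remote_code=True
)
```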
### Framework versions
- PEFT 0.4.0.dev0
|
B0b91/AILearnsToMultiply2
|
B0b91
| 2023-07-17T13:24:04Z | 0 | 0 |
mlconsole
|
[
"mlconsole",
"tabular-regression",
"dataset:house_price_prediction",
"license:unknown",
"model-index",
"region:us"
] |
tabular-regression
| 2023-07-17T13:23:58Z |
---
license: unknown
inference: false
tags:
- mlconsole
- tabular-regression
library_name: mlconsole
metrics:
- mae
- loss
datasets:
- house_price_prediction
model-index:
- name: AILearnsToMultiply2
results:
- task:
type: tabular-regression
name: tabular-regression
dataset:
type: house_price_prediction
name: house_price_prediction
metrics:
- type: mae
name: Mean absolute error
value: 4.996237277984619
- type: loss
name: Model loss
value: 45.071861267089844
---
# regression model trained on "house_price_prediction"
🤖 [Load and use this model](https://mlconsole.com/model/hf/B0b91/AILearnsToMultiply2) in one click.
🧑💻 [Train your own model](https://mlconsole.com) on ML Console.
|
Serjssv/whisper-tiny-v1
|
Serjssv
| 2023-07-17T13:24:04Z | 79 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-17T12:59:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-v1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.32762691853600945
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-v1
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6409
- Wer Ortho: 33.1277
- Wer: 0.3276
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0009 | 17.86 | 500 | 0.6409 | 33.1277 | 0.3276 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
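A minimal usage sketch (assumed, not part of the original card) for transcribing a short English clip; "sample.wav" is a placeholder path:
```python
# Sketch only: short-form transcription via the transformers ASR pipeline.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Serjssv/whisper-tiny-v1")
print(asr("sample.wav")["text"])
```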
|
Oslaw/ppo-Huggy
|
Oslaw
| 2023-07-17T13:23:22Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-17T13:23:16Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Oslaw/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Camih/ppo-Huggy
|
Camih
| 2023-07-17T13:05:41Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-17T13:05:30Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Camih/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
chloe0x0/mutyGPT
|
chloe0x0
| 2023-07-17T13:03:45Z | 145 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-17T07:45:41Z |
---
pipeline_tag: conversational
---
|
huarddk/finetuning-sentiment-model-350-samples
|
huarddk
| 2023-07-17T13:00:02Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-12T14:50:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-350-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-350-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1608
- Accuracy: 0.9619
- F1: 0.9806
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
terionmanu/bloom_3b_squad_v2
|
terionmanu
| 2023-07-17T12:57:10Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T12:56:45Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
naimul011/fine_tuned_llama-7b-100-hf
|
naimul011
| 2023-07-17T12:48:40Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T10:47:50Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
google/flan-t5-base
|
google
| 2023-07-17T12:48:39Z | 804,134 | 836 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"en",
"fr",
"ro",
"de",
"multilingual",
"dataset:svakulenk0/qrecc",
"dataset:taskmaster2",
"dataset:djaym7/wiki_dialog",
"dataset:deepmind/code_contests",
"dataset:lambada",
"dataset:gsm8k",
"dataset:aqua_rat",
"dataset:esnli",
"dataset:quasc",
"dataset:qed",
"arxiv:2210.11416",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-21T10:02:31Z |
---
language:
- en
- fr
- ro
- de
- multilingual
tags:
- text2text-generation
widget:
- text: "Translate to German: My name is Arthur"
example_title: "Translation"
- text: "Please answer to the following question. Who is going to be the next Ballon d'or?"
example_title: "Question Answering"
- text: "Q: Can Geoffrey Hinton have a conversation with George Washington? Give the rationale before answering."
example_title: "Logical reasoning"
- text: "Please answer the following question. What is the boiling point of Nitrogen?"
example_title: "Scientific knowledge"
- text: "Answer the following yes/no question. Can you write a whole Haiku in a single tweet?"
example_title: "Yes/no question"
- text: "Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?"
example_title: "Reasoning task"
- text: "Q: ( False or not False or False ) is? A: Let's think step by step"
example_title: "Boolean Expressions"
- text: "The square root of x is the cube root of y. What is y to the power of 2, if x = 4?"
example_title: "Math reasoning"
- text: "Premise: At my age you will probably have learnt one lesson. Hypothesis: It's not certain how many lessons you'll learn by your thirties. Does the premise entail the hypothesis?"
example_title: "Premise and hypothesis"
datasets:
- svakulenk0/qrecc
- taskmaster2
- djaym7/wiki_dialog
- deepmind/code_contests
- lambada
- gsm8k
- aqua_rat
- esnli
- quasc
- qed
license: apache-2.0
---
# Model Card for FLAN-T5 base
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/flan2_architecture.jpg"
alt="drawing" width="600"/>
# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
5. [Training Details](#training-details)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
# TL;DR
If you already know T5, FLAN-T5 is just better at everything. For the same number of parameters, these models have been fine-tuned on more than 1000 additional tasks covering additional languages as well.
As mentioned in the first few lines of the abstract:
> Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints,1 which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
**Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy-pasted from the [T5 model card](https://huggingface.co/t5-large).
# Model Details
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English, Spanish, Japanese, Persian, Hindi, French, Chinese, Bengali, Gujarati, German, Telugu, Italian, Arabic, Polish, Tamil, Marathi, Malayalam, Oriya, Panjabi, Portuguese, Urdu, Galician, Hebrew, Korean, Catalan, Thai, Dutch, Indonesian, Vietnamese, Bulgarian, Filipino, Central Khmer, Lao, Turkish, Russian, Croatian, Swedish, Yoruba, Kurdish, Burmese, Malay, Czech, Finnish, Somali, Tagalog, Swahili, Sinhala, Kannada, Zhuang, Igbo, Xhosa, Romanian, Haitian, Estonian, Slovak, Lithuanian, Greek, Nepali, Assamese, Norwegian
- **License:** Apache 2.0
- **Related Models:** [All FLAN-T5 Checkpoints](https://huggingface.co/models?search=flan-t5)
- **Original Checkpoints:** [All Original FLAN-T5 Checkpoints](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints)
- **Resources for more information:**
- [Research paper](https://arxiv.org/pdf/2210.11416.pdf)
- [GitHub Repo](https://github.com/google-research/t5x)
- [Hugging Face FLAN-T5 Docs (Similar to T5) ](https://huggingface.co/docs/transformers/model_doc/t5)
# Usage
Find below some example scripts on how to use the model in `transformers`:
## Using the Pytorch model
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-base")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-base")
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-base")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-base", device_map="auto")
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU using different precisions
#### FP16
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-base")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-base", device_map="auto", torch_dtype=torch.float16)
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
#### INT8
<details>
<summary> Click to expand </summary>
```python
# pip install bitsandbytes accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-base")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-base", device_map="auto", load_in_8bit=True)
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
# Uses
## Direct Use and Downstream Use
The authors write in [the original paper's model card](https://arxiv.org/pdf/2210.11416.pdf) that:
> The primary use is research on language models, including: research on zero-shot NLP tasks and in-context few-shot learning NLP tasks, such as reasoning, and question answering; advancing fairness and safety research, and understanding limitations of current large language models
See the [research paper](https://arxiv.org/pdf/2210.11416.pdf) for further details.
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
The information in this section is copied from the model's [official model card](https://arxiv.org/pdf/2210.11416.pdf):
> Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application.
## Ethical considerations and risks
> Flan-T5 is fine-tuned on a large corpus of text data that was not filtered for explicit content or assessed for existing biases. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.
## Known Limitations
> Flan-T5 has not been tested in real world applications.
## Sensitive Use:
> Flan-T5 should not be applied for any unacceptable use cases, e.g., generation of abusive speech.
# Training Details
## Training Data
The model was trained on a mixture of tasks, which includes the tasks described in the table below (from the original paper, Figure 2):

## Training Procedure
According to the model card from the [original paper](https://arxiv.org/pdf/2210.11416.pdf):
> These models are based on pretrained T5 (Raffel et al., 2020) and fine-tuned with instructions for better zero-shot and few-shot performance. There is one fine-tuned Flan model per T5 model size.
The model has been trained on TPU v3 or TPU v4 pods, using [`t5x`](https://github.com/google-research/t5x) codebase together with [`jax`](https://github.com/google/jax).
# Evaluation
## Testing Data, Factors & Metrics
The authors evaluated the model on various tasks covering several languages (1836 in total). See the table below for some quantitative evaluation:

For full details, please check the [research paper](https://arxiv.org/pdf/2210.11416.pdf).
## Results
For full results for FLAN-T5-Base, see the [research paper](https://arxiv.org/pdf/2210.11416.pdf), Table 3.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Google Cloud TPU Pods - TPU v3 or TPU v4 | Number of chips ≥ 4.
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Citation
**BibTeX:**
```bibtex
@misc{https://doi.org/10.48550/arxiv.2210.11416,
doi = {10.48550/ARXIV.2210.11416},
url = {https://arxiv.org/abs/2210.11416},
author = {Chung, Hyung Won and Hou, Le and Longpre, Shayne and Zoph, Barret and Tay, Yi and Fedus, William and Li, Eric and Wang, Xuezhi and Dehghani, Mostafa and Brahma, Siddhartha and Webson, Albert and Gu, Shixiang Shane and Dai, Zhuyun and Suzgun, Mirac and Chen, Xinyun and Chowdhery, Aakanksha and Narang, Sharan and Mishra, Gaurav and Yu, Adams and Zhao, Vincent and Huang, Yanping and Dai, Andrew and Yu, Hongkun and Petrov, Slav and Chi, Ed H. and Dean, Jeff and Devlin, Jacob and Roberts, Adam and Zhou, Denny and Le, Quoc V. and Wei, Jason},
keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Scaling Instruction-Finetuned Language Models},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
## Model Recycling
[Evaluation on 36 datasets](https://ibm.github.io/model-recycling/model_gain_chart?avg=9.16&mnli_lp=nan&20_newsgroup=3.34&ag_news=1.49&amazon_reviews_multi=0.21&anli=13.91&boolq=16.75&cb=23.12&cola=9.97&copa=34.50&dbpedia=6.90&esnli=5.37&financial_phrasebank=18.66&imdb=0.33&isear=1.37&mnli=11.74&mrpc=16.63&multirc=6.24&poem_sentiment=14.62&qnli=3.41&qqp=6.18&rotten_tomatoes=2.98&rte=24.26&sst2=0.67&sst_5bins=5.44&stsb=20.68&trec_coarse=3.95&trec_fine=10.73&tweet_ev_emoji=13.39&tweet_ev_emotion=4.62&tweet_ev_hate=3.46&tweet_ev_irony=9.04&tweet_ev_offensive=1.69&tweet_ev_sentiment=0.75&wic=14.22&wnli=9.44&wsc=5.53&yahoo_answers=4.14&model_name=google%2Fflan-t5-base&base_name=google%2Ft5-v1_1-base) using google/flan-t5-base as a base model yields average score of 77.98 in comparison to 68.82 by google/t5-v1_1-base.
The model is ranked 1st among all tested models for the google/t5-v1_1-base architecture as of 06/02/2023
Results:
| 20_newsgroup | ag_news | amazon_reviews_multi | anli | boolq | cb | cola | copa | dbpedia | esnli | financial_phrasebank | imdb | isear | mnli | mrpc | multirc | poem_sentiment | qnli | qqp | rotten_tomatoes | rte | sst2 | sst_5bins | stsb | trec_coarse | trec_fine | tweet_ev_emoji | tweet_ev_emotion | tweet_ev_hate | tweet_ev_irony | tweet_ev_offensive | tweet_ev_sentiment | wic | wnli | wsc | yahoo_answers |
|---------------:|----------:|-----------------------:|--------:|--------:|--------:|--------:|-------:|----------:|--------:|-----------------------:|-------:|--------:|--------:|--------:|----------:|-----------------:|--------:|--------:|------------------:|--------:|--------:|------------:|--------:|--------------:|------------:|-----------------:|-------------------:|----------------:|-----------------:|---------------------:|---------------------:|--------:|-------:|--------:|----------------:|
| 86.2188 | 89.6667 | 67.12 | 51.9688 | 82.3242 | 78.5714 | 80.1534 | 75 | 77.6667 | 90.9507 | 85.4 | 93.324 | 72.425 | 87.2457 | 89.4608 | 62.3762 | 82.6923 | 92.7878 | 89.7724 | 89.0244 | 84.8375 | 94.3807 | 57.2851 | 89.4759 | 97.2 | 92.8 | 46.848 | 80.2252 | 54.9832 | 76.6582 | 84.3023 | 70.6366 | 70.0627 | 56.338 | 53.8462 | 73.4 |
For more information, see: [Model Recycling](https://ibm.github.io/model-recycling/)
|
KoRiF/whisper-tiny-en
|
KoRiF
| 2023-07-17T12:26:37Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-17T11:52:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-en
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train[450:]
args: en-US
metrics:
- name: Wer
type: wer
value: 0.3252656434474616
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-en
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8008
- Wer Ortho: 0.3523
- Wer: 0.3253
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 1.593 | 1.79 | 50 | 1.0054 | 0.5003 | 0.4185 |
| 0.3982 | 3.57 | 100 | 0.7250 | 0.4121 | 0.3554 |
| 0.2075 | 5.36 | 150 | 0.6898 | 0.4226 | 0.3518 |
| 0.0957 | 7.14 | 200 | 0.6909 | 0.4028 | 0.3371 |
| 0.0412 | 8.93 | 250 | 0.7296 | 0.3695 | 0.3300 |
| 0.0186 | 10.71 | 300 | 0.7522 | 0.3627 | 0.3270 |
| 0.008 | 12.5 | 350 | 0.7703 | 0.3584 | 0.3288 |
| 0.0049 | 14.29 | 400 | 0.7756 | 0.3553 | 0.3294 |
| 0.0032 | 16.07 | 450 | 0.7889 | 0.3516 | 0.3235 |
| 0.0023 | 17.86 | 500 | 0.8008 | 0.3523 | 0.3253 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ShekDass/donut-base-sroie-cord
|
ShekDass
| 2023-07-17T12:16:05Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-07-17T12:11:18Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie-cord
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie-cord
This model is a fine-tuned version of [naver-clova-ix/donut-base-finetuned-cord-v2](https://huggingface.co/naver-clova-ix/donut-base-finetuned-cord-v2) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
peterdamn/distil-ast-audioset-finetuned-gtzan
|
peterdamn
| 2023-07-17T12:05:44Z | 173 | 0 |
transformers
|
[
"transformers",
"pytorch",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-17T08:29:32Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distil-ast-audioset-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distil-ast-audioset-finetuned-gtzan
This model is a fine-tuned version of [bookbot/distil-ast-audioset](https://huggingface.co/bookbot/distil-ast-audioset) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5033
- Accuracy: 0.89
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7719 | 1.0 | 112 | 1.0881 | 0.65 |
| 0.3801 | 2.0 | 225 | 0.8942 | 0.7 |
| 0.3706 | 3.0 | 337 | 0.9499 | 0.75 |
| 0.3541 | 4.0 | 450 | 0.5243 | 0.87 |
| 0.0132 | 5.0 | 562 | 0.5716 | 0.81 |
| 0.0221 | 6.0 | 675 | 0.5164 | 0.87 |
| 0.0001 | 7.0 | 787 | 0.4789 | 0.91 |
| 0.0002 | 8.0 | 900 | 0.5062 | 0.87 |
| 0.0528 | 9.0 | 1012 | 0.5029 | 0.89 |
| 0.0002 | 9.96 | 1120 | 0.5033 | 0.89 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|
yacine-djm/fg-bert-sustainability-15-1.5e-05-0.02-64
|
yacine-djm
| 2023-07-17T12:05:07Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-17T11:16:50Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: fg-bert-sustainability-15-1.5e-05-0.02-64
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fg-bert-sustainability-15-1.5e-05-0.02-64
This model is a fine-tuned version of [Raccourci/fairguest-bert](https://huggingface.co/Raccourci/fairguest-bert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0711
- F1: 0.9215
- Roc Auc: 0.9565
- Accuracy: 0.8846
On the validation dataset:
- Accuracy (hamming-loss based): 0.7800788954635107
- Accuracy (as a metric): 0.8326530612244898
- Global precision: 0.8695652173913043
- Global recall: 0.8536585365853658
- Global F1-score: 0.8615384615384616
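For illustration only (not part of the original card), a multi-label prediction sketch with the `text-classification` pipeline; the example sentence is invented and the label set depends on the fine-tuning data:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="yacine-djm/fg-bert-sustainability-15-1.5e-05-0.02-64",
    top_k=None,  # return scores for every label (multi-label setting)
)
print(classifier("The hotel switched to renewable electricity and banned single-use plastics."))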
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 55 | 0.3273 | 0.0 | 0.5 | 0.0956 |
| No log | 2.0 | 110 | 0.2344 | 0.3710 | 0.6182 | 0.2328 |
| No log | 3.0 | 165 | 0.1464 | 0.8973 | 0.9300 | 0.8441 |
| No log | 4.0 | 220 | 0.1143 | 0.9066 | 0.9405 | 0.8617 |
| No log | 5.0 | 275 | 0.0998 | 0.9091 | 0.9455 | 0.8659 |
| No log | 6.0 | 330 | 0.0901 | 0.9142 | 0.9490 | 0.8732 |
| No log | 7.0 | 385 | 0.0854 | 0.9121 | 0.9534 | 0.8721 |
| No log | 8.0 | 440 | 0.0778 | 0.9185 | 0.9538 | 0.8825 |
| No log | 9.0 | 495 | 0.0775 | 0.9119 | 0.9473 | 0.8763 |
| 0.1683 | 10.0 | 550 | 0.0742 | 0.9200 | 0.9535 | 0.8815 |
| 0.1683 | 11.0 | 605 | 0.0730 | 0.9196 | 0.9544 | 0.8805 |
| 0.1683 | 12.0 | 660 | 0.0716 | 0.9213 | 0.9556 | 0.8825 |
| 0.1683 | 13.0 | 715 | 0.0722 | 0.9218 | 0.9585 | 0.8836 |
| 0.1683 | 14.0 | 770 | 0.0712 | 0.9222 | 0.9580 | 0.8836 |
| 0.1683 | 15.0 | 825 | 0.0711 | 0.9215 | 0.9565 | 0.8846 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
moritzwilke/distilbert-base-uncased-finetuned-squad
|
moritzwilke
| 2023-07-17T11:50:41Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-17T09:13:23Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: moritzwilke/distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# moritzwilke/distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6756
- Train End Logits Accuracy: 0.5691
- Train Start Logits Accuracy: 0.5327
- Validation Loss: 1.2714
- Validation End Logits Accuracy: 0.6582
- Validation Start Logits Accuracy: 0.6184
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
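A minimal extractive-QA sketch, assuming the repository only ships the TensorFlow weights produced by this Keras run (hence `framework="tf"`):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="moritzwilke/distilbert-base-uncased-finetuned-squad",
    framework="tf",  # assumption: the checkpoint was saved from Keras/TensorFlow
)
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France.",
)
print(result)  # answer span, score, start/end character offsets
```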
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2766, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.6756 | 0.5691 | 0.5327 | 1.2714 | 0.6582 | 0.6184 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
DaniloGMatto/distilbert-base-uncased-finetuned-cola
|
DaniloGMatto
| 2023-07-17T11:43:06Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-17T11:32:33Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: DaniloGMatto/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# DaniloGMatto/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3235
- Validation Loss: 0.4519
- Train Matthews Correlation: 0.5089
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5136 | 0.4726 | 0.4337 | 0 |
| 0.3235 | 0.4519 | 0.5089 | 1 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
samarthum/model
|
samarthum
| 2023-07-17T11:40:49Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-17T10:57:31Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - samarthum/model
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
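A minimal generation sketch, assuming the adapter is stored in the standard `pytorch_lora_weights` format that `load_lora_weights` expects:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("samarthum/model")  # apply the DreamBooth LoRA adapter
image = pipe("a photo of sks dog in a bucket", num_inference_steps=30).images[0]
image.save("sks_dog.png")
```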
|
ignatius/igbo_model
|
ignatius
| 2023-07-17T11:37:03Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"token-classification",
"ig",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-26T10:16:27Z |
---
license: cc-by-nc-4.0
language:
- ig
---
|
Arindamdas70/llora7B-finetuned
|
Arindamdas70
| 2023-07-17T11:36:03Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T11:35:59Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
Wyzard1004/TaxiV3
|
Wyzard1004
| 2023-07-17T11:35:23Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-17T11:35:21Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: TaxiV3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.72
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym  # needed for gym.make below

# `load_from_hub` is the small helper from the Deep RL course notebook (it downloads and unpickles the Q-table)
model = load_from_hub(repo_id="Wyzard1004/TaxiV3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
planetk/distilbert-base-uncased-finetuned-squad
|
planetk
| 2023-07-17T11:24:35Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-17T09:16:54Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: planetk/distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# planetk/distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.9803
- Train End Logits Accuracy: 0.7295
- Train Start Logits Accuracy: 0.6894
- Validation Loss: 1.0988
- Validation End Logits Accuracy: 0.7002
- Validation Start Logits Accuracy: 0.6626
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11064, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.5242 | 0.6031 | 0.5649 | 1.1395 | 0.6898 | 0.6537 | 0 |
| 0.9803 | 0.7295 | 0.6894 | 1.0988 | 0.7002 | 0.6626 | 1 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.13.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
abhinavkashyap92/distilhubert-finetuned-gtzan
|
abhinavkashyap92
| 2023-07-17T11:19:37Z | 172 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-07T09:09:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6995
- Accuracy: 0.87
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7415 | 1.0 | 113 | 1.8323 | 0.43 |
| 1.2237 | 2.0 | 226 | 1.2223 | 0.65 |
| 0.8856 | 3.0 | 339 | 0.8612 | 0.71 |
| 0.658 | 4.0 | 452 | 0.6679 | 0.8 |
| 0.2701 | 5.0 | 565 | 0.5787 | 0.81 |
| 0.1232 | 6.0 | 678 | 0.7164 | 0.81 |
| 0.0726 | 7.0 | 791 | 0.6973 | 0.84 |
| 0.0253 | 8.0 | 904 | 0.6665 | 0.86 |
| 0.0939 | 9.0 | 1017 | 0.6756 | 0.87 |
| 0.0112 | 10.0 | 1130 | 0.6995 | 0.87 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
NasimB/cbt-rarity-all-end-p8k-guten-rarity-all-mixed
|
NasimB
| 2023-07-17T11:13:04Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-17T09:15:48Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: cbt-rarity-all-end-p8k-guten-rarity-all-mixed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cbt-rarity-all-end-p8k-guten-rarity-all-mixed
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3161
## Model description
More information needed
## Intended uses & limitations
More information needed
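For illustration (not from the original author), text can be sampled from the checkpoint with the standard generation pipeline; the prompt is arbitrary:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/cbt-rarity-all-end-p8k-guten-rarity-all-mixed")
sample = generator("Once upon a time", max_new_tokens=40, do_sample=True, temperature=0.8)
print(sample[0]["generated_text"])
```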
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6958 | 0.29 | 500 | 5.6331 |
| 5.3364 | 0.58 | 1000 | 5.2041 |
| 4.9968 | 0.88 | 1500 | 4.9505 |
| 4.7186 | 1.17 | 2000 | 4.8044 |
| 4.5561 | 1.46 | 2500 | 4.6841 |
| 4.4622 | 1.75 | 3000 | 4.5747 |
| 4.3263 | 2.04 | 3500 | 4.4949 |
| 4.1311 | 2.33 | 4000 | 4.4481 |
| 4.101 | 2.63 | 4500 | 4.3896 |
| 4.0645 | 2.92 | 5000 | 4.3353 |
| 3.871 | 3.21 | 5500 | 4.3306 |
| 3.8006 | 3.5 | 6000 | 4.3048 |
| 3.7879 | 3.79 | 6500 | 4.2723 |
| 3.6977 | 4.08 | 7000 | 4.2640 |
| 3.5167 | 4.38 | 7500 | 4.2617 |
| 3.5203 | 4.67 | 8000 | 4.2466 |
| 3.5051 | 4.96 | 8500 | 4.2353 |
| 3.3506 | 5.25 | 9000 | 4.2461 |
| 3.3237 | 5.54 | 9500 | 4.2458 |
| 3.3231 | 5.83 | 10000 | 4.2450 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
navyatiwari11/my-pet-cat-nxt
|
navyatiwari11
| 2023-07-17T11:10:54Z | 5 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-17T11:04:50Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat-nxt Dreambooth model trained by navyatiwari11 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: OPJU100
Sample pictures of this concept:

|
u2003158/saved_model
|
u2003158
| 2023-07-17T11:10:43Z | 15 | 0 |
keras
|
[
"keras",
"tf-keras",
"resnet",
"code",
"image-classification",
"arxiv:1910.09700",
"region:us"
] |
image-classification
| 2023-07-17T09:48:04Z |
---
metrics:
- accuracy
library_name: keras
pipeline_tag: image-classification
tags:
- code
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** .pb
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** BugSenseAI
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
chayanbhansali/clock-tower
|
chayanbhansali
| 2023-07-17T11:07:56Z | 10 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-17T11:03:06Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### clock_tower Dreambooth model trained by chayanbhansali with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
arick6/ppo-LunarLander-v2
|
arick6
| 2023-07-17T11:03:13Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-16T11:29:14Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.27 +/- 11.83
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list):
```python
import gymnasium as gym
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed to follow the usual "<algo>-<env>.zip" convention used by the course scripts
checkpoint = load_from_hub(repo_id="arick6/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
env = gym.make("LunarLander-v2")
```
|
yacine-djm/fg-bert-sustainability-15-1e-05-0.02-64
|
yacine-djm
| 2023-07-17T11:02:52Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-17T10:12:45Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: fg-bert-sustainability-15-1e-05-0.02-64
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fg-bert-sustainability-15-1e-05-0.02-64
This model is a fine-tuned version of [Raccourci/fairguest-bert](https://huggingface.co/Raccourci/fairguest-bert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0893
- F1: 0.9139
- Roc Auc: 0.9527
- Accuracy: 0.8711
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 55 | 0.3449 | 0.0 | 0.4999 | 0.0946 |
| No log | 2.0 | 110 | 0.3249 | 0.0 | 0.4999 | 0.0946 |
| No log | 3.0 | 165 | 0.2658 | 0.0755 | 0.5195 | 0.1320 |
| No log | 4.0 | 220 | 0.2092 | 0.4475 | 0.6489 | 0.3077 |
| No log | 5.0 | 275 | 0.1706 | 0.7755 | 0.8312 | 0.6663 |
| No log | 6.0 | 330 | 0.1461 | 0.8566 | 0.8998 | 0.7848 |
| No log | 7.0 | 385 | 0.1290 | 0.8929 | 0.9416 | 0.8430 |
| No log | 8.0 | 440 | 0.1161 | 0.9044 | 0.9463 | 0.8649 |
| No log | 9.0 | 495 | 0.1038 | 0.9111 | 0.9505 | 0.8680 |
| 0.2414 | 10.0 | 550 | 0.0993 | 0.9143 | 0.9523 | 0.8711 |
| 0.2414 | 11.0 | 605 | 0.0957 | 0.9106 | 0.9504 | 0.8669 |
| 0.2414 | 12.0 | 660 | 0.0932 | 0.9123 | 0.9516 | 0.8680 |
| 0.2414 | 13.0 | 715 | 0.0910 | 0.9185 | 0.9561 | 0.8784 |
| 0.2414 | 14.0 | 770 | 0.0901 | 0.9151 | 0.9538 | 0.8742 |
| 0.2414 | 15.0 | 825 | 0.0893 | 0.9139 | 0.9527 | 0.8711 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
naltatis/distilbert-base-uncased-finetuned-squad
|
naltatis
| 2023-07-17T10:59:14Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-17T09:13:27Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: naltatis/distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# naltatis/distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0002
- Train End Logits Accuracy: 0.7231
- Train Start Logits Accuracy: 0.6883
- Validation Loss: 1.1339
- Validation End Logits Accuracy: 0.6926
- Validation Start Logits Accuracy: 0.6580
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11064, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.5428 | 0.5983 | 0.5604 | 1.1748 | 0.6817 | 0.6417 | 0 |
| 1.0002 | 0.7231 | 0.6883 | 1.1339 | 0.6926 | 0.6580 | 1 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.13.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
quangnguyennn/pokemon-lora-xformer-sophia
|
quangnguyennn
| 2023-07-17T10:51:43Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-17T06:42:42Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - quangnguyennn/pokemon-lora-xformer-sophia
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images below.




|
roa7n/gpt2-human_nontata_promoters-rng
|
roa7n
| 2023-07-17T10:39:18Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T10:39:16Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
avichr/hebEMO_joy
|
avichr
| 2023-07-17T10:13:22Z | 264 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2102.01909",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
# HebEMO - Emotion Recognition Model for Modern Hebrew
<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">
HebEMO is a tool for polarity detection and emotion extraction from modern Hebrew user-generated content (UGC), trained on a unique Covid-19-related dataset that we collected and annotated.
HebEMO achieved a weighted average F1-score of 0.96 for polarity classification.
Emotion detection reached F1-scores of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best previously reported performance, even when compared to English.
## Emotion UGC Data Description
Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites between January 2020 and August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences.
~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise and trust.
The percentage of sentences in which each emotion appeared is found in the table below.
| | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment |
|------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------|
| **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 |
## Performance
### Emotion Recognition
| emotion | f1-score | precision | recall |
|-------------|----------|-----------|----------|
| anger | 0.96 | 0.99 | 0.93 |
| disgust | 0.97 | 0.98 | 0.96 |
|anticipation | 0.82 | 0.80 | 0.87 |
| fear | 0.79 | 0.88 | 0.72 |
| joy | 0.90 | 0.97 | 0.84 |
| sadness | 0.90 | 0.86 | 0.94 |
| surprise | 0.40 | 0.44 | 0.37 |
| trust | 0.83 | 0.86 | 0.80 |
*The above metrics are for the positive class (meaning, the emotion is reflected in the text).*
### Sentiment (Polarity) Analysis
| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| neutral | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |
*The sentiment (polarity) analysis model is also available on AWS! For more information, visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)*
## How to use
### Emotion Recognition Model
An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing)
```
# !pip install pyplutchik==0.0.7
# !pip install transformers==4.14.1
!git clone https://github.com/avichaychriqui/HeBERT.git
from HeBERT.src.HebEMO import *
HebEMO_model = HebEMO()
HebEMO_model.hebemo(input_path = 'data/text_example.txt')
# return analyzed pandas.DataFrame
hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True)
```
<img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" />
### For sentiment classification model (polarity ONLY):
```python
from transformers import AutoTokenizer, AutoModel, pipeline

tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis")  # same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")

# how to use?
sentiment_analysis = pipeline(
    "sentiment-analysis",
    model="avichr/heBERT_sentiment_analysis",
    tokenizer="avichr/heBERT_sentiment_analysis",
    return_all_scores = True
)

sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
>>> [[{'label': 'neutral', 'score': 0.9978172183036804},
>>>  {'label': 'positive', 'score': 0.0014792329166084528},
>>>  {'label': 'negative', 'score': 0.0007035882445052266}]]

sentiment_analysis('קפה זה טעים')
>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},
>>>  {'label': 'possitive', 'score': 0.9994067549705505},
>>>  {'label': 'negetive', 'score': 0.00011996887042187154}]]

sentiment_analysis('אני לא אוהב את העולם')
>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05},
>>>  {'label': 'possitive', 'score': 8.876807987689972e-05},
>>>  {'label': 'negetive', 'score': 0.9998190999031067}]]
```
## Contact us
[Avichay Chriqui](mailto:[email protected]) <br>
[Inbal yahav](mailto:[email protected]) <br>
The Coller Semitic Languages AI Lab <br>
Thank you, תודה, شكرا <br>
## If you used this model, please cite us as:
Chriqui, A., & Yahav, I. (2021). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. arXiv preprint arXiv:2102.01909.
```
@article{chriqui2021hebert,
title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
author={Chriqui, Avihay and Yahav, Inbal},
journal={arXiv preprint arXiv:2102.01909},
year={2021}
}
```
|
roa7n/gpt2-human_nontata_promoters
|
roa7n
| 2023-07-17T10:01:35Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T10:01:33Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
geolearner/fill-mask-camembert-base
|
geolearner
| 2023-07-17T09:53:32Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"camembert",
"fill-mask",
"en",
"dataset:SetFit/mrpc",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-17T02:45:50Z |
---
license: mit
datasets:
- SetFit/mrpc
language:
- en
metrics:
- f1
pipeline_tag: fill-mask
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
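Since this section is left open, here is a minimal, untested fill-mask sketch; it assumes the checkpoint keeps CamemBERT's `<mask>` token:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="geolearner/fill-mask-camembert-base")
for prediction in fill_mask("Le camembert est <mask> !")[:3]:
    print(prediction["token_str"], round(prediction["score"], 3))
```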
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TheUpperCaseGuy/Guy-Urdu-TTS
|
TheUpperCaseGuy
| 2023-07-17T09:34:18Z | 203 | 0 |
transformers
|
[
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-17T09:23:10Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Guy-Urdu-TTS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Guy-Urdu-TTS
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
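A minimal synthesis sketch following the usual SpeechT5 recipe (not from the original author); the x-vector speaker embedding from CMU Arctic is an assumption, and any 512-dimensional x-vector could be substituted:

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("TheUpperCaseGuy/Guy-Urdu-TTS")
model = SpeechT5ForTextToSpeech.from_pretrained("TheUpperCaseGuy/Guy-Urdu-TTS")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="yeh aik misali jumla hai", return_tensors="pt")  # placeholder romanised Urdu text
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)  # assumed speaker embedding

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```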
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 3000
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Aditya78b/my-awesome-model-new
|
Aditya78b
| 2023-07-17T09:28:38Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T09:27:56Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
peterdamn/distil-ast-audioset-finetuned-gtzan-finetuned-gtzan
|
peterdamn
| 2023-07-17T09:25:45Z | 166 | 0 |
transformers
|
[
"transformers",
"pytorch",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-17T07:43:01Z |
---
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distil-ast-audioset-finetuned-gtzan-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distil-ast-audioset-finetuned-gtzan-finetuned-gtzan
This model is a fine-tuned version of [peterdamn/distil-ast-audioset-finetuned-gtzan](https://huggingface.co/peterdamn/distil-ast-audioset-finetuned-gtzan) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8269
- Accuracy: 0.84
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2642 | 1.0 | 225 | 1.0594 | 0.8 |
| 0.1655 | 2.0 | 450 | 0.9670 | 0.84 |
| 0.0009 | 3.0 | 675 | 0.9774 | 0.79 |
| 0.0093 | 4.0 | 900 | 0.9330 | 0.83 |
| 0.0 | 5.0 | 1125 | 0.8269 | 0.84 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|
SotirisLegkas/Socratic-GODEL-2
|
SotirisLegkas
| 2023-07-17T09:21:47Z | 96 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-14T17:16:26Z |
Instruction: given a context, respond using Socratic dialogue principles by asking questions, considering various viewpoints, and promoting critical thinking.
|
akdeniz27/q-FrozenLake-v1-4x4-noSlippery
|
akdeniz27
| 2023-07-17T09:20:29Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-17T09:20:25Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym  # needed for gym.make below

# `load_from_hub` is the small helper from the Deep RL course notebook (it downloads and unpickles the Q-table)
model = load_from_hub(repo_id="akdeniz27/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Masterjp123/AnythingV5Nijimix
|
Masterjp123
| 2023-07-17T09:08:11Z | 8 | 0 |
diffusers
|
[
"diffusers",
"art",
"en",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-17T07:18:24Z |
---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
tags:
- art
---
A mix of Anything-v5 with 4 Niji Journey-style LoRAs, intended to recreate a Niji Journey-like style.
**WARNING: I HAVE NOT TESTED THIS MODEL AT ALL!**
Civitai link: https://civitai.com/models/110761/anythingv5nijimix
|
ykirpichev/speecht5_finetuned_voxpopuli_fr
|
ykirpichev
| 2023-07-17T09:02:15Z | 84 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"text-to-speech",
"generated_from_trainer",
"dataset:facebook/voxpopuli-fr",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-07-17T07:04:40Z |
---
license: mit
tags:
- text-to-speech
- generated_from_trainer
datasets:
- facebook/voxpopuli-fr
model-index:
- name: speecht5_finetuned_voxpopuli_fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_fr
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli-fr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4623
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5294 | 2.99 | 1000 | 0.4842 |
| 0.5094 | 5.98 | 2000 | 0.4688 |
| 0.5032 | 8.97 | 3000 | 0.4636 |
| 0.4981 | 11.96 | 4000 | 0.4623 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
cgr28/CartPole-v1
|
cgr28
| 2023-07-17T08:44:20Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-17T08:44:08Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ITG/wav2vec2-large-xlsr-gl
|
ITG
| 2023-07-17T08:35:55Z | 78 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ITG",
"PyTorch",
"Transformers",
"gl",
"dataset:openslr",
"license:cc-by-nc-nd-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-17T08:15:40Z |
---
license: cc-by-nc-nd-4.0
datasets:
- openslr
language:
- gl
pipeline_tag: automatic-speech-recognition
tags:
- ITG
- PyTorch
- Transformers
- wav2vec2
---
# Wav2Vec2 Large XLSR Galician
## Description
This is a fine-tuned version of the [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) pre-trained model for ASR in Galician.
---
## Dataset
The dataset used for fine-tuning this model was the [OpenSLR Galician](https://huggingface.co/datasets/openslr/viewer/SLR77) dataset, available in the openslr repository.
---
## Example inference script
### Check this example script to run our model in inference mode
```python
import torch
import librosa
from transformers import AutoProcessor, AutoModelForCTC

filename = "demo.wav"  # change this line to the name of your audio file
sample_rate = 16_000

processor = AutoProcessor.from_pretrained('ITG/wav2vec2-large-xlsr-gl')
model = AutoModelForCTC.from_pretrained('ITG/wav2vec2-large-xlsr-gl')
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)

speech_array, _ = librosa.load(filename, sr=sample_rate)
inputs = processor(speech_array, sampling_rate=sample_rate, return_tensors="pt", padding=True).to(device)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
decode_output = processor.batch_decode(torch.argmax(logits, dim=-1))[0]
print(f"ASR Galician wav2vec2-large-xlsr output: {decode_output}")
```
---
## Fine-tuning hyper-parameters
| **Hyper-parameter** | **Value** |
|:----------------------------------------:|:---------------------------:|
| Training batch size | 16 |
| Evaluation batch size | 8 |
| Learning rate | 3e-4 |
| Gradient accumulation steps | 2 |
| Group by length | true |
| Evaluation strategy | steps |
| Max training epochs | 50 |
| Max steps | 4000 |
| Generate max length | 225 |
| FP16 | true |
| Metric for best model | wer |
| Greater is better | false |
## Fine-tuning in a different dataset or style
If you're interested in fine-tuning your own wav2vec2 model, we suggest starting with the [facebook/wav2vec2-large-xlsr-53 model](https://huggingface.co/facebook/wav2vec2-large-xlsr-53). Additionally,
you may find this [fine-tuning on Galician notebook by Diego Fustes](https://github.com/diego-fustes/xlsr-fine-tuning-gl/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Galician.ipynb) to be a valuable resource.
This guide served as a helpful reference during the training process of this Galician wav2vec2-large-xlsr model!
|
nolanaatama/mnnrl
|
nolanaatama
| 2023-07-17T08:25:25Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-15T00:50:33Z |
---
license: creativeml-openrail-m
---
|
MelindaStudy/sd-class-butterflies-32
|
MelindaStudy
| 2023-07-17T08:16:47Z | 30 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-07-17T08:16:17Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('MelindaStudy/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
yacine-djm/fg-bert-sustainability-20-1e-05-0.02-64
|
yacine-djm
| 2023-07-17T08:16:36Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-17T07:13:31Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: fg-bert-sustainability-20-1e-05-0.02-64
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fg-bert-sustainability-20-1e-05-0.02-64
This model is a fine-tuned version of [Raccourci/fairguest-bert](https://huggingface.co/Raccourci/fairguest-bert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0768
- F1: 0.9111
- Roc Auc: 0.9481
- Accuracy: 0.8721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 55 | 0.3490 | 0.0 | 0.4999 | 0.0946 |
| No log | 2.0 | 110 | 0.3051 | 0.0 | 0.5 | 0.0956 |
| No log | 3.0 | 165 | 0.2361 | 0.2265 | 0.5641 | 0.1611 |
| No log | 4.0 | 220 | 0.1869 | 0.6345 | 0.7492 | 0.4657 |
| No log | 5.0 | 275 | 0.1469 | 0.8934 | 0.9318 | 0.8358 |
| No log | 6.0 | 330 | 0.1197 | 0.9057 | 0.9409 | 0.8555 |
| No log | 7.0 | 385 | 0.1060 | 0.9126 | 0.9507 | 0.8680 |
| No log | 8.0 | 440 | 0.0958 | 0.9151 | 0.9487 | 0.8763 |
| No log | 9.0 | 495 | 0.0912 | 0.9153 | 0.9496 | 0.8721 |
| 0.2274 | 10.0 | 550 | 0.0863 | 0.9163 | 0.9521 | 0.8742 |
| 0.2274 | 11.0 | 605 | 0.0842 | 0.9131 | 0.9507 | 0.8711 |
| 0.2274 | 12.0 | 660 | 0.0816 | 0.9160 | 0.9507 | 0.8773 |
| 0.2274 | 13.0 | 715 | 0.0810 | 0.9156 | 0.9511 | 0.8763 |
| 0.2274 | 14.0 | 770 | 0.0803 | 0.9097 | 0.9484 | 0.8680 |
| 0.2274 | 15.0 | 825 | 0.0790 | 0.9103 | 0.9466 | 0.8690 |
| 0.2274 | 16.0 | 880 | 0.0774 | 0.9100 | 0.9475 | 0.8701 |
| 0.2274 | 17.0 | 935 | 0.0779 | 0.9134 | 0.9499 | 0.8732 |
| 0.2274 | 18.0 | 990 | 0.0767 | 0.9136 | 0.9508 | 0.8763 |
| 0.0682 | 19.0 | 1045 | 0.0767 | 0.9112 | 0.9486 | 0.8732 |
| 0.0682 | 20.0 | 1100 | 0.0768 | 0.9111 | 0.9481 | 0.8721 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
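For quick checks, a minimal inference sketch is shown below; the multi-label pipeline settings and the example sentence are assumptions, not part of the original training setup.
```python
# Hypothetical usage sketch (not from the original authors):
# loads this checkpoint with the standard transformers text-classification pipeline.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="yacine-djm/fg-bert-sustainability-20-1e-05-0.02-64",
    top_k=None,  # assumption: return scores for all labels (multi-label setup)
)

print(classifier("The hotel switched to renewable energy and banned single-use plastics."))
```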
|
msrtoto/Coral_AI_TB
|
msrtoto
| 2023-07-17T08:15:56Z | 237 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-17T08:15:50Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Coral_AI_TB
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9821428656578064
---
# Coral_AI_TB
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Bird

#### Human

#### Lynx

#### Squirrel

#### Wolf

|
ykirpichev/speecht5_finetuned_voxpopuli_nl
|
ykirpichev
| 2023-07-17T08:13:17Z | 83 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-07-17T05:53:12Z |
---
license: mit
tags:
- generated_from_trainer
- text-to-speech
datasets:
- facebook/voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4569
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5242 | 4.3 | 1000 | 0.4753 |
| 0.5023 | 8.61 | 2000 | 0.4625 |
| 0.4941 | 12.91 | 3000 | 0.4577 |
| 0.4903 | 17.21 | 4000 | 0.4569 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
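A minimal inference sketch, assuming the usual SpeechT5 setup with an external x-vector speaker embedding; the cmu-arctic-xvectors dataset and the sample index are illustrative choices, not part of this card.
```python
# Hypothetical inference sketch; the speaker-embedding source is an assumption.
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

model_id = "ykirpichev/speecht5_finetuned_voxpopuli_nl"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hallo, dit is een test.", return_tensors="pt")
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)  # arbitrary speaker

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```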
|
ZaidHaris/bloom-560m-lora-tagger
|
ZaidHaris
| 2023-07-17T08:11:08Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T08:11:07Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
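A minimal loading sketch; the base model (bigscience/bloom-560m) is inferred from the repo name and the 8-bit flag mirrors the quantization config above, so treat both as assumptions.
```python
# Hypothetical loading sketch; the base model is inferred from the adapter's name.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-560m", load_in_8bit=True, device_map="auto"
)
model = PeftModel.from_pretrained(base, "ZaidHaris/bloom-560m-lora-tagger")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("Tag this product description: ...", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```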
|
thoshan/zeroStores
|
thoshan
| 2023-07-17T08:11:01Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-17T08:11:01Z |
---
license: creativeml-openrail-m
---
|
rtyui123/CartPole-v1
|
rtyui123
| 2023-07-17T08:03:51Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-17T08:03:46Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 124.50 +/- 5.70
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ashwinperti/finetuning-sentiment-model-3000-samples
|
ashwinperti
| 2023-07-17T08:00:55Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-29T10:16:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8766666666666667
- name: F1
type: f1
value: 0.877887788778878
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3080
- Accuracy: 0.8767
- F1: 0.8779
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
abhinavkashyap92/whisper-tiny-asr-english
|
abhinavkashyap92
| 2023-07-17T07:57:56Z | 91 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-17T04:15:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-asr-english
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.31582054309327035
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-asr-english
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Wer Ortho: 0.3196
- Wer: 0.3158
- Loss: 0.5223
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Wer Ortho | Wer | Validation Loss |
|:-------------:|:-----:|:----:|:---------:|:------:|:---------------:|
| 0.4862 | 0.89 | 100 | 0.3917 | 0.3719 | 0.5372 |
| 0.3213 | 1.79 | 200 | 0.3769 | 0.3571 | 0.4777 |
| 0.1822 | 2.68 | 300 | 0.3726 | 0.3589 | 0.4746 |
| 0.068 | 3.57 | 400 | 0.3276 | 0.3146 | 0.4819 |
| 0.0333 | 4.46 | 500 | 0.3196 | 0.3158 | 0.5223 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
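A minimal transcription sketch with the transformers ASR pipeline; the audio path is a placeholder.
```python
# Hypothetical usage sketch (placeholder audio path).
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="abhinavkashyap92/whisper-tiny-asr-english")
print(asr("path/to/audio.wav"))  # -> {'text': '...'}
```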
|
StarRing2022/Dlip-RWKV
|
StarRing2022
| 2023-07-17T07:56:21Z | 120 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rwkv",
"license:lgpl-3.0",
"endpoints_compatible",
"region:us"
] | null | 2023-07-17T07:32:43Z |
---
license: lgpl-3.0
---
An improved CLIP-style scheme for image-text alignment training with a frozen LLM in the generic HF format, using RWKV-4-World-0.4B as the example model and CIFAR-10 as the dataset.
Collaboration: inspired by the frozen-LLM design of VisualRWKV (https://github.com/howard-hou/VisualRWKV).
The RWKV-4-World-0.4B model and the checkpoint file after 30 training epochs:
GitHub repository: https://github.com/StarRing2022/Dlip-RWKV/
|
gsaivinay/Platypus-30B
|
gsaivinay
| 2023-07-17T07:56:04Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"arxiv:2302.13971",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-17T07:56:04Z |
---
language:
- en
tags:
- llama
license: other
metrics:
- MMLU
- ARC
- HellaSwag
- TruthfulQA
duplicated_from: lilloukas/Platypus-30B
---
# 🥳 Platypus-30B has arrived!
Platypus-30B is an instruction fine-tuned model based on the LLaMA-30B transformer architecture.
| Metric | Value |
|-----------------------|-------|
| MMLU (5-shot) | 64.1 |
| ARC (25-shot) | 57.6 |
| HellaSwag (10-shot) | 81.9 |
| TruthfulQA (0-shot) | 45.3 |
| Avg. | 62.2 |
We use state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above.
## Model Details
* **Trained by**: Cole Hunter & Ariel Lee
* **Model type:** **Platypus-30B** is an auto-regressive language model based on the LLaMA transformer architecture.
* **Language(s)**: English
* **License for base weights**: License for the base LLaMA model's weights is Meta's [non-commercial bespoke license](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md).
| Hyperparameter | Value |
|---------------------------|-------|
| \\(n_\text{parameters}\\) | 33B |
| \\(d_\text{model}\\) | 6656 |
| \\(n_\text{layers}\\) | 60 |
| \\(n_\text{heads}\\) | 52 |
## Training Dataset
Dataset of highly filtered and curated question and answer pairs. Release TBD.
## Training Procedure
`lilloukas/Platypus-30B` was instruction fine-tuned using LoRA on 4 A100 80GB. For training details and inference instructions please see the [Platypus-30B](https://github.com/arielnlee/Platypus-30B.git) GitHub repo.
## Reproducing Evaluation Results
Install LM Evaluation Harness:
```
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
```
Each task was evaluated on a single A100 80GB GPU.
ARC:
```
python main.py --model hf-causal-experimental --model_args pretrained=lilloukas/Platypus-30B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/arc_challenge_25shot.json --device cuda --num_fewshot 25
```
HellaSwag:
```
python main.py --model hf-causal-experimental --model_args pretrained=lilloukas/Platypus-30B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/hellaswag_10shot.json --device cuda --num_fewshot 10
```
MMLU:
```
python main.py --model hf-causal-experimental --model_args pretrained=lilloukas/Platypus-30B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/mmlu_5shot.json --device cuda --num_fewshot 5
```
TruthfulQA:
```
python main.py --model hf-causal-experimental --model_args pretrained=lilloukas/Platypus-30B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/truthfulqa_0shot.json --device cuda
```
## Limitations and bias
The base LLaMA model is trained on various data, some of which may contain offensive, harmful, and biased content that can lead to toxic behavior. See Section 5.1 of the LLaMA paper. We have not performed any studies to determine how fine-tuning on the aforementioned datasets affect the model's behavior and toxicity. Do not treat chat responses from this model as a substitute for human judgment or as a source of truth. Please use responsibly.
## Citations
```bibtex
@article{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
@article{hu2021lora,
title={LoRA: Low-Rank Adaptation of Large Language Models},
author={Hu, Edward J. and Shen, Yelong and Wallis, Phillip and Allen-Zhu, Zeyuan and Li, Yuanzhi and Wang, Shean and Chen, Weizhu},
journal={CoRR},
year={2021}
}
```
|
guilleguells/cypher-7b-apoc2
|
guilleguells
| 2023-07-17T07:45:38Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-15T10:44:20Z |
---
library_name: peft
---
***Settings***
```python
import transformers  # assumed import; only the TrainingArguments were given in the original card

training_args = transformers.TrainingArguments(
    auto_find_batch_size=True,
    gradient_accumulation_steps=4,
    num_train_epochs=1,
    learning_rate=2e-4,
    fp16=True,
    save_total_limit=3,
    logging_steps=1,
    max_steps=80,
    output_dir="/home/gguells/finetuning/apoc/",
    save_strategy="epoch",
    optim="paged_adamw_8bit",
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
)
```
|
Sukmin/a2c-PandaReachDense-v2
|
Sukmin
| 2023-07-17T07:43:56Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-17T07:42:00Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.18 +/- 0.37
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
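As a starting point for the TODO above, here is a minimal loading sketch; the checkpoint filename follows the usual `<algo>-<env>.zip` convention and is an assumption.
```python
# Hypothetical sketch; the filename inside the repo is assumed, not verified.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="Sukmin/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",  # assumed filename
)
model = A2C.load(checkpoint)
```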
|
ethan1278/WizardLM-Uncensored-Falcon-7b-sharded-bf16
|
ethan1278
| 2023-07-17T07:37:34Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"RefinedWebModel",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-17T06:01:19Z |
Copy of [Wizard-Uncensored-Falcon-7b](https://huggingface.co/ehartford/WizardLM-Uncensored-Falcon-7b) but sharded. Please refer to the original repo for details about license/dataset/etc.
|
OysterQAQ/DanbooruCLIP
|
OysterQAQ
| 2023-07-17T07:22:55Z | 127 | 9 |
transformers
|
[
"transformers",
"pytorch",
"clip",
"zero-shot-image-classification",
"vision",
"endpoints_compatible",
"region:us"
] |
zero-shot-image-classification
| 2023-05-18T14:06:00Z |
---
tags:
- vision
widget:
- src: https://huggingface.co/OysterQAQ/DanbooruCLIP/resolve/main/example.jpg
candidate_labels: Azur Lane, 3 girl with sword, cat ear, a dog
example_title: Azur Lane
- src: https://huggingface.co/OysterQAQ/DanbooruCLIP/resolve/main/example2.jpg
candidate_labels: 1 girl with black hair, rabbit ear, big breasts, minato aqua, fate/extra, k-on!, daiyousei, cirno
example_title: cirno & daiyousei
---
### Introduction
Update 2023-07-17: added the Pixiv dataset to the training data.
The CLIP (ViT-L/14) model was fine-tuned on the danbooru2021 dataset.
Epochs 0-3: learning rate 4e-6, weight decay 1e-3.
Epochs 4-8: learning rate 1e-6, weight decay 1e-3.
Tag preprocessing procedure:
```python
for i in range(length):
    # Load and resize the image
    if not is_image(data_from_db.path[i]):
        continue
    try:
        img = self.preprocess(
            Image.open(data_from_db.path[i].replace("./", "/mnt/lvm/danbooru2021/danbooru2021/")))
    except Exception as e:
        # print(e)
        continue
    # Process the tags
    tags = json.loads(data_from_db.tags[i])
    # Prefer character and work (copyright) tags
    category_group = {}
    for tag in tags:
        category_group.setdefault(tag["category"], []).append(tag)
    # category_group=groupby(tags, key=lambda x: (x["category"]))
    character_list = category_group[4] if 4 in category_group else []
    # Work (copyright) tags need filtering (generic entries such as "original" are dropped)
    work_list = list(filter(
        lambda e:
        e["name"] != "original"
        , category_group[3])) if 3 in category_group else []
    # work_list= category_group[5] if 5 in category_group else []
    general_list = category_group[0] if 0 in category_group else []
    caption = ""
    caption_2 = None
    for character in character_list:
        if len(work_list) != 0:
            # Strip the work name in parentheses
            character["name"] = re.sub(u"\\(.*?\\)", "", character["name"])
        caption += character["name"].replace("_", " ")
        caption += ","
    caption = caption[:-1]
    caption += " "
    if len(work_list) != 0:
        caption += "from "
        for work in work_list:
            caption += work["name"].replace("_", " ")
            caption += " "
    # General tags
    if len(general_list) != 0:
        caption += "with "
        if len(general_list) > 20:
            general_list_1 = general_list[:int(len(general_list) / 2)]
            general_list_2 = general_list[int(len(general_list) / 2):]
            caption_2 = caption
            for general in general_list_1:
                if general["name"].find("girl") == -1 and general["name"].find("boy") == -1 and len(
                        re.findall(is_contain, general["name"])) != 0:
                    caption_2 += general["name"].replace("_", " ")
                    caption_2 += ","
            caption_2 = caption_2[:-1]
            for general in general_list_2:
                if general["name"].find("girl") == -1 and general["name"].find("boy") == -1 and len(
                        re.findall(is_contain, general["name"])) != 0:
                    caption += general["name"].replace("_", " ")
                    caption += ","
            caption = caption[:-1]
        else:
            # If there are more than 20 tags, the caption is split in two (handled above)
            for general in general_list:
                if general["name"].find("girl") == -1 and general["name"].find("boy") == -1 and len(
                        re.findall(is_contain, general["name"])) != 0:
                    caption += general["name"].replace("_", " ")
                    caption += ","
            caption = caption[:-1]
    # Assemble the tags into a sentence
    # Tokenize the sentence
    # Return it
    # Truncate if too long; fall back to the Hugging Face tokenizer if needed
    text_1 = clip.tokenize(texts=caption, truncate=True)
    text_2 = None
    if caption_2 is not None:
        text_2 = clip.tokenize(texts=caption_2, truncate=True)
    # Processing logic
    # print(img)
    yield img, text_1[0]
    if text_2 is not None:
        yield img, text_2[0]
```
### Usage
```python
from PIL import Image
import requests
from transformers import CLIPProcessor, CLIPModel
model = CLIPModel.from_pretrained("OysterQAQ/DanbooruCLIP")
processor = CLIPProcessor.from_pretrained("OysterQAQ/DanbooruCLIP")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
```
## Feedback
### Where to send questions or comments about the model
Please use [this Google Form](https://forms.gle/Uv7afRH5dvY34ZEs9)
|
Shinjigen/Mimi
|
Shinjigen
| 2023-07-17T07:17:24Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-17T07:13:49Z |
---
license: creativeml-openrail-m
---
|
lchen7/FB_week2
|
lchen7
| 2023-07-17T07:15:50Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T07:15:45Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
HoaAn2003/dqn-SpaceInvadersNoFrameskip-v4
|
HoaAn2003
| 2023-07-17T07:15:03Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-17T07:11:28Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 410.50 +/- 108.87
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Note
Training was stopped early at num_timesteps=1275000.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga HoaAn2003 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga HoaAn2003 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga HoaAn2003
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 10),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
hw2942/Erlangshen-Longformer-110M-finetuning-wallstreetcn-morning-news-close-000300.SH-v1
|
hw2942
| 2023-07-17T06:36:48Z | 93 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"longformer",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-17T06:14:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Erlangshen-Longformer-110M-finetuning-wallstreetcn-morning-news-close-000300.SH-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Erlangshen-Longformer-110M-finetuning-wallstreetcn-morning-news-close-000300.SH-v1
This model is a fine-tuned version of [IDEA-CCNL/Erlangshen-Longformer-110M](https://huggingface.co/IDEA-CCNL/Erlangshen-Longformer-110M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6937
- Accuracy: 0.4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 38 | 0.6746 | 0.6 |
| No log | 2.0 | 76 | 0.7211 | 0.4 |
| No log | 3.0 | 114 | 0.6894 | 0.6 |
| No log | 4.0 | 152 | 0.6827 | 0.6 |
| No log | 5.0 | 190 | 0.7080 | 0.6 |
| No log | 6.0 | 228 | 0.6982 | 0.4 |
| No log | 7.0 | 266 | 0.7154 | 0.4 |
| No log | 8.0 | 304 | 0.6789 | 0.6 |
| No log | 9.0 | 342 | 0.7016 | 0.4 |
| No log | 10.0 | 380 | 0.6937 | 0.4 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ailabturkiye/shaco
|
ailabturkiye
| 2023-07-17T06:35:20Z | 0 | 0 | null |
[
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-17T06:30:09Z |
---
license: openrail
language:
- tr
tags:
- music
---
Created from roughly 5 minutes of audio of the champion Shaco from League of Legends, trained for 250 epochs. A pitch (transpose) of -3 or -5 is recommended. If you share a cover made with this model on any platform, please include our Discord link: discord.gg/ailab
|
ailabturkiye/drmundo
|
ailabturkiye
| 2023-07-17T06:34:42Z | 0 | 0 | null |
[
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-17T06:28:34Z |
---
license: openrail
language:
- tr
tags:
- music
---
Created from roughly 5 minutes of audio of the champion Dr. Mundo from League of Legends, trained for 500 epochs. If you share a cover made with this model on any platform, please include our Discord link: discord.gg/ailab
|
charlieoneill/falcon-abstracts
|
charlieoneill
| 2023-07-17T06:29:06Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-07-17T00:55:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: falcon-abstracts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon-abstracts
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 2500
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
prognosis/alpaca-cardio-qa
|
prognosis
| 2023-07-17T06:27:20Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T06:24:24Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
ailabturkiye/2xciv
|
ailabturkiye
| 2023-07-17T06:22:21Z | 0 | 0 | null |
[
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-17T06:16:23Z |
---
license: openrail
language:
- tr
tags:
- music
---
Created from roughly 5 minutes of audio of the VALORANT YouTuber 2xCIV, trained for 250 epochs. If you share a cover made with this model on any platform, please include our Discord link: discord.gg/ailab
|
shivaneej/my_awesome_billsum_model
|
shivaneej
| 2023-07-17T06:19:13Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-14T06:38:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1425
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4536
- Rouge1: 0.1425
- Rouge2: 0.051
- Rougel: 0.1174
- Rougelsum: 0.1176
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.7496 | 0.1275 | 0.0381 | 0.1084 | 0.1082 | 19.0 |
| No log | 2.0 | 124 | 2.5353 | 0.1365 | 0.0475 | 0.1138 | 0.1136 | 19.0 |
| No log | 3.0 | 186 | 2.4718 | 0.1409 | 0.0495 | 0.1157 | 0.1156 | 19.0 |
| No log | 4.0 | 248 | 2.4536 | 0.1425 | 0.051 | 0.1174 | 0.1176 | 19.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
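A minimal summarization sketch with the transformers pipeline; the input text and generation lengths are illustrative.
```python
# Hypothetical usage sketch for the fine-tuned checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="shivaneej/my_awesome_billsum_model")
text = "The bill directs the state energy commission to develop efficiency standards for ..."
print(summarizer(text, max_length=40, min_length=10))
```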
|
ailabturkiye/yasuo
|
ailabturkiye
| 2023-07-17T06:18:49Z | 0 | 0 | null |
[
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-17T06:13:49Z |
---
license: openrail
language:
- tr
tags:
- music
---
Created from roughly 5 minutes of audio of the champion Yasuo from League of Legends, trained for 250 epochs.
If you share a cover made with this model on any platform, please include our Discord link: discord.gg/ailab
|
StarRing2022/RWKV-4-Raven-3B-v11-zh
|
StarRing2022
| 2023-07-17T06:16:24Z | 98 | 6 |
transformers
|
[
"transformers",
"pytorch",
"rwkv",
"endpoints_compatible",
"region:us"
] | null | 2023-05-23T01:26:32Z |
---
{RWKV-4-Raven-3B-v11-zh}
---
Converts the RWKV model to the HF format so it plugs into Hugging Face seamlessly and can be called with just a few lines of code.
Base model: RWKV-4-Raven-3B-v11-Eng49%-Chn49%-Jpn1%-Other1%-20230429-ctx4096.pth (https://huggingface.co/BlinkDL/rwkv-4-raven)
```python
import torch
from transformers import GPTNeoXTokenizerFast, RwkvConfig, RwkvForCausalLM

model = RwkvForCausalLM.from_pretrained("StarRing2022/RWKV-4-Raven-3B-v11-zh")
tokenizer = GPTNeoXTokenizerFast.from_pretrained("StarRing2022/RWKV-4-Raven-3B-v11-zh")

text = "你好"
input_ids = tokenizer.encode(text, return_tensors='pt')
out = model.generate(input_ids=input_ids, max_new_tokens=128)
answer = tokenizer.decode(out[0])
print(answer)
```
GitHub repository: https://github.com/StarRing2022/HF-For-RWKVRaven-Alpaca/
|
Open-Orca/OpenOrca-Preview1-13B
|
Open-Orca
| 2023-07-17T06:07:48Z | 1,576 | 146 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:Open-Orca/OpenOrca",
"arxiv:2306.02707",
"arxiv:2301.13688",
"arxiv:2302.13971",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-12T01:13:58Z |
---
license: mit
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- Open-Orca/OpenOrca
---
<p><h1>🐋 The First OpenOrca Model Preview! 🐋</h1></p>

# OpenOrca-Preview1-13B
We have used our own [OpenOrca dataset](https://huggingface.co/datasets/Open-Orca/OpenOrca) to fine-tune LLaMA-13B.
This dataset is our attempt to reproduce the dataset generated for Microsoft Research's [Orca Paper](https://arxiv.org/abs/2306.02707).
We have trained on less than 6% of our data, just to give a preview of what is possible while we further refine our dataset!
We trained a refined selection of 200k GPT-4 entries from OpenOrca.
We have filtered our GPT-4 augmentations to remove statements like, "As an AI language model..." and other responses which have been shown to harm model reasoning capabilities. Further details on our dataset curation practices will be forthcoming with our full model releases.
This release highlights that even a small portion of our training data can produce state of the art results in this model class with training costs <$200 in total.
Want to visualize our full (pre-filtering) dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2).
[<img src="https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OpenOrca%20Nomic%20Atlas.png" alt="Atlas Nomic Dataset Map" width="400" height="400" />](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2)
We are in-process with training more models, so keep a look out on our org for releases coming soon with exciting partners.
We will also give sneak-peek announcements on our Discord, which you can find here:
https://AlignmentLab.ai
# Evaluation
We have evaluated OpenOrca-Preview1-13B on hard reasoning tasks from BigBench-Hard and AGIEval as outlined in the Orca paper.
Our average performance for BigBench-Hard: 0.3753
Average for AGIEval: 0.3638
In the Orca paper, they measured their score relative to Vicuna on these evals.
We've done the same and have found our score averages to ~60% of the total improvement that was shown in the Orca paper.
So we got 60% of the improvement with 6% of the data!
## BigBench-Hard Performance

## AGIEval Performance

We will report our results on [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Evals once we receive them.
# Dataset
We used a small (6%, 200k) subset of our data from OpenOrca, which aims to reproduce the Orca Research Paper dataset.
As this release is intended as a preview, please await our full releases for further details on the training data.
# Training
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
We trained with 8x A100-80G GPUs for 15 hours. Commodity cost was < $200.
We trained for 4 epochs and selected a snapshot at 3 epochs for peak performance.
Please await our full releases for further training details.
# Prompting
It uses the Alpaca format (see [FastChat implementation example](https://github.com/lm-sys/FastChat/blob/daa2b9abe20597ebf34dc5df164d450456610c74/fastchat/conversation.py#L198-L229)):
```
### Instruction:
### Response:
```
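A minimal generation sketch following that template; this is ordinary transformers usage and an assumption on our part, not an official example from the model authors.
```python
# Hypothetical generation sketch using the Alpaca-style prompt shown above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Open-Orca/OpenOrca-Preview1-13B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "### Instruction:\nSummarize the Orca training approach in one sentence.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```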
# Citation
```bibtex
@software{OpenOrca_Preview1,
title = {OpenOrca_Preview1: A LLaMA-13B Model Fine-tuned on Small Portion of OpenOrcaV1 Dataset},
author = {Wing Lian and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B}},
}
```
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
year={2023},
eprint={2301.13688},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
```bibtex
@software{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
|
NasimB/cbt-rarity-all-guten-rarity-all-shuffled
|
NasimB
| 2023-07-17T06:04:22Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-17T03:50:00Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: cbt-rarity-all-guten-rarity-all-shuffled
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cbt-rarity-all-guten-rarity-all-shuffled
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3144
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6936 | 0.29 | 500 | 5.6373 |
| 5.3455 | 0.58 | 1000 | 5.2068 |
| 4.9918 | 0.87 | 1500 | 4.9529 |
| 4.7206 | 1.17 | 2000 | 4.7986 |
| 4.5625 | 1.46 | 2500 | 4.6814 |
| 4.4501 | 1.75 | 3000 | 4.5769 |
| 4.3341 | 2.04 | 3500 | 4.4914 |
| 4.1289 | 2.33 | 4000 | 4.4492 |
| 4.1029 | 2.62 | 4500 | 4.3892 |
| 4.0658 | 2.91 | 5000 | 4.3368 |
| 3.8669 | 3.21 | 5500 | 4.3328 |
| 3.7955 | 3.5 | 6000 | 4.3018 |
| 3.7944 | 3.79 | 6500 | 4.2674 |
| 3.7043 | 4.08 | 7000 | 4.2633 |
| 3.5179 | 4.37 | 7500 | 4.2601 |
| 3.5117 | 4.66 | 8000 | 4.2451 |
| 3.5008 | 4.95 | 8500 | 4.2339 |
| 3.3507 | 5.24 | 9000 | 4.2455 |
| 3.3229 | 5.54 | 9500 | 4.2429 |
| 3.3252 | 5.83 | 10000 | 4.2429 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Althhecow/CattleMix
|
Althhecow
| 2023-07-17T06:00:04Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-16T21:23:09Z |
Model based on Anything v3 and a few older models that I've since lost track of. It was originally mixed over six months ago, but it has remained useful for cartoonish / anthropomorphic subjects despite the newer models released since.
|
digiplay/CosplayMix_v2
|
digiplay
| 2023-07-17T05:59:37Z | 10 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-17T05:06:32Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: false
---
Model info :
https://civitai.com/models/34502?modelVersionId=48334
Original Author's DEMO image :

more image info:
https://civitai.com/images/519469
|