modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-03 00:36:49) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 535 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-03 00:36:49) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
haonan-li/bactrian-gu-bloom-7b1-lora
|
haonan-li
| 2023-06-13T13:32:02Z | 0 | 0 | null |
[
"arxiv:2305.15011",
"license:mit",
"region:us"
] | null | 2023-06-13T13:31:48Z |
---
license: mit
---
This repo contains a low-rank adapter (LoRA) for BLOOM-7b1
fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca)
and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in Gujarati.
### Dataset Creation
1. English Instructions: The English instructions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data).
2. Instruction Translation: The instructions (and inputs) are translated into the target languages using the Google Translate API (conducted in April 2023).
3. Output Generation: We generate outputs with `gpt-3.5-turbo` for each language (conducted in April 2023).
<h3 align="center">
<img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center">
</h3>
### Training Parameters
The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora).
This version of the weights was trained with the following hyperparameters:
- Epochs: 8
- Batch size: 128
- Cutoff length: 1024
- Learning rate: 3e-4
- Lora _r_: 16
- Lora target modules: query_key_value
That is:
```
python finetune.py \
--base_model='bigscience/bloom-7b1' \
--num_epochs=5 \
--cutoff_len=1024 \
--group_by_length \
--output_dir='./bactrian-gu-bloom-7b1-lora' \
--lora_target_modules='query_key_value' \
--lora_r=16 \
--micro_batch_size=32
```
Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X.
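A minimal inference sketch, assuming the adapter is loaded with the `peft` library on top of the `bigscience/bloom-7b1` base model and an Alpaca-style prompt (both assumptions, since the card itself does not include inference code):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "bigscience/bloom-7b1"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")

# Attach the Gujarati LoRA adapter on top of the frozen base weights
model = PeftModel.from_pretrained(base, "haonan-li/bactrian-gu-bloom-7b1-lora")

prompt = "### Instruction:\nWrite a short greeting in Gujarati.\n\n### Response:\n"  # assumed prompt format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```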
### Discussion of Biases
(1) Translation bias; (2) Potential English-culture bias in the translated dataset.
### Citation Information
```
@misc{li2023bactrianx,
title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation},
author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin},
year={2023},
eprint={2305.15011},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
haonan-li/bactrian-pt-bloom-7b1-lora
|
haonan-li
| 2023-06-13T13:31:47Z | 0 | 0 | null |
[
"arxiv:2305.15011",
"license:mit",
"region:us"
] | null | 2023-06-13T13:31:32Z |
---
license: mit
---
This repo contains a low-rank adapter (LoRA) for BLOOM-7b1
fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca)
and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in Portuguese.
### Dataset Creation
1. English Instructions: The English instructions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data).
2. Instruction Translation: The instructions (and inputs) are translated into the target languages using the Google Translate API (conducted in April 2023).
3. Output Generation: We generate outputs with `gpt-3.5-turbo` for each language (conducted in April 2023).
<h3 align="center">
<img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center">
</h3>
### Training Parameters
The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora).
This version of the weights was trained with the following hyperparameters:
- Epochs: 8
- Batch size: 128
- Cutoff length: 1024
- Learning rate: 3e-4
- Lora _r_: 16
- Lora target modules: query_key_value
That is:
```
python finetune.py \
--base_model='bigscience/bloom-7b1' \
--num_epochs=5 \
--cutoff_len=1024 \
--group_by_length \
--output_dir='./bactrian-pt-bloom-7b1-lora' \
--lora_target_modules='query_key_value' \
--lora_r=16 \
--micro_batch_size=32
```
Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X.
### Discussion of Biases
(1) Translation bias; (2) Potential English-culture bias in the translated dataset.
### Citation Information
```
@misc{li2023bactrianx,
title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation},
author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin},
year={2023},
eprint={2305.15011},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
haonan-li/bactrian-en-bloom-7b1-lora
|
haonan-li
| 2023-06-13T13:31:18Z | 0 | 0 | null |
[
"arxiv:2305.15011",
"license:mit",
"region:us"
] | null | 2023-06-13T13:31:05Z |
---
license: mit
---
This repo contains a low-rank adapter (LoRA) for BLOOM-7b1
fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca)
and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in English.
### Dataset Creation
1. English Instructions: The English instructions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data).
2. Instruction Translation: The instructions (and inputs) are translated into the target languages using the Google Translate API (conducted in April 2023).
3. Output Generation: We generate outputs with `gpt-3.5-turbo` for each language (conducted in April 2023).
<h3 align="center">
<img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center">
</h3>
### Training Parameters
The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora).
This version of the weights was trained with the following hyperparameters:
- Epochs: 8
- Batch size: 128
- Cutoff length: 1024
- Learning rate: 3e-4
- Lora _r_: 16
- Lora target modules: query_key_value
That is:
```
python finetune.py \
--base_model='bigscience/bloom-7b1' \
--num_epochs=5 \
--cutoff_len=1024 \
--group_by_length \
--output_dir='./bactrian-en-bloom-7b1-lora' \
--lora_target_modules='query_key_value' \
--lora_r=16 \
--micro_batch_size=32
```
Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X.
### Discussion of Biases
(1) Translation bias; (2) Potential English-culture bias in the translated dataset.
### Citation Information
```
@misc{li2023bactrianx,
title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation},
author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin},
year={2023},
eprint={2305.15011},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
haonan-li/bactrian-tr-bloom-7b1-lora
|
haonan-li
| 2023-06-13T13:30:50Z | 0 | 0 | null |
[
"arxiv:2305.15011",
"license:mit",
"region:us"
] | null | 2023-06-13T13:30:34Z |
---
license: mit
---
This repo contains a low-rank adapter (LoRA) for BLOOM-7b1
fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca)
and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in Turkish.
### Dataset Creation
1. English Instructions: The English instructions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data).
2. Instruction Translation: The instructions (and inputs) are translated into the target languages using the Google Translate API (conducted in April 2023).
3. Output Generation: We generate outputs with `gpt-3.5-turbo` for each language (conducted in April 2023).
<h3 align="center">
<img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center">
</h3>
### Training Parameters
The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora).
This version of the weights was trained with the following hyperparameters:
- Epochs: 8
- Batch size: 128
- Cutoff length: 1024
- Learning rate: 3e-4
- Lora _r_: 16
- Lora target modules: query_key_value
That is:
```
python finetune.py \
--base_model='bigscience/bloom-7b1' \
--num_epochs=5 \
--cutoff_len=1024 \
--group_by_length \
--output_dir='./bactrian-tr-bloom-7b1-lora' \
--lora_target_modules='query_key_value' \
--lora_r=16 \
--micro_batch_size=32
```
Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X.
### Discussion of Biases
(1) Translation bias; (2) Potential English-culture bias in the translated dataset.
### Citation Information
```
@misc{li2023bactrianx,
title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation},
author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin},
year={2023},
eprint={2305.15011},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
haonan-li/bactrian-es-bloom-7b1-lora
|
haonan-li
| 2023-06-13T13:30:20Z | 0 | 0 | null |
[
"arxiv:2305.15011",
"license:mit",
"region:us"
] | null | 2023-06-13T13:30:07Z |
---
license: mit
---
This repo contains a low-rank adapter (LoRA) for BLOOM-7b1
fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca)
and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in Spanish.
### Dataset Creation
1. English Instructions: The English instructions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data).
2. Instruction Translation: The instructions (and inputs) are translated into the target languages using the Google Translate API (conducted in April 2023).
3. Output Generation: We generate outputs with `gpt-3.5-turbo` for each language (conducted in April 2023).
<h3 align="center">
<img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center">
</h3>
### Training Parameters
The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora).
This version of the weights was trained with the following hyperparameters:
- Epochs: 8
- Batch size: 128
- Cutoff length: 1024
- Learning rate: 3e-4
- Lora _r_: 16
- Lora target modules: query_key_value
That is:
```
python finetune.py \
--base_model='bigscience/bloom-7b1' \
--num_epochs=5 \
--cutoff_len=1024 \
--group_by_length \
--output_dir='./bactrian-es-bloom-7b1-lora' \
--lora_target_modules='query_key_value' \
--lora_r=16 \
--micro_batch_size=32
```
Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X.
### Discussion of Biases
(1) Translation bias; (2) Potential English-culture bias in the translated dataset.
### Citation Information
```
@misc{li2023bactrianx,
title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation},
author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin},
year={2023},
eprint={2305.15011},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
haonan-li/bactrian-ru-bloom-7b1-lora
|
haonan-li
| 2023-06-13T13:29:53Z | 0 | 0 | null |
[
"arxiv:2305.15011",
"license:mit",
"region:us"
] | null | 2023-06-13T13:29:39Z |
---
license: mit
---
This repo contains a low-rank adapter (LoRA) for BLOOM-7b1
fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca)
and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in Russian.
### Dataset Creation
1. English Instructions: The English instructions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data).
2. Instruction Translation: The instructions (and inputs) are translated into the target languages using the Google Translate API (conducted in April 2023).
3. Output Generation: We generate outputs with `gpt-3.5-turbo` for each language (conducted in April 2023).
<h3 align="center">
<img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center">
</h3>
### Training Parameters
The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora).
This version of the weights was trained with the following hyperparameters:
- Epochs: 8
- Batch size: 128
- Cutoff length: 1024
- Learning rate: 3e-4
- Lora _r_: 16
- Lora target modules: query_key_value
That is:
```
python finetune.py \
--base_model='bigscience/bloom-7b1' \
--num_epochs=5 \
--cutoff_len=1024 \
--group_by_length \
--output_dir='./bactrian-ru-bloom-7b1-lora' \
--lora_target_modules='query_key_value' \
--lora_r=16 \
--micro_batch_size=32
```
Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X.
### Discussion of Biases
(1) Translation bias; (2) Potential English-culture bias in the translated dataset.
### Citation Information
```
@misc{li2023bactrianx,
title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation},
author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin},
year={2023},
eprint={2305.15011},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
haonan-li/bactrian-hi-bloom-7b1-lora
|
haonan-li
| 2023-06-13T13:29:38Z | 0 | 0 | null |
[
"arxiv:2305.15011",
"license:mit",
"region:us"
] | null | 2023-06-13T13:29:26Z |
---
license: mit
---
This repo contains a low-rank adapter (LoRA) for BLOOM-7b1
fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca)
and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in Hindi.
### Dataset Creation
1. English Instructions: The English instructions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data).
2. Instruction Translation: The instructions (and inputs) are translated into the target languages using the Google Translate API (conducted in April 2023).
3. Output Generation: We generate outputs with `gpt-3.5-turbo` for each language (conducted in April 2023).
<h3 align="center">
<img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center">
</h3>
### Training Parameters
The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora).
This version of the weights was trained with the following hyperparameters:
- Epochs: 8
- Batch size: 128
- Cutoff length: 1024
- Learning rate: 3e-4
- Lora _r_: 16
- Lora target modules: query_key_value
That is:
```
python finetune.py \
--base_model='bigscience/bloom-7b1' \
--num_epochs=5 \
--cutoff_len=1024 \
--group_by_length \
--output_dir='./bactrian-hi-bloom-7b1-lora' \
--lora_target_modules='query_key_value' \
--lora_r=16 \
--micro_batch_size=32
```
Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X.
### Discussion of Biases
(1) Translation bias; (2) Potential English-culture bias in the translated dataset.
### Citation Information
```
@misc{li2023bactrianx,
title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation},
author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin},
year={2023},
eprint={2305.15011},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
haonan-li/bactrian-xh-bloom-7b1-lora
|
haonan-li
| 2023-06-13T13:29:25Z | 0 | 0 | null |
[
"arxiv:2305.15011",
"license:mit",
"region:us"
] | null | 2023-06-13T13:29:10Z |
---
license: mit
---
This repo contains a low-rank adapter (LoRA) for BLOOM-7b1
fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca)
and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in Xhosa.
### Dataset Creation
1. English Instructions: The English instructions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data).
2. Instruction Translation: The instructions (and inputs) are translated into the target languages using the Google Translate API (conducted in April 2023).
3. Output Generation: We generate outputs with `gpt-3.5-turbo` for each language (conducted in April 2023).
<h3 align="center">
<img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center">
</h3>
### Training Parameters
The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora).
This version of the weights was trained with the following hyperparameters:
- Epochs: 8
- Batch size: 128
- Cutoff length: 1024
- Learning rate: 3e-4
- Lora _r_: 16
- Lora target modules: query_key_value
That is:
```
python finetune.py \
--base_model='bigscience/bloom-7b1' \
--num_epochs=5 \
--cutoff_len=1024 \
--group_by_length \
--output_dir='./bactrian-xh-bloom-7b1-lora' \
--lora_target_modules='query_key_value' \
--lora_r=16 \
--micro_batch_size=32
```
Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X.
### Discussion of Biases
(1) Translation bias; (2) Potential English-culture bias in the translated dataset.
### Citation Information
```
@misc{li2023bactrianx,
title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation},
author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin},
year={2023},
eprint={2305.15011},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
haonan-li/bactrian-et-bloom-7b1-lora
|
haonan-li
| 2023-06-13T13:28:55Z | 0 | 0 | null |
[
"arxiv:2305.15011",
"license:mit",
"region:us"
] | null | 2023-06-13T13:28:41Z |
---
license: mit
---
This repo contains a low-rank adapter (LoRA) for BLOOM-7b1
fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca)
and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in Estonian.
### Dataset Creation
1. English Instructions: The English instructions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data).
2. Instruction Translation: The instructions (and inputs) are translated into the target languages using the Google Translate API (conducted in April 2023).
3. Output Generation: We generate outputs with `gpt-3.5-turbo` for each language (conducted in April 2023).
<h3 align="center">
<img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center">
</h3>
### Training Parameters
The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora).
This version of the weights was trained with the following hyperparameters:
- Epochs: 8
- Batch size: 128
- Cutoff length: 1024
- Learning rate: 3e-4
- Lora _r_: 16
- Lora target modules: query_key_value
That is:
```
python finetune.py \
--base_model='bigscience/bloom-7b1' \
--num_epochs=5 \
--cutoff_len=1024 \
--group_by_length \
--output_dir='./bactrian-et-bloom-7b1-lora' \
--lora_target_modules='query_key_value' \
--lora_r=16 \
--micro_batch_size=32
```
Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X.
### Discussion of Biases
(1) Translation bias; (2) Potential English-culture bias in the translated dataset.
### Citation Information
```
@misc{li2023bactrianx,
title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation},
author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin},
year={2023},
eprint={2305.15011},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
haonan-li/bactrian-te-bloom-7b1-lora
|
haonan-li
| 2023-06-13T13:28:27Z | 0 | 0 | null |
[
"arxiv:2305.15011",
"license:mit",
"region:us"
] | null | 2023-06-13T13:28:15Z |
---
license: mit
---
This repo contains a low-rank adapter (LoRA) for BLOOM-7b1
fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca)
and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in Telugu.
### Dataset Creation
1. English Instructions: The English instructions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data).
2. Instruction Translation: The instructions (and inputs) are translated into the target languages using the Google Translate API (conducted in April 2023).
3. Output Generation: We generate outputs with `gpt-3.5-turbo` for each language (conducted in April 2023).
<h3 align="center">
<img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center">
</h3>
### Training Parameters
The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora).
This version of the weights was trained with the following hyperparameters:
- Epochs: 8
- Batch size: 128
- Cutoff length: 1024
- Learning rate: 3e-4
- Lora _r_: 16
- Lora target modules: query_key_value
That is:
```
python finetune.py \
--base_model='bigscience/bloom-7b1' \
--num_epochs=5 \
--cutoff_len=1024 \
--group_by_length \
--output_dir='./bactrian-te-bloom-7b1-lora' \
--lora_target_modules='query_key_value' \
--lora_r=16 \
--micro_batch_size=32
```
Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X.
### Discussion of Biases
(1) Translation bias; (2) Potential English-culture bias in the translated dataset.
### Citation Information
```
@misc{li2023bactrianx,
title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation},
author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin},
year={2023},
eprint={2305.15011},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
haonan-li/bactrian-th-bloom-7b1-lora
|
haonan-li
| 2023-06-13T13:28:01Z | 0 | 0 | null |
[
"arxiv:2305.15011",
"license:mit",
"region:us"
] | null | 2023-06-13T13:27:48Z |
---
license: mit
---
This repo contains a low-rank adapter (LoRA) for BLOOM-7b1
fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca)
and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in Thai.
### Dataset Creation
1. English Instructions: The English instructions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data).
2. Instruction Translation: The instructions (and inputs) are translated into the target languages using the Google Translate API (conducted in April 2023).
3. Output Generation: We generate outputs with `gpt-3.5-turbo` for each language (conducted in April 2023).
<h3 align="center">
<img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center">
</h3>
### Training Parameters
The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora).
This version of the weights was trained with the following hyperparameters:
- Epochs: 8
- Batch size: 128
- Cutoff length: 1024
- Learning rate: 3e-4
- Lora _r_: 16
- Lora target modules: query_key_value
That is:
```
python finetune.py \
--base_model='bigscience/bloom-7b1' \
--num_epochs=5 \
--cutoff_len=1024 \
--group_by_length \
--output_dir='./bactrian-th-bloom-7b1-lora' \
--lora_target_modules='query_key_value' \
--lora_r=16 \
--micro_batch_size=32
```
Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X.
### Discussion of Biases
(1) Translation bias; (2) Potential English-culture bias in the translated dataset.
### Citation Information
```
@misc{li2023bactrianx,
title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation},
author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin},
year={2023},
eprint={2305.15011},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
haonan-li/bactrian-ne-bloom-7b1-lora
|
haonan-li
| 2023-06-13T13:27:48Z | 0 | 0 | null |
[
"arxiv:2305.15011",
"license:mit",
"region:us"
] | null | 2023-06-13T13:27:35Z |
---
license: mit
---
This repo contains a low-rank adapter (LoRA) for BLOOM-7b1
fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca)
and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in Nepali.
### Dataset Creation
1. English Instructions: The English instructions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data).
2. Instruction Translation: The instructions (and inputs) are translated into the target languages using the Google Translate API (conducted in April 2023).
3. Output Generation: We generate outputs with `gpt-3.5-turbo` for each language (conducted in April 2023).
<h3 align="center">
<img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center">
</h3>
### Training Parameters
The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora).
This version of the weights was trained with the following hyperparameters:
- Epochs: 8
- Batch size: 128
- Cutoff length: 1024
- Learning rate: 3e-4
- Lora _r_: 16
- Lora target modules: query_key_value
That is:
```
python finetune.py \
--base_model='bigscience/bloom-7b1' \
--num_epochs=5 \
--cutoff_len=1024 \
--group_by_length \
--output_dir='./bactrian-ne-bloom-7b1-lora' \
--lora_target_modules='query_key_value' \
--lora_r=16 \
--micro_batch_size=32
```
Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X.
### Discussion of Biases
(1) Translation bias; (2) Potential English-culture bias in the translated dataset.
### Citation Information
```
@misc{li2023bactrianx,
title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation},
author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin},
year={2023},
eprint={2305.15011},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
haonan-li/bactrian-fr-bloom-7b1-lora
|
haonan-li
| 2023-06-13T13:27:35Z | 0 | 0 | null |
[
"arxiv:2305.15011",
"license:mit",
"region:us"
] | null | 2023-06-13T13:27:21Z |
---
license: mit
---
This repo contains a low-rank adapter (LoRA) for BLOOM-7b1
fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca)
and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in French.
### Dataset Creation
1. English Instructions: The English instructions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data).
2. Instruction Translation: The instructions (and inputs) are translated into the target languages using the Google Translate API (conducted in April 2023).
3. Output Generation: We generate outputs with `gpt-3.5-turbo` for each language (conducted in April 2023).
<h3 align="center">
<img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center">
</h3>
### Training Parameters
The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora).
This version of the weights was trained with the following hyperparameters:
- Epochs: 8
- Batch size: 128
- Cutoff length: 1024
- Learning rate: 3e-4
- Lora _r_: 16
- Lora target modules: query_key_value
That is:
```
python finetune.py \
--base_model='bigscience/bloom-7b1' \
--num_epochs=5 \
--cutoff_len=1024 \
--group_by_length \
--output_dir='./bactrian-fr-bloom-7b1-lora' \
--lora_target_modules='query_key_value' \
--lora_r=16 \
--micro_batch_size=32
```
Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X.
### Discussion of Biases
(1) Translation bias; (2) Potential English-culture bias in the translated dataset.
### Citation Information
```
@misc{li2023bactrianx,
title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation},
author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin},
year={2023},
eprint={2305.15011},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
haonan-li/bactrian-ar-bloom-7b1-lora
|
haonan-li
| 2023-06-13T13:25:48Z | 0 | 0 | null |
[
"arxiv:2305.15011",
"license:mit",
"region:us"
] | null | 2023-06-13T13:25:36Z |
---
license: mit
---
This repo contains a low-rank adapter (LoRA) for BLOOM-7b1
fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca)
and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in Arabic.
### Dataset Creation
1. English Instructions: The English instructions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data).
2. Instruction Translation: The instructions (and inputs) are translated into the target languages using the Google Translate API (conducted in April 2023).
3. Output Generation: We generate outputs with `gpt-3.5-turbo` for each language (conducted in April 2023).
<h3 align="center">
<img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center">
</h3>
### Training Parameters
The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora).
This version of the weights was trained with the following hyperparameters:
- Epochs: 8
- Batch size: 128
- Cutoff length: 1024
- Learning rate: 3e-4
- Lora _r_: 16
- Lora target modules: query_key_value
That is:
```
python finetune.py \
--base_model='bigscience/bloom-7b1' \
--num_epochs=5 \
--cutoff_len=1024 \
--group_by_length \
--output_dir='./bactrian-ar-bloom-7b1-lora' \
--lora_target_modules='query_key_value' \
--lora_r=16 \
--micro_batch_size=32
```
Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X.
### Discussion of Biases
(1) Translation bias; (2) Potential English-culture bias in the translated dataset.
### Citation Information
```
@misc{li2023bactrianx,
title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation},
author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin},
year={2023},
eprint={2305.15011},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
soddokayo/klue-roberta-large-klue-ner
|
soddokayo
| 2023-06-13T13:24:32Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:klue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-13T03:22:54Z |
---
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: klue-roberta-large-klue-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: klue
type: klue
config: ner
split: validation
args: ner
metrics:
- name: Precision
type: precision
value: 0.8292094561996003
- name: Recall
type: recall
value: 0.8438661710037175
- name: F1
type: f1
value: 0.836473614684002
- name: Accuracy
type: accuracy
value: 0.9663865173522563
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# klue-roberta-large-klue-ner
This model is a fine-tuned version of [klue/roberta-large](https://huggingface.co/klue/roberta-large) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1279
- Precision: 0.8292
- Recall: 0.8439
- F1: 0.8365
- Accuracy: 0.9664
## Model description
More information needed
## Intended uses & limitations
More information needed
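A minimal usage sketch, assuming the standard `transformers` token-classification pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="soddokayo/klue-roberta-large-klue-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("이순신은 조선 중기의 무신이다."))
```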
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1246 | 1.0 | 2626 | 0.1629 | 0.7891 | 0.7725 | 0.7807 | 0.9539 |
| 0.0744 | 2.0 | 5252 | 0.1194 | 0.8124 | 0.8345 | 0.8233 | 0.9642 |
| 0.0401 | 3.0 | 7878 | 0.1279 | 0.8292 | 0.8439 | 0.8365 | 0.9664 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.12.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|
leo1452/q-Taxi-v3
|
leo1452
| 2023-06-13T13:24:07Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-13T11:38:16Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # the course notebooks use Gymnasium

# `load_from_hub` is the helper defined in the Deep RL Course notebooks, not a library import
model = load_from_hub(repo_id="leo1452/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
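A greedy evaluation rollout might then look like the sketch below, assuming the pickled dictionary follows the course convention of storing the Q-table under a `qtable` key and that the Gymnasium step API is used:
```python
import numpy as np

state, info = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```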
|
irfanamal/bert-base-uncased-classification-flat
|
irfanamal
| 2023-06-13T13:14:45Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-13T07:20:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-classification-flat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-classification-flat
This model is a fine-tuned version of [irfanamal/bert-base-uncased-finetuned-amazonreviews](https://huggingface.co/irfanamal/bert-base-uncased-finetuned-amazonreviews) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4951
- Accuracy: 0.4957
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
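These settings map onto the `transformers` Trainer API roughly as in the sketch below (the exact training script is not part of the card, so argument names and the evaluation strategy are assumptions based on the standard API):
```python
from transformers import TrainingArguments, Trainer

training_args = TrainingArguments(
    output_dir="bert-base-uncased-classification-flat",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",  # assumed, to produce the per-epoch results below
)
# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```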
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.7227 | 1.0 | 1250 | 3.3098 | 0.3826 |
| 2.6109 | 2.0 | 2500 | 2.7897 | 0.4568 |
| 2.2396 | 3.0 | 3750 | 2.5943 | 0.4809 |
| 1.9093 | 4.0 | 5000 | 2.5155 | 0.4937 |
| 1.7949 | 5.0 | 6250 | 2.4951 | 0.4957 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
kejolong/SNI
|
kejolong
| 2023-06-13T13:00:30Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-13T12:58:55Z |
---
license: creativeml-openrail-m
---
|
jrahn/yolochess_mlm_azure-cloud-35
|
jrahn
| 2023-06-13T12:58:28Z | 119 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"distilbert",
"fill-mask",
"chess",
"dataset:jrahn/yolochess_lichess-elite_2211",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-02-08T07:16:48Z |
---
license: mit
datasets:
- jrahn/yolochess_lichess-elite_2211
library_name: transformers
tags:
- chess
widget:
- text: "rnbqkbnr/pppppppp/8/8/8/[MASK]/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
example_title: "MLM: Masked = 8"
- text: "6k1/8/8/1pB3[MASK]P/1P3P2/8/8/8 w - - 1 74"
example_title: "MLM: Masked = K"
---
# Model Card for yolochess_mlm_azure-cloud-35
<!-- Provide a quick summary of what the model is/does. -->
This model with 66M parameters is pre-trained from scratch with Masked Language Modeling on Chess Positions in [FEN](https://en.wikipedia.org/wiki/Forsyth%E2%80%93Edwards_Notation) format.
It is supposed to be used for downstream fine-tuning, e.g. Text Classification for human moves.
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Jonathan Rahn
- **Model type:** Distilbert
- **Language(s) (NLP):** Chess [FEN](https://en.wikipedia.org/wiki/Forsyth%E2%80%93Edwards_Notation)
- **License:** MIT
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This model is pre-trained from scratch with Masked Language Modeling on Chess Positions in FEN format.
## Downstream Use
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
It is supposed to be used for downstream fine-tuning, e.g. Text Classification for human moves.
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
Anything other than Chess Positions in standard [FEN](https://en.wikipedia.org/wiki/Forsyth%E2%80%93Edwards_Notation) format.
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
n/a
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
n/a
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("jrahn/yolochess_mlm_azure-cloud-35")
model = AutoModelForMaskedLM.from_pretrained("jrahn/yolochess_mlm_azure-cloud-35")
```
```python
from transformers import pipeline
pipe = pipeline("fill-mask", "jrahn/yolochess_mlm_azure-cloud-35")
pipe("6k1/8/8/1pB3[MASK]P/1P3P2/8/8/8 w - - 1 74")
```
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[Lichess-Elite 22-11 Dataset](https://huggingface.co/datasets/jrahn/yolochess_lichess-elite_2211)
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
Masked Language Modeling objective with 15% masked token ratio.
### Preprocessing
Tokenize `data["train"]["fen"]` with max-length padding to 200 tokens with default `distilbert-base-cased` tokenizer.
This is inefficient: most of the vocabulary never occurs in FEN strings, wasting embedding parameters, and the model's position-embedding size together with the 200-token preprocessing length leads to a lot of padding and further wasted parameters, since FEN strings are shorter than 90 characters.
Experiments with a reduced max length in tokenization show performance gains.
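A sketch of the tokenization step described above (the exact preprocessing script is not included in the card, so function and variable names are illustrative):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")

def tokenize(batch):
    # Pad every FEN string to the fixed length of 200 tokens, as described above.
    return tokenizer(batch["fen"], padding="max_length", max_length=200, truncation=True)

# tokenized = dataset.map(tokenize, batched=True)
```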
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
Training for 172500 steps at batch-size 128 (22M examples, 1 epoch) took ~10 hrs on 1x RTX 4090, using 20GB VRAM, with final MLM-loss: 0.2567.
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** 1x RTX 4090
- **Hours used:** 10
- **Cloud Provider:** local
- **Compute Region:** local
- **Carbon Emitted:** 1.5kg
# Technical Specifications
## Model Architecture and Objective
Distilbert, Masked Language Modeling
|
diallomama/fr-summarization
|
diallomama
| 2023-06-13T12:56:59Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:GEM/wiki_lingua",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-09T20:52:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- GEM/wiki_lingua
metrics:
- rouge
model-index:
- name: fr-summarization
results:
- task:
name: Summarization
type: summarization
dataset:
name: GEM/wiki_lingua fr
type: GEM/wiki_lingua
config: fr
split: validation
args: fr
metrics:
- name: Rouge1
type: rouge
value: 100.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fr-summarization
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the GEM/wiki_lingua fr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Rouge1: 100.0
- Rouge2: 100.0
- Rougel: 100.0
- Rougelsum: 100.0
- Gen Len: 13.9390
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
LarryAIDraw/fate_saberalter-10
|
LarryAIDraw
| 2023-06-13T12:53:40Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-13T11:29:56Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/58105/artoria-pendragon-alter-saber-or-fategrand-order
|
LarryAIDraw/Positions_Lora32
|
LarryAIDraw
| 2023-06-13T12:53:17Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-13T11:31:39Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/54901/norian-hentaicore-lora-extracted
|
LarryAIDraw/Yuzuriha
|
LarryAIDraw
| 2023-06-13T12:50:02Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-13T11:29:35Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/43350/yuzuriha-of-keishu-jigokuraku
|
Geotrend/bert-base-fr-cased
|
Geotrend
| 2023-06-13T12:37:39Z | 131 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"fr",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language: fr
datasets: wikipedia
license: apache-2.0
widget:
- text: "Paris est la [MASK] de la France."
- text: "Paris est la capitale de la [MASK]."
- text: "L'élection américaine a eu [MASK] en novembre 2020."
---
# bert-base-fr-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-fr-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-fr-cased")
```
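A fill-mask example using one of the widget prompts from this card:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Geotrend/bert-base-fr-cased")
print(fill_mask("Paris est la [MASK] de la France."))  # widget prompt from the card
```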
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
|
clarin-knext/plt5-base-msmarco
|
clarin-knext
| 2023-06-13T12:24:51Z | 118 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"pl",
"arxiv:2305.19840",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-02-21T15:01:26Z |
---
license: cc-by-sa-4.0
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: [email protected]
|
Zengwei/icefall-asr-librispeech-zipformer-transducer-ctc-2023-06-13
|
Zengwei
| 2023-06-13T12:10:17Z | 0 | 0 | null |
[
"tensorboard",
"region:us"
] | null | 2023-06-13T11:40:06Z |
See https://github.com/k2-fsa/icefall/pull/1111
|
NathyB/Hate-Speech-Detection-in-Amharic-Language-mBERT
|
NathyB
| 2023-06-13T12:09:21Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"Sentiment-Analysis",
"Hate-Speech",
"Finetuning-mBERT",
"am",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-04T14:03:04Z |
---
language:
- am
metrics:
- accuracy
- f1
library_name: transformers
pipeline_tag: text-classification
tags:
- Sentiment-Analysis
- Hate-Speech
- Finetuning-mBERT
---
**<h1>Hate-Speech-Detection-in-Amharic-Language-mBERT</h1>**
This model card describes a machine learning model that fine-tunes mBERT to detect hate speech in the Amharic language.
The model was fine-tuned using the Hugging Face Trainer API.
**<h1>Fine-Tuning</h1>**
This model was created by fine-tuning mBERT for the downstream task of hate speech detection in Amharic.
The starting checkpoint was [Davlan/bert-base-multilingual-cased-finetuned-amharic](https://huggingface.co/Davlan/bert-base-multilingual-cased-finetuned-amharic), provided by Davlan on Hugging Face.
**<h1>Usage</h1>**
You can use the model through the Hugging Face Transformers library, either by loading it directly in your Python code
or by pulling it from the Hugging Face model hub.
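A minimal usage sketch with the `transformers` text-classification pipeline (the example input is an illustrative, benign Amharic sentence):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="NathyB/Hate-Speech-Detection-in-Amharic-Language-mBERT",
)
print(classifier("ሰላም፣ እንዴት ናችሁ?"))  # illustrative input
```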
|
ThirdEyeData/Text_Summarization
|
ThirdEyeData
| 2023-06-13T11:58:18Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-13T09:27:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: Text_Summarization
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1324
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Text_Summarization
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7235
- Rouge1: 0.1324
- Rouge2: 0.0397
- Rougel: 0.1114
- Rougelsum: 0.111
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
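A minimal usage sketch with the `transformers` summarization pipeline (the input text is illustrative):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ThirdEyeData/Text_Summarization")
bill_text = "The bill establishes a grant program to fund local water infrastructure projects ..."
print(summarizer(bill_text, max_length=60, min_length=10, do_sample=False))
```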
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 74 | 2.8406 | 0.1286 | 0.04 | 0.1087 | 0.1088 | 19.0 |
| No log | 2.0 | 148 | 2.7235 | 0.1324 | 0.0397 | 0.1114 | 0.111 | 19.0 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
MarcoLYH/bert-base-uncased-finetuned-v1
|
MarcoLYH
| 2023-06-13T11:51:22Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-13T11:45:14Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: MarcoLYH/bert-base-uncased-finetuned-v1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MarcoLYH/bert-base-uncased-finetuned-v1
This model is a fine-tuned version of [csarron/bert-base-uncased-squad-v1](https://huggingface.co/csarron/bert-base-uncased-squad-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0187
- Train End Logits Accuracy: 0.6875
- Train Start Logits Accuracy: 0.7083
- Validation Loss: 0.7458
- Validation End Logits Accuracy: 0.75
- Validation Start Logits Accuracy: 0.8000
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
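A minimal usage sketch with the `transformers` question-answering pipeline; `framework="tf"` is an assumption based on the TensorFlow weights listed in the repo tags:
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="MarcoLYH/bert-base-uncased-finetuned-v1",
    framework="tf",  # the repository provides TensorFlow weights
)
result = qa(
    question="Which checkpoint was this model fine-tuned from?",
    context="This model is a fine-tuned version of csarron/bert-base-uncased-squad-v1.",
)
print(result["answer"], result["score"])
```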
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 9, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.6743 | 0.4375 | 0.5625 | 1.0957 | 0.7000 | 0.7000 | 0 |
| 1.1601 | 0.6458 | 0.6458 | 0.8086 | 0.75 | 0.75 | 1 |
| 1.0187 | 0.6875 | 0.7083 | 0.7458 | 0.75 | 0.8000 | 2 |
### Framework versions
- Transformers 4.30.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
keysonya/Reinforce-2
|
keysonya
| 2023-06-13T11:46:49Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-13T11:46:04Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: -3.20 +/- 2.36
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ml-projects/clickbait-ml_bert
|
ml-projects
| 2023-06-13T11:38:55Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"onnx",
"bert",
"text-classification",
"generated_from_keras_callback",
"de",
"dataset:ml-projects/clickbait-ml_dataset",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-11T15:06:38Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: clickbait-ml_bert
results: []
language:
- de
metrics:
- accuracy
pipeline_tag: text-classification
widget:
- text: Bundesweiter Großstreik beginnt - Züge, Busse und Flugzeuge stehen still
example_title: Normale Überschrift
- text: Bachelor in Paradise-Star Pamela Gil Matas Sohn ist da!
example_title: Clickbait Überschrift
- text: Du wirst nie glauben was hier geschah
example_title: Beispiel
datasets:
- ml-projects/clickbait-ml_dataset
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# clickbait-ml_bert
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6057
- Validation Loss: 0.6160
- Train Accuracy: 0.8235
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
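A minimal usage sketch with the `transformers` text-classification pipeline; `framework="tf"` is an assumption based on the TensorFlow and ONNX weights listed in the repo tags:
```python
from transformers import pipeline

clickbait = pipeline(
    "text-classification",
    model="ml-projects/clickbait-ml_bert",
    framework="tf",  # the repository provides TensorFlow weights
)
print(clickbait("Du wirst nie glauben was hier geschah"))  # widget example from this card
```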
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 8, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.7115 | 0.6299 | 0.8235 | 0 |
| 0.6071 | 0.6160 | 0.8235 | 1 |
| 0.5783 | 0.6160 | 0.8235 | 2 |
| 0.6057 | 0.6160 | 0.8235 | 3 |
### Framework versions
- Transformers 4.30.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ml-projects/clickbait-ml_setfit
|
ml-projects
| 2023-06-13T11:34:01Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"de",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-06-12T14:21:50Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
language:
- de
---
# ml-projects/clickbait-ml_setfit
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("ml-projects/clickbait-ml_setfit")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
kristian-a/bloomz-lora
|
kristian-a
| 2023-06-13T11:17:45Z | 33 | 0 |
peft
|
[
"peft",
"text-generation",
"region:us"
] |
text-generation
| 2023-06-13T11:02:00Z |
---
library_name: peft
tags:
- text-generation
---
|
trumanplus/trumanplus
|
trumanplus
| 2023-06-13T11:07:26Z | 0 | 0 |
allennlp
|
[
"allennlp",
"finance",
"Health",
"license:openrail",
"region:us"
] | null | 2023-06-13T10:59:48Z |
---
license: openrail
library_name: allennlp
tags:
- finance
- Health
---
Introducing <a href="https://www.topofferlink.com/7TZS934/5K6QMGG/">Truman Plus</a>, the ultimate solution for better enhancement and energy! Are you ready to take your performance to new heights? Look no further, as Truman Plus is here to revolutionize your experience. With its powerful formula and remarkable benefits, Truman Plus is the go-to choice for those seeking an exceptional boost. Get ready to unlock your true potential!
Experience a surge of energy like never before. <a href="https://www.topofferlink.com/7TZS934/5K6QMGG/">Truman Plus</a> provides you with the vitality you need to conquer each day with enthusiasm. Bid farewell to fatigue and embrace a rejuvenated spirit that will keep you going from dawn till dusk. Imagine the endless possibilities that await you when you're armed with boundless energy.
<a href="https://www.topofferlink.com/7TZS934/5K6QMGG/">OFFICIAL WEBSITE-” CLICK HERE”!</a>
But that's not all. <a href="https://www.topofferlink.com/7TZS934/5K6QMGG/">Truman Plus</a> offers remarkable enhancement benefits that will leave you astounded. Discover a newfound confidence as you embrace heightened focus and mental clarity. With Truman Plus, you can break through limitations and achieve peak performance in all aspects of your life. Whether you're tackling a challenging task or aiming for success in the gym, Truman Plus empowers you to exceed your own expectations.
What sets <a href="https://www.topofferlink.com/7TZS934/5K6QMGG/">Truman Plus</a> apart is its cutting-edge formula, meticulously crafted to provide you with the best results. Each pill is packed with a potent blend of premium ingredients, scientifically proven to enhance your energy levels, mental agility, and overall performance. Don't settle for mediocre when you can strive for greatness.
<a href="https://www.topofferlink.com/7TZS934/5K6QMGG/">Click Here to Go To the "OFFICIAL Site"!</a>
It's time to take control of your life and unlock the best version of yourself. Experience the remarkable benefits of <a href="https://www.topofferlink.com/7TZS934/5K6QMGG/">Truman Plus</a> and elevate your performance to new heights. Don't wait any longer - seize this opportunity to enhance your energy and empower yourself.
Upgrade your life with <a href="https://www.topofferlink.com/7TZS934/5K6QMGG/">Truman Plus</a> today and embark on a journey of limitless possibilities. Place your order now and let Truman Plus be the catalyst that propels you towards success. Take the leap and embrace the extraordinary!
<a href="https://www.topofferlink.com/7TZS934/5K6QMGG/">Call to Action: Order Truman Plus now and unleash your true potential!</a>
|
dhanushkaha/web-model
|
dhanushkaha
| 2023-06-13T10:59:48Z | 40 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-13T10:56:31Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### web-model Dreambooth model trained by dhanushkaha with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:























































|
AlexanderDadario/setfit-model
|
AlexanderDadario
| 2023-06-13T10:45:04Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-06-13T10:44:42Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# AlexanderDadario/setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("AlexanderDadario/setfit-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
keysonya/Reinforce-1
|
keysonya
| 2023-06-13T10:44:27Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-13T10:43:57Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
KrishnAI7/autotrain-aniai1-66240136433
|
KrishnAI7
| 2023-06-13T10:42:25Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"deberta",
"text-classification",
"autotrain",
"en",
"dataset:KrishnAI7/autotrain-data-aniai1",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-13T10:41:59Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain"
datasets:
- KrishnAI7/autotrain-data-aniai1
co2_eq_emissions:
emissions: 0.0473575460314297
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 66240136433
- CO2 Emissions (in grams): 0.0474
## Validation Metrics
- Loss: 2.632
- Accuracy: 0.100
- Macro F1: 0.013
- Micro F1: 0.100
- Weighted F1: 0.018
- Macro Precision: 0.007
- Micro Precision: 0.100
- Weighted Precision: 0.010
- Macro Recall: 0.071
- Micro Recall: 0.100
- Weighted Recall: 0.100
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/KrishnAI7/autotrain-aniai1-66240136433
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("KrishnAI7/autotrain-aniai1-66240136433", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("KrishnAI7/autotrain-aniai1-66240136433", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
wootwoot/abyssorangemix3-popupparade-fp16
|
wootwoot
| 2023-06-13T10:42:14Z | 156 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-12T14:43:01Z |
---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
---
### Based off [WarriorMama777/OrangeMixs](https://huggingface.co/WarriorMama777/OrangeMixs)
All credits go to the original author and all the author of AbyssOrangeMix3's ancestor models
### Merged with [Pop Up Parade](https://civitai.com/models/78997)
### Diffusers
The original AbyssOrangeMix3 model converted to be used with the [🧨Diffusers library](https://github.com/huggingface/diffusers)
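As a minimal sketch (the prompt, fp16 dtype, and CUDA device are assumptions, not part of the original card), the converted checkpoint can presumably be loaded like any other `StableDiffusionPipeline`:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "wootwoot/abyssorangemix3-popupparade-fp16",
    torch_dtype=torch.float16,  # the repo name suggests fp16 weights
)
pipe = pipe.to("cuda")

image = pipe("1girl, pop up parade figure style, best quality").images[0]
image.save("sample.png")
```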
|
MarcoLYH/distilbert-base-uncased-finetuned-v3
|
MarcoLYH
| 2023-06-13T10:34:48Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-13T10:23:32Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MarcoLYH/distilbert-base-uncased-finetuned-v3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MarcoLYH/distilbert-base-uncased-finetuned-v3
This model is a fine-tuned version of [distilbert-base-uncased-distilled-squad](https://huggingface.co/distilbert-base-uncased-distilled-squad) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.9895
- Train End Logits Accuracy: 0.7708
- Train Start Logits Accuracy: 0.7292
- Validation Loss: 0.7644
- Validation End Logits Accuracy: 0.8000
- Validation Start Logits Accuracy: 0.8000
- Epoch: 9
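For quick experimentation, a minimal question-answering sketch (the question/context strings are placeholders, and the TF framework choice is an assumption based on the repo's TensorFlow weights):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="MarcoLYH/distilbert-base-uncased-finetuned-v3",
    framework="tf",  # the repo ships TensorFlow weights
)

result = qa(
    question="Who wrote the report?",
    context="The annual report was written by the finance team in March.",
)
print(result["answer"], result["score"])
```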
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 1e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 27, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 3, 'power': 1.0, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 2.1849 | 0.4583 | 0.5625 | 1.4084 | 0.6000 | 0.7000 | 0 |
| 1.7525 | 0.4583 | 0.625 | 1.1174 | 0.6000 | 0.7000 | 1 |
| 1.4231 | 0.5625 | 0.6458 | 0.9771 | 0.7000 | 0.75 | 2 |
| 1.2974 | 0.6042 | 0.6667 | 0.8995 | 0.7000 | 0.8000 | 3 |
| 1.0907 | 0.6875 | 0.6875 | 0.8517 | 0.7000 | 0.8000 | 4 |
| 0.9871 | 0.7292 | 0.7292 | 0.8189 | 0.7000 | 0.8000 | 5 |
| 1.0101 | 0.7292 | 0.75 | 0.7987 | 0.8000 | 0.8000 | 6 |
| 0.9208 | 0.7083 | 0.7708 | 0.7801 | 0.8000 | 0.8000 | 7 |
| 0.9486 | 0.7083 | 0.7292 | 0.7692 | 0.8000 | 0.8000 | 8 |
| 0.9895 | 0.7708 | 0.7292 | 0.7644 | 0.8000 | 0.8000 | 9 |
### Framework versions
- Transformers 4.30.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
renyulin/Reinforce-CartPole-v1
|
renyulin
| 2023-06-13T10:28:29Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-13T10:28:20Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ahamid/bert-finetuned-ner
|
ahamid
| 2023-06-13T10:25:49Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-06T18:40:37Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ahamid/bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ahamid/bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0222
- Validation Loss: 0.0531
- Epoch: 1
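A minimal inference sketch (the example sentence is a placeholder, and the TF framework choice is an assumption based on the repo's TensorFlow weights):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ahamid/bert-finetuned-ner",
    framework="tf",                 # the repo ships TensorFlow weights
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)

print(ner("Hugging Face was founded in New York City by Clément Delangue."))
```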
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2634, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0217 | 0.0531 | 0 |
| 0.0222 | 0.0531 | 1 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
kevinpro/Vicuna-13B-CoT_v2
|
kevinpro
| 2023-06-13T10:25:04Z | 0 | 1 | null |
[
"en",
"arxiv:1910.09700",
"license:apache-2.0",
"region:us"
] | null | 2023-06-12T15:35:02Z |
---
license: apache-2.0
language:
- en
---
# Model Card for Model ID
SFT to enhance the CoT capability of Vicuna.
We tuned the model on roughly 550K (55W) CoT-related instruction examples.
If you find the model helpful, please click "like" to support us. We also welcome feedback on your usage experience and any issues you encounter in the issues section.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
scottzroot/clip-ViT-B-32-config
|
scottzroot
| 2023-06-13T10:23:34Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"clip",
"feature-extraction",
"sentence-similarity",
"arxiv:2103.00020",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-13T10:20:02Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# clip-ViT-B-32
This is the Image & Text model [CLIP](https://arxiv.org/abs/2103.00020), which maps text and images to a shared vector space. For applications of the models, have a look in our documentation [SBERT.net - Image Search](https://www.sbert.net/examples/applications/image-search/README.html)
## Usage
After installing [sentence-transformers](https://sbert.net) (`pip install sentence-transformers`), the usage of this model is easy:
```python
from sentence_transformers import SentenceTransformer, util
from PIL import Image
#Load CLIP model
model = SentenceTransformer('clip-ViT-B-32')
#Encode an image:
img_emb = model.encode(Image.open('two_dogs_in_snow.jpg'))
#Encode text descriptions
text_emb = model.encode(['Two dogs in the snow', 'A cat on a table', 'A picture of London at night'])
#Compute cosine similarities
cos_scores = util.cos_sim(img_emb, text_emb)
print(cos_scores)
```
See our [SBERT.net - Image Search](https://www.sbert.net/examples/applications/image-search/README.html) documentation for more examples of how the model can be used for image search, zero-shot image classification, image clustering and image deduplication.
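As an illustration of the zero-shot classification use case mentioned above, a minimal sketch (the candidate labels and image file are placeholders):

```python
from sentence_transformers import SentenceTransformer, util
from PIL import Image

model = SentenceTransformer('clip-ViT-B-32')

# Zero-shot classification: pick the label whose text embedding is closest to the image embedding.
labels = ['a photo of a dog', 'a photo of a cat', 'a photo of a car']
label_emb = model.encode(labels)
img_emb = model.encode(Image.open('two_dogs_in_snow.jpg'))

scores = util.cos_sim(img_emb, label_emb)[0]
print(labels[int(scores.argmax())])
```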
## Performance
In the following table we find the zero-shot ImageNet validation set accuracy:
| Model | Top 1 Performance |
| --- | :---: |
| [clip-ViT-B-32](https://huggingface.co/sentence-transformers/clip-ViT-B-32) | 63.3 |
| [clip-ViT-B-16](https://huggingface.co/sentence-transformers/clip-ViT-B-16) | 68.1 |
| [clip-ViT-L-14](https://huggingface.co/sentence-transformers/clip-ViT-L-14) | 75.4 |
For a multilingual version of the CLIP model for 50+ languages have a look at: [clip-ViT-B-32-multilingual-v1](https://huggingface.co/sentence-transformers/clip-ViT-B-32-multilingual-v1)
|
ranajoy98/autotrain-clauses_classifier-2847083405
|
ranajoy98
| 2023-06-13T10:23:16Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"en",
"dataset:ranajoy98/autotrain-data-clauses_classifier",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-12T07:58:25Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ranajoy98/autotrain-data-clauses_classifier
co2_eq_emissions:
emissions: 0.712310551029896
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 2847083405
- CO2 Emissions (in grams): 0.7123
## Validation Metrics
- Loss: 0.642
- Accuracy: 0.795
- Macro F1: 0.810
- Micro F1: 0.795
- Weighted F1: 0.796
- Macro Precision: 0.807
- Micro Precision: 0.795
- Weighted Precision: 0.802
- Macro Recall: 0.819
- Micro Recall: 0.795
- Weighted Recall: 0.795
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ranajoy98/autotrain-clauses_classifier-2847083405
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ranajoy98/autotrain-clauses_classifier-2847083405", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ranajoy98/autotrain-clauses_classifier-2847083405", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
nvenhuizen14/mofodbtransactions
|
nvenhuizen14
| 2023-06-13T10:12:54Z | 1 | 0 |
transformers
|
[
"transformers",
"joblib",
"logistic_regression",
"autotrain",
"tabular",
"classification",
"tabular-classification",
"dataset:nvenhuizen14/autotrain-data-mofodb_classifications",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
tabular-classification
| 2023-06-13T10:11:09Z |
---
tags:
- autotrain
- tabular
- classification
- tabular-classification
datasets:
- nvenhuizen14/autotrain-data-mofodb_classifications
co2_eq_emissions:
emissions: 0.04250103814751933
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 66203136426
- CO2 Emissions (in grams): 0.0425
## Validation Metrics
- Loss: 0.007
- Accuracy: 0.997
- Macro F1: 0.915
- Micro F1: 0.997
- Weighted F1: 0.996
- Macro Precision: 0.926
- Micro Precision: 0.997
- Weighted Precision: 0.995
- Macro Recall: 0.915
- Micro Recall: 0.997
- Weighted Recall: 0.997
## Usage
```python
import json
import joblib
import pandas as pd
model = joblib.load('model.joblib')
config = json.load(open('config.json'))
features = config['features']
data = pd.read_csv("data.csv")  # load the rows you want to score
data = data[features]
data.columns = ["feat_" + str(col) for col in data.columns]
predictions = model.predict(data) # or model.predict_proba(data)
```
|
MarcoLYH/distilbert-base-uncased-finetuned-v2
|
MarcoLYH
| 2023-06-13T09:53:44Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-13T09:47:30Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MarcoLYH/distilbert-base-uncased-finetuned-v2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MarcoLYH/distilbert-base-uncased-finetuned-v2
This model is a fine-tuned version of [distilbert-base-uncased-distilled-squad](https://huggingface.co/distilbert-base-uncased-distilled-squad) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.8512
- Train End Logits Accuracy: 0.7917
- Train Start Logits Accuracy: 0.7708
- Validation Loss: 0.9185
- Validation End Logits Accuracy: 0.7000
- Validation Start Logits Accuracy: 0.8000
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 14, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1, 'power': 1.0, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 2.0417 | 0.4583 | 0.5833 | 1.2444 | 0.6000 | 0.7000 | 0 |
| 1.5102 | 0.5625 | 0.6875 | 1.0279 | 0.7000 | 0.75 | 1 |
| 1.1881 | 0.6458 | 0.6875 | 0.9774 | 0.7000 | 0.8000 | 2 |
| 1.1344 | 0.6875 | 0.6875 | 0.9360 | 0.7000 | 0.8000 | 3 |
| 0.8512 | 0.7917 | 0.7708 | 0.9185 | 0.7000 | 0.8000 | 4 |
### Framework versions
- Transformers 4.30.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
kowshikBlue/sti_workplace_model_updated
|
kowshikBlue
| 2023-06-13T09:51:03Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-13T09:50:35Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# kowshikBlue/sti_workplace_model_updated
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('kowshikBlue/sti_workplace_model_updated')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('kowshikBlue/sti_workplace_model_updated')
model = AutoModel.from_pretrained('kowshikBlue/sti_workplace_model_updated')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=kowshikBlue/sti_workplace_model_updated)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 200 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 200,
"warmup_steps": 20,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
cointegrated/roberta-large-cola-krishna2020
|
cointegrated
| 2023-06-13T09:38:15Z | 1,785 | 7 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"arxiv:2010.05700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
This is a RoBERTa-large classifier trained on the CoLA corpus [Warstadt et al., 2019](https://www.mitpressjournals.org/doi/pdf/10.1162/tacl_a_00290),
which contains sentences paired with grammatical acceptability judgments. The model can be used to evaluate fluency of machine-generated English sentences, e.g. for evaluation of text style transfer.
The model was trained in the paper [Krishna et al, 2020. Reformulating Unsupervised Style Transfer as Paraphrase Generation](https://arxiv.org/abs/2010.05700), and its original version is available at [their project page](http://style.cs.umass.edu). We converted this model from Fairseq to Transformers format. All credit goes to the authors of the original paper.
## Citation
If you found this model useful and refer to it, please cite the original work:
```
@inproceedings{style20,
author={Kalpesh Krishna and John Wieting and Mohit Iyyer},
Booktitle = {Empirical Methods in Natural Language Processing},
Year = "2020",
Title={Reformulating Unsupervised Style Transfer as Paraphrase Generation},
}
```
|
Josias-Ounsinli/my_awesome_model32
|
Josias-Ounsinli
| 2023-06-13T09:33:36Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-13T09:14:34Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Josias-Ounsinli/my_awesome_model32
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Josias-Ounsinli/my_awesome_model32
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0986
- Validation Loss: 1.0986
- Train Accuracy: 0.3333
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3750, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.0986 | 1.0986 | 0.3333 | 0 |
| 1.0986 | 1.0986 | 0.3333 | 1 |
### Framework versions
- Transformers 4.30.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Jamli/AmeliaLoRa
|
Jamli
| 2023-06-13T09:15:22Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-13T09:07:01Z |
---
license: creativeml-openrail-m
---
|
addy88/bert-finetuned-bpmn
|
addy88
| 2023-06-13T09:15:21Z | 120 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-13T09:06:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-bpmn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-bpmn
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3456
- Precision: 0.8113
- Recall: 0.86
- F1: 0.8350
- Accuracy: 0.9341
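A minimal inference sketch (the example sentence and the aggregation setting are assumptions; the BPMN-specific label names come from the model's config and are not documented here):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="addy88/bert-finetuned-bpmn",
    aggregation_strategy="simple",  # merge sub-word tokens into whole spans
)

text = "The customer submits an order, then the sales department checks the inventory."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```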
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 10 | 0.2716 | 0.7778 | 0.84 | 0.8077 | 0.9115 |
| No log | 2.0 | 20 | 0.2428 | 0.7669 | 0.8333 | 0.7987 | 0.9160 |
| No log | 3.0 | 30 | 0.2726 | 0.7875 | 0.84 | 0.8129 | 0.9205 |
| No log | 4.0 | 40 | 0.2658 | 0.7862 | 0.8333 | 0.8091 | 0.9214 |
| No log | 5.0 | 50 | 0.2470 | 0.7914 | 0.86 | 0.8243 | 0.9268 |
| No log | 6.0 | 60 | 0.2745 | 0.7791 | 0.8467 | 0.8115 | 0.9250 |
| No log | 7.0 | 70 | 0.3415 | 0.8280 | 0.8667 | 0.8469 | 0.9259 |
| No log | 8.0 | 80 | 0.3524 | 0.775 | 0.8267 | 0.8000 | 0.9178 |
| No log | 9.0 | 90 | 0.3307 | 0.8313 | 0.8867 | 0.8581 | 0.9322 |
| No log | 10.0 | 100 | 0.3161 | 0.7778 | 0.84 | 0.8077 | 0.9214 |
| No log | 11.0 | 110 | 0.3646 | 0.8387 | 0.8667 | 0.8525 | 0.9322 |
| No log | 12.0 | 120 | 0.3262 | 0.7925 | 0.84 | 0.8155 | 0.9223 |
| No log | 13.0 | 130 | 0.3436 | 0.8462 | 0.88 | 0.8627 | 0.9350 |
| No log | 14.0 | 140 | 0.3427 | 0.8516 | 0.88 | 0.8656 | 0.9377 |
| No log | 15.0 | 150 | 0.3163 | 0.7950 | 0.8533 | 0.8232 | 0.9322 |
| No log | 16.0 | 160 | 0.3233 | 0.8291 | 0.8733 | 0.8506 | 0.9377 |
| No log | 17.0 | 170 | 0.3354 | 0.8050 | 0.8533 | 0.8285 | 0.9322 |
| No log | 18.0 | 180 | 0.3468 | 0.8291 | 0.8733 | 0.8506 | 0.9341 |
| No log | 19.0 | 190 | 0.3457 | 0.8176 | 0.8667 | 0.8414 | 0.9341 |
| No log | 20.0 | 200 | 0.3456 | 0.8113 | 0.86 | 0.8350 | 0.9341 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Josias-Ounsinli/my_awesome_model31
|
Josias-Ounsinli
| 2023-06-13T09:11:53Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-13T08:44:13Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Josias-Ounsinli/my_awesome_model31
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Josias-Ounsinli/my_awesome_model31
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0988
- Validation Loss: 1.0986
- Train Accuracy: 0.3333
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.001, 'decay_steps': 5625, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.1022 | 1.0986 | 0.3333 | 0 |
| 1.0989 | 1.0988 | 0.3333 | 1 |
| 1.0988 | 1.0986 | 0.3333 | 2 |
### Framework versions
- Transformers 4.30.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
franfj/media-bias-ukraine-dataset-all-removed
|
franfj
| 2023-06-13T09:08:29Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-11T23:45:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: media-bias-ukraine-dataset-all-removed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# media-bias-ukraine-dataset-all-removed
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1717
- F1: 0.8014
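A minimal inference sketch (the example headline is a placeholder; the label names come from the model's config and are not documented here):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="franfj/media-bias-ukraine-dataset-all-removed",
)

print(clf("The government announced sweeping new measures on Tuesday."))
```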
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3845 | 1.0 | 114 | 0.2296 | 0.6101 |
| 0.1412 | 2.0 | 228 | 0.1759 | 0.7486 |
| 0.0215 | 3.0 | 342 | 0.2275 | 0.7439 |
| 0.0506 | 4.0 | 456 | 0.2064 | 0.7651 |
| 0.0366 | 5.0 | 570 | 0.1717 | 0.8014 |
| 0.2428 | 6.0 | 684 | 0.1955 | 0.7878 |
| 0.005 | 7.0 | 798 | 0.2297 | 0.7839 |
| 0.003 | 8.0 | 912 | 0.2428 | 0.8005 |
| 0.0037 | 9.0 | 1026 | 0.2577 | 0.7884 |
| 0.0099 | 10.0 | 1140 | 0.2641 | 0.7957 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.10.1
- Tokenizers 0.13.2
|
yswill/llama-13b-hf
|
yswill
| 2023-06-13T09:05:59Z | 50 | 4 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-13T07:17:12Z |
---
license: other
---
This contains the weights for the LLaMA-13b model. This model is under a non-commercial license (see the LICENSE file).
You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form) but either lost your copy of the weights or ran into trouble converting them to the Transformers format.
The model is in Hugging Face format and can be loaded directly with the Hugging Face API; it is also suitable as the underlying LLaMA model for LLaVA.
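A minimal loading sketch with the Transformers API (the prompt, generation length, and use of `device_map="auto"`, which requires the `accelerate` package, are assumptions, not part of the original card):

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("yswill/llama-13b-hf")
model = LlamaForCausalLM.from_pretrained(
    "yswill/llama-13b-hf",
    torch_dtype=torch.float16,
    device_map="auto",  # requires the accelerate package
)

inputs = tokenizer("The theory of relativity states that", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```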
|
Josias-Ounsinli/my_awesome_model39
|
Josias-Ounsinli
| 2023-06-13T08:53:12Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-13T08:33:51Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Josias-Ounsinli/my_awesome_model39
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Josias-Ounsinli/my_awesome_model39
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5308
- Validation Loss: 0.6272
- Train Accuracy: 0.7307
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.0001, 'decay_steps': 7500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.6912 | 0.6544 | 0.7063 | 0 |
| 0.5308 | 0.6272 | 0.7307 | 1 |
### Framework versions
- Transformers 4.30.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
echarlaix/distilbert-sst2-inc-dynamic-quantization-magnitude-pruning-0.1
|
echarlaix
| 2023-06-13T08:47:40Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"neural-compressor",
"int8",
"en",
"dataset:sst2",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-06T09:51:12Z |
---
language: en
license: apache-2.0
datasets:
- sst2
- glue
metrics:
- accuracy
tags:
- text-classification
- neural-compressor
- int8
---
# Dynamically quantized and pruned DistilBERT base uncased finetuned SST-2
## Table of Contents
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
## Model Details
**Model Description:** This model is a [DistilBERT](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) fine-tuned on SST-2 dynamically quantized and pruned using a magnitude pruning strategy to obtain a sparsity of 10% with [optimum-intel](https://github.com/huggingface/optimum-intel) through the usage of [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
- **Model Type:** Text Classification
- **Language(s):** English
- **License:** Apache-2.0
- **Parent Model:** For more details on the original model, we encourage users to check out [this](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) model card.
## How to Get Started With the Model
This requires installing Optimum:
`pip install optimum[neural-compressor]`
To load the quantized model and run inference using the Transformers [pipelines](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines), you can do as follows:
```python
from transformers import AutoTokenizer, pipeline
from optimum.intel import INCModelForSequenceClassification
model_id = "echarlaix/distilbert-sst2-inc-dynamic-quantization-magnitude-pruning-0.1"
model = INCModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
cls_pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)
text = "He's a dreadful magician."
outputs = cls_pipe(text)
```
|
srb1smo/lizard
|
srb1smo
| 2023-06-13T08:44:41Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-13T08:44:41Z |
---
license: creativeml-openrail-m
---
|
aga3134/ppo-pyramids-training
|
aga3134
| 2023-06-13T08:39:20Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-06-13T08:39:12Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: aga3134/ppo-pyramids-training
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Josias-Ounsinli/my_awesome_model2
|
Josias-Ounsinli
| 2023-06-13T08:37:32Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-12T10:29:57Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Josias-Ounsinli/my_awesome_model2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Josias-Ounsinli/my_awesome_model2
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: nan
- Validation Loss: nan
- Train Accuracy: 0.3333
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 4128, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| nan | nan | 0.3333 | 0 |
| nan | nan | 0.3333 | 1 |
| nan | nan | 0.3333 | 2 |
### Framework versions
- Transformers 4.30.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
undrwolf/taxi-RL-agent
|
undrwolf
| 2023-06-13T08:24:09Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-13T08:24:06Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-RL-agent
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="undrwolf/taxi-RL-agent", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ikasou/ppo-LunarLander-v2
|
ikasou
| 2023-06-13T08:17:07Z | 8 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-31T16:53:23Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 292.00 +/- 15.21
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; adjust it to the file actually stored in this repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is an assumption about how the checkpoint is stored in this repo.
checkpoint = load_from_hub(repo_id="ikasou/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
dico97/distilgpt2-finetuned-wikitext2
|
dico97
| 2023-06-13T07:59:55Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-13T07:52:42Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: dico97/distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dico97/distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.8575
- Validation Loss: 3.6734
- Epoch: 0
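A minimal generation sketch (the prompt and generation length are placeholders; the TF framework choice is an assumption based on the repo's TensorFlow weights):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="dico97/distilgpt2-finetuned-wikitext2",
    framework="tf",  # the repo ships TensorFlow weights
)

print(generator("The history of the English language", max_new_tokens=40)[0]["generated_text"])
```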
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.8575 | 3.6734 | 0 |
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Getinside03/vit-base-beans
|
Getinside03
| 2023-06-13T07:39:44Z | 195 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"vision",
"generated_from_trainer",
"dataset:beans",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-13T07:35:25Z |
---
license: apache-2.0
tags:
- image-classification
- vision
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: vit-base-beans
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9849624060150376
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0848
- Accuracy: 0.9850
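A short inference sketch (not part of the original card); the image path is a placeholder.
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Getinside03/vit-base-beans")
print(classifier("bean_leaf.jpg"))  # placeholder path to a bean-leaf photo
```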
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2605 | 1.0 | 130 | 0.2307 | 0.9549 |
| 0.2843 | 2.0 | 260 | 0.1110 | 0.9925 |
| 0.1579 | 3.0 | 390 | 0.1061 | 0.9699 |
| 0.0904 | 4.0 | 520 | 0.0853 | 0.9850 |
| 0.1618 | 5.0 | 650 | 0.0848 | 0.9850 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 1.12.0a0+git664058f
- Datasets 2.12.0
- Tokenizers 0.13.3
|
intanm/fewshot-qa-002-20230613-003
|
intanm
| 2023-06-13T07:30:44Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-13T07:11:07Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: fewshot-qa-002-20230613-003
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fewshot-qa-002-20230613-003
This model is a fine-tuned version of [deepset/xlm-roberta-base-squad2](https://huggingface.co/deepset/xlm-roberta-base-squad2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3842
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 208 | 2.5896 |
| No log | 2.0 | 416 | 2.6143 |
| 2.487 | 3.0 | 624 | 2.7156 |
| 2.487 | 4.0 | 832 | 3.1187 |
| 1.2936 | 5.0 | 1040 | 3.3531 |
| 1.2936 | 6.0 | 1248 | 3.7272 |
| 1.2936 | 7.0 | 1456 | 3.9238 |
| 0.6852 | 8.0 | 1664 | 4.3116 |
| 0.6852 | 9.0 | 1872 | 4.3842 |
| 0.3944 | 10.0 | 2080 | 4.3842 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
WattsIshaan/ppo-LunarLander-v2
|
WattsIshaan
| 2023-06-13T07:02:01Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-13T07:01:40Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.69 +/- 16.87
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
casque/MuscleGirl_v1
|
casque
| 2023-06-13T06:59:23Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-13T06:57:46Z |
---
license: creativeml-openrail-m
---
|
addy88/distilroberta-base
|
addy88
| 2023-06-13T06:39:09Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-12T09:16:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilroberta-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6935
- Precision: 0.7556
- Recall: 0.7556
- F1: 0.7556
- Accuracy: 0.7556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2481 | 1.0 | 2355 | 1.5506 | 0.7409 | 0.7409 | 0.7409 | 0.7409 |
| 0.3473 | 2.0 | 4710 | 1.5572 | 0.7428 | 0.7428 | 0.7428 | 0.7428 |
| 0.2614 | 3.0 | 7065 | 1.6423 | 0.7539 | 0.7539 | 0.7539 | 0.7539 |
| 0.1337 | 4.0 | 9420 | 1.6935 | 0.7556 | 0.7556 | 0.7556 | 0.7556 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ugiugi/inisw08-DistilBERT-STS
|
ugiugi
| 2023-06-13T06:36:34Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-13T06:23:59Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ugiugi/inisw08-DistilBERT-STS
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ugiugi/inisw08-DistilBERT-STS')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ugiugi/inisw08-DistilBERT-STS')
model = AutoModel.from_pretrained('ugiugi/inisw08-DistilBERT-STS')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 180 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 72,
"weight_decay": 0.01
}
```
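A schematic reconstruction of the `fit()` call from the parameters above, offered as a hedged sketch: the actual training pairs and the base checkpoint are not documented in this card, so `train_examples` and the model name below are placeholders.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Placeholder base checkpoint; the card only states that a DistilBertModel is used.
model = SentenceTransformer("distilbert-base-uncased")

# Placeholder data: sentence pairs with a similarity label in [0, 1].
train_examples = [InputExample(texts=["A sentence", "Another sentence"], label=0.8)]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=4,
    warmup_steps=72,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```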
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
chjooon/my_awesome_eli5_clm-model
|
chjooon
| 2023-06-13T06:35:09Z | 208 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-10T04:53:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7153
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.879 | 1.0 | 1121 | 3.7352 |
| 3.7867 | 2.0 | 2242 | 3.7179 |
| 3.737 | 3.0 | 3363 | 3.7153 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
pilgrim222/q-FrozenLake-v1-4x4-noSlippery
|
pilgrim222
| 2023-06-13T06:24:44Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-13T06:24:42Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="pilgrim222/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
intanm/fewshot-qa-002-20230613
|
intanm
| 2023-06-13T06:21:53Z | 120 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-13T06:19:20Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: fewshot-qa-002-20230613
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fewshot-qa-002-20230613
This model is a fine-tuned version of [intanm/20230429-001-baseline-xlmr-qa-ft-clickbait-spoiling](https://huggingface.co/intanm/20230429-001-baseline-xlmr-qa-ft-clickbait-spoiling) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0795
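A minimal usage sketch (not from the original card); since the base model targets clickbait-spoiling QA, the question and context below are placeholders.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="intanm/fewshot-qa-002-20230613")
result = qa(
    question="What does the article actually reveal?",   # placeholder question
    context="Paste the clickbait article text here.",     # placeholder context
)
print(result)
```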
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 8 | 1.7534 |
| No log | 2.0 | 16 | 1.0488 |
| No log | 3.0 | 24 | 0.6455 |
| No log | 4.0 | 32 | 0.3724 |
| No log | 5.0 | 40 | 0.2555 |
| No log | 6.0 | 48 | 0.1813 |
| No log | 7.0 | 56 | 0.1244 |
| No log | 8.0 | 64 | 0.1023 |
| No log | 9.0 | 72 | 0.0873 |
| No log | 10.0 | 80 | 0.0795 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
casque/Angewomon-Digimon-v1
|
casque
| 2023-06-13T06:21:20Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-13T06:19:51Z |
---
license: creativeml-openrail-m
---
|
irfanamal/bert-base-uncased-finetuned-amazonreviews
|
irfanamal
| 2023-06-13T06:21:12Z | 197 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-13T04:07:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-amazonreviews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-amazonreviews
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8730
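A minimal fill-mask usage sketch (not part of the original card):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="irfanamal/bert-base-uncased-finetuned-amazonreviews")
print(fill_mask("This product is absolutely [MASK]."))  # BERT-style [MASK] token
```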
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1298 | 1.0 | 1797 | 1.9650 |
| 2.0174 | 2.0 | 3594 | 1.8939 |
| 1.9809 | 3.0 | 5391 | 1.8666 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Catears/AnythingVAEStorage
|
Catears
| 2023-06-13T06:17:58Z | 0 | 0 |
diffusers
|
[
"diffusers",
"license:unknown",
"region:us"
] | null | 2023-06-13T06:09:45Z |
---
license: unknown
---
## This is just a direct copy of the AnythingV4 VAE.
I want to load it manually in diffusers to avoid loading failures, but I could not do so with "from_pretrained" directly. That's why I created this model card, to resolve the issue.
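A hedged sketch of how one might load this VAE manually with diffusers and plug it into a pipeline; the base checkpoint name below is a placeholder, and if the VAE files sit in a subfolder you would pass `subfolder=...` to `from_pretrained`.
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

# Load the standalone VAE weights from this repo.
vae = AutoencoderKL.from_pretrained("Catears/AnythingVAEStorage", torch_dtype=torch.float16)

# Attach it to any Stable Diffusion 1.x checkpoint (placeholder name below).
pipe = StableDiffusionPipeline.from_pretrained(
    "some-user/anything-v4.0-checkpoint",  # placeholder base model
    vae=vae,
    torch_dtype=torch.float16,
)
```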
|
tuwonga/actionauf
|
tuwonga
| 2023-06-13T06:16:31Z | 0 | 0 | null |
[
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-06-12T19:28:06Z |
---
license: creativeml-openrail-m
thumbnail: "https://huggingface.co/tuwonga/actionauf/resolve/main/actionauf.jpg"
tags:
- stable-diffusion
- text-to-image
---
### Actionauf
This is a fine-tuned Stable Diffusion model (based on v1.5) trained on **_action figure_** pictures. Use the token **_actionauf_** in your prompts to apply the style.
_Download the safetensors file from the "files and versions" tab into the Stable Diffusion models folder of your web UI of choice._
--
**Characters rendered with this model:**

_prompt and settings used: **realistic actionauf style [person]** | **Steps: 20, Sampler: Euler, CFG scale: 11.5**_
--
**Note:** You can make the prompt stronger by using words such as "realistic" or "action figure" or whatever else you think fits. Do not overdo the step count, try enabling face restoration, and experiment with the CFG scale. At the moment this is an experimental model. Hope you like it. Please feel free to merge it with other useful models and let me know ^_^
--
This model was trained with Dreambooth training by TheLastBen, using 77 images at 11550 steps.
--
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
ssvadim/whisper-small-uz
|
ssvadim
| 2023-06-13T06:10:30Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"dataset:mozilla-foundation/common_voice_13_0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-12T16:44:38Z |
---
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
---
|
LemonFace0309/Reinforce-Pixelcopter-PLE-v0
|
LemonFace0309
| 2023-06-13T06:07:08Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-13T06:06:42Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 8.20 +/- 7.98
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
gokuls/bert_12_layer_model_v1_complete_training_new_48_KD_wt_init
|
gokuls
| 2023-06-13T06:03:21Z | 50 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-10T21:48:23Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_12_layer_model_v1_complete_training_new_48_KD_wt_init
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_12_layer_model_v1_complete_training_new_48_KD_wt_init
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 241.0859
- Accuracy: 0.4099
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 36
- eval_batch_size: 36
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 603.6149 | 0.06 | 10000 | 579.3715 | 0.1445 |
| 499.749 | 0.12 | 20000 | 507.5929 | 0.1449 |
| 464.2382 | 0.18 | 30000 | 455.9639 | 0.1525 |
| 402.3357 | 0.25 | 40000 | 394.2733 | 0.2312 |
| 354.8343 | 0.31 | 50000 | 348.3572 | 0.2952 |
| 323.2804 | 0.37 | 60000 | 315.5649 | 0.3318 |
| 304.7558 | 0.43 | 70000 | 294.0559 | 0.3520 |
| 291.4657 | 0.49 | 80000 | 282.6148 | 0.3669 |
| 280.9548 | 0.55 | 90000 | 270.2188 | 0.3792 |
| 271.2151 | 0.61 | 100000 | 260.9895 | 0.3888 |
| 261.6096 | 0.68 | 110000 | 251.4035 | 0.3961 |
| 256.1119 | 0.74 | 120000 | 243.2089 | 0.4041 |
| 249.2419 | 0.8 | 130000 | 241.0859 | 0.4099 |
### Framework versions
- Transformers 4.30.1
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gokuls/bert_12_layer_model_v2_complete_training_new_48_KD
|
gokuls
| 2023-06-13T05:52:38Z | 50 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-10T21:42:09Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_12_layer_model_v2_complete_training_new_48_KD
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_12_layer_model_v2_complete_training_new_48_KD
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 418.2312
- Accuracy: 0.1802
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 36
- eval_batch_size: 36
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 846.7844 | 0.06 | 10000 | 799.2012 | 0.1433 |
| 603.1405 | 0.12 | 20000 | 597.2043 | 0.1455 |
| 552.8343 | 0.18 | 30000 | 549.4058 | 0.1455 |
| 525.8206 | 0.25 | 40000 | 523.2474 | 0.1455 |
| 508.5397 | 0.31 | 50000 | 508.2666 | 0.1467 |
| 495.479 | 0.37 | 60000 | 494.1740 | 0.1454 |
| 485.269 | 0.43 | 70000 | 483.4185 | 0.1459 |
| 474.9876 | 0.49 | 80000 | 475.5062 | 0.1475 |
| 464.3079 | 0.55 | 90000 | 460.0214 | 0.1507 |
| 455.1477 | 0.61 | 100000 | 451.2754 | 0.1553 |
| 444.9362 | 0.68 | 110000 | 441.2908 | 0.1596 |
| 438.575 | 0.74 | 120000 | 432.5171 | 0.1660 |
| 429.8774 | 0.8 | 130000 | 425.1851 | 0.1693 |
| 421.0561 | 0.86 | 140000 | 418.2312 | 0.1802 |
### Framework versions
- Transformers 4.30.1
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ppsingh/action-policy-plans-classifier
|
ppsingh
| 2023-06-13T05:50:19Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mpnet",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-13T05:49:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: action-policy-plans-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# action-policy-plans-classifier
This model is a fine-tuned version of [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6839
- Precision Micro: 0.7089
- Precision Weighted: 0.7043
- Precision Samples: 0.4047
- Recall Micro: 0.7066
- Recall Weighted: 0.7066
- Recall Samples: 0.4047
- F1-score: 0.4041
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.915e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Micro | Precision Weighted | Precision Samples | Recall Micro | Recall Weighted | Recall Samples | F1-score |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|:------------------:|:-----------------:|:------------:|:---------------:|:--------------:|:--------:|
| 0.7333 | 1.0 | 253 | 0.5828 | 0.625 | 0.6422 | 0.4047 | 0.7098 | 0.7098 | 0.4065 | 0.4047 |
| 0.5905 | 2.0 | 506 | 0.5593 | 0.6292 | 0.6318 | 0.4437 | 0.7760 | 0.7760 | 0.4446 | 0.4434 |
| 0.4934 | 3.0 | 759 | 0.5269 | 0.6630 | 0.6637 | 0.4319 | 0.7571 | 0.7571 | 0.4347 | 0.4325 |
| 0.4018 | 4.0 | 1012 | 0.5645 | 0.6449 | 0.6479 | 0.4456 | 0.7792 | 0.7792 | 0.4465 | 0.4453 |
| 0.3235 | 5.0 | 1265 | 0.6101 | 0.6964 | 0.6929 | 0.4220 | 0.7382 | 0.7382 | 0.4229 | 0.4217 |
| 0.2638 | 6.0 | 1518 | 0.6692 | 0.6888 | 0.6841 | 0.4111 | 0.7192 | 0.7192 | 0.4120 | 0.4108 |
| 0.2197 | 7.0 | 1771 | 0.6839 | 0.7089 | 0.7043 | 0.4047 | 0.7066 | 0.7066 | 0.4047 | 0.4041 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gokuls/bert_12_layer_model_v1_complete_training_new_48_KD
|
gokuls
| 2023-06-13T05:49:08Z | 50 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-10T03:43:10Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_12_layer_model_v1_complete_training_new_48_KD
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_12_layer_model_v1_complete_training_new_48_KD
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 326.4413
- Accuracy: 0.3018
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 36
- eval_batch_size: 36
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 849.2694 | 0.06 | 10000 | 802.2138 | 0.1435 |
| 603.4255 | 0.12 | 20000 | 597.5114 | 0.1445 |
| 552.5588 | 0.18 | 30000 | 549.1310 | 0.1454 |
| 525.5738 | 0.25 | 40000 | 523.0781 | 0.1460 |
| 508.5192 | 0.31 | 50000 | 507.5772 | 0.1463 |
| 496.0482 | 0.37 | 60000 | 494.5385 | 0.1457 |
| 487.2105 | 0.43 | 70000 | 484.7273 | 0.1464 |
| 476.1281 | 0.49 | 80000 | 473.3444 | 0.1490 |
| 456.0017 | 0.55 | 90000 | 445.0464 | 0.1662 |
| 421.6633 | 0.61 | 100000 | 404.1071 | 0.2046 |
| 382.6604 | 0.68 | 110000 | 369.2148 | 0.2446 |
| 358.6727 | 0.74 | 120000 | 341.1114 | 0.2776 |
| 339.9395 | 0.8 | 130000 | 326.4413 | 0.3018 |
### Framework versions
- Transformers 4.30.1
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Deojaklah/Memeyy
|
Deojaklah
| 2023-06-13T05:44:59Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-13T05:35:02Z |
---
license: creativeml-openrail-m
---
|
or90/results
|
or90
| 2023-06-13T05:39:16Z | 0 | 0 | null |
[
"generated_from_trainer",
"region:us"
] | null | 2023-06-13T05:34:48Z |
---
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 500
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
julianzy/CheckGPT
|
julianzy
| 2023-06-13T05:25:21Z | 0 | 1 | null |
[
"dataset:julianzy/GPABenchmark",
"region:us"
] | null | 2023-06-13T05:23:14Z |
---
datasets:
- julianzy/GPABenchmark
---
The official repository of the paper "Check Me If You Can: Detecting ChatGPT-Generated Academic Writing using CheckGPT".
|
Zemulax/masked-lm-tpu
|
Zemulax
| 2023-06-13T05:18:23Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"fill-mask",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-13T00:21:57Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: Zemulax/masked-lm-tpu
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Zemulax/masked-lm-tpu
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 7.7770
- Train Accuracy: 0.0241
- Validation Loss: 7.7589
- Validation Accuracy: 0.0230
- Epoch: 98
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 0.0001, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.0001, 'decay_steps': 223250, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 11750, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.001}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 10.2868 | 0.0 | 10.2891 | 0.0 | 0 |
| 10.2817 | 0.0000 | 10.2764 | 0.0 | 1 |
| 10.2772 | 0.0000 | 10.2667 | 0.0000 | 2 |
| 10.2604 | 0.0000 | 10.2521 | 0.0 | 3 |
| 10.2421 | 0.0000 | 10.2282 | 0.0000 | 4 |
| 10.2219 | 0.0 | 10.2010 | 0.0 | 5 |
| 10.1957 | 0.0 | 10.1669 | 0.0 | 6 |
| 10.1667 | 0.0000 | 10.1388 | 0.0000 | 7 |
| 10.1278 | 0.0000 | 10.0908 | 0.0000 | 8 |
| 10.0848 | 0.0000 | 10.0405 | 0.0001 | 9 |
| 10.0496 | 0.0002 | 9.9921 | 0.0007 | 10 |
| 9.9940 | 0.0010 | 9.9422 | 0.0039 | 11 |
| 9.9424 | 0.0035 | 9.8765 | 0.0110 | 12 |
| 9.8826 | 0.0092 | 9.8156 | 0.0182 | 13 |
| 9.8225 | 0.0155 | 9.7461 | 0.0209 | 14 |
| 9.7670 | 0.0201 | 9.6768 | 0.0222 | 15 |
| 9.7065 | 0.0219 | 9.6127 | 0.0222 | 16 |
| 9.6352 | 0.0227 | 9.5445 | 0.0220 | 17 |
| 9.5757 | 0.0226 | 9.4795 | 0.0219 | 18 |
| 9.4894 | 0.0232 | 9.3985 | 0.0222 | 19 |
| 9.4277 | 0.0234 | 9.3386 | 0.0222 | 20 |
| 9.3676 | 0.0229 | 9.2753 | 0.0220 | 21 |
| 9.2980 | 0.0229 | 9.2170 | 0.0219 | 22 |
| 9.2361 | 0.0233 | 9.1518 | 0.0219 | 23 |
| 9.1515 | 0.0236 | 9.0827 | 0.0223 | 24 |
| 9.1171 | 0.0228 | 9.0406 | 0.0218 | 25 |
| 9.0447 | 0.0234 | 8.9867 | 0.0218 | 26 |
| 9.0119 | 0.0229 | 8.9307 | 0.0221 | 27 |
| 8.9625 | 0.0229 | 8.8969 | 0.0221 | 28 |
| 8.9098 | 0.0230 | 8.8341 | 0.0223 | 29 |
| 8.8726 | 0.0227 | 8.8118 | 0.0220 | 30 |
| 8.8574 | 0.0223 | 8.7910 | 0.0219 | 31 |
| 8.7798 | 0.0231 | 8.7506 | 0.0221 | 32 |
| 8.7535 | 0.0231 | 8.7055 | 0.0222 | 33 |
| 8.7333 | 0.0228 | 8.6801 | 0.0223 | 34 |
| 8.6985 | 0.0231 | 8.6837 | 0.0220 | 35 |
| 8.6816 | 0.0229 | 8.6243 | 0.0223 | 36 |
| 8.6356 | 0.0228 | 8.6323 | 0.0217 | 37 |
| 8.6392 | 0.0225 | 8.5603 | 0.0225 | 38 |
| 8.5802 | 0.0233 | 8.5722 | 0.0219 | 39 |
| 8.5825 | 0.0228 | 8.5548 | 0.0220 | 40 |
| 8.5625 | 0.0228 | 8.5272 | 0.0220 | 41 |
| 8.5415 | 0.0228 | 8.5200 | 0.0222 | 42 |
| 8.5124 | 0.0230 | 8.4787 | 0.0222 | 43 |
| 8.4999 | 0.0229 | 8.4819 | 0.0218 | 44 |
| 8.4561 | 0.0235 | 8.4453 | 0.0221 | 45 |
| 8.4854 | 0.0223 | 8.4378 | 0.0220 | 46 |
| 8.4367 | 0.0229 | 8.4212 | 0.0222 | 47 |
| 8.4096 | 0.0232 | 8.4033 | 0.0221 | 48 |
| 8.4162 | 0.0228 | 8.3869 | 0.0221 | 49 |
| 8.4005 | 0.0229 | 8.3768 | 0.0218 | 50 |
| 8.3583 | 0.0235 | 8.3470 | 0.0224 | 51 |
| 8.3428 | 0.0235 | 8.3540 | 0.0221 | 52 |
| 8.3491 | 0.0231 | 8.3201 | 0.0225 | 53 |
| 8.3551 | 0.0231 | 8.3382 | 0.0221 | 54 |
| 8.3186 | 0.0231 | 8.3136 | 0.0219 | 55 |
| 8.3139 | 0.0226 | 8.2844 | 0.0222 | 56 |
| 8.3170 | 0.0229 | 8.2740 | 0.0221 | 57 |
| 8.2886 | 0.0231 | 8.2485 | 0.0223 | 58 |
| 8.2648 | 0.0233 | 8.2336 | 0.0223 | 59 |
| 8.2714 | 0.0225 | 8.2321 | 0.0221 | 60 |
| 8.2446 | 0.0233 | 8.2135 | 0.0223 | 61 |
| 8.2303 | 0.0230 | 8.1980 | 0.0223 | 62 |
| 8.2022 | 0.0237 | 8.1996 | 0.0222 | 63 |
| 8.2222 | 0.0227 | 8.1822 | 0.0222 | 64 |
| 8.1690 | 0.0236 | 8.2005 | 0.0220 | 65 |
| 8.1741 | 0.0233 | 8.1446 | 0.0226 | 66 |
| 8.1990 | 0.0224 | 8.1586 | 0.0219 | 67 |
| 8.1395 | 0.0236 | 8.1243 | 0.0225 | 68 |
| 8.1675 | 0.0229 | 8.1275 | 0.0222 | 69 |
| 8.1432 | 0.0229 | 8.1374 | 0.0217 | 70 |
| 8.1197 | 0.0234 | 8.1078 | 0.0221 | 71 |
| 8.1046 | 0.0232 | 8.0991 | 0.0221 | 72 |
| 8.1013 | 0.0231 | 8.0794 | 0.0222 | 73 |
| 8.0887 | 0.0228 | 8.0720 | 0.0221 | 74 |
| 8.0661 | 0.0233 | 8.0573 | 0.0222 | 75 |
| 8.0548 | 0.0231 | 8.0313 | 0.0226 | 76 |
| 8.0307 | 0.0235 | 8.0278 | 0.0222 | 77 |
| 8.0626 | 0.0226 | 8.0084 | 0.0224 | 78 |
| 8.0276 | 0.0229 | 8.0099 | 0.0221 | 79 |
| 8.0213 | 0.0231 | 7.9930 | 0.0222 | 80 |
| 7.9798 | 0.0237 | 7.9742 | 0.0224 | 81 |
| 8.0135 | 0.0226 | 7.9857 | 0.0218 | 82 |
| 7.9500 | 0.0235 | 7.9505 | 0.0223 | 83 |
| 7.9519 | 0.0234 | 7.9711 | 0.0217 | 84 |
| 7.9616 | 0.0228 | 7.9288 | 0.0223 | 85 |
| 7.9803 | 0.0225 | 7.8997 | 0.0226 | 86 |
| 7.9369 | 0.0227 | 7.9015 | 0.0225 | 87 |
| 7.9309 | 0.0229 | 7.9010 | 0.0224 | 88 |
| 7.9367 | 0.0226 | 7.8988 | 0.0220 | 89 |
| 7.8840 | 0.0230 | 7.8774 | 0.0216 | 90 |
| 7.8785 | 0.0233 | 7.8527 | 0.0225 | 91 |
| 7.8998 | 0.0226 | 7.8509 | 0.0219 | 92 |
| 7.8451 | 0.0232 | 7.8488 | 0.0221 | 93 |
| 7.8596 | 0.0231 | 7.8310 | 0.0222 | 94 |
| 7.8434 | 0.0231 | 7.8168 | 0.0229 | 95 |
| 7.7929 | 0.0238 | 7.7815 | 0.0233 | 96 |
| 7.8174 | 0.0236 | 7.7857 | 0.0232 | 97 |
| 7.7770 | 0.0241 | 7.7589 | 0.0230 | 98 |
### Framework versions
- Transformers 4.30.1
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
wiorz/gpt2_sm_gen1_large
|
wiorz
| 2023-06-13T05:16:26Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-10T02:02:59Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: gpt2_sm_gen1_large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_sm_gen1_large
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4824
- Accuracy: 0.8063
- Precision: 0.5094
- Recall: 0.3114
- F1: 0.3865
- D-index: 1.5483
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 96000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | D-index |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.5028 | 1.0 | 3000 | 0.5183 | 0.8039 | 0.4872 | 0.0162 | 0.0313 | 1.4419 |
| 0.4442 | 2.0 | 6000 | 0.4597 | 0.8113 | 0.6126 | 0.0995 | 0.1712 | 1.4819 |
| 0.415 | 3.0 | 9000 | 0.4217 | 0.8202 | 0.6309 | 0.1978 | 0.3012 | 1.5284 |
| 0.4047 | 4.0 | 12000 | 0.4365 | 0.8228 | 0.6682 | 0.1901 | 0.2960 | 1.5294 |
| 0.3827 | 5.0 | 15000 | 0.4141 | 0.8289 | 0.6502 | 0.2744 | 0.3859 | 1.5663 |
| 0.3527 | 6.0 | 18000 | 0.4357 | 0.8284 | 0.6320 | 0.2973 | 0.4044 | 1.5733 |
| 0.336 | 7.0 | 21000 | 0.4322 | 0.8285 | 0.6202 | 0.3216 | 0.4235 | 1.5815 |
| 0.3051 | 8.0 | 24000 | 0.4696 | 0.8259 | 0.6076 | 0.3148 | 0.4147 | 1.5758 |
| 0.2745 | 9.0 | 27000 | 0.4957 | 0.8164 | 0.5431 | 0.3969 | 0.4586 | 1.5903 |
| 0.2435 | 10.0 | 30000 | 0.5369 | 0.8151 | 0.5391 | 0.3871 | 0.4506 | 1.5853 |
| 0.2182 | 11.0 | 33000 | 0.6251 | 0.8176 | 0.5559 | 0.3428 | 0.4241 | 1.5740 |
| 0.2031 | 12.0 | 36000 | 0.6869 | 0.795 | 0.4760 | 0.4590 | 0.4673 | 1.5820 |
| 0.188 | 13.0 | 39000 | 0.8867 | 0.8147 | 0.5600 | 0.2522 | 0.3478 | 1.5396 |
| 0.1738 | 14.0 | 42000 | 1.0311 | 0.8077 | 0.5149 | 0.3152 | 0.3910 | 1.5514 |
| 0.1495 | 15.0 | 45000 | 1.2024 | 0.8053 | 0.5039 | 0.3815 | 0.4343 | 1.5703 |
| 0.1415 | 16.0 | 48000 | 1.3324 | 0.8045 | 0.5013 | 0.4015 | 0.4459 | 1.5759 |
| 0.1275 | 17.0 | 51000 | 1.5071 | 0.8051 | 0.5038 | 0.3416 | 0.4071 | 1.5568 |
| 0.1139 | 18.0 | 54000 | 1.4309 | 0.8053 | 0.5047 | 0.3177 | 0.3900 | 1.5490 |
| 0.1111 | 19.0 | 57000 | 1.5033 | 0.8082 | 0.5154 | 0.3496 | 0.4166 | 1.5636 |
| 0.1124 | 20.0 | 60000 | 1.4824 | 0.8063 | 0.5094 | 0.3114 | 0.3865 | 1.5483 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gsn-codes/a2c-PandaReachDense-v2
|
gsn-codes
| 2023-06-13T04:34:33Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-13T04:31:51Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.55 +/- 0.47
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
danielthomas45a/pure-frankincense-essential-oils
|
danielthomas45a
| 2023-06-13T04:20:50Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-06-13T04:04:06Z |
---
license: openrail
---
Experience the captivating essence of nature with our <a href="https://www.amazon.com/Pure-Frankincense-Essential-Oil-for-Pain-Skin/dp/B076P3XYGX">pure frankincense essential oils</a>. ✨ Uncover the secrets of this ancient treasure, known for its remarkable therapeutic properties and heavenly aroma. Elevate your senses and embark on a journey of serenity and rejuvenation.
|
irfanamal/distilroberta-base-finetuned-wikitext2
|
irfanamal
| 2023-06-13T04:18:14Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-12T11:00:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1005 | 1.0 | 1203 | 1.9467 |
| 2.034 | 2.0 | 2406 | 1.8616 |
| 1.9683 | 3.0 | 3609 | 1.8253 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Dans-Archive/Dans-PersonalityEngine-13b
|
Dans-Archive
| 2023-06-13T04:14:23Z | 53 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-11T23:42:20Z |
---
language:
- en
---
### Description:
This is a multipurpose chat / chat instruct hybrid model in the same vein as the Pygmalion team's Metharme. It uses a curated pile of training data that has been normalized into a consistent training format. It has been trained on a wide array of one shot instructions, multi round instructions, and role playing scenarios.
### Prompt format:
Metharme
The prompt should start with the cursor on the same line directly after "<|model|>" with no space. The following are all valid formats and can be extended to as many rounds as desired.
```
<|system|>system message here<|user|>user message here<|model|>
```
```
<|system|>system message here<|user|>user message here<|model|>model message<|user|>user message here<|model|>
```
```
<|system|>system message here<|model|>
```
```
<|system|>system message here<|model|>model message<|user|>user message here<|model|>
```
Some example prompts:
```
<|system|>The following is a transcript between a helpful assistant and a user.<|user|>Why is the sky blue?<|model|>
```
```
<|system|>You are a Virtual Story Generator. You take the user's input and create an excellent and captivating story that goes in that direction. Use an abundance of sensory descriptions and eloquent prose.<|user|>Alpha Centauri has fallen, to the bears. This is a point of view tale about a soldier on the ground.<|model|>
```
```
<|system|>You are a professional editor with decades of experience, help the user with any task they have for you.<|user|>Can you rewrite this to flow better? "I knew I probably shouldnt have done that but oh well"<|model|>
```
More will be added at a later date.
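A hedged generation sketch using the Metharme format above (not from the original card): the generation settings are placeholders, `device_map="auto"` requires accelerate, and a 13B model in fp16 needs roughly 26 GB of memory, so quantization choices are left to the reader.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Dans-Archive/Dans-PersonalityEngine-13b"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

# One-shot instruction in the Metharme format, taken from the examples above.
prompt = "<|system|>The following is a transcript between a helpful assistant and a user.<|user|>Why is the sky blue?<|model|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```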
### Perplexity Benchmarks:
- TBA
### Training information:
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="150" height="24"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- GPTQ 4 bit LoRA
- 7 Epochs
- 64 / 32 R / A
- 2048 Cutoff
- 18 hours on 4x RTX 4090s
### Data used in training:
- TBA
### Models used:
For training:
https://huggingface.co/PocketDoc/llama-13b-gptq-4bit-128g
For merging:
https://huggingface.co/PocketDoc/Dans-PersonalityEngine-13b-LoRA
and
https://huggingface.co/huggyllama/llama-13b
### Disclaimer:
It has not been aligned and no warranty is given for the quality or safety of its outputs.
|
ugiugi/inisw08-DistilBERT-mlm-adagrad
|
ugiugi
| 2023-06-13T04:01:11Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-13T02:14:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: inisw08-RoBERT-mlm-adagrad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# inisw08-RoBERT-mlm-adagrad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8605
- Accuracy: 0.3698
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_120
|
gokuls
| 2023-06-13T03:59:38Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-11T21:10:30Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_12_layer_model_v2_complete_training_new_wt_init_120
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_12_layer_model_v2_complete_training_new_wt_init_120
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_96](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_96) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3044
- Accuracy: 0.5675
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 2.5079 | 0.08 | 10000 | 2.4011 | 0.5539 |
| 2.4953 | 0.16 | 20000 | 2.3921 | 0.5553 |
| 2.484 | 0.25 | 30000 | 2.3823 | 0.5568 |
| 2.4828 | 0.33 | 40000 | 2.3711 | 0.5582 |
| 2.4639 | 0.41 | 50000 | 2.3587 | 0.5598 |
| 2.4572 | 0.49 | 60000 | 2.3521 | 0.5610 |
| 2.4385 | 0.57 | 70000 | 2.3430 | 0.5626 |
| 2.4307 | 0.66 | 80000 | 2.3337 | 0.5633 |
| 2.4162 | 0.74 | 90000 | 2.3208 | 0.5647 |
| 2.4088 | 0.82 | 100000 | 2.3133 | 0.5663 |
| 2.4139 | 0.9 | 110000 | 2.3044 | 0.5675 |
### Framework versions
- Transformers 4.30.1
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Alwin114/my_awesome_wnut_model
|
Alwin114
| 2023-06-13T03:53:25Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-13T03:49:53Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Alwin114/my_awesome_wnut_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Alwin114/my_awesome_wnut_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1251
- Validation Loss: 0.2613
- Train Precision: 0.5636
- Train Recall: 0.4079
- Train F1: 0.4733
- Train Accuracy: 0.9449
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 636, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.3473 | 0.3059 | 0.3825 | 0.2667 | 0.3143 | 0.9352 | 0 |
| 0.1626 | 0.2656 | 0.5075 | 0.3648 | 0.4245 | 0.9418 | 1 |
| 0.1251 | 0.2613 | 0.5636 | 0.4079 | 0.4733 | 0.9449 | 2 |
### Framework versions
- Transformers 4.30.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
silpakanneganti/biobert-finetuned-squad-insurance
|
silpakanneganti
| 2023-06-13T03:47:55Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-12T10:07:55Z |
---
tags:
- generated_from_trainer
model-index:
- name: biobert-finetuned-squad-insurance
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-finetuned-squad-insurance
This model is a fine-tuned version of [dmis-lab/biobert-large-cased-v1.1-squad](https://huggingface.co/dmis-lab/biobert-large-cased-v1.1-squad) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gsn-codes/a2c-AntBulletEnv-v0
|
gsn-codes
| 2023-06-13T03:43:09Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-13T02:20:41Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1407.58 +/- 108.66
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the files in this repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub; the filename is assumed, adjust to match the repo.
checkpoint = load_from_hub(repo_id="gsn-codes/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
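
Continuing from the snippet above, the policy can then be evaluated. This assumes `pybullet` and `pybullet_envs` are installed so that `AntBulletEnv-v0` is registered with gym, and note that any normalization wrapper used during training is not restored here:

```python
import gym
import pybullet_envs  # registers AntBulletEnv-v0 with gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("AntBulletEnv-v0")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```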
| gokuls/bert_12_layer_model_v1_complete_training_new_120 | gokuls | 2023-06-13T03:39:56Z | 50 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "hybridbert", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2023-06-11T21:08:17Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_12_layer_model_v1_complete_training_new_120
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_12_layer_model_v1_complete_training_new_120
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_96](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_96) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2643
- Accuracy: 0.5796
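
Usage is not documented; a minimal `fill-mask` pipeline sketch is shown below. The custom `hybridbert` architecture may require `trust_remote_code=True` (and remote code to be present in the repo), so treat this as an assumption:

```python
from transformers import pipeline

# Usage sketch; trust_remote_code is assumed to be needed for the custom hybridbert architecture.
fill_mask = pipeline(
    "fill-mask",
    model="gokuls/bert_12_layer_model_v1_complete_training_new_120",
    trust_remote_code=True,
)
print(fill_mask("Paris is the capital of [MASK]."))
```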
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 2.4425 | 0.08 | 10000 | 2.3838 | 0.5641 |
| 2.4415 | 0.16 | 20000 | 2.3705 | 0.5658 |
| 2.4103 | 0.25 | 30000 | 2.3537 | 0.5680 |
| 2.4068 | 0.33 | 40000 | 2.3430 | 0.5696 |
| 2.3823 | 0.41 | 50000 | 2.3249 | 0.5719 |
| 2.3729 | 0.49 | 60000 | 2.3141 | 0.5733 |
| 2.3516 | 0.57 | 70000 | 2.2986 | 0.5751 |
| 2.342 | 0.66 | 80000 | 2.2878 | 0.5764 |
| 2.3265 | 0.74 | 90000 | 2.2734 | 0.5782 |
| 2.3158 | 0.82 | 100000 | 2.2643 | 0.5796 |
### Framework versions
- Transformers 4.30.1
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
| gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_120 | gokuls | 2023-06-13T03:39:09Z | 53 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "hybridbert", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2023-06-11T21:07:16Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_12_layer_model_v1_complete_training_new_wt_init_120
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_12_layer_model_v1_complete_training_new_wt_init_120
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_96](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_96) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1966
- Accuracy: 0.5856
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 2.3673 | 0.08 | 10000 | 2.2852 | 0.5732 |
| 2.356 | 0.16 | 20000 | 2.2772 | 0.5744 |
| 2.3424 | 0.25 | 30000 | 2.2640 | 0.5765 |
| 2.3442 | 0.33 | 40000 | 2.2525 | 0.5778 |
| 2.3228 | 0.41 | 50000 | 2.2427 | 0.5793 |
| 2.3179 | 0.49 | 60000 | 2.2313 | 0.5810 |
| 2.2993 | 0.57 | 70000 | 2.2237 | 0.5822 |
| 2.2911 | 0.66 | 80000 | 2.2128 | 0.5831 |
| 2.279 | 0.74 | 90000 | 2.2008 | 0.5842 |
| 2.2715 | 0.82 | 100000 | 2.1966 | 0.5856 |
### Framework versions
- Transformers 4.30.1
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
| leeboykt/Reinforce-unit4_001 | leeboykt | 2023-06-13T03:37:41Z | 0 | 0 | null | ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us"] | reinforcement-learning | 2023-06-13T03:37:32Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-unit4_001
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 210.10 +/- 208.34
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
| zweedao/instruct-pix2pix | zweedao | 2023-06-13T03:32:24Z | 8 | 0 | diffusers | ["diffusers", "safetensors", "image-to-image", "license:mit", "diffusers:StableDiffusionInstructPix2PixPipeline", "region:us"] | image-to-image | 2023-05-22T06:44:29Z |
---
license: mit
tags:
- image-to-image
duplicated_from: timbrooks/instruct-pix2pix
---
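
The card body is otherwise empty; since this repository is a duplicate of timbrooks/instruct-pix2pix, a minimal diffusers usage sketch is given below (the input image path and the edit prompt are placeholders):

```python
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image

# Usage sketch; "input.jpg" and the edit prompt are placeholders.
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "zweedao/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")
image = load_image("input.jpg")
edited = pipe("turn the sky into a sunset", image=image).images[0]
edited.save("edited.png")
```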
|