| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
| allenai/open-instruct-unnatural-instructions-7b | allenai | 2023-06-20T17:50:10Z | 15 | 1 | transformers | ["transformers", "pytorch", "llama", "text-generation", "en", "dataset:mrm8488/unnatural-instructions", "arxiv:2306.04751", "arxiv:2302.13971", "arxiv:2212.09689", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-06-07T17:08:23Z | (see card below) |
---
datasets:
- mrm8488/unnatural-instructions
language:
- en
---
# Open-Instruct Unnatural Instructions 7B
This model is a 7B LLaMa model finetuned on the Unnatural Instructions dataset. *Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
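Once recovered, the weights at `--path_tuned` load like any other Hugging Face causal LM. A minimal sketch with placeholder paths, assuming `transformers` and `torch` are installed (loading with `device_map="auto"` additionally requires `accelerate`):
```python
# Minimal sketch: load the recovered (diff-applied) weights with transformers.
# `output_path` is a placeholder for the --path_tuned directory written by weight_diff.py.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

output_path = "/path/to/recovered-model"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(output_path)
model = AutoModelForCausalLM.from_pretrained(
    output_path,
    torch_dtype=torch.float16,  # assumption: half precision to reduce memory use
    device_map="auto",          # requires accelerate; remove to load on CPU
)
```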
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner.
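A minimal sketch of applying this template in Python, assuming the recovered `model` and `tokenizer` from the usage section above:
```python
# Minimal sketch: wrap a user message in the <|user|>/<|assistant|> template and generate.
prompt = "<|user|>\nWhat is the capital of France?\n<|assistant|>\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
# Strip the prompt tokens so only the assistant's reply is printed.
reply = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(reply)
```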
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 42.9 | 38.1 | 3.5 | 5.0 | 31.4 | 30.0 | 36.3 | 6.5 | 10.3 | 19.8 | 8.2 | 20.0 |
If you use this model, please cite our work, the llama paper, and the original dataset:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{honovich2022unnatural,
title = {Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor},
author = {Honovich, Or and Scialom, Thomas and Levy, Omer and Schick, Timo},
url = {https://arxiv.org/abs/2212.09689},
publisher = {arXiv},
year={2022}
}
```
| allenai/open-instruct-code-alpaca-7b | allenai | 2023-06-20T17:49:49Z | 32 | 2 | transformers | ["transformers", "pytorch", "llama", "text-generation", "en", "dataset:sahil2801/CodeAlpaca-20k", "arxiv:2306.04751", "arxiv:2302.13971", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-06-07T17:06:24Z | (see card below) |
---
datasets:
- sahil2801/CodeAlpaca-20k
language:
- en
---
# Open-Instruct Code Alpaca 7B
This model is a 7B LLaMa model finetuned on the Code Alpaca dataset. *Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner.
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 34.7 | 34.5 | 6.5 | 7.5 | 29.6 | 30.5 | 36.7 | 10.5 | 16.5 | 29.2 | 17.5 | 22.6 |
If you use this model, please cite our work, the llama paper, and the original dataset:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{codealpaca,
author = {Sahil Chaudhary},
title = {Code Alpaca: An Instruction-following LLaMA model for code generation},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/sahil280114/codealpaca}},
}
```
| allenai/open-instruct-code-alpaca-13b | allenai | 2023-06-20T17:49:39Z | 18 | 0 | transformers | ["transformers", "pytorch", "llama", "text-generation", "en", "dataset:sahil2801/CodeAlpaca-20k", "arxiv:2306.04751", "arxiv:2302.13971", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-06-07T17:17:16Z | (see card below) |
---
datasets:
- sahil2801/CodeAlpaca-20k
language:
- en
---
# Open-Instruct Code Alpaca 13B
This model is a 13B LLaMa model finetuned on the Code Alpaca dataset. *Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner.
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 42.6 | 44.3 | 5.0 | 12.0 | 35.5 | 36.6 | 41.3 | 10.9 | 20.1 | 34.5 | 19.4 | 26.8 |
If you use this model, please cite our work, the llama paper, and the original dataset:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{codealpaca,
author = {Sahil Chaudhary},
title = {Code Alpaca: An Instruction-following LLaMA model for code generation},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/sahil280114/codealpaca}},
}
```
| MireP/remon-polyglot-5.8b-qlora-8000 | MireP | 2023-06-20T17:48:48Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-06-20T17:48:43Z | (see card below) |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
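For reference, a minimal sketch of the same settings expressed as a `transformers` `BitsAndBytesConfig` (an illustration of the listed values, not the original training script):
```python
# Sketch: the 4-bit NF4 configuration listed above, expressed as a BitsAndBytesConfig.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load_in_4bit: True, load_in_8bit: False
    bnb_4bit_quant_type="nf4",              # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=True,         # bnb_4bit_use_double_quant: True
    bnb_4bit_compute_dtype=torch.bfloat16,  # bnb_4bit_compute_dtype: bfloat16
)
# Pass quantization_config=bnb_config to AutoModelForCausalLM.from_pretrained(...)
# to load the base model with the same quantization settings.
```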
### Framework versions
- PEFT 0.4.0.dev0
| allenai/tulu-13b | allenai | 2023-06-20T17:48:04Z | 27 | 8 | transformers | ["transformers", "pytorch", "llama", "text-generation", "en", "dataset:databricks/databricks-dolly-15k", "dataset:OpenAssistant/oasst1", "dataset:sahil2801/CodeAlpaca-20k", "arxiv:2306.04751", "arxiv:2302.13971", "arxiv:2301.13688", "arxiv:2304.07327", "arxiv:2304.03277", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-06-07T23:46:18Z | (see card below) |
---
datasets:
- databricks/databricks-dolly-15k
- OpenAssistant/oasst1
- sahil2801/CodeAlpaca-20k
language:
- en
---
# Tulu 13B
This model is a 13B LLaMa model finetuned on a mixture of instruction datasets (FLAN V2, CoT, Dolly, Open Assistant 1, GPT4-Alpaca, Code-Alpaca, and ShareGPT).
*Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`; this can affect generation quality quite a bit.**
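A tiny sketch of a prompt builder that preserves that trailing newline (the helper name is illustrative):
```python
# Sketch: build the chat prompt and keep the newline after <|assistant|>,
# since dropping it can noticeably hurt generation quality.
def build_prompt(user_message: str) -> str:
    return f"<|user|>\n{user_message}\n<|assistant|>\n"

assert build_prompt("Hello!").endswith("<|assistant|>\n")
```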
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 49.2 | 51.8 | 5.0 | 36.5 | 41.3 | 42.8 | 46.1 | 9.2 | 21.3 | 35.0 | 53.9 | 37.2 |
If you use this model, please cite our work, the llama paper, and the original datasets:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{dolly,
author = {Databricks},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {Blog post},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm}
}
```
```
@article{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Longpre, Shayne and Hou, Le and Vu, Tu and Webson, Albert and Chung, Hyung Won and Tay, Yi and Zhou, Denny and Le, Quoc V and Zoph, Barret and Wei, Jason and others},
journal={arXiv preprint arXiv:2301.13688},
year={2023}
}
```
```
@misc{köpf2023openassistant,
title={OpenAssistant Conversations -- Democratizing Large Language Model Alignment},
author={Andreas Köpf and Yannic Kilcher and Dimitri von Rütte and Sotiris Anagnostidis and Zhi-Rui Tam and Keith Stevens and Abdullah Barhoum and Nguyen Minh Duc and Oliver Stanley and Richárd Nagyfi and Shahul ES and Sameer Suri and David Glushkov and Arnav Dantuluri and Andrew Maguire and Christoph Schuhmann and Huu Nguyen and Alexander Mattick},
year={2023},
eprint={2304.07327},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@article{peng2023instruction,
title={Instruction Tuning with GPT-4},
author={Peng, Baolin and Li, Chunyuan and He, Pengcheng and Galley, Michel and Gao, Jianfeng},
journal={arXiv preprint arXiv:2304.03277},
year={2023}
}
```
```
@misc{codealpaca,
author = {Sahil Chaudhary},
title = {Code Alpaca: An Instruction-following LLaMA model for code generation},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/sahil280114/codealpaca}},
}
```
| gsn-codes/poca-SoccerTwos | gsn-codes | 2023-06-20T17:47:56Z | 35 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us"] | reinforcement-learning | 2023-06-20T17:42:24Z | (see card below) |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* on understanding how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is one of the official ML-Agents environments, go to https://huggingface.co/unity
2. Find your model_id: gsn-codes/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
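If you want the exported policy file locally instead (for example, to drop it into your own Unity project), here is a minimal sketch using `huggingface_hub`; the filename is an assumption and should be checked against this repository's file list:
```python
# Sketch: download the exported policy from this repo.
# "SoccerTwos.onnx" is a hypothetical filename; verify it on the model page.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="gsn-codes/poca-SoccerTwos",
    filename="SoccerTwos.onnx",
)
print(local_path)
```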
| allenai/open-instruct-human-mix-30b | allenai | 2023-06-20T17:47:47Z | 25 | 1 | transformers | ["transformers", "pytorch", "llama", "text-generation", "en", "dataset:databricks/databricks-dolly-15k", "dataset:OpenAssistant/oasst1", "arxiv:2306.04751", "arxiv:2302.13971", "arxiv:2301.13688", "arxiv:2304.07327", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-06-07T17:35:33Z | (see card below) |
---
datasets:
- databricks/databricks-dolly-15k
- OpenAssistant/oasst1
language:
- en
---
# Open-Instruct Human-mix 30B
This model is a 30B LLaMa model finetuned on a mixture of human-authored datasets (FLAN V2, CoT, Dolly, and Open Assistant 1). *Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`; this can affect generation quality quite a bit.**
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 56.3 | 58.9 | 6.5 | 49.5 | 46.6 | 47.8 | 58.9 | 12.7 | 22.6 | 39.4 | 44.6 | 40.7 |
If you use this model, please cite our work, the llama paper, and the original datasets:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{dolly,
author = {Databricks},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {Blog post},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm}
}
```
```
@article{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Longpre, Shayne and Hou, Le and Vu, Tu and Webson, Albert and Chung, Hyung Won and Tay, Yi and Zhou, Denny and Le, Quoc V and Zoph, Barret and Wei, Jason and others},
journal={arXiv preprint arXiv:2301.13688},
year={2023}
}
```
```
@misc{köpf2023openassistant,
title={OpenAssistant Conversations -- Democratizing Large Language Model Alignment},
author={Andreas Köpf and Yannic Kilcher and Dimitri von Rütte and Sotiris Anagnostidis and Zhi-Rui Tam and Keith Stevens and Abdullah Barhoum and Nguyen Minh Duc and Oliver Stanley and Richárd Nagyfi and Shahul ES and Sameer Suri and David Glushkov and Arnav Dantuluri and Andrew Maguire and Christoph Schuhmann and Huu Nguyen and Alexander Mattick},
year={2023},
eprint={2304.07327},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| allenai/open-instruct-sharegpt-13b | allenai | 2023-06-20T17:47:15Z | 25 | 0 | transformers | ["transformers", "pytorch", "llama", "text-generation", "en", "dataset:anon8231489123/ShareGPT_Vicuna_unfiltered", "arxiv:2306.04751", "arxiv:2302.13971", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-06-07T23:46:17Z | (see card below) |
---
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
language:
- en
---
# Open-Instruct ShareGPT 13B
This model is a 13B LLaMa model finetuned on the ShareGPT dataset (cleaned in a similar manner to Vicuna). *Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`; this can affect generation quality quite a bit.**
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 49.2 | 47.4 | 7.0 | 16.0 | 23.6 | 40.1 | 30.1 | 8.3 | 16.1 | 31.6 | 68.9 | 33.9 |
If you use this model, please cite our work and the llama paper:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| allenai/open-instruct-sharegpt-7b | allenai | 2023-06-20T17:47:00Z | 31 | 0 | transformers | ["transformers", "pytorch", "llama", "text-generation", "en", "dataset:anon8231489123/ShareGPT_Vicuna_unfiltered", "arxiv:2306.04751", "arxiv:2302.13971", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-06-07T17:06:01Z | (see card below) |
---
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
language:
- en
---
# Open-Instruct ShareGPT 7B
This model is a 7B LLaMa model finetuned on the ShareGPT dataset (cleaned in a similar manner to Vicuna). *Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`; this can affect generation quality quite a bit.**
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 44.3 | 40.0 | 8.0 | 9.5 | 5.2 | 32.6 | 25.5 | 8.9 | 10.9 | 21.6 | 58.3 | 26.9 |
If you use this model, please cite our work and the llama paper:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| allenai/open-instruct-human-mix-13b | allenai | 2023-06-20T17:46:47Z | 20 | 0 | transformers | ["transformers", "pytorch", "llama", "text-generation", "en", "dataset:databricks/databricks-dolly-15k", "dataset:OpenAssistant/oasst1", "arxiv:2306.04751", "arxiv:2302.13971", "arxiv:2301.13688", "arxiv:2304.07327", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-06-07T23:41:23Z | (see card below) |
---
datasets:
- databricks/databricks-dolly-15k
- OpenAssistant/oasst1
language:
- en
---
# Open-Instruct Human-mix 13B
This model is a 13B LLaMa model finetuned on a mixture of human-authored datasets (FLAN V2, CoT, Dolly, and Open Assistant 1). *Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`; this can affect generation quality quite a bit.**
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 50.4 | 51.2 | 6.5 | 36.5 | 40.6 | 39.4 | 49.8 | 9.3 | 11.3 | 23.7 | 36.3 | 32.6 |
If you use this model, please cite our work, the llama paper, and the original datasets:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{dolly,
author = {Databricks},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {Blog post},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm}
}
```
```
@article{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Longpre, Shayne and Hou, Le and Vu, Tu and Webson, Albert and Chung, Hyung Won and Tay, Yi and Zhou, Denny and Le, Quoc V and Zoph, Barret and Wei, Jason and others},
journal={arXiv preprint arXiv:2301.13688},
year={2023}
}
```
```
@misc{köpf2023openassistant,
title={OpenAssistant Conversations -- Democratizing Large Language Model Alignment},
author={Andreas Köpf and Yannic Kilcher and Dimitri von Rütte and Sotiris Anagnostidis and Zhi-Rui Tam and Keith Stevens and Abdullah Barhoum and Nguyen Minh Duc and Oliver Stanley and Richárd Nagyfi and Shahul ES and Sameer Suri and David Glushkov and Arnav Dantuluri and Andrew Maguire and Christoph Schuhmann and Huu Nguyen and Alexander Mattick},
year={2023},
eprint={2304.07327},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| allenai/open-instruct-human-mix-7b | allenai | 2023-06-20T17:46:37Z | 15 | 0 | transformers | ["transformers", "pytorch", "llama", "text-generation", "en", "dataset:databricks/databricks-dolly-15k", "dataset:OpenAssistant/oasst1", "arxiv:2306.04751", "arxiv:2302.13971", "arxiv:2301.13688", "arxiv:2304.07327", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-06-07T17:06:39Z | (see card below) |
---
datasets:
- databricks/databricks-dolly-15k
- OpenAssistant/oasst1
language:
- en
---
# Open-Instruct Human-mix 7B
This model is a 7B LLaMa model finetuned on a mixture of human-authored datasets (FLAN V2, CoT, Dolly, and Open Assistant 1). *Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`; this can affect generation quality quite a bit.**
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 46.2 | 48.0 | 4.5 | 26.5 | 35.6 | 34.8 | 42.2 | 7.7 | 9.4 | 20.2 | 29.4 | 27.8 |
If you use this model, please cite our work, the llama paper, and the original datasets:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{dolly,
author = {Databricks},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {Blog post},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm}
}
```
```
@article{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Longpre, Shayne and Hou, Le and Vu, Tu and Webson, Albert and Chung, Hyung Won and Tay, Yi and Zhou, Denny and Le, Quoc V and Zoph, Barret and Wei, Jason and others},
journal={arXiv preprint arXiv:2301.13688},
year={2023}
}
```
```
@misc{köpf2023openassistant,
title={OpenAssistant Conversations -- Democratizing Large Language Model Alignment},
author={Andreas Köpf and Yannic Kilcher and Dimitri von Rütte and Sotiris Anagnostidis and Zhi-Rui Tam and Keith Stevens and Abdullah Barhoum and Nguyen Minh Duc and Oliver Stanley and Richárd Nagyfi and Shahul ES and Sameer Suri and David Glushkov and Arnav Dantuluri and Andrew Maguire and Christoph Schuhmann and Huu Nguyen and Alexander Mattick},
year={2023},
eprint={2304.07327},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| allenai/open-instruct-gpt4-alpaca-13b | allenai | 2023-06-20T17:45:58Z | 24 | 0 | transformers | ["transformers", "pytorch", "llama", "text-generation", "en", "arxiv:2306.04751", "arxiv:2302.13971", "arxiv:2304.03277", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-06-07T17:16:33Z | (see card below) |
---
language:
- en
---
# Open-Instruct GPT-4 Alpaca 13B
This model is a 13B LLaMa model finetuned on the GPT-4 Alpaca dataset. *Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`; this can affect generation quality quite a bit.**
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 47.0 | 46.9 | 7.5 | 14.0 | 34.9 | 38.3 | 24.4 | 6.1 | 15.8 | 32.5 | 61.1 | 32.5 |
If you use this model, please cite our work, the llama paper, and the original dataset:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@article{peng2023instruction,
title={Instruction Tuning with GPT-4},
author={Peng, Baolin and Li, Chunyuan and He, Pengcheng and Galley, Michel and Gao, Jianfeng},
journal={arXiv preprint arXiv:2304.03277},
year={2023}
}
```
| allenai/open-instruct-cot-7b | allenai | 2023-06-20T17:45:42Z | 31 | 1 | transformers | ["transformers", "pytorch", "llama", "text-generation", "en", "arxiv:2306.04751", "arxiv:2302.13971", "arxiv:2301.13688", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-06-07T17:05:59Z | (see card below) |
---
language:
- en
---
# Open-Instruct CoT 7B
This model is a 7B LLaMa model finetuned on the CoT dataset (a subset of Flan v2). *Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`; this can affect generation quality quite a bit.**
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 41.3 | 42.5 | 6.5 | 27.5 | 33.7 | 31.3 | 44.4 | 8.5 | 7.4 | 17.3 | 4.2 | 22.4 |
If you use this model, please cite our work, the llama paper, and the original dataset:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@article{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Longpre, Shayne and Hou, Le and Vu, Tu and Webson, Albert and Chung, Hyung Won and Tay, Yi and Zhou, Denny and Le, Quoc V and Zoph, Barret and Wei, Jason and others},
journal={arXiv preprint arXiv:2301.13688},
year={2023}
}
```
| allenai/open-instruct-cot-13b | allenai | 2023-06-20T17:44:57Z | 28 | 0 | transformers | ["transformers", "pytorch", "llama", "text-generation", "en", "arxiv:2306.04751", "arxiv:2302.13971", "arxiv:2301.13688", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-06-07T17:15:49Z | (see card below) |
---
language:
- en
---
# Open-Instruct CoT 13B
This model is a 13B LLaMa model finetuned on the CoT dataset (a subset of Flan v2). *Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`; this can affect generation quality quite a bit.**
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 44.5 | 45.4 | 9.5 | 39.5 | 38.6 | 39.0 | 52.2 | 11.5 | 11.5 | 23.3 | 5.6 | 27.2 |
If you use this model, please cite our work, the llama paper, and the original dataset:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@article{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Longpre, Shayne and Hou, Le and Vu, Tu and Webson, Albert and Chung, Hyung Won and Tay, Yi and Zhou, Denny and Le, Quoc V and Zoph, Barret and Wei, Jason and others},
journal={arXiv preprint arXiv:2301.13688},
year={2023}
}
```
| allenai/open-instruct-baize-7b | allenai | 2023-06-20T17:44:44Z | 19 | 0 | transformers | ["transformers", "pytorch", "llama", "text-generation", "en", "arxiv:2306.04751", "arxiv:2302.13971", "arxiv:2304.01196", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-06-07T17:06:06Z | (see card below) |
---
language:
- en
---
# Open-Instruct Baize 7B
This model is a 7B LLaMa model finetuned on the Baize dataset. *Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`; this can affect generation quality quite a bit.**
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 40.3 | 38.6 | 3.5 | 5.5 | 30.6 | 32.4 | 29.8 | 7.9 | 12.2 | 23.8 | 23.5 | 22.6 |
If you use this model, please cite our work, the llama paper, and the original dataset:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@article{xu2023baize,
title={Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data},
author={Xu, Canwen and Guo, Daya and Duan, Nan and McAuley, Julian},
journal={arXiv preprint arXiv:2304.01196},
year={2023}
}
```
|
allenai/open-instruct-oasst1-7b
|
allenai
| 2023-06-20T17:44:05Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:OpenAssistant/oasst1",
"arxiv:2306.04751",
"arxiv:2302.13971",
"arxiv:2304.07327",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-07T17:04:11Z |
---
datasets:
- OpenAssistant/oasst1
language:
- en
---
# Open-Instruct Open Assistant 7B
This model is a 7B LLaMa model finetuned on the Open Assistant dataset. *Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`; this can affect generation quality quite a bit.**
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 32.9 | 29.7 | 6.0 | 6.5 | 20.4 | 29.5 | 26.8 | 7.8 | 10.1 | 20.4 | 47.8 | 23.8 |
If you use this model, please cite our work, the llama paper, and the original dataset:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{köpf2023openassistant,
title={OpenAssistant Conversations -- Democratizing Large Language Model Alignment},
author={Andreas Köpf and Yannic Kilcher and Dimitri von Rütte and Sotiris Anagnostidis and Zhi-Rui Tam and Keith Stevens and Abdullah Barhoum and Nguyen Minh Duc and Oliver Stanley and Richárd Nagyfi and Shahul ES and Sameer Suri and David Glushkov and Arnav Dantuluri and Andrew Maguire and Christoph Schuhmann and Huu Nguyen and Alexander Mattick},
year={2023},
eprint={2304.07327},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Piun/Zachet
|
Piun
| 2023-06-20T17:43:53Z | 1 | 0 |
tf-keras
|
[
"tf-keras",
"region:us"
] | null | 2023-06-20T17:16:33Z |
# Image recognition model
Trained on the mnist dataset.
The neural network model predicts the digit in the input image and, based on that prediction, outputs the remainder of dividing the digit by 3.
Model structure:

Total number of trainable parameters: 111,146.
Optimization algorithm: adam
Loss function: sparse_categorical_crossentropy.
Dataset sizes:
training: (60000, 28, 28) images, (60000, ) labels;
validation: (100, 28, 28) images, (100, ) labels;
test: (10000, 28, 28) images, (10000, ) labels.
Training results:
training: loss: 0.2079, accuracy: 0.9695;
validation: val_loss: 0.2054, val_accuracy: 0.9690;
test: loss: 14.7035, accuracy: 0.9470.
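As an illustration of the setup described above, here is a minimal Keras sketch: it remaps the MNIST labels to their remainder modulo 3 and compiles with the stated optimizer and loss. The hidden-layer size and the validation split are placeholders and will not reproduce the exact 111,146-parameter count or the tiny 100-sample validation set used here.
```python
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
y_train, y_test = y_train % 3, y_test % 3  # target is the digit modulo 3

# Illustrative architecture; the actual model's layer sizes may differ.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_split=0.2)
model.evaluate(x_test, y_test)
```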
|
allenai/open-instruct-self-instruct-13b
|
allenai
| 2023-06-20T17:43:21Z | 23 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:yizhongw/self_instruct",
"arxiv:2306.04751",
"arxiv:2302.13971",
"arxiv:2212.10560",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-07T17:16:53Z |
---
datasets:
- yizhongw/self_instruct
language:
- en
---
# Open-Instruct Self-Instruct 13B
This model is a 13B LLaMa model finetuned on the Self-instruct dataset. *Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`; this can affect generation quality quite a bit.**
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 30.3 | 32.3 | 4.5 | 9.0 | 33.6 | 29.6 | 40.4 | 9.3 | 8.6 | 13.4 | 6.8 | 18.7 |
If you use this model, please cite our work, the llama paper, and the original dataset:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{selfinstruct,
title={Self-Instruct: Aligning Language Model with Self Generated Instructions},
author={Wang, Yizhong and Kordi, Yeganeh and Mishra, Swaroop and Liu, Alisa and Smith, Noah A. and Khashabi, Daniel and Hajishirzi, Hannaneh},
journal={arXiv preprint arXiv:2212.10560},
year={2022}
}
```
|
allenai/open-instruct-dolly-13b
|
allenai
| 2023-06-20T17:43:06Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:databricks/databricks-dolly-15k",
"arxiv:2306.04751",
"arxiv:2302.13971",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-07T17:16:56Z |
---
datasets:
- databricks/databricks-dolly-15k
language:
- en
---
# Open-Instruct Dolly 13B
This model is a 13B LLaMa model finetuned on the Dolly dataset. *Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`; this can affect generation quality quite a bit.**
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 45.3 | 44.7 | 6.0 | 17.0 | 31.4 | 26.0 | 46.8 | 12.4 | 13.4 | 31.4 | 16.2 | 25.5 |
If you use this model, please cite our work, the llama paper, and the original dataset:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{dolly,
author = {Databricks},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {Blog post},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm}
}
```
|
fabgraziano/bert-italian-xxl-cased_squad-it_v1
|
fabgraziano
| 2023-06-20T17:41:05Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"it",
"dataset:squad_it",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-20T17:16:58Z |
---
datasets:
- squad_it
language:
- it
metrics:
- f1
- exact_match
pipeline_tag: question-answering
tags:
- question-answering
---
# bert-italian-xxl-cased_squad-it_v1
BERT-based model for extractive Question-Answering.
## Model Details
### Model Description
- **Developed by:** fabgraziano
- **Model type:** Question-Answering
- **Language(s) (NLP):** Italian
- **Finetuned from model:** [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased)
- **Dataset used:** [squad_it](https://huggingface.co/datasets/squad_it)
## Training Details
#### Training Hyperparameters
- **Training regime:** fp16 mixed precision
- **MAX_LENGTH:** 384
- **STRIDE:** 128
- **BATCH_SIZE:** 8
- **LEARNING_RATE:** 2e-5
- **EPOCHS:** 3
## Evaluation
#### Metrics
- **EM:** 63.87
- **F1:** 75.34
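A minimal usage sketch with the `transformers` question-answering pipeline; the question and context below are made-up examples:
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="fabgraziano/bert-italian-xxl-cased_squad-it_v1",
)

result = qa(
    question="Dove si trova il Colosseo?",
    context="Il Colosseo è un anfiteatro di epoca romana situato nel centro di Roma.",
)
print(result["answer"], result["score"])
```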
|
undrwolf/PixelCopter
|
undrwolf
| 2023-06-20T17:33:08Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-20T17:32:38Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 58.50 +/- 23.35
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
alexandrualexandru/final-3.0-t5-base-2023-06-20_13-18
|
alexandrualexandru
| 2023-06-20T17:31:20Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-20T13:21:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: final-3.0-t5-base-2023-06-20_13-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# final-3.0-t5-base-2023-06-20_13-18
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0913
- Gen Len: 19.0
- Bertscorer-p: 0.5070
- Bertscorer-r: 0.0536
- Bertscorer-f1: 0.2705
- Sacrebleu-score: 4.7246
- Sacrebleu-precisions: [82.26851609027145, 72.56818398298178, 64.03031654034775, 58.101657218801705]
- Bleu-bp: 0.0688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | Bertscorer-p | Bertscorer-r | Bertscorer-f1 | Sacrebleu-score | Sacrebleu-precisions | Bleu-bp |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------------:|:------------:|:-------------:|:---------------:|:-----------------------------------------------------------------------------:|:-------:|
| 0.137 | 1.0 | 10382 | 0.1146 | 19.0 | 0.4908 | 0.0444 | 0.2581 | 3.8066 | [81.64981564768446, 70.67489114658926, 61.40689811921132, 54.44849682509699] | 0.0574 |
| 0.0937 | 2.0 | 20764 | 0.0913 | 19.0 | 0.5070 | 0.0536 | 0.2705 | 4.7246 | [82.26851609027145, 72.56818398298178, 64.03031654034775, 58.101657218801705] | 0.0688 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Dugoss/qwerty
|
Dugoss
| 2023-06-20T17:30:10Z | 2 | 0 |
tf-keras
|
[
"tf-keras",
"region:us"
] | null | 2023-06-20T16:23:31Z |
We built a model and trained it on most of the digit data so that it can take 28×28-pixel images of digits and output the value of the depicted digit.

The model is built from ordinary fully connected (dense) layers with different numbers of units. ReLU is used as the activation function on the input and intermediate layers; a sigmoid is used on the output layer.

Adam was chosen as the optimizer.
The X_train array contains 60000 images, and y_train contains the same number of corresponding labels. The test arrays X_test and y_test contain 10000 elements each.
Epoch 1/5
96/96 [==============================] - 43s 429ms/step - loss: 0.1776 - binary_accuracy: 0.9385 - val_loss: 0.0580 - val_binary_accuracy: 0.9812
Epoch 2/5
96/96 [==============================] - 40s 417ms/step - loss: 0.0492 - binary_accuracy: 0.9838 - val_loss: 0.0376 - val_binary_accuracy: 0.9880
Epoch 3/5
96/96 [==============================] - 40s 419ms/step - loss: 0.0370 - binary_accuracy: 0.9881 - val_loss: 0.0347 - val_binary_accuracy: 0.9892
Epoch 4/5
96/96 [==============================] - 41s 423ms/step - loss: 0.0327 - binary_accuracy: 0.9893 - val_loss: 0.0327 - val_binary_accuracy: 0.9896
Epoch 5/5
96/96 [==============================] - 41s 427ms/step - loss: 0.0295 - binary_accuracy: 0.9905 - val_loss: 0.0312 - val_binary_accuracy: 0.9903
Training the model for 5 epochs produced a very low loss and high accuracy!
|
Nika7664/Mnist_generator
|
Nika7664
| 2023-06-20T17:11:02Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"dataset:mnist",
"region:us"
] | null | 2023-06-20T16:59:58Z |
---
datasets:
- mnist
metrics:
- accuracy
library_name: keras
---
## Task:
Given the mnist dataset and a pre-trained neural network, use them to generate an image that resembles a digit from the mnist set.
The weights of the provided network must not be changed during this additional training.

## Total number of trainable parameters: 54,160
## Algorithms
Optimization algorithm: Adam
Loss function: categorical cross-entropy (categorical_crossentropy)
## Dataset sizes
Training: 60000
Test: 60000
Validation: 12000
## Results:
Loss: 2625.412109375
Accuracy: 0.7566536664962769
## Colab link:
https://colab.research.google.com/drive/18kyzm2mHhXyfZjEIy684sssv_SjV5zf2?usp=sharing
|
charmiemimie/t5-small-finetuned-led
|
charmiemimie
| 2023-06-20T17:07:58Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-20T15:44:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-led
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-led
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5953
- Rouge1: 14.0375
- Rouge2: 4.8978
- Rougel: 11.149
- Rougelsum: 12.7172
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
| 2.7802 | 1.0 | 1549 | 2.5953 | 14.0375 | 4.8978 | 11.149 | 12.7172 | 19.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
TheBloke/baichuan-vicuna-7B-GGML
|
TheBloke
| 2023-06-20T17:06:45Z | 0 | 16 | null |
[
"arxiv:2306.04751",
"license:other",
"region:us"
] | null | 2023-06-20T13:01:31Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Fire Balloon's Baichuan Vicuna 7B GGML
These files are GGML format model files for [Fire Balloon's Baichuan Vicuna 7B](https://huggingface.co/fireballoon/baichuan-vicuna-7b).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/baichuan-vicuna-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/baichuan-vicuna-7B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/fireballoon/baichuan-vicuna-7b)
## Prompt template: Vicuna 1.1
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: prompt
ASSISTANT:
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I have quantised the files for these 'original' quant methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
These are guaranteed to be compatible with any UIs, tools and libraries released since late May.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| baichuan-vicuna-7b.ggmlv3.q2_K.bin | q2_K | 2 | 3.02 GB | 5.52 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| baichuan-vicuna-7b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.76 GB | 6.26 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| baichuan-vicuna-7b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.45 GB | 5.95 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| baichuan-vicuna-7b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 3.11 GB | 5.61 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| baichuan-vicuna-7b.ggmlv3.q4_0.bin | q4_0 | 4 | 3.94 GB | 6.44 GB | Original llama.cpp quant method, 4-bit. |
| baichuan-vicuna-7b.ggmlv3.q4_1.bin | q4_1 | 4 | 4.38 GB | 6.88 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| baichuan-vicuna-7b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.26 GB | 6.76 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| baichuan-vicuna-7b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 4.01 GB | 6.51 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| baichuan-vicuna-7b.ggmlv3.q5_0.bin | q5_0 | 5 | 4.81 GB | 7.31 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| baichuan-vicuna-7b.ggmlv3.q5_1.bin | q5_1 | 5 | 5.25 GB | 7.75 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| baichuan-vicuna-7b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.98 GB | 7.48 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| baichuan-vicuna-7b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.85 GB | 7.35 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| baichuan-vicuna-7b.ggmlv3.q6_K.bin | q6_K | 6 | 5.74 GB | 8.24 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| baichuan-vicuna-7b.ggmlv3.q8_0.bin | q8_0 | 8 | 7.44 GB | 9.94 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
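As a rough sanity check on these sizes, a quantised file is approximately `parameter_count × bits_per_weight / 8` bytes. The sketch below applies that back-of-the-envelope estimate for a 7B model; the per-method bits-per-weight values are approximate effective averages (including block scales), and real files can differ because some methods mix quant types across tensors (q2_K, for instance, stores several tensors at 4 bits).
```python
# Back-of-the-envelope file-size estimate for a 7B-parameter model.
# Effective bits-per-weight values are approximate, not exact.
params = 7e9
bpw = {
    "q2_K": 2.5625,
    "q3_K": 3.4375,
    "q4_0": 4.5,   # 4-bit weights plus one fp16 scale per 32-weight block
    "q4_K": 4.5,
    "q5_K": 5.5,
    "q6_K": 6.5625,
    "q8_0": 8.5,   # 8-bit weights plus one fp16 scale per 32-weight block
}

for name, bits in bpw.items():
    size_gb = params * bits / 8 / 1e9
    print(f"{name}: ~{size_gb:.2f} GB")
```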
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m baichuan-vicuna-7b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "USER: Write a story about llamas\nASSISTANT:"
```
If you're able to use full GPU offloading, you should use `-t 1` to get best performance.
If not able to fully offload to GPU, you should use more cores. Change `-t 10` to the number of physical CPU cores you have, or a lower number depending on what gives best performance.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
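The same file can also be used from Python via `llama-cpp-python` (one of the supported libraries listed above). A minimal sketch, with the quant file name and the generation settings as placeholders; note that newer llama-cpp-python releases expect the GGUF format, so a release from the GGML era may be required for these files:
```python
from llama_cpp import Llama

# Pick whichever quant file you downloaded; q5_0 is used here as an example.
llm = Llama(
    model_path="baichuan-vicuna-7b.ggmlv3.q5_0.bin",
    n_ctx=2048,
    n_gpu_layers=32,  # remove or lower if you have no GPU acceleration
)

prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions.\n"
    "USER: Write a story about llamas\nASSISTANT:"
)

output = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1, stop=["USER:"])
print(output["choices"][0]["text"])
```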
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Mano Prime, Fen Risland, Derek Yates, Preetika Verma, webtim, Sean Connelly, Alps Aficionado, Karl Bernard, Junyu Yang, Nathan LeClaire, Chris McCloskey, Lone Striker, Asp the Wyvern, Eugene Pentland, Imad Khwaja, trip7s trip, WelcomeToTheClub, John Detwiler, Artur Olbinski, Khalefa Al-Ahmad, Trenton Dambrowitz, Talal Aujan, Kevin Schuppel, Luke Pendergrass, Pyrater, Joseph William Delisle, terasurfer , vamX, Gabriel Puliatti, David Flickinger, Jonathan Leane, Iucharbius , Luke, Deep Realms, Cory Kujawski, ya boyyy, Illia Dulskyi, senxiiz, Johann-Peter Hartmann, John Villwock, K, Ghost , Spiking Neurons AB, Nikolai Manek, Rainer Wilmers, Pierre Kircher, biorpg, Space Cruiser, Ai Maven, subjectnull, Willem Michiel, Ajan Kanaga, Kalila, chris gileta, Oscar Rangel.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Fire Balloon's Baichuan Vicuna 7B
# baichuan-vicuna-7b
baichuan-vicuna-7b is a chat model supervised fine-tuned on Vicuna ShareGPT data.
- The foundation model is [baichuan-7B](https://huggingface.co/baichuan-inc/baichuan-7B), a large-scale pretrained model developed by Baichuan Intelligence that permits commercial use.
- The finetuning data includes [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/blob/main/ShareGPT_V3_unfiltered_cleaned_split.json), mixed with [COT](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT) and [Leetcode](https://www.kaggle.com/datasets/erichartford/leetcode-solutions), which are aimed to improve the model's reasoning and coding ability (the data mixing strategy is inspired by [TULU](https://arxiv.org/abs/2306.04751)).
- The training code: https://huggingface.co/fireballoon/baichuan-vicuna-7b/blob/main/train_vicuna.py, which is based on [FastChat](https://github.com/lm-sys/FastChat).
[中文说明](#chinese-model-card)
**[New]** [baichuan-vicuna-chinese-7b](https://huggingface.co/fireballoon/baichuan-vicuna-chinese-7b), baichuan finetuned on both English and Chinese ShareGPT.
# Load model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("fireballoon/baichuan-vicuna-7b", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("fireballoon/baichuan-vicuna-7b")
```
# Inference
Inference with FastChat:
```
python3 -m fastchat.serve.cli --model-path fireballoon/baichuan-vicuna-7b
```
Inference with Transformers:
```ipython
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
>>> tokenizer = AutoTokenizer.from_pretrained("fireballoon/baichuan-vicuna-7b", use_fast=False)
>>> model = AutoModelForCausalLM.from_pretrained("fireballoon/baichuan-vicuna-7b").half().cuda()
>>> streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
>>> instruction = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {} ASSISTANT:"
>>> prompt = instruction.format("five tips to help with sleep") # user message
>>> generate_ids = model.generate(tokenizer(prompt, return_tensors='pt').input_ids.cuda(), max_new_tokens=2048, streamer=streamer)
'''
1. Create a relaxing bedtime routine, such as winding down with a warm bath or reading a book before bed.
2. Avoid caffeine and alcohol close to bedtime, as they can disrupt sleep.
3. Use blackout curtains or shades to block out light from the outside, which can interfere with your body's natural sleep rhythms.
4. Keep your bedroom cool and dark, and avoid using electronic devices at least an hour before bed.
5. Regular exercise and a healthy diet can also help improve sleep quality.
'''
```
# Test examples on FastChat Eval
Test example on [FastChat Eval](https://github.com/lm-sys/FastChat/blob/main/fastchat/eval/table/question.jsonl)
Please refer to [discussions 2](https://huggingface.co/fireballoon/baichuan-vicuna-7b/discussions/2) for comparative evaluation of Baichuan-vicuna-7b and Vicuna-13b by GPT-4.
```
Q: How can I improve my time management skills?
A: There are many different ways to improve your time management skills. Here are a few strategies that you may find helpful:
1 Prioritize your tasks: Make a list of all the tasks you need to complete and prioritize them based on importance and urgency. This will help you focus on the most important tasks and avoid getting bogged down by less critical items.
2 Use a time management planner: A planner can help you organize your time by breaking down your tasks into smaller, more manageable chunks. This can help you stay focused and on track.
3 Set clear boundaries: Setting clear boundaries between work time and personal time can help you avoid distractions and stay focused on your tasks. This may involve designating certain times of day or days of the week for work and other times for rest and leisure.
4 Take care of your physical and mental health: Taking care of your physical and mental health can help you be more productive and focused. This may involve getting enough sleep, eating a healthy diet, and taking breaks to relax and recharge.
Remember that improving your time management skills takes practice and patience. Don't be discouraged if you make mistakes or experience setbacks. Keep trying and you will eventually develop better time management habits that will help you be more productive and successful.
```
```
Q: Can you help me write a formal email to a potential business partner proposing a joint venture?
A: Sure! Here is a sample formal email that you could use as a template to propose a joint venture with a
potential business partner:
Subject: Proposal for Joint Venture
Dear [Business Partner],
I hope this email finds you well. I am writing to propose a joint venture between our companies. We believe that a collaboration between our companies could bring significant benefits to both of us, and we would like to explore the possibility of forming a partnership.
Our companies have complementary strengths, and we believe that a joint venture could allow us to leverage those strengths to create new products, services, and opportunities. We believe that a joint venture could also provide increased revenue and growth for both of our companies.
In order to explore the possibility of a joint venture, we would like to schedule a call to discuss the details of our proposal. During the call, we would like to discuss the terms of the joint venture, including the specific roles and responsibilities of each partner, the financial terms of the agreement, and any other relevant details.
I would be happy to schedule a call with you at your convenience. In the meantime, please feel free to contact me with any questions or concerns you may have about our proposal.
Sincerely,
[Your Name]
```
```
Q: Implement a binary search algorithm to find a specific element in a sorted array.
A: Here is an implementation of the binary search algorithm to find a specific element in a sorted array:
public static int binarySearch(int[] arr, int target) {
int left = 0;
int right = arr.length - 1;
while (left <= right) {
int mid = left + (right - left) / 2;
if (arr[mid] < target) {
left = mid + 1;
} else if (arr[mid] > target) {
right = mid - 1;
} else {
return mid;
}
}
return -1; // target not found
}
Here is how the binary search algorithm works:
1 Initialize two pointers, left and right, to the beginning and end of the array, respectively.
2 While the left pointer is less than or equal to the right pointer, perform the following steps:
1 Calculate the middle index, mid, by dividing the left pointer and the right pointer.
2 Check if the middle element, arr[mid], is less than or equal to the target value.
3 If the middle element is less than the target value, update the left pointer to mid + 1.
4 If the middle element is greater than the target value, update the right pointer to mid - 1.
5 If the middle element is equal to the target value, the target element has been found, so return the middle index.
3 If the target element is not found, return -1.
This algorithm has a runtime complexity of O(log n) and a space complexity of O(1), making it an efficient way to find a specific element in a sorted array.
```
---
# Chinese model card
baichuan-vicuna-7b是在vicuna sharegpt数据上全参数微调的对话模型。
- 基座模型是[baichuan-7B](https://huggingface.co/baichuan-inc/baichuan-7B),由百川智能开发的可商用大规模预训练模型。
- 微调数据包括[ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/blob/main/ShareGPT_V3_unfiltered_cleaned_split.json),混合一定比例的[COT](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT)和[Leetcode](https://www.kaggle.com/datasets/erichartford/leetcode-solutions)数据以提升模型的推理和代码能力(数据混合策略受到[TULU](https://arxiv.org/abs/2306.04751)研究成果的启发)。
- 训练代码:https://huggingface.co/fireballoon/baichuan-vicuna-7b/blob/main/train_vicuna.py, 代码基于[FastChat](https://github.com/lm-sys/FastChat)。
# Test examples on Ziya Eval
在[Ziya-Eval-Chinese](https://huggingface.co/datasets/IDEA-CCNL/Ziya-Eval-Chinese)上的测试例子
```
Q: 鸿门宴是什么意思
A: 鸿门宴是一个汉语成语,意思是指在公元前206年,项羽、刘邦两军在鸿门(今陕西临潼区东北)一带展开的一场“生死对决”。当时,项羽的叔父项伯在会见刘邦的手下时,将刘邦已经逃走的情报透露给项羽,使得刘邦免于被项羽所杀。项羽因此怀恨在心,准备第二天在鸿门设宴,邀请刘邦赴宴,然后袭击刘邦,然而,项羽手下的谋士范增早已看出刘邦的“龙气象” 为刘邦是“沛公刘邦是高祖(刘邦的谥号),他将统率四海,认为杀掉沛公是上策”。宴会期间,项羽的堂弟项庄在席间舞剑,企图将刘邦刺杀。最后,刘邦借故离开,前往韩信大军的营地。鸿门宴上,刘邦因项伯的缘故逃过一劫,但是,他也失项羽议和的机会,为以后楚汉之争埋下了隐患。
Q: 中国目前通用的火警电话是
A: 目前,中国通用的是119火警电话。
```
```
Q: 晚上睡不着应该怎么办
A:失眠是一种症状,而不是一种疾病。治疗失眠的目标是解决其潜在原因。失眠的一些常见原因包括:
*睡眠不足
*压力
*心理健康问题 (焦虑,抑郁等)
*药物 (处方和非处方)
*环境因素 (噪音,光线,温度等)
以下是一些通常可以帮助您睡眠的方法:
*睡觉前4-6个小时避免刺激物 (咖啡因,尼古丁,酒精)
*睡前冥想以减少压力和焦虑
*睡觉前不要看亮屏幕 (电脑,手机,平板电脑等)
*睡觉前至少一个小时不要运动
*睡觉前不要吃食物
*在睡觉前放松并平静下来 (看看书,喝杯牛奶等)
如果这些技术不起作用,或者如果失眠是由药物或其他医学状况引起的,您应该联系您的医生。
```
|
YoavWigelman/a2c-PandaReachDense-v2
|
YoavWigelman
| 2023-06-20T17:04:11Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-20T17:01:31Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -3.42 +/- 1.06
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed to follow the usual huggingface_sb3 naming convention and may differ):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is an assumption; check the repo's file list if loading fails.
checkpoint = load_from_hub("YoavWigelman/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
Yandexxxx/zachet_python
|
Yandexxxx
| 2023-06-20T17:04:05Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2023-06-20T16:13:33Z |
---
library_name: keras
---
A digit-recognition model that outputs the digit modulo 2; it is trained on the mnist dataset.

The total number of trainable parameters, reported by .summary, is 209,826.
.summary prints a summary of the model: the number of layers, the number of neurons in each layer,
the activation functions and other parameters. It shows what data goes into the model, what outputs are produced,
which parameters are used and which loss function is used during training.

This work uses the categorical_crossentropy loss function, which is intended for classification with several classes.
The optimizer is adam, one of the most popular optimizers for training neural networks.
MNIST contains 70,000 handwritten digits: 10,000 form the test set and 60,000 the training set; 20% of the training set is held out for validation,
so there are 10,000 test, 12,000 validation and 48,000 training samples.
The images below show the loss and accuracy on all three datasets.
Accuracy for the validation and training sets

Loss for the validation and training sets

Accuracy and loss for the test set

|
sundragon/itog
|
sundragon
| 2023-06-20T17:01:56Z | 0 | 0 | null |
[
"dataset:mnist",
"license:openrail",
"region:us"
] | null | 2023-06-20T14:58:17Z |
---
license: openrail
datasets:
- mnist
library_name: keras
---
Task 2: given the mnist dataset, determine from an input image the remainder of dividing the depicted digit by 2.
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
The test dataset contains 10,000 samples, the validation dataset 12,000, and the training dataset 48,000.


|
sridhar1ga/telugu_dialect_classifier_on_vakyansh-wav2vec2-telugu-tem-100
|
sridhar1ga
| 2023-06-20T16:56:49Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-06-20T16:15:31Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: telugu_dialect_classifier_on_vakyansh-wav2vec2-telugu-tem-100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# telugu_dialect_classifier_on_vakyansh-wav2vec2-telugu-tem-100
This model is a fine-tuned version of [Harveenchadha/vakyansh-wav2vec2-telugu-tem-100](https://huggingface.co/Harveenchadha/vakyansh-wav2vec2-telugu-tem-100) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7231
- Accuracy: 0.7125
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.95 | 9 | 1.0745 | 0.4575 |
| 1.0888 | 2.0 | 19 | 1.0178 | 0.49 |
| 1.0449 | 2.95 | 28 | 0.9084 | 0.585 |
| 0.9557 | 4.0 | 38 | 0.8364 | 0.6417 |
| 0.888 | 4.95 | 47 | 0.8408 | 0.6417 |
| 0.8509 | 6.0 | 57 | 0.7608 | 0.6817 |
| 0.8185 | 6.95 | 66 | 0.7746 | 0.6817 |
| 0.8092 | 8.0 | 76 | 0.7231 | 0.715 |
| 0.7908 | 8.95 | 85 | 0.7266 | 0.7142 |
| 0.7728 | 9.47 | 90 | 0.7231 | 0.7125 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
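A minimal inference sketch with the `transformers` audio-classification pipeline; the audio path is a placeholder, and 16 kHz mono audio is assumed (the usual wav2vec2 preprocessing):
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="sridhar1ga/telugu_dialect_classifier_on_vakyansh-wav2vec2-telugu-tem-100",
)

# Placeholder path to a 16 kHz mono WAV file.
predictions = classifier("sample_telugu_clip.wav")
for p in predictions:
    print(p["label"], round(p["score"], 3))
```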
|
pszemraj/long-t5-tglobal-xl-16384-booksci-summary-plos-10k
|
pszemraj
| 2023-06-20T16:49:00Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"longt5",
"text2text-generation",
"generated_from_trainer",
"dataset:pszemraj/scientific_lay_summarisation-plos-norm",
"license:bsd-3-clause",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"region:us"
] |
text2text-generation
| 2023-06-20T15:35:19Z |
---
license:
- bsd-3-clause
- apache-2.0
tags:
- generated_from_trainer
datasets:
- pszemraj/scientific_lay_summarisation-plos-norm
metrics:
- rouge
model-index:
- name: long-t5-tglobal-xl-16384-book-summary-scientific_lay_summarisation-plos-norm-16384-summ-v1
results:
- task:
name: Summarization
type: summarization
dataset:
name: pszemraj/scientific_lay_summarisation-plos-norm
type: pszemraj/scientific_lay_summarisation-plos-norm
split: validation
metrics:
- name: Rouge1
type: rouge
value: 44.3203
inference: False
---
# long-t5-tglobal-xl-16384-booksci-summary-plos-10k
This model is a fine-tuned version of [pszemraj/long-t5-tglobal-xl-16384-book-summary](https://huggingface.co/pszemraj/long-t5-tglobal-xl-16384-book-summary) on the pszemraj/scientific_lay_summarisation-plos-norm dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5041
- Rouge1: 44.3203
- Rouge2: 11.0576
- Rougel: 22.7584
- Rougelsum: 40.1462
- Gen Len: 256.66
## Model description
Another test of further fine-tuning booksum-based models: this one is fine-tuned on roughly 10k examples from the PLOS subset of the lay-summarisation data, making it roughly comparable to [this checkpoint](https://huggingface.co/pszemraj/long-t5-tglobal-xl-16384-booksci-summary-v1), which was fine-tuned on the ELIFE subset for two epochs (also around 10k examples).
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 165
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.7715 | 0.28 | 350 | 1.5310 | 43.4729 | 10.4616 | 22.1928 | 39.505 | 260.87 |
| 1.9307 | 0.56 | 700 | 1.5102 | 44.1634 | 10.9336 | 22.3896 | 40.2939 | 253.58 |
| 1.2981 | 0.84 | 1050 | 1.5046 | 44.2728 | 10.8455 | 22.4122 | 40.3019 | 261.29 |
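For local use, a minimal inference sketch with the `transformers` summarization pipeline is shown below. The bf16 dtype and `device_map="auto"` assume a GPU with enough memory for the XL checkpoint and `accelerate` installed; the input text and generation settings are placeholders.
```python
import torch
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="pszemraj/long-t5-tglobal-xl-16384-booksci-summary-plos-10k",
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires accelerate
)

# Placeholder input; the model accepts long documents (up to 16384 tokens).
article = "Full text of a biomedical research article goes here ..."
summary = summarizer(article, max_length=256, no_repeat_ngram_size=3)
print(summary[0]["summary_text"])
```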
|
jxssx/autoencoder
|
jxssx
| 2023-06-20T16:40:13Z | 1 | 0 |
tf-keras
|
[
"tf-keras",
"region:us"
] | null | 2023-06-20T15:05:31Z |
This neural network reconstructs the input image from its latent ("hidden") representation, producing a new image at the output.

Optimization algorithm: Adam.
The loss function looks like this:
def loss(y, z):
y = K.reshape(y, shape = (batch_size, 28*28))
z = K.reshape(z, shape = (batch_size, 28*28))
mse = K.sum(K.square(y - z), axis = 1)
kl = -.5 * K.sum(1 + loss_z_log_var - K.square(loss_z_mean) - K.exp(loss_z_log_var), axis = 1)
return mse
Lengths of the training and test datasets: 60000 and 10000, respectively.
Loss during training:

|
Mustru/BSCON
|
Mustru
| 2023-06-20T16:40:11Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-18T09:39:04Z |
---
license: creativeml-openrail-m
---
|
medmac01/moroccan-qa-v2
|
medmac01
| 2023-06-20T16:40:03Z | 8 | 0 |
transformers
|
[
"transformers",
"RefinedWebModel",
"text-generation",
"custom_code",
"en",
"dataset:medmac01/qa_morocco_history_v1",
"autotrain_compatible",
"4-bit",
"region:us"
] |
text-generation
| 2023-06-11T01:30:03Z |
---
datasets:
- medmac01/qa_morocco_history_v1
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
|
FortemDave/ppo-LunarLander-v2
|
FortemDave
| 2023-06-20T16:38:15Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-20T16:37:34Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 125.95 +/- 30.78
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed to follow the usual huggingface_sb3 naming convention and may differ):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption; check the repo's file list if loading fails.
checkpoint = load_from_hub("FortemDave/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
andersonbcdefg/nous-hermes-13b-ct2
|
andersonbcdefg
| 2023-06-20T16:35:02Z | 5 | 9 |
transformers
|
[
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2023-06-18T18:05:14Z |
8-bit version of Nous Research [Nous-Hermes-13B](https://huggingface.co/NousResearch/Nous-Hermes-13b), quantized using [CTranslate2](https://github.com/OpenNMT/CTranslate2).
## How to Use
The great thing about `ctranslate2` is that it is basically self-contained (other than the tokenizer, for which you'll use a HuggingFace Transformers tokenizer). One quirk is that the translated model (depending which inference/generation method you use) may expect tokens (string) rather than token_ids (int). To get started, use git or huggingface_hub to download this repo. You'll point `ctranslate2` at the folder for inference.
Example:
```python
import ctranslate2
from transformers import AutoTokenizer
# point it to folder that contains all the files in this repo. here we're calling it nous-hermes-ct2
model = ctranslate2.Generator("nous-hermes-ct2", device="cuda")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Nous-Hermes-13b", use_fast=False)
# get input ids, then turn them back into tokens
input_ids = tokenizer((
"### Instruction: What's the square root of 2?\n\n"
"### Response:")).input_ids
input_tokens = tokenizer.convert_ids_to_tokens(input_ids)
# generate completion, which is an iterator (you can stream tokens as they come out!)
it = model.generate_tokens(
input_tokens,
max_length=100
)
output = [token.token_id for token in it]
decoded = tokenizer.decode(output, skip_special_tokens=True)
print(decoded)
```
There are other methods for inference, including `generate_batch` (no streaming, supports batched inputs), `forward_batch` (only does 1 forward pass of the model), and `score_batch` (computes token-level likelihood & perplexity). See docs [here](https://opennmt.net/CTranslate2/generation.html).
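For completeness, here is a short sketch of the batched, non-streaming path mentioned above, reusing the `model`, `tokenizer` and `input_tokens` objects from the previous example; the sampling settings are placeholders:
```python
# Batched generation without streaming; reuses model/tokenizer/input_tokens
# from the example above.
results = model.generate_batch(
    [input_tokens],          # a list of token lists, one entry per prompt
    max_length=100,
    sampling_temperature=0.8,
    sampling_topk=40,
)

output_ids = results[0].sequences_ids[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```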
|
agustd00/amanda
|
agustd00
| 2023-06-20T16:33:31Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-20T16:32:49Z |
---
license: creativeml-openrail-m
---
|
luckylvcx/ichika
|
luckylvcx
| 2023-06-20T16:21:46Z | 0 | 2 | null |
[
"nsfw",
"flirty",
"charming",
"romantic",
"dominant",
"en",
"region:us"
] | null | 2023-06-20T16:18:19Z |
---
language:
- en
tags:
- nsfw
- flirty
- charming
- romantic
- dominant
---
|
YukiNon/khokhlov_alexander_practice
|
YukiNon
| 2023-06-20T16:21:20Z | 0 | 0 | null |
[
"autoencoder",
"fashion-mnist",
"pytorch",
"huggingface",
"region:us"
] | null | 2023-06-20T15:34:26Z |
---
tags:
- autoencoder
- fashion-mnist
- pytorch
- huggingface
---
# Autoencoder on the fashion_mnist dataset
## Task
Build an autoencoder for the Fashion MNIST dataset that takes an image as input and reconstructs the same image at the output.
## Architecture
The network consists of the following layers:
1. Input layer that receives the image
2. Dense layer with 256 neurons and ReLU activation
3. Dense layer with 128 neurons and ReLU activation
4. Hidden (bottleneck) layer with 50 neurons and ReLU activation
5. Dense layer with 128 neurons and ReLU activation
6. Dense layer with 256 neurons and ReLU activation
7. Output layer with sigmoid activation that decodes the image
8. Output layer that reshapes the output feature map back into an image

## Total number of trainable parameters
The autoencoder contains 235,978 trainable parameters.
## Optimizer and loss function
Optimization algorithm: Adam; loss function: binary cross-entropy.
## Dataset sizes
- Training dataset: 48,000 images
- Validation dataset: 16,000 images
- Test dataset: 16,000 images
## Training results on all three datasets
After training the model for 50 epochs on the training set, the results on the three datasets are:
- Train Loss: 0.26351
- Train Accuracy: 0.27152
- Val Loss: 0.26502
- Val Accuracy: 0.26352
- Test Loss: 0.26442
- Test Accuracy: 0.26600
|
gyrov/finalTask
|
gyrov
| 2023-06-20T16:19:16Z | 0 | 0 |
tf-keras
|
[
"tf-keras",
"region:us"
] | null | 2023-06-20T14:15:07Z |
Final assignment
## Гуров Б.И., group 46(1)
Task No. 2: given the mnist dataset, determine from an input image the remainder when the depicted digit is divided by 2.
**1. Description of the model's task:**
From an input image, the model determines the remainder of dividing the depicted digit by 2.
**2. Diagram of the neural network:**

**3. Total number of trainable parameters:**
Total params: 7,861
**4. Optimization algorithm and loss function:**
Optimizer: standard Adam; loss function: binary cross-entropy, since the output is either 0 or 1.
**5. Sizes of the training, validation, and test datasets:**
Training: 48,000
Validation: 12,000
Test: 10,000
**6. Training results: loss and accuracy on all three datasets:**
* Train_loss: 0.054842568933963776
* Train_accuracy: 0.9802291393280029
* Val_loss: 0.08922852575778961
* Val_accuracy: 0.9724166393280029
* Test loss: 0.0881127342581749
* Test accuracy: 0.9726999998092651
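A minimal `tf.keras` sketch consistent with the numbers above (an assumption, not the original code: a single 10-unit hidden layer is one architecture that yields exactly 7,861 parameters, and a 0.2 validation split reproduces the 48,000/12,000 train/validation sizes):
```python
# Hypothetical reconstruction: the card only states Adam, binary cross-entropy
# and 7,861 trainable parameters; a 10-unit hidden layer happens to match that count.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 784 inputs
    tf.keras.layers.Dense(10, activation="relu"),     # 784*10 + 10 = 7,850 params
    tf.keras.layers.Dense(1, activation="sigmoid"),   # 10*1 + 1  = 11 params
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train / 255.0
model.fit(x_train, y_train % 2, validation_split=0.2, epochs=1)
```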
|
D3nkik/My_task
|
D3nkik
| 2023-06-20T16:17:51Z | 0 | 1 |
keras
|
[
"keras",
"images",
"ru",
"dataset:fashion_mnist",
"region:us"
] | null | 2023-06-20T14:37:21Z |
---
datasets:
- fashion_mnist
language:
- ru
metrics:
- accuracy
library_name: keras
tags:
- images
---
# 1) Description of the task the neural network performs
The neural network model is designed to classify clothing images using the Fashion MNIST dataset.
# 2) Layer-by-layer architecture of the network, with layer sizes and activation functions

# 3) Total number of trainable parameters

# 4) Optimization algorithm and loss function
The code uses the Adam optimizer and the Sparse Categorical Crossentropy loss function.
Sparse Categorical Crossentropy is used for multi-class classification when the classes are mutually exclusive.
# 5) Sizes of the training, validation, and test datasets

# 6) Training results: loss and accuracy on all three datasets.

|
abhishek-ignite/bloom-1b7-ignite-3
|
abhishek-ignite
| 2023-06-20T16:13:58Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-20T16:13:57Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
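For reference (not part of the generated card), the quantization settings listed above map roughly onto the following `transformers` `BitsAndBytesConfig`; the base-model name in the trailing comment is an assumption:
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the values above: 4-bit fp4 quantization, no double quantization, float32 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
# e.g. AutoModelForCausalLM.from_pretrained("bigscience/bloom-1b7",
#                                           quantization_config=bnb_config, device_map="auto")
```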
|
SHENMU007/neunit_BASE_V9.3
|
SHENMU007
| 2023-06-20T16:07:45Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-06-14T16:02:53Z |
---
language:
- zh
license: mit
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jcnecio/poca-SoccerTwos
|
jcnecio
| 2023-06-20T16:06:57Z | 74 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-06-19T23:33:27Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: jcnecio/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Elvis120/95point
|
Elvis120
| 2023-06-20T16:05:20Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2023-06-20T15:37:36Z |
---
library_name: keras
---
# My model for recognizing a digit and determining the remainder when that digit is divided by 2
# Task description
The goal of this neural network is to determine, from an input image from the MNIST dataset, the remainder when the digit is divided by 2.
# Layer-by-layer architecture of the neural network

# Total number of trainable parameters
Total trainable parameters in the network: (28*28 + 1) * 128 + (128 + 1) * 1 = 100,609 parameters.
# Optimization algorithm and loss function
Optimization algorithm: Adam
Loss function: binary_crossentropy
# Sizes of the training, validation, and test datasets
Training set: 48,000 images.
Validation set: 12,000 images.
Test set: 10,000 images.
# Training results
Training set - Loss: 0.01, Accuracy: 0.99
Test set - Loss: 0.04, Accuracy: 0.98
|
catrabbitbear/taxi-v3-attempt2
|
catrabbitbear
| 2023-06-20T15:54:57Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-20T15:54:55Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3-attempt2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="catrabbitbear/taxi-v3-attempt2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
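The snippet above assumes a `load_from_hub` helper and a `gym` import; a minimal version of that helper (an assumption based on the Deep RL course convention, not code shipped with this repo) could look like:
```python
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle a Q-learning model dict from the Hugging Face Hub."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="catrabbitbear/taxi-v3-attempt2", filename="q-learning.pkl")
env = gym.make(model["env_id"])  # the pickled dict is expected to carry "env_id" (and typically "qtable")
```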
|
TheFools/Normanvtsrr
|
TheFools
| 2023-06-20T15:46:24Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-20T15:45:35Z |
---
license: creativeml-openrail-m
---
|
hassansoliman/falcon-7b-qlora-utterance-adaptations_v6
|
hassansoliman
| 2023-06-20T15:39:42Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-20T15:38:49Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
feabries/sd-class-butterflies-32
|
feabries
| 2023-06-20T15:35:37Z | 38 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-06-20T15:35:10Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('feabries/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
SotirisLegkas/Socratic-GODEL-instruct-user-system
|
SotirisLegkas
| 2023-06-20T15:34:18Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-20T13:54:57Z |
---
pipeline_tag: text2text-generation
---
Instruction: given a context, reply as in a Socratic dialogue.
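A rough usage sketch (not part of the original card; the exact prompt format expected by the model is an assumption):
```python
from transformers import pipeline

# Assumed prompt layout based on the instruction above.
generator = pipeline("text2text-generation", model="SotirisLegkas/Socratic-GODEL-instruct-user-system")
prompt = ("Instruction: given a context, reply as in a Socratic dialogue. "
          "Context: I am sure I already understand this topic completely.")
print(generator(prompt, max_length=128)[0]["generated_text"])
```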
|
Elvis120/95points
|
Elvis120
| 2023-06-20T15:30:22Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2023-06-20T15:25:38Z |
---
library_name: keras
---
# My model for digit recognition
Trained on the mnist dataset.
|
paumena/BioASQ-BERT
|
paumena
| 2023-06-20T15:29:07Z | 72 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-20T15:08:17Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: paumena/BioASQ-BERT
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# paumena/BioASQ-BERT
This model is a fine-tuned version of [paumena/QA-BERT](https://huggingface.co/paumena/QA-BERT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4065
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
{'exact_match': 82.21544715447155, 'f1': 85.49528966471387}
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1225, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.0949 | 0 |
| 0.6130 | 1 |
| 0.4899 | 2 |
| 0.4401 | 3 |
| 0.4065 | 4 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.0
- Tokenizers 0.13.3
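As a rough usage sketch (not part of the auto-generated card), the checkpoint should be loadable with the `question-answering` pipeline; `framework="tf"` is passed because the repo ships TensorFlow weights:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="paumena/BioASQ-BERT", framework="tf")
result = qa(
    question="Which enzyme does aspirin inhibit?",
    context="Aspirin irreversibly inhibits cyclooxygenase (COX), reducing prostaglandin synthesis.",
)
print(result["answer"], result["score"])
```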
|
LarryAIDraw/tingyun_hsr
|
LarryAIDraw
| 2023-06-20T15:28:14Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-20T15:19:36Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/10234/tingyun-or-honkai-star-rail
|
LarryAIDraw/ty-000004
|
LarryAIDraw
| 2023-06-20T15:27:32Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-20T15:19:15Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/60413/tingyun-star-rail
|
LarryAIDraw/alice_touhou
|
LarryAIDraw
| 2023-06-20T15:22:48Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-20T15:17:29Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/81799/alice-margatroid-touhou-project
|
LarryAIDraw/DakiEyesV4-15
|
LarryAIDraw
| 2023-06-20T15:22:36Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-20T15:16:48Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/93955/daki-eyes-concept-lora-inpaint
|
theSOL1/kogrammar-distil
|
theSOL1
| 2023-06-20T15:10:35Z | 147 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"grammar",
"ko",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-01T15:55:58Z |
---
language: ko
license: mit
tags:
- bart
- grammar
---
# kogrammar-distil
Dataset: 국립국어원 맞춤법 교정 말뭉치 (National Institute of Korean Language spelling-correction corpus)
<br>
<br>
**Backbone Model**:
- [kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2/blob/main/README.md)
**Contributors**:
- ChangGeon Kang: [NeverAsking](https://github.com/NeverAsking/)
- Subin Park: [SOL1archive](https://github.com/SOL1archive/)
**GitHub Repo**:
- [SOL1archive/KoGrammar](https://github.com/SOL1archive/KoGrammar)
## Train Method
About 67.5% of the full dataset was used as training data.
<br>
Using SFT distillation, the decoder layers of the [kogrammar-base](https://huggingface.co/theSOL1/kogrammar-base) model were reduced from 6 to 3 and the model was retrained.
## Metric
|BLEU-2|ROUGE-2 F1|
|-|-|
|77.8 %|55.0 %|
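A short usage sketch (not from the original card; the input sentence is only a placeholder):
```python
from transformers import pipeline

corrector = pipeline("text2text-generation", model="theSOL1/kogrammar-distil")
# Placeholder input: any Korean sentence that needs spelling correction.
print(corrector("맞춤법 교정이 필요한 문장", max_length=128)[0]["generated_text"])
```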
|
theSOL1/kogrammar-tiny-distil
|
theSOL1
| 2023-06-20T15:10:13Z | 118 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"grammar",
"ko",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-01T16:09:38Z |
---
language: ko
license: mit
tags:
- bart
- grammar
---
# kogrammar-tiny-distil
Dataset: 국립국어원 맞춤법 교정 말뭉치 (National Institute of Korean Language spelling-correction corpus)
<br>
<br>
**Backbone Model**:
- [kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2/blob/main/README.md)
- [kogrammar-base](https://huggingface.co/theSOL1/kogrammar-base)
**Contributors**:
- ChangGeon Kang: [NeverAsking](https://github.com/NeverAsking/)
- Subin Park: [SOL1archive](https://github.com/SOL1archive/)
**GitHub Repo**:
- [SOL1archive/KoGrammar](https://github.com/SOL1archive/KoGrammar)
## Train Method
About 67.5% of the full dataset was used as training data.
<br>
Using SFT distillation, the decoder layers of the [kogrammar-base](https://huggingface.co/theSOL1/kogrammar-base) model were reduced from 6 to 1 and the model was retrained.
## Metric
|BLEU-2|ROUGE-2 F1|
|-|-|
|77.8 %|55.0 %|
|
antuuuu/anstsykh
|
antuuuu
| 2023-06-20T15:03:35Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-20T14:55:22Z |
---
license: creativeml-openrail-m
---
|
huangyuyang/chatglm-6b-int8.flm
|
huangyuyang
| 2023-06-20T15:03:17Z | 0 | 2 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-06-20T14:15:51Z |
---
license: apache-2.0
---
fastllm model for chatglm-6b-int8
Github address: https://github.com/ztxz16/fastllm
|
tiansz/fastllm_chatglm
|
tiansz
| 2023-06-20T14:56:42Z | 0 | 0 | null |
[
"text-generation",
"zh",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-06-19T14:26:40Z |
---
license: apache-2.0
language:
- zh
pipeline_tag: text-generation
---
GitHub link: [click here](https://github.com/ztxz16/fastllm)
|
SotirisLegkas/Socratic-GODEL-instruct
|
SotirisLegkas
| 2023-06-20T14:54:20Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-20T13:54:02Z |
---
pipeline_tag: text2text-generation
---
Instruction: given a context, reply as in a Socratic dialogue.
|
mfi/llm-kibook-falcon
|
mfi
| 2023-06-20T14:45:06Z | 0 | 0 | null |
[
"text-generation",
"en",
"de",
"dataset:mfi/ki-book-falcon",
"arxiv:1910.09700",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-06-16T11:27:36Z |
---
datasets:
- mfi/ki-book-falcon
language:
- en
- de
license: apache-2.0
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rakgesh/onepiece_predictor_transfer_v01
|
rakgesh
| 2023-06-20T14:22:17Z | 4 | 0 |
tf-keras
|
[
"tf-keras",
"mobilenet",
"image-classification",
"region:us"
] |
image-classification
| 2023-06-20T14:20:57Z |
---
pipeline_tag: image-classification
---
|
moussaKam/mbarthez-dialogue-summarization
|
moussaKam
| 2023-06-20T14:17:56Z | 201 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"summarization",
"bart",
"fr",
"arxiv:2010.12321",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-06-20T13:42:52Z |
---
tags:
- summarization
- bart
language:
- fr
license: apache-2.0
widget:
- text: "#Personne1#: Excusez-moi, savez-vous où se trouve le bureau des visas ?\n#Personne2#: Oui, je le sais. Je vais vous y accompagner.\n#Personne1#: Merci.\n#Personne2#: Postulez-vous pour étudier ou travailler à l'étranger ?\n#Personne1#: Je souhaite étudier à l'étranger.\n#Personne2#: Qu'aimeriez-vous étudier ?\n#Personne1#: J'espère étudier la littérature anglaise.\n#Personne2#: Avez-vous déjà obtenu votre visa ?\n#Personne1#: Pas encore. J'ai un entretien avec un responsable des visas aujourd'hui.\n#Personne2#: Je comprends. Est-ce votre premier entretien ?\n#Personne1#: Non, j'ai déjà passé trois entretiens ici."
- text: "#Personne1#: Bonjour, Monsieur Smith. Je suis le docteur Hawkins. Pourquoi êtes-vous ici aujourd'hui ?\n#Personne2#: Je pensais que ce serait une bonne idée de faire un bilan de santé.\n#Personne1#: Oui, enfin, vous n'en avez pas fait depuis 5 ans. Vous devriez en faire un chaque année.\n#Personne2#: Je sais. Je me dis que tant qu'il n'y a rien de mal, pourquoi aller voir le médecin ?\n#Personne1#: Eh bien, la meilleure façon d'éviter les maladies graves est de les détecter tôt. Essayez donc de venir au moins une fois par an pour votre bien-être.\n#Personne2#: D'accord.\n#Personne1#: Laissez-moi voir. Vos yeux et vos oreilles ont l'air bien. Respirez profondément, s'il vous plaît. Vous fumez, Monsieur Smith ?\n#Personne2#: Oui.\n#Personne1#: Le tabagisme est la principale cause de cancer du poumon et de maladies cardiaques, vous savez. Vous devriez vraiment arrêter.\n#Personne2#: J'ai essayé des centaines de fois, mais je n'arrive pas à me débarrasser de cette habitude.\n#Personne1#: Eh bien, nous avons des cours et des médicaments qui pourraient vous aider. Je vous donnerai plus d'informations avant votre départ.\n#Personne2#: D'accord, merci docteur."
---
### mBarthez model finetuned on dialogue summarization task (abstract generation)
finetuning: examples/seq2seq (as of Feb 08 2021)
paper: https://arxiv.org/abs/2010.12321 \
github: https://github.com/moussaKam/BARThez
```
@article{eddine2020barthez,
title={BARThez: a Skilled Pretrained French Sequence-to-Sequence Model},
author={Eddine, Moussa Kamal and Tixier, Antoine J-P and Vazirgiannis, Michalis},
journal={arXiv preprint arXiv:2010.12321},
year={2020}
}
```
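A short usage sketch (not part of the original card; the input is taken from the first widget example above):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="moussaKam/mbarthez-dialogue-summarization")
dialogue = (
    "#Personne1#: Excusez-moi, savez-vous où se trouve le bureau des visas ?\n"
    "#Personne2#: Oui, je le sais. Je vais vous y accompagner."
)
print(summarizer(dialogue, max_length=64)[0]["summary_text"])
```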
|
tassadar667/ChatGLM6B-Legal
|
tassadar667
| 2023-06-20T14:03:38Z | 3 | 6 |
transformers
|
[
"transformers",
"pytorch",
"chatglm",
"glm",
"custom_code",
"zh",
"endpoints_compatible",
"region:us"
] | null | 2023-05-22T03:43:21Z |
---
language:
- zh
tags:
- glm
- chatglm
---
# ChatGLM-6B-Legal
## Introduction
ChatGLM-6B-Legal is a model fine-tuned from ChatGLM-6B, focused mainly on predicting legal judgments.
## Usage
First download the ChatGLM-6B model, then download model_1 and model_2 from this repository and run the legal-QA Jupyter notebook. You will need to update the model, config, and other paths in the file.
The required environment is the same as for ChatGLM-6B.
|
WALIDALI/bekiksrilym
|
WALIDALI
| 2023-06-20T14:02:49Z | 29 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-20T13:50:28Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### bekiksrilym Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
TheBloke/vicuna-7B-1.1-GPTQ
|
TheBloke
| 2023-06-20T13:53:38Z | 49 | 58 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-04-12T22:27:25Z |
---
license: other
inference: false
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Vicuna 7B 1.1 GPTQ 4bit 128g
This is a 4-bit GPTQ version of the [Vicuna 7B 1.1 model](https://huggingface.co/lmsys/vicuna-7b-delta-v1.1).
It was created by merging the deltas provided in the above repo with the original Llama 7B model, [using the code provided on their Github page](https://github.com/lm-sys/FastChat#vicuna-weights).
It was then quantized to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
## My Vicuna 1.1 model repositories
I have the following Vicuna 1.1 repositories available:
**13B models:**
* [Unquantized 13B 1.1 model for GPU - HF format](https://huggingface.co/TheBloke/vicuna-13B-1.1-HF)
* [GPTQ quantized 4bit 13B 1.1 for GPU - `safetensors` and `pt` formats](https://huggingface.co/TheBloke/vicuna-13B-1.1-GPTQ-4bit-128g)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/vicuna-13B-1.1-GGML)
**7B models:**
* [Unquantized 7B 1.1 model for GPU - HF format](https://huggingface.co/TheBloke/vicuna-7B-1.1-HF)
* [GPTQ quantized 4bit 7B 1.1 for GPU - `safetensors` and `pt` formats](https://huggingface.co/TheBloke/vicuna-7B-1.1-GPTQ-4bit-128g)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/vicuna-7B-1.1-GGML)
## Provided files
Two model files are provided. Ideally use the `safetensors` file. Full details below:
Details of the files provided:
* `vicuna-7B-1.1-GPTQ-4bit-128g.safetensors`
* `safetensors` format, with improved file security, created with the latest [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa) code.
* Command to create:
* `python3 llama.py vicuna-7B-1.1-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors vicuna-7B-1.1-GPTQ-4bit-128g.safetensors`
* `vicuna-7B-1.1-GPTQ-4bit-128g.no-act-order.pt`
* `pt` format file, created without the `--act-order` flag.
* This file may have slightly lower quality, but is included as it can be used without needing to compile the latest GPTQ-for-LLaMa code.
* It should hopefully therefore work with one-click-installers on Windows, which include the older GPTQ-for-LLaMa code.
* Command to create:
    * `python3 llama.py vicuna-7B-1.1-HF c4 --wbits 4 --true-sequential --groupsize 128 --save vicuna-7B-1.1-GPTQ-4bit-128g.no-act-order.pt`
## How to run in `text-generation-webui`
File `vicuna-7B-1.1-GPTQ-4bit-128g.no-act-order.pt` can be loaded the same as any other GPTQ file, without requiring any updates to [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui).
The `safetensors` model file was created with the latest GPTQ code, and may be higher quality, but requires that the latest GPTQ-for-LLaMa is used inside the UI.
Here are the commands I used to clone the Triton branch of GPTQ-for-LLaMa, clone text-generation-webui, and install GPTQ into the UI:
```
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
git clone https://github.com/oobabooga/text-generation-webui
mkdir -p text-generation-webui/repositories
ln -s GPTQ-for-LLaMa text-generation-webui/repositories/GPTQ-for-LLaMa
```
Then install this model into `text-generation-webui/models` and launch the UI as follows:
```
cd text-generation-webui
python server.py --model vicuna-7B-1.1-GPTQ-4bit-128g --wbits 4 --groupsize 128 --model_type Llama # add any other command line args you want
```
The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information.
If you are on Windows, or cannot use the Triton branch of GPTQ for any other reason, you can instead use the CUDA branch:
```
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa -b cuda
cd GPTQ-for-LLaMa
python setup_cuda.py install
```
Then link that into `text-generation-webui/repositories` as described above.
Or just use `vicuna-7B-1.1-GPTQ-4bit-128g.no-act-order.pt` as mentioned above.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Vicuna Model Card
## Model details
**Model type:**
Vicuna is an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
Vicuna was trained between March 2023 and April 2023.
**Organizations developing the model:**
The Vicuna team with members from UC Berkeley, CMU, Stanford, and UC San Diego.
**Paper or resources for more information:**
https://vicuna.lmsys.org/
**License:**
Apache License 2.0
**Where to send questions or comments about the model:**
https://github.com/lm-sys/FastChat/issues
## Intended use
**Primary intended uses:**
The primary use of Vicuna is research on large language models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## Training dataset
70K conversations collected from ShareGPT.com.
## Evaluation dataset
A preliminary evaluation of the model quality is conducted by creating a set of 80 diverse questions and utilizing GPT-4 to judge the model outputs. See https://vicuna.lmsys.org/ for more details.
## Major updates of weights v1.1
- Refactor the tokenization and separator. In Vicuna v1.1, the separator has been changed from `"###"` to the EOS token `"</s>"`. This change makes it easier to determine the generation stop criteria and enables better compatibility with other libraries.
- Fix the supervised fine-tuning loss computation for better model quality.
|
hassansoliman/falcon-7b-qlora-utterance-adaptations_v5
|
hassansoliman
| 2023-06-20T13:45:06Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-20T13:44:16Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
karina-aquino/bertmultilingual-sentiment-model
|
karina-aquino
| 2023-06-20T13:22:35Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-16T19:48:15Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bertmultilingual-sentiment-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertmultilingual-sentiment-model
This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9775
- Accuracy: 0.66
- F1: 0.6726
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 375 | 0.9775 | 0.66 | 0.6726 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
TheBloke/minotaur-15B-GGML
|
TheBloke
| 2023-06-20T13:19:02Z | 0 | 10 |
transformers
|
[
"transformers",
"code",
"text-generation",
"dataset:bigcode/the-stack-dedup",
"dataset:tiiuae/falcon-refinedweb",
"dataset:ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered",
"dataset:QingyiSi/Alpaca-CoT",
"dataset:teknium/GPTeacher-General-Instruct",
"dataset:metaeval/ScienceQA_text_only",
"dataset:hellaswag",
"dataset:openai/summarize_from_feedback",
"dataset:riddle_sense",
"dataset:gsm8k",
"dataset:camel-ai/math",
"dataset:camel-ai/biology",
"dataset:camel-ai/physics",
"dataset:camel-ai/chemistry",
"dataset:winglian/evals",
"arxiv:1911.02150",
"arxiv:2205.14135",
"arxiv:2207.14255",
"arxiv:2305.06161",
"region:us"
] |
text-generation
| 2023-06-18T11:47:47Z |
---
inference: false
pipeline_tag: text-generation
widget:
- text: 'def print_hello_world():'
example_title: Hello world
group: Python
- text: 'Gradient descent is'
example_title: Machine Learning
group: English
license: bigcode-openrail-m
datasets:
- bigcode/the-stack-dedup
- tiiuae/falcon-refinedweb
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- QingyiSi/Alpaca-CoT
- teknium/GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only
- hellaswag
- openai/summarize_from_feedback
- riddle_sense
- gsm8k
- camel-ai/math
- camel-ai/biology
- camel-ai/physics
- camel-ai/chemistry
- winglian/evals
metrics:
- code_eval
- mmlu
- arc
- hellaswag
- truthfulqa
library_name: transformers
tags:
- code
extra_gated_prompt: >-
## Model License Agreement
Please read the BigCode [OpenRAIL-M
license](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)
agreement before accepting it.
extra_gated_fields:
I accept the above license agreement, and will use the Model complying with the set of use restrictions and sharing requirements: checkbox
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# OpenAccess AI Collective's Minotaur 15B GGML
These files are GGML format model files for [OpenAccess AI Collective's Minotaur 15B](https://huggingface.co/openaccess-ai-collective/minotaur-15b).
Please note that these GGMLs are **not compatible with llama.cpp, or currently with text-generation-webui**. Please see below for a list of tools known to work with these model files.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/minotaur-15B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/minotaur-15B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/openaccess-ai-collective/minotaur-15b)
## A note regarding context length: 8K
It is confirmed that the 8K context of this model works in [KoboldCpp](https://github.com/LostRuins/koboldcpp), if you manually set max context to 8K by adjusting the text box above the slider:

(set it to 8192 at most)
It is currently unknown whether it is compatible with other clients.
If you have feedback on this, please let me know.
## Prompt template
```
USER: <prompt>
ASSISTANT:
```
<!-- compatibility_ggml start -->
## Compatibility
These files are **not** compatible with text-generation-webui, llama.cpp, or llama-cpp-python.
Currently they can be used with:
* KoboldCpp, a powerful inference engine based on llama.cpp, with good UI: [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* The ctransformers Python library, which includes LangChain support: [ctransformers](https://github.com/marella/ctransformers)
* The LoLLMS Web UI which uses ctransformers: [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [rustformers' llm](https://github.com/rustformers/llm)
* The example `starcoder` binary provided with [ggml](https://github.com/ggerganov/ggml)
As other options become available I will endeavour to update them here (do let me know in the Community tab if I've missed something!)
## Tutorial for using LoLLMS Web UI
* [Text tutorial, written by **Lucas3DCG**](https://huggingface.co/TheBloke/MPT-7B-Storywriter-GGML/discussions/2#6475d914e9b57ce0caa68888)
* [Video tutorial, by LoLLMS Web UI's author **ParisNeo**](https://www.youtube.com/watch?v=ds_U0TDzbzI)
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| minotaur-15b.ggmlv3.q4_0.bin | q4_0 | 4 | 10.75 GB | 13.25 GB | Original llama.cpp quant method, 4-bit. |
| minotaur-15b.ggmlv3.q4_1.bin | q4_1 | 4 | 11.92 GB | 14.42 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| minotaur-15b.ggmlv3.q5_0.bin | q5_0 | 5 | 13.09 GB | 15.59 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| minotaur-15b.ggmlv3.q5_1.bin | q5_1 | 5 | 14.26 GB | 16.76 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| minotaur-15b.ggmlv3.q8_0.bin | q8_0 | 8 | 20.11 GB | 22.61 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
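As a rough illustration (not from the original card), one of the files above can be loaded with the `ctransformers` library mentioned in the compatibility list; the `model_type` value is an assumption:
```python
# pip install ctransformers
from ctransformers import AutoModelForCausalLM

# model_file picks one of the quantizations from the table above;
# model_type="starcoder" is assumed since Minotaur 15B is built on StarCoder Plus.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/minotaur-15B-GGML",
    model_file="minotaur-15b.ggmlv3.q4_0.bin",
    model_type="starcoder",
)
print(llm("USER: Write one sentence about minotaurs.\nASSISTANT:", max_new_tokens=64))
```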
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: vamX, K, Jonathan Leane, Lone Striker, Sean Connelly, Chris McCloskey, WelcomeToTheClub, Nikolai Manek, John Detwiler, Kalila, David Flickinger, Fen Risland, subjectnull, Johann-Peter Hartmann, Talal Aujan, John Villwock, senxiiz, Khalefa Al-Ahmad, Kevin Schuppel, Alps Aficionado, Derek Yates, Mano Prime, Nathan LeClaire, biorpg, trip7s trip, Asp the Wyvern, chris gileta, Iucharbius , Artur Olbinski, Ai Maven, Joseph William Delisle, Luke Pendergrass, Illia Dulskyi, Eugene Pentland, Ajan Kanaga, Willem Michiel, Space Cruiser, Pyrater, Preetika Verma, Junyu Yang, Oscar Rangel, Spiking Neurons AB, Pierre Kircher, webtim, Cory Kujawski, terasurfer , Trenton Dambrowitz, Gabriel Puliatti, Imad Khwaja, Luke.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: OpenAccess AI Collective's Minotaur 15B
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
**[💵 Donate to OpenAccess AI Collective](https://github.com/sponsors/OpenAccess-AI-Collective) to help us keep building great tools and models!**
# Minotaur 15B 8K
Minotaur 15B is an instruct fine-tuned model on top of Starcoder Plus. Minotaur 15B is fine-tuned **on only completely open datasets** making this model reproducible by anyone.
Minotaur 15B has a context length of 8K tokens, allowing for strong recall at long contexts.
Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/PugNNHAF5r) or email [[email protected]](mailto:[email protected])
# Prompts
Chat only style prompts using `USER:`,`ASSISTANT:`.
<img src="https://huggingface.co/openaccess-ai-collective/minotaur-13b/resolve/main/minotaur.png" alt="minotaur" width="600" height="500"/>
# Training Datasets
Minotaur 15B model is fine-tuned on the following openly available datasets:
- [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered)
- [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/QingyiSi/Alpaca-CoT)
- [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct)
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses
- [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization
- [camel-ai/math](https://huggingface.co/datasets/camel-ai/math)
- [camel-ai/physics](https://huggingface.co/datasets/camel-ai/physics)
- [camel-ai/chemistry](https://huggingface.co/datasets/camel-ai/chemistry)
- [camel-ai/biology](https://huggingface.co/datasets/camel-ai/biology)
- [winglian/evals](https://huggingface.co/datasets/winglian/evals) - instruct augmented datasets
- custom synthetic datasets around misconceptions, in-context qa, jokes, N-tasks problems, and context-insensitivity
- ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the `train` split
- [hellaswag](https://huggingface.co/datasets/hellaswag) - instruct augmented for detailed explanations, 30K+ rows, derived from the `train` split
- [riddle_sense](https://huggingface.co/datasets/riddle_sense) - instruct augmented, derived from the `train` split
- [gsm8k](https://huggingface.co/datasets/gsm8k) - instruct augmented, derived from the `train` split
- prose generation
# Shoutouts
Special thanks to Nanobit for helping with Axolotl and to TheBloke for quantizing these models so they are more accessible to all.
# Demo
HF Demo in Spaces available in the [Community ChatBot Arena](https://huggingface.co/spaces/openaccess-ai-collective/rlhf-arena) under the OAAIC Chatbots tab.
## Release Notes
- https://wandb.ai/wing-lian/minotaur-16b-8k/runs/tshgbl2k
## Build
Minotaur was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 4XA100 80GB
- 1 epochs taking approximately 30 hours
- Trained using QLoRA techniques
## Bias, Risks, and Limitations
Minotaur has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
Minotaur was fine-tuned from the base model StarCoder; please refer to its model card's Limitations section (included below) for relevant information.
## Benchmarks
TBD
## Examples
TBD
# StarCoderPlus
Play with the instruction-tuned StarCoderPlus at [StarChat-Beta](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground).
## Table of Contents
1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [License](#license)
6. [Citation](#citation)
## Model Summary
StarCoderPlus is a fine-tuned version of [StarCoderBase](https://huggingface.co/bigcode/starcoderbase) on 600B tokens from the English web dataset [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
combined with [StarCoderData](https://huggingface.co/datasets/bigcode/starcoderdata) from [The Stack (v1.2)](https://huggingface.co/datasets/bigcode/the-stack) and a Wikipedia dataset.
It's a 15.5B parameter Language Model trained on English and 80+ programming languages. The model uses [Multi Query Attention](https://arxiv.org/abs/1911.02150),
[a context window of 8192 tokens](https://arxiv.org/abs/2205.14135), and was trained using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255) on 1.6 trillion tokens.
- **Repository:** [bigcode/Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Project Website:** [bigcode-project.org](https://www.bigcode-project.org)
- **Point of Contact:** [[email protected]](mailto:[email protected])
- **Languages:** English & 80+ Programming languages
## Use
### Intended use
The model was trained on English and GitHub code. As such it is _not_ an instruction model and commands like "Write a function that computes the square root." do not work well. However, the instruction-tuned version in [StarChat](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground) makes a capable assistant.
**Feel free to share your generations in the Community tab!**
### Generation
```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigcode/starcoderplus"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
### Fill-in-the-middle
Fill-in-the-middle uses special tokens to identify the prefix/middle/suffix part of the input and output:
```python
input_text = "<fim_prefix>def print_hello_world():\n <fim_suffix>\n print('Hello world!')<fim_middle>"
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
### Attribution & Other Requirements
The training code dataset of the model was filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or other specific requirements that must be respected. We provide a [search index](https://huggingface.co/spaces/bigcode/starcoder-search) that lets you search through the pretraining data to identify where generated code came from and apply the proper attribution to your code.
# Limitations
The model has been trained on a mixture of English text from the web and GitHub code. Therefore it might encounter limitations when working with non-English text, and can carry the stereotypes and biases commonly encountered online.
Additionally, the generated code should be used with caution as it may contain errors, inefficiencies, or potential vulnerabilities. For a more comprehensive understanding of the base model's code limitations, please refer to the [StarCoder paper](https://arxiv.org/abs/2305.06161).
# Training
StarCoderPlus is a version of StarCoderBase fine-tuned on 600B English and code tokens; StarCoderBase itself was pre-trained on 1T code tokens. Below are the fine-tuning details:
## Model
- **Architecture:** GPT-2 model with multi-query attention and Fill-in-the-Middle objective
- **Finetuning steps:** 150k
- **Finetuning tokens:** 600B
- **Precision:** bfloat16
## Hardware
- **GPUs:** 512 Tesla A100
- **Training time:** 14 days
## Software
- **Orchestration:** [Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
- **FP16 if applicable:** [apex](https://github.com/NVIDIA/apex)
# License
The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).
|
RajkNakka/bert-finetuned-ner
|
RajkNakka
| 2023-06-20T13:18:24Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-18T00:24:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9346891534391535
- name: Recall
type: recall
value: 0.9513631773813531
- name: F1
type: f1
value: 0.942952460383653
- name: Accuracy
type: accuracy
value: 0.9863866486136458
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0625
- Precision: 0.9347
- Recall: 0.9514
- F1: 0.9430
- Accuracy: 0.9864
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0851 | 1.0 | 1756 | 0.0643 | 0.9149 | 0.9337 | 0.9242 | 0.9827 |
| 0.0354 | 2.0 | 3512 | 0.0593 | 0.9276 | 0.9465 | 0.9369 | 0.9859 |
| 0.0176 | 3.0 | 5268 | 0.0625 | 0.9347 | 0.9514 | 0.9430 | 0.9864 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
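As a quick usage sketch (not part of the auto-generated card):
```python
from transformers import pipeline

ner = pipeline("token-classification", model="RajkNakka/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("Hugging Face is a company based in New York City."))
```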
|
gaioNL/a2c-PandaReachDense-v2
|
gaioNL
| 2023-06-20T13:06:12Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-20T13:03:22Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.62 +/- 0.35
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
platzi/text-classification-mtopdomain-es-alexrods
|
platzi
| 2023-06-20T13:05:01Z | 0 | 0 | null |
[
"text-classification",
"es",
"dataset:mteb/mtop_domain",
"license:mit",
"region:us"
] |
text-classification
| 2023-06-18T05:07:02Z |
---
datasets:
- mteb/mtop_domain
language:
- es
pipeline_tag: text-classification
license: mit
---
Try the model in [spaces](https://huggingface.co/spaces/alexrods/assistant-commands-classification-es)
Text classification model trained on the Spanish subset of the [mteb/mtop_domain](https://huggingface.co/datasets/mteb/mtop_domain) dataset.
It classifies virtual-assistant commands into the following categories:
| label | label_text |
| --- | :----: |
| 0 | messaging |
| 1 | calling |
| 2 | event |
| 3 | timer |
| 4 | music |
| 5 | weather |
| 6 | alarm |
| 7 | people |
| 8 | reminder |
| 9 | recipes |
| 10 | news |
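Assuming the checkpoint is a standard Transformers sequence-classification model (the underlying library is not stated in this card), a minimal sketch:

```python
from transformers import pipeline

# Assumption: the repository hosts a standard sequence-classification checkpoint
classifier = pipeline("text-classification", model="platzi/text-classification-mtopdomain-es-alexrods")

print(classifier("pon una alarma para mañana a las siete"))
```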
|
ManiShankar-AlpesAi/paraphrase-multilingual-mpnet-base-v2-KE_Sieve
|
ManiShankar-AlpesAi
| 2023-06-20T12:56:48Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"mteb",
"text-classification",
"arxiv:1910.09700",
"model-index",
"region:us"
] |
text-classification
| 2023-05-08T08:49:13Z |
---
pipeline_tag: text-classification
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
model-index:
- name: KE Sieve_model
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 79.05970149253731
- type: ap
value: 42.7075359884682
- type: f1
value: 72.99649470402085
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 70.193
- type: ap
value: 64.37171698026376
- type: f1
value: 69.99260638185035
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 34.288000000000004
- type: f1
value: 34.00390576721439
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 70.37283775714532
- type: cos_sim_spearman
value: 65.28702977793742
- type: euclidean_pearson
value: 68.81678452970543
- type: euclidean_spearman
value: 66.10212250382912
- type: manhattan_pearson
value: 70.06439132928513
- type: manhattan_spearman
value: 66.10212250382912
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 75.88961038961038
- type: f1
value: 75.71295362599926
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 40.26
- type: f1
value: 35.91571484611428
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 61.1396
- type: ap
value: 57.0336104684589
- type: f1
value: 60.711055351249385
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 87.21842225262198
- type: f1
value: 86.60570158294514
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 69.44824441404468
- type: f1
value: 51.1702284173121
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.60188298587761
- type: f1
value: 64.57658770410065
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.36987222595829
- type: f1
value: 70.34853403058946
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 78.1402991982508
- type: cos_sim_spearman
value: 76.01438891892613
- type: euclidean_pearson
value: 76.07791972310307
- type: euclidean_spearman
value: 76.4750927224088
- type: manhattan_pearson
value: 78.7022742184064
- type: manhattan_spearman
value: 76.4750927224088
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 77.41946856528065
- type: cos_sim_spearman
value: 71.2452368975646
- type: euclidean_pearson
value: 68.76119955717198
- type: euclidean_spearman
value: 70.40762520824568
- type: manhattan_pearson
value: 76.1638570991111
- type: manhattan_spearman
value: 70.40762520824568
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 77.86983630535461
- type: cos_sim_spearman
value: 78.39885607110992
- type: euclidean_pearson
value: 75.81565277674996
- type: euclidean_spearman
value: 78.70053430302474
- type: manhattan_pearson
value: 78.14484348028292
- type: manhattan_spearman
value: 78.70053430302474
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 76.52542250553228
- type: cos_sim_spearman
value: 74.23425444398934
- type: euclidean_pearson
value: 73.63790688920109
- type: euclidean_spearman
value: 74.14127580980806
- type: manhattan_pearson
value: 76.76724842158396
- type: manhattan_spearman
value: 74.14127580980806
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 80.9319282262523
- type: cos_sim_spearman
value: 81.40861373830771
- type: euclidean_pearson
value: 79.61339072348075
- type: euclidean_spearman
value: 82.1601716091385
- type: manhattan_pearson
value: 81.76770515821788
- type: manhattan_spearman
value: 82.1601716091385
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 78.83953330477087
- type: cos_sim_spearman
value: 79.1312883671738
- type: euclidean_pearson
value: 77.02068269010785
- type: euclidean_spearman
value: 78.85332564873545
- type: manhattan_pearson
value: 78.66151014252961
- type: manhattan_spearman
value: 78.85332564873545
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (ko-ko)
config: ko-ko
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 77.06164373590121
- type: cos_sim_spearman
value: 76.99890844656588
- type: euclidean_pearson
value: 73.39118839457844
- type: euclidean_spearman
value: 77.11144988540109
- type: manhattan_pearson
value: 77.20681515013695
- type: manhattan_spearman
value: 77.11144988540109
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (ar-ar)
config: ar-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 77.60555084043324
- type: cos_sim_spearman
value: 76.04445852887906
- type: euclidean_pearson
value: 72.71133101639413
- type: euclidean_spearman
value: 75.91338695530828
- type: manhattan_pearson
value: 77.35612564470868
- type: manhattan_spearman
value: 75.91338695530828
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-ar)
config: en-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 78.41618617815928
- type: cos_sim_spearman
value: 77.60195378076503
- type: euclidean_pearson
value: 78.16168735305624
- type: euclidean_spearman
value: 77.67819163961478
- type: manhattan_pearson
value: 78.40140865643386
- type: manhattan_spearman
value: 77.67819163961478
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-de)
config: en-de
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 71.44561691901534
- type: cos_sim_spearman
value: 70.39834592402187
- type: euclidean_pearson
value: 71.5559771884868
- type: euclidean_spearman
value: 70.11301222833383
- type: manhattan_pearson
value: 71.51922693185502
- type: manhattan_spearman
value: 70.11301222833383
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 86.7214978664316
- type: cos_sim_spearman
value: 85.4010906321244
- type: euclidean_pearson
value: 84.6346870837772
- type: euclidean_spearman
value: 85.72569452807713
- type: manhattan_pearson
value: 86.96159961830801
- type: manhattan_spearman
value: 85.72569452807713
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-tr)
config: en-tr
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 72.09730265741813
- type: cos_sim_spearman
value: 71.0352138913937
- type: euclidean_pearson
value: 72.55713973075069
- type: euclidean_spearman
value: 71.41534122613018
- type: manhattan_pearson
value: 72.55966082460004
- type: manhattan_spearman
value: 71.41534122613018
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (es-en)
config: es-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 82.03153344804768
- type: cos_sim_spearman
value: 81.58711344537957
- type: euclidean_pearson
value: 81.23021018553894
- type: euclidean_spearman
value: 81.92757732356259
- type: manhattan_pearson
value: 82.15831176471193
- type: manhattan_spearman
value: 81.92757732356259
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (es-es)
config: es-es
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 83.82880794136425
- type: cos_sim_spearman
value: 82.77088436337785
- type: euclidean_pearson
value: 81.25832734044387
- type: euclidean_spearman
value: 83.62944563056716
- type: manhattan_pearson
value: 84.53726605538859
- type: manhattan_spearman
value: 83.62944563056716
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (fr-en)
config: fr-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 78.4156098242599
- type: cos_sim_spearman
value: 77.15842055051796
- type: euclidean_pearson
value: 78.9792127917851
- type: euclidean_spearman
value: 78.09974898801255
- type: manhattan_pearson
value: 79.0925556678293
- type: manhattan_spearman
value: 78.09974898801255
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (it-en)
config: it-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 82.90712716373704
- type: cos_sim_spearman
value: 81.519207224176
- type: euclidean_pearson
value: 82.74512409664257
- type: euclidean_spearman
value: 81.99923052819682
- type: manhattan_pearson
value: 83.32430067509108
- type: manhattan_spearman
value: 81.99923052819682
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (nl-en)
config: nl-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 81.93681389517745
- type: cos_sim_spearman
value: 80.70090384624984
- type: euclidean_pearson
value: 82.04806027549073
- type: euclidean_spearman
value: 81.45677948183294
- type: manhattan_pearson
value: 82.62825908719917
- type: manhattan_spearman
value: 81.45677948183294
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 57.8307489054962
- type: cos_sim_spearman
value: 58.62505961044144
- type: euclidean_pearson
value: 55.77564028818849
- type: euclidean_spearman
value: 58.03263946623424
- type: manhattan_pearson
value: 57.934500833835756
- type: manhattan_spearman
value: 58.03263946623424
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (de)
config: de
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 34.274519281072244
- type: cos_sim_spearman
value: 41.84134494905925
- type: euclidean_pearson
value: 24.113418166636
- type: euclidean_spearman
value: 42.55202188864813
- type: manhattan_pearson
value: 34.64265468569397
- type: manhattan_spearman
value: 42.55202188864813
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (es)
config: es
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 55.477886702880916
- type: cos_sim_spearman
value: 57.226736875881365
- type: euclidean_pearson
value: 51.58883207688278
- type: euclidean_spearman
value: 57.86581420207087
- type: manhattan_pearson
value: 55.6341174643668
- type: manhattan_spearman
value: 57.86581420207087
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (pl)
config: pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 20.336503083893273
- type: cos_sim_spearman
value: 36.367365959741676
- type: euclidean_pearson
value: 3.9896117703332306
- type: euclidean_spearman
value: 35.58006670036499
- type: manhattan_pearson
value: 19.472741193199475
- type: manhattan_spearman
value: 35.58006670036499
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (tr)
config: tr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 52.55051438010185
- type: cos_sim_spearman
value: 52.71302742082575
- type: euclidean_pearson
value: 51.51870956964007
- type: euclidean_spearman
value: 53.81785040820307
- type: manhattan_pearson
value: 52.83864930315768
- type: manhattan_spearman
value: 53.81785040820307
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (ar)
config: ar
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 50.058410116717056
- type: cos_sim_spearman
value: 52.60613795295755
- type: euclidean_pearson
value: 44.34171068199546
- type: euclidean_spearman
value: 50.972497500185995
- type: manhattan_pearson
value: 48.47153098268435
- type: manhattan_spearman
value: 50.972497500185995
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (ru)
config: ru
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 48.18132407899186
- type: cos_sim_spearman
value: 53.35340508300852
- type: euclidean_pearson
value: 39.82149695080574
- type: euclidean_spearman
value: 52.682446757364744
- type: manhattan_pearson
value: 47.28762038747965
- type: manhattan_spearman
value: 52.682446757364744
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (zh)
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 56.658087211796015
- type: cos_sim_spearman
value: 60.00152778866955
- type: euclidean_pearson
value: 49.64087381385087
- type: euclidean_spearman
value: 60.15322267559951
- type: manhattan_pearson
value: 56.343272070378504
- type: manhattan_spearman
value: 60.15322267559951
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (fr)
config: fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 70.45337327084312
- type: cos_sim_spearman
value: 72.79410290057697
- type: euclidean_pearson
value: 65.79888764581077
- type: euclidean_spearman
value: 71.95723099514818
- type: manhattan_pearson
value: 69.39143945386915
- type: manhattan_spearman
value: 71.95723099514818
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (de-en)
config: de-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 54.250555833893486
- type: cos_sim_spearman
value: 49.08853609665319
- type: euclidean_pearson
value: 56.41903104763859
- type: euclidean_spearman
value: 48.5360965015595
- type: manhattan_pearson
value: 55.42445266426144
- type: manhattan_spearman
value: 48.5360965015595
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (es-en)
config: es-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 66.77771892182398
- type: cos_sim_spearman
value: 67.29191603287435
- type: euclidean_pearson
value: 67.17511110245552
- type: euclidean_spearman
value: 68.48737613290533
- type: manhattan_pearson
value: 67.84988405103397
- type: manhattan_spearman
value: 68.48737613290533
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (it)
config: it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 62.28155325846798
- type: cos_sim_spearman
value: 64.16669097648895
- type: euclidean_pearson
value: 59.403028984978356
- type: euclidean_spearman
value: 64.53234398252941
- type: manhattan_pearson
value: 62.71911466592815
- type: manhattan_spearman
value: 64.53234398252941
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (pl-en)
config: pl-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 66.52507293566482
- type: cos_sim_spearman
value: 67.7160213688307
- type: euclidean_pearson
value: 67.20401581128685
- type: euclidean_spearman
value: 73.5516139257937
- type: manhattan_pearson
value: 69.31380011990255
- type: manhattan_spearman
value: 73.5516139257937
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (zh-en)
config: zh-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 66.00687646805075
- type: cos_sim_spearman
value: 64.45259281540577
- type: euclidean_pearson
value: 67.27796918266225
- type: euclidean_spearman
value: 63.85338920706559
- type: manhattan_pearson
value: 67.1156006669401
- type: manhattan_spearman
value: 63.85338920706559
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (es-it)
config: es-it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 58.377177955731966
- type: cos_sim_spearman
value: 57.93025327632129
- type: euclidean_pearson
value: 59.93402849184793
- type: euclidean_spearman
value: 60.01820523185587
- type: manhattan_pearson
value: 60.315338046091725
- type: manhattan_spearman
value: 60.01820523185587
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (de-fr)
config: de-fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 53.82667440921093
- type: cos_sim_spearman
value: 50.5954961502418
- type: euclidean_pearson
value: 55.73092376619234
- type: euclidean_spearman
value: 55.313175399483484
- type: manhattan_pearson
value: 56.81790111656754
- type: manhattan_spearman
value: 55.313175399483484
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (de-pl)
config: de-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 37.23788982242752
- type: cos_sim_spearman
value: 50.44074153238998
- type: euclidean_pearson
value: 41.25620114235842
- type: euclidean_spearman
value: 50.817224893459255
- type: manhattan_pearson
value: 40.20839143792603
- type: manhattan_spearman
value: 50.817224893459255
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (fr-pl)
config: fr-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 58.03829696246709
- type: cos_sim_spearman
value: 73.24670207647144
- type: euclidean_pearson
value: 55.854312917676864
- type: euclidean_spearman
value: 73.24670207647144
- type: manhattan_pearson
value: 58.529125221260614
- type: manhattan_spearman
value: 73.24670207647144
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 82.10559795910007
- type: cos_sim_spearman
value: 81.33502456405203
- type: euclidean_pearson
value: 80.71725031531976
- type: euclidean_spearman
value: 81.48140012027567
- type: manhattan_pearson
value: 82.33088191846421
- type: manhattan_spearman
value: 81.48140012027567
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.47227722772277
- type: cos_sim_ap
value: 77.36042895972905
- type: cos_sim_f1
value: 72.23880597014924
- type: cos_sim_precision
value: 71.88118811881188
- type: cos_sim_recall
value: 72.6
- type: dot_accuracy
value: 99.409900990099
- type: dot_ap
value: 68.42835773716114
- type: dot_f1
value: 65.83783783783784
- type: dot_precision
value: 71.6470588235294
- type: dot_recall
value: 60.9
- type: euclidean_accuracy
value: 99.48019801980197
- type: euclidean_ap
value: 76.69004973047716
- type: euclidean_f1
value: 72.51638930912759
- type: euclidean_precision
value: 73.14343845371313
- type: euclidean_recall
value: 71.89999999999999
- type: manhattan_accuracy
value: 99.48019801980197
- type: manhattan_ap
value: 76.69004973047716
- type: manhattan_f1
value: 72.51638930912759
- type: manhattan_precision
value: 73.14343845371313
- type: manhattan_recall
value: 71.89999999999999
- type: max_accuracy
value: 99.48019801980197
- type: max_ap
value: 77.36042895972905
- type: max_f1
value: 72.51638930912759
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.2614
- type: ap
value: 13.421228681716107
- type: f1
value: 53.71534671651974
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 54.48783248443689
- type: f1
value: 54.7405015752634
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 83.22703701496096
- type: cos_sim_ap
value: 63.58031791834936
- type: cos_sim_f1
value: 59.3132854578097
- type: cos_sim_precision
value: 51.60093713393206
- type: cos_sim_recall
value: 69.73614775725594
- type: dot_accuracy
value: 81.96936281814389
- type: dot_ap
value: 59.07547966241098
- type: dot_f1
value: 56.032535020334386
- type: dot_precision
value: 48.99249308573686
- type: dot_recall
value: 65.4353562005277
- type: euclidean_accuracy
value: 83.26280026226381
- type: euclidean_ap
value: 63.64817520735364
- type: euclidean_f1
value: 59.91221653255303
- type: euclidean_precision
value: 55.68902991840435
- type: euclidean_recall
value: 64.82849604221636
- type: manhattan_accuracy
value: 83.26280026226381
- type: manhattan_ap
value: 63.64817520735364
- type: manhattan_f1
value: 59.91221653255303
- type: manhattan_precision
value: 55.68902991840435
- type: manhattan_recall
value: 64.82849604221636
- type: max_accuracy
value: 83.26280026226381
- type: max_ap
value: 63.64817520735364
- type: max_f1
value: 59.91221653255303
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 87.49563395040167
- type: cos_sim_ap
value: 82.6398035947217
- type: cos_sim_f1
value: 74.74134990715125
- type: cos_sim_precision
value: 73.59504440629898
- type: cos_sim_recall
value: 75.92392978133662
- type: dot_accuracy
value: 85.70264291535685
- type: dot_ap
value: 76.35175453791561
- type: dot_f1
value: 70.42039872869113
- type: dot_precision
value: 66.31972789115646
- type: dot_recall
value: 75.06159531875576
- type: euclidean_accuracy
value: 87.51503861528312
- type: euclidean_ap
value: 82.74416973508781
- type: euclidean_f1
value: 75.0812647754137
- type: euclidean_precision
value: 72.15989775631922
- type: euclidean_recall
value: 78.2491530643671
- type: manhattan_accuracy
value: 87.51503861528312
- type: manhattan_ap
value: 82.74416973508781
- type: manhattan_f1
value: 75.0812647754137
- type: manhattan_precision
value: 72.15989775631922
- type: manhattan_recall
value: 78.2491530643671
- type: max_accuracy
value: 87.51503861528312
- type: max_ap
value: 82.74416973508781
- type: max_f1
value: 75.0812647754137
---
# paraphrase-multilingual-mpnet-base-v2-KE_Sieve
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
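In the absence of official instructions, a minimal sketch assuming the checkpoint loads with the sentence-transformers library (as the repository tags suggest):

```python
from sentence_transformers import SentenceTransformer, util

# Assumption: the repository contains a full sentence-transformers checkpoint
model = SentenceTransformer("ManiShankar-AlpesAi/paraphrase-multilingual-mpnet-base-v2-KE_Sieve")

embeddings = model.encode(["A man is eating food.", "A man is eating a piece of bread."])
print(util.cos_sim(embeddings[0], embeddings[1]))
```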
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SHENMU007/neunit_BASE_V9.2
|
SHENMU007
| 2023-06-20T12:46:46Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-06-14T08:24:21Z |
---
language:
- zh
license: mit
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
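A hedged inference sketch; the speaker-embedding source and output handling follow the standard SpeechT5 recipe and are assumptions, not details from this card:

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

repo_id = "SHENMU007/neunit_BASE_V9.2"  # this repository
processor = SpeechT5Processor.from_pretrained(repo_id)
model = SpeechT5ForTextToSpeech.from_pretrained(repo_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Assumption: any x-vector speaker embedding works; this one comes from CMU ARCTIC
speaker = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")[7306]["xvector"]
speaker_embeddings = torch.tensor(speaker).unsqueeze(0)

inputs = processor(text="你好,这是一个测试。", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```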
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Yandexxxx/DrawListner
|
Yandexxxx
| 2023-06-20T12:44:52Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2023-06-08T13:32:31Z |
---
library_name: keras
---
# A model for digit recognition, trained on the MNIST dataset

|
hungngo04/cluster_to_text_t5_base
|
hungngo04
| 2023-06-20T12:39:16Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-20T10:04:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: cluster_to_text_t5_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cluster_to_text_t5_base
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4653
- Bleu: 66.2251
- Gen Len: 9.2028
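The expected input format is not documented in this card; a generic text2text-generation sketch with a placeholder input might look like this:

```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="hungngo04/cluster_to_text_t5_base")

# Placeholder input: the actual cluster-token format used during training is not documented here
print(generator("c12 c45 c3 c78", max_new_tokens=32))
```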
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.542 | 1.0 | 3964 | 1.1583 | 26.2658 | 9.3137 |
| 1.0563 | 2.0 | 7928 | 0.7809 | 46.4766 | 9.1855 |
| 0.8165 | 3.0 | 11892 | 0.6044 | 57.0741 | 9.1809 |
| 0.7165 | 4.0 | 15856 | 0.5197 | 62.3771 | 9.2197 |
| 0.6568 | 5.0 | 19820 | 0.4784 | 65.331 | 9.1905 |
| 0.6231 | 6.0 | 23784 | 0.4653 | 66.2251 | 9.2028 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
lololll23/my_awesome_eli5_clm-model
|
lololll23
| 2023-06-20T12:38:45Z | 217 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-20T11:02:32Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7717
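A minimal generation sketch with the Transformers pipeline:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="lololll23/my_awesome_eli5_clm-model")
print(generator("Somatic hypermutation allows the immune system to", max_new_tokens=40, do_sample=True))
```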
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8659 | 1.0 | 1114 | 3.7894 |
| 3.7705 | 2.0 | 2228 | 3.7735 |
| 3.7312 | 3.0 | 3342 | 3.7717 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Kanbara/doll-likeness-series
|
Kanbara
| 2023-06-20T12:31:45Z | 0 | 388 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-28T13:49:45Z |
---
license: creativeml-openrail-m
---
# Doll Series by Kbr
The 'Doll-Series' is a set of LORA focused on realistic Asian faces, with incredible levels of beauty and aesthetics.
My Pixiv: https://www.pixiv.net/en/users/92373922
My Twitter: https://twitter.com/KbrLoras
---
# Table of Contents
- [License](#license)
- [Disclaimer](#disclaimer)
- [Used Models](#used-models)
- [LORA Detail](#lora-detail)
- [KoreanDollLikeness](#koreandolllikeness)
- [KoreanDollLikeness_v10](#koreandolllikeness_v10)
- [KoreanDollLikeness_v15](#koreandolllikeness_v15)
- [KoreanDollLikeness_v20](#koreandolllikeness_v20)
- [JapaneseDollLikeness](#japanesedolllikeness)
- [JapaneseDollLikeness_v10](#japanesedolllikeness_v10)
- [JapaneseDollLikeness_v15](#japanesedolllikeness_v15)
- [TaiwanDollLikeness](#taiwandolllikeness)
- [TaiwanDollLikeness_v15](#taiwandolllikeness_v15)
- [TaiwanDollLikeness_v20](#taiwandolllikeness_v20)
- [ChinaDollLikeness](#chinadolllikeness)
- [ChinaDollLikeness_v10](#chinadolllikeness_v10)
- [ThaiDollLikeness](#thaidolllikeness)
- [ThaiDollLikeness_v10](#Thaidolllikeness_v10)
- [FAQ](#faq)
---
# License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
(Full license here: https://huggingface.co/spaces/CompVis/stable-diffusion-license)
# Additional Add-ons to license/notes
1. You shall take full responsibility for any creative work that uses this model
2. Refrain from using this model for malicious intent, harm, defamation, scams or political purposes. Such use may impair and discourage the author from producing more works.
# Disclaimer
- Creation of SFW and NSFW images is the user's decision; the user has complete control over whether or not to generate NSFW content.
---
# Used Models
* Chilled_re_generic_v2
- https://github.com/wibus-wee/stable_diffusion_chilloutmix_ipynb
* chilloutmix_cilloutmixNi
- https://civitai.com/models/6424/chilloutmix
# Recommended settings:
- Make sure you are aware on the usage instructions of LORA
- VAE: vae-ft-mse-840000-ema-pruned (For realistic models)
- Sampler: DPM++ SDE Karras (Recommended for best quality, you may try other samplers)
- Steps: 20 to 35
- Clipskip: 1 or 2
- Upscaler : Latent (bicubic antialiased)
- CFG Scale : 5 to 9
- LORA weight for txt2img: anywhere between 0.2 and 0.7 is recommended
- Denoise strength for img2img: 0.4 to 0.7
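For use outside a webui, a hedged diffusers sketch; the base-model path, the LORA filename, and whether your diffusers version can load this LORA format are all assumptions:

```python
import torch
from diffusers import DPMSolverSDEScheduler, StableDiffusionPipeline

# Assumption: local copies of a Chilloutmix-style base model and of one of the LORA files above
pipe = StableDiffusionPipeline.from_pretrained("path/to/chilloutmix", torch_dtype=torch.float16).to("cuda")
pipe.scheduler = DPMSolverSDEScheduler.from_config(pipe.scheduler.config, use_karras_sigmas=True)  # DPM++ SDE Karras
pipe.load_lora_weights("path/to/koreanDollLikeness_v20.safetensors")

image = pipe(
    "photo of a woman, detailed face, soft lighting",
    negative_prompt="lowres, bad anatomy",
    num_inference_steps=28,
    guidance_scale=7,
    cross_attention_kwargs={"scale": 0.6},  # LORA weight in the recommended 0.2-0.7 range
).images[0]
image.save("sample.png")
```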
---
# LORA Detail
---
## KoreanDollLikeness
The first version, which is widely used by many authors/AI artists/creators
### KoreanDollLikeness_v10
- KoreanDollLikeness_v10:
<img src="https://files.catbox.moe/r61ozj.png" width="" height="">
### KoreanDollLikeness_v15
- KoreanDollLikeness_v15:
<img src="https://files.catbox.moe/pgcfhc.png" widht="" height="">
### KoreanDollLikeness_v20
- KoreanDollLikeness_v20:
<img src="https://files.catbox.moe/thehrt.png" widht="" height="">
---
## JapaneseDollLikeness
The Japanese variant version; it may see further updates in the future
### JapaneseDollLikeness_v10
- JapaneseDollLikeness_v10:
<img src="https://files.catbox.moe/cfypot.png" width="" height="">
### JapaneseDollLikeness_v15
- JapaneseDollLikeness_v15:
<img src="https://files.catbox.moe/doa0n2.png" width="" height="">
---
## TaiwanDollLikeness
The Taiwan variant version. I have decided to discontinue v10; it is still out there somewhere on the internet, so you may still find it.
### TaiwanDollLikeness_v15
- TaiwanDollLikeness_v15:
<img src="https://files.catbox.moe/5vr2z4.png" width="" height="">
### TaiwanDollLikeness_v20
This version is a huge overhaul and remake: instead of building upon v10 or v15, I took a small amount of samples from v10 and introduced a new pool of training images.
- TaiwanDollLikeness_v20:
<img src="https://files.catbox.moe/f8c9mb.png" width="" height="">
---
## ChinaDollLikeness
The China variant version. It took a while despite the requests; I will probably make more versions of it in the future.
### ChinaDollLikeness_v10
<img src="https://files.catbox.moe/zpj9ov.png" width="" height="">
---
## ThaiDollLikeness
The Thai variant version. It took me a long time to make; many versions were made, but this is the one I've deemed the best of them all. It might be updated in the future.
### ThaiDollLikeness_v10
<img src="https://files.catbox.moe/imtxsm.png" width="" height="">
# FAQ
- # Q: Why can't I produce the same pictures as you?
- A: Sorry, I don't share my prompts. You may check the recommended settings, or ask me for advice on Twitter or Pixiv.
- # Q: What is the difference of each version upgrade?
- A: A version upgrade does not mean it will fix hands or legs; the difference is mainly in the face of the LORA, and newer versions have a wider range of faces.
- # Q: Will you release all your other LORAs?
- A: Yes, maybe, but I would like to keep certain LORAs exclusive to fans or supporters in the future through fanbox/ko-fi.
- # Q: Do you take requests or commissions on making custom LORAs?
- A: I might do community voting for requests. If you are interested in a custom/exclusive LORA, you may contact me through Pixiv or Twitter for discussion.
---
|
AlgorithmicResearchGroup/arxiv-t5-small-GenQ
|
AlgorithmicResearchGroup
| 2023-06-20T12:17:52Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"t5",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-19T03:51:07Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Arxiv-t5-small-GenQ
Arxiv-t5-small-GenQ is trained on [ArtifactAI/arxiv-beir-500k-generated-queries](https://huggingface.co/datasets/ArtifactAI/arxiv-beir-500k-generated-queries), a large corpus of 500k question/abstract pairs extracted from the ArXiv dataset. It is designed to encode and transform sentences from academic papers, allowing for effective semantic similarity and information retrieval tasks. It maps sentences & paragraphs to a 512-dimensional dense vector space.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ArtifactAI/arxiv-t5-small-GenQ')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ArtifactAI/arxiv-t5-small-GenQ')
model = AutoModel.from_pretrained('ArtifactAI/arxiv-t5-small-GenQ')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ArtifactAI/arxiv-t5-small-GenQ)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 23128 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 2312,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: T5EncoderModel
(1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
gokuls/add-bert-Massive-intent_24
|
gokuls
| 2023-06-20T12:07:57Z | 48 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:massive",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-20T11:59:03Z |
---
tags:
- generated_from_trainer
datasets:
- massive
metrics:
- accuracy
model-index:
- name: add-bert-Massive-intent_24
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: massive
type: massive
config: en-US
split: validation
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.8548942449581899
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# add-bert-Massive-intent_24
This model is a fine-tuned version of [gokuls/add_bert_12_layer_model_complete_training_new](https://huggingface.co/gokuls/add_bert_12_layer_model_complete_training_new) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9139
- Accuracy: 0.8549
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7362 | 1.0 | 180 | 0.9126 | 0.7590 |
| 0.7607 | 2.0 | 360 | 0.7920 | 0.7870 |
| 0.5345 | 3.0 | 540 | 0.7643 | 0.8062 |
| 0.3975 | 4.0 | 720 | 0.7447 | 0.8131 |
| 0.2823 | 5.0 | 900 | 0.7352 | 0.8269 |
| 0.214 | 6.0 | 1080 | 0.7413 | 0.8308 |
| 0.1642 | 7.0 | 1260 | 0.7857 | 0.8357 |
| 0.1215 | 8.0 | 1440 | 0.8389 | 0.8337 |
| 0.0896 | 9.0 | 1620 | 0.8059 | 0.8515 |
| 0.0636 | 10.0 | 1800 | 0.8186 | 0.8519 |
| 0.0439 | 11.0 | 1980 | 0.8643 | 0.8510 |
| 0.0297 | 12.0 | 2160 | 0.8882 | 0.8485 |
| 0.0168 | 13.0 | 2340 | 0.9139 | 0.8549 |
| 0.0094 | 14.0 | 2520 | 0.9200 | 0.8529 |
| 0.0048 | 15.0 | 2700 | 0.9243 | 0.8544 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
YunxinLi/ModCR_checkpoints
|
YunxinLi
| 2023-06-20T12:03:13Z | 0 | 0 | null |
[
"arxiv:2305.04530",
"license:apache-2.0",
"region:us"
] | null | 2023-06-06T07:53:53Z |
---
license: apache-2.0
---
This repository contains the cold-start checkpoints of ModCR on PMR and VCR.
It also includes the pretrained phrase-level text-language aligner.
A more detailed description of ModCR is presented in the paper: https://arxiv.org/pdf/2305.04530.pdf
|
Halcyonindo/chikachillora
|
Halcyonindo
| 2023-06-20T12:03:01Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-20T12:01:58Z |
---
license: creativeml-openrail-m
---
|
sxandie/san_BERT1-newData1
|
sxandie
| 2023-06-20T12:02:36Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-20T11:47:16Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: sxandie/san_BERT1-newData1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# sxandie/san_BERT1-newData1
This model is a fine-tuned version of [deepset/gbert-base](https://huggingface.co/deepset/gbert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2672
- Validation Loss: 0.5384
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2960, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.7679 | 0.6127 | 0 |
| 0.4946 | 0.5462 | 1 |
| 0.3845 | 0.5452 | 2 |
| 0.3137 | 0.5357 | 3 |
| 0.2672 | 0.5384 | 4 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.2.2
- Tokenizers 0.13.3
|
Malaika/a2c-AntBulletEnv-v0
|
Malaika
| 2023-06-20T12:01:24Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-20T12:00:08Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 2072.37 +/- 53.66
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repository is an assumption):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (the filename is assumed, not documented here)
checkpoint = load_from_hub("Malaika/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
gokuls/bert-base-Massive-intent_48
|
gokuls
| 2023-06-20T11:54:36Z | 132 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:massive",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-20T11:48:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- massive
metrics:
- accuracy
model-index:
- name: bert-base-Massive-intent_48
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: massive
type: massive
config: en-US
split: validation
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.8622725036891293
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-Massive-intent_48
This model is a fine-tuned version of [gokuls/bert_base_48](https://huggingface.co/gokuls/bert_base_48) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6203
- Accuracy: 0.8623
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6304 | 1.0 | 180 | 0.8747 | 0.7821 |
| 0.6654 | 2.0 | 360 | 0.6418 | 0.8347 |
| 0.4063 | 3.0 | 540 | 0.5890 | 0.8529 |
| 0.2592 | 4.0 | 720 | 0.6132 | 0.8446 |
| 0.1832 | 5.0 | 900 | 0.6417 | 0.8519 |
| 0.1357 | 6.0 | 1080 | 0.6203 | 0.8623 |
| 0.0969 | 7.0 | 1260 | 0.6742 | 0.8534 |
| 0.0735 | 8.0 | 1440 | 0.7212 | 0.8436 |
| 0.0532 | 9.0 | 1620 | 0.7192 | 0.8529 |
| 0.0378 | 10.0 | 1800 | 0.7625 | 0.8564 |
| 0.0298 | 11.0 | 1980 | 0.7275 | 0.8588 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
parkyunmin/beatles_model
|
parkyunmin
| 2023-06-20T11:41:43Z | 144 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-20T11:35:58Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: beatles_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beatles_model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6944
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 268 | 2.8108 |
| 3.0136 | 2.0 | 536 | 2.7216 |
| 3.0136 | 3.0 | 804 | 2.6944 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
saad3631/Lane_Detection
|
saad3631
| 2023-06-20T11:40:36Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2023-06-20T07:40:03Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sanchit-gandhi/distilhubert-finetuned-gtzan-5-epochs
|
sanchit-gandhi
| 2023-06-20T11:36:47Z | 170 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-06-20T11:09:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.75
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8068
- Accuracy: 0.75
## Model description
More information needed
## Intended uses & limitations
More information needed
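A minimal usage sketch (an assumption, not from the original author): it presumes the checkpoint works with the `audio-classification` pipeline; the `.wav` path is a placeholder for a local clip.
```python
# Hedged example; replace the placeholder path with a real audio file.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="sanchit-gandhi/distilhubert-finetuned-gtzan-5-epochs",
)
print(classifier("path/to/clip.wav"))
```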
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5613 | 1.0 | 113 | 1.7108 | 0.52 |
| 1.1928 | 2.0 | 226 | 1.2290 | 0.67 |
| 1.0137 | 3.0 | 339 | 0.9546 | 0.7 |
| 0.7152 | 4.0 | 452 | 0.8872 | 0.76 |
| 0.6655 | 5.0 | 565 | 0.8068 | 0.75 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.1.0.dev20230607+cu121
- Datasets 2.13.1.dev0
- Tokenizers 0.13.3
|
arunptp/dqn-SpaceInvadersNoFrameskip-v4
|
arunptp
| 2023-06-20T11:33:02Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-20T11:32:28Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 461.50 +/- 168.18
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga arunptp -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga arunptp -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga arunptp
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
gilang21/Sarahh
|
gilang21
| 2023-06-20T11:30:21Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-20T11:24:02Z |
---
license: creativeml-openrail-m
---
|
AravindVadlapudi02/swiftformer-s-finetuned-swift2
|
AravindVadlapudi02
| 2023-06-20T11:22:17Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-20T11:22:15Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
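A hedged loading sketch (an assumption, not from the original author): it presumes this repository holds a PEFT adapter whose config records its base model, and that the base model loads with `AutoModel`.
```python
# Hedged sketch; the exact base-model class may differ in practice.
from peft import PeftConfig, PeftModel
from transformers import AutoModel

repo_id = "AravindVadlapudi02/swiftformer-s-finetuned-swift2"
config = PeftConfig.from_pretrained(repo_id)
base_model = AutoModel.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base_model, repo_id)
```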
|
LOGQS/Reinforce-CartPole-v1
|
LOGQS
| 2023-06-20T11:21:58Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-20T10:39:04Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
NbAiLab/nb-bert-base-sami-relevant
|
NbAiLab
| 2023-06-20T11:20:51Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"sami relevant",
"se",
"no",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
pipeline_tag: text-classification
widget:
- text: >-
Riddu Riđđu Festivála lea jahkásaš musihkka- ja -kulturfestivála mii lágiduvvo Gáivuonas Davvi-Romssas.
- text: >-
The Sámi languages form a branch of the Uralic language family. According to
the traditional view, Sámi is within the Uralic family most closely related
to the Finnic languages (Sammallahti 1998).
- text: >-
Joseph Robinette Biden Jr. is an American politician who is the 46th and current president of the United States.
language:
- se
- 'no'
- en
metrics:
- matthews_correlation
library_name: transformers
tags:
- sami relevant
---
# NB-BERT-base Sámi Relevant
This is a model capable of predicting when a chunk of text could potentially be of interest to the Sámi Bibliographers at the National Library of Norway.
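A minimal usage sketch (an assumption, not from the original card), reusing one of the widget examples above as input:
```python
# Hedged example; assumes the standard text-classification pipeline applies.
from transformers import pipeline

classifier = pipeline("text-classification", model="NbAiLab/nb-bert-base-sami-relevant")
print(classifier("The Sámi languages form a branch of the Uralic language family."))
```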
|
sdw103/finalprojectyonsei807
|
sdw103
| 2023-06-20T11:18:16Z | 212 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-20T11:11:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: finalprojectyonsei807
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finalprojectyonsei807
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
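A minimal usage sketch (an assumption, not from the original author): it presumes the checkpoint keeps distilgpt2's tokenizer and loads as a causal language model; the prompt is an arbitrary placeholder.
```python
# Hedged example of sampling a continuation from the fine-tuned model.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "sdw103/finalprojectyonsei807"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```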
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Pstman/Text_summarization
|
Pstman
| 2023-06-20T11:12:32Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-07T07:19:16Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: Text_Summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_text_sum_model
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1211
- Rouge1: 0.3212
- Rouge2: 0.1225
- Rougel: 0.2329
- Rougelsum: 0.2327
- Gen Len: 65.4998
## Model description
More information needed
## Intended uses & limitations
More information needed
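A minimal usage sketch (an assumption, not from the original author): it presumes the checkpoint behaves like its `facebook/bart-large-cnn` base and works with the `summarization` pipeline; the article text is a placeholder.
```python
# Hedged example; swap in a real article for the placeholder string.
from transformers import pipeline

summarizer = pipeline("summarization", model="Pstman/Text_summarization")
article = "Replace this placeholder with the document you want to summarize."
print(summarizer(article)[0]["summary_text"])
```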
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.8113 | 1.0 | 1134 | 1.7859 | 0.3221 | 0.1245 | 0.2348 | 0.2348 | 64.0653 |
| 1.3327 | 2.0 | 2268 | 1.8142 | 0.3248 | 0.1255 | 0.2368 | 0.2368 | 64.2325 |
| 0.9968 | 3.0 | 3402 | 1.9795 | 0.3215 | 0.1236 | 0.2335 | 0.2335 | 65.0935 |
| 0.7786 | 4.0 | 4536 | 2.1211 | 0.3212 | 0.1225 | 0.2329 | 0.2327 | 65.4998 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
patrickvonplaten/wav2vec2-large-mms-1b-turkish-colab-test
|
patrickvonplaten
| 2023-06-20T11:11:59Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_6_1",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-20T10:05:43Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
datasets:
- common_voice_6_1
metrics:
- wer
model-index:
- name: wav2vec2-large-mms-1b-turkish-colab-test
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_6_1
type: common_voice_6_1
config: tr
split: test
args: tr
metrics:
- name: Wer
type: wer
value: 0.22040649576141355
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-mms-1b-turkish-colab-test
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the common_voice_6_1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1534
- Wer: 0.2204
## Model description
More information needed
## Intended uses & limitations
More information needed
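A minimal usage sketch (an assumption, not from the original author): it presumes the fine-tuned MMS checkpoint loads directly with the `automatic-speech-recognition` pipeline; the `.wav` path is a placeholder for a 16 kHz Turkish clip.
```python
# Hedged example; replace the placeholder path with a real recording.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="patrickvonplaten/wav2vec2-large-mms-1b-turkish-colab-test",
)
print(asr("path/to/turkish_clip.wav")["text"])
```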
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.5985 | 0.92 | 100 | 0.1805 | 0.2490 |
| 0.2839 | 1.83 | 200 | 0.1657 | 0.2350 |
| 0.2662 | 2.75 | 300 | 0.1579 | 0.2274 |
| 0.2413 | 3.67 | 400 | 0.1534 | 0.2204 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Saurabh1105/MMDet
|
Saurabh1105
| 2023-06-20T11:03:53Z | 0 | 0 | null |
[
"tensorboard",
"medical",
"image-segmentation",
"license:other",
"region:us"
] |
image-segmentation
| 2023-06-20T10:45:52Z |
---
license: other
metrics:
- accuracy
pipeline_tag: image-segmentation
tags:
- medical
---
|
elmurod1202/bertbek-news-big-cased
|
elmurod1202
| 2023-06-20T10:58:46Z | 205 | 4 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"uz",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-14T14:29:00Z |
---
license: mit
language:
- uz
---
# BERTbek-news-big-cased
A pre-trained BERT model for Uzbek (12 layers, cased), trained on a big news corpus (Daryo).
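A minimal usage sketch (an assumption, not from the original author); the example sentence ("Tashkent is Uzbekistan's [MASK] city.") is an arbitrary illustration.
```python
# Hedged example; uses the tokenizer's own mask token rather than hard-coding it.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="elmurod1202/bertbek-news-big-cased")
mask = unmasker.tokenizer.mask_token  # "[MASK]" for standard BERT tokenizers
print(unmasker(f"Toshkent O'zbekistonning {mask} shahri."))
```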
|