| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
| qnguyen3/quan-1.8b-base | qnguyen3 | 2024-01-20T03:28:41Z | 45 | 3 | transformers | ["transformers", "pytorch", "llama", "text-generation", "generated_from_trainer", "base_model:KnutJaegersberg/Qwen-1_8B-Llamafied", "base_model:finetune:KnutJaegersberg/Qwen-1_8B-Llamafied", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-20T03:26:01Z |
---
license: other
base_model: KnutJaegersberg/Qwen-1_8B-Llamafied
tags:
- generated_from_trainer
model-index:
- name: qwen-1.8b-vi-pt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# qwen-1.8b-vi-pt
This model is a fine-tuned version of [KnutJaegersberg/Qwen-1_8B-Llamafied](https://huggingface.co/KnutJaegersberg/Qwen-1_8B-Llamafied) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
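The two totals above follow from the per-device settings; a quick sanity check of the arithmetic, using only the values listed:

```python
# Reported per-device settings
train_batch_size = 2
eval_batch_size = 2
num_devices = 4
gradient_accumulation_steps = 4

# Effective batch sizes, as reported above
assert train_batch_size * num_devices * gradient_accumulation_steps == 32  # total_train_batch_size
assert eval_batch_size * num_devices == 8                                  # total_eval_batch_size
```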
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
| liwii/output | liwii | 2024-01-20T03:24:56Z | 9 | 0 | transformers | ["transformers", "pytorch", "distilbert", "generated_from_trainer", "base_model:line-corporation/line-distilbert-base-japanese", "base_model:finetune:line-corporation/line-distilbert-base-japanese", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2024-01-19T09:41:35Z |
---
license: apache-2.0
base_model: line-corporation/line-distilbert-base-japanese
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [line-corporation/line-distilbert-base-japanese](https://huggingface.co/line-corporation/line-distilbert-base-japanese) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3471
- Accuracy: 0.8672
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 306 | 0.3968 | 0.8594 |
| 0.4221 | 2.0 | 612 | 0.3889 | 0.8594 |
| 0.4221 | 3.0 | 918 | 0.3814 | 0.8594 |
| 0.4026 | 4.0 | 1224 | 0.3775 | 0.8594 |
| 0.396 | 5.0 | 1530 | 0.3724 | 0.8594 |
| 0.396 | 6.0 | 1836 | 0.3707 | 0.8594 |
| 0.392 | 7.0 | 2142 | 0.3721 | 0.8594 |
| 0.392 | 8.0 | 2448 | 0.3653 | 0.8594 |
| 0.3898 | 9.0 | 2754 | 0.3765 | 0.8613 |
| 0.3835 | 10.0 | 3060 | 0.3572 | 0.8594 |
| 0.3835 | 11.0 | 3366 | 0.3664 | 0.8613 |
| 0.3869 | 12.0 | 3672 | 0.3568 | 0.8613 |
| 0.3869 | 13.0 | 3978 | 0.3583 | 0.8613 |
| 0.3825 | 14.0 | 4284 | 0.3526 | 0.8613 |
| 0.3813 | 15.0 | 4590 | 0.3581 | 0.8613 |
| 0.3813 | 16.0 | 4896 | 0.3553 | 0.8613 |
| 0.3759 | 17.0 | 5202 | 0.3504 | 0.8613 |
| 0.3788 | 18.0 | 5508 | 0.3490 | 0.8613 |
| 0.3788 | 19.0 | 5814 | 0.3520 | 0.8613 |
| 0.3754 | 20.0 | 6120 | 0.3450 | 0.8613 |
| 0.3754 | 21.0 | 6426 | 0.3494 | 0.8633 |
| 0.3748 | 22.0 | 6732 | 0.3491 | 0.8633 |
| 0.3775 | 23.0 | 7038 | 0.3499 | 0.8633 |
| 0.3775 | 24.0 | 7344 | 0.3494 | 0.8633 |
| 0.3748 | 25.0 | 7650 | 0.3504 | 0.8672 |
| 0.3748 | 26.0 | 7956 | 0.3495 | 0.8672 |
| 0.3701 | 27.0 | 8262 | 0.3454 | 0.8633 |
| 0.3712 | 28.0 | 8568 | 0.3472 | 0.8633 |
| 0.3712 | 29.0 | 8874 | 0.3478 | 0.8672 |
| 0.3751 | 30.0 | 9180 | 0.3471 | 0.8672 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.0+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
| jlbaker361/ft-ddpo25 | jlbaker361 | 2024-01-20T03:16:42Z | 29 | 0 | diffusers | ["diffusers", "safetensors", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2024-01-19T18:33:02Z |
---
{}
---
# DDPO trained model
- num_epochs=15
- train_gradient_accumulation_steps=4
- sample_num_steps=30
- sample_batch_size=4
- train_batch_size=4
- sample_num_batches_per_epoch=32
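The tags mark this repository as a standard `StableDiffusionPipeline`, so the DDPO-tuned checkpoint should load like any diffusers model; a minimal sketch under that assumption (the prompt is a placeholder):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DDPO fine-tuned checkpoint as a regular Stable Diffusion pipeline.
pipe = StableDiffusionPipeline.from_pretrained("jlbaker361/ft-ddpo25", torch_dtype=torch.float16)
pipe.to("cuda")

image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("sample.png")
```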
| yunconglong/7Bx4_DPO_2e | yunconglong | 2024-01-20T03:15:55Z | 1,370 | 1 | transformers | ["transformers", "safetensors", "mixtral", "text-generation", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-20T02:49:29Z |
---
license: mit
---
* [DPO Trainer](https://huggingface.co/docs/trl/main/en/dpo_trainer) with dataset jondurbin/truthy-dpo-v0.1
```
DPO Trainer
TRL supports the DPO Trainer for training language models from preference data, as described in the paper Direct Preference Optimization: Your Language Model is Secretly a Reward Model by Rafailov et al., 2023.
```
```
"num_experts_per_tok": 2
```
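The card does not include the training script; a minimal sketch of DPO training on jondurbin/truthy-dpo-v0.1 with TRL's `DPOTrainer` (the starting checkpoint, batch size, and `beta` here are illustrative assumptions, not the author's settings):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "yunconglong/7Bx4_DPO_2e"  # placeholder; DPO would normally start from the pre-DPO checkpoint
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# truthy-dpo-v0.1 ships prompt/chosen/rejected columns, the format DPOTrainer expects.
dataset = load_dataset("jondurbin/truthy-dpo-v0.1", split="train")

trainer = DPOTrainer(
    model,
    ref_model=None,  # with None, TRL builds a frozen reference copy of the model
    args=TrainingArguments(output_dir="dpo-out", per_device_train_batch_size=1, num_train_epochs=1),
    beta=0.1,        # illustrative KL-penalty strength
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```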
| DopeorNope/Mistralopithecus-v0.1-10.8B | DopeorNope | 2024-01-20T03:03:26Z | 60 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-19T17:05:44Z |
---
license: cc-by-nc-sa-4.0
---
## Model Details
**Model Developers** Seungyoo Lee (DopeorNope)
This model is a new architecture built on a Mistral base, comprising 10.7B parameters (it is not Solar, nor an up-scaled base model).
It has been pretrained on roughly 1.5B tokens so far; this is an experimental stage, and the model will be retrained and re-released later.
It is uploaded here for testing purposes.
The model supports context lengths of up to 32k; a more thoroughly engineered version will be uploaded in the future.
| jhiggs/tim-robinson | jhiggs | 2024-01-20T02:37:34Z | 4 | 1 | diffusers | ["diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us"] | text-to-image | 2024-01-03T22:39:26Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: A photo of <s0><s1> itysl a man in a yellow shirt and black jacket
output:
url: image-0.png
- text: A photo of <s0><s1> itysl a man wearing a black jacket
output:
url: image-1.png
- text: A photo of <s0><s1> itysl a man in a red suit and tie is standing in front
of a colorful background
output:
url: image-2.png
- text: A photo of <s0><s1> itysl a man with a white shirt and a black jacket
output:
url: image-3.png
- text: A photo of <s0><s1> itysl a man in a blue shirt sitting at a desk
output:
url: image-4.png
- text: A photo of <s0><s1> itysl a man with a tie on
output:
url: image-5.png
- text: A photo of <s0><s1> itysl a man with a mustache
output:
url: image-6.png
- text: A photo of <s0><s1> itysl a man in a checkered shirt holding a red ball
output:
url: image-7.png
- text: A photo of <s0><s1> itysl a man holding a box of wine in front of him
output:
url: image-8.png
- text: A photo of <s0><s1> itysl a man sitting at a desk
output:
url: image-9.png
- text: A photo of <s0><s1> itysl a man sitting at a desk
output:
url: image-10.png
- text: A photo of <s0><s1> itysl a man with a blue shirt
output:
url: image-11.png
- text: A photo of <s0><s1> itysl a man wearing a white shirt
output:
url: image-12.png
- text: A photo of <s0><s1> itysl a man with a smile on his face
output:
url: image-13.png
- text: A photo of <s0><s1> itysl a man in a plaid shirt standing in front of a wall
output:
url: image-14.png
- text: A photo of <s0><s1> itysl a man holding his head with both hands
output:
url: image-15.png
- text: A photo of <s0><s1> itysl a man with a sad face looking at something
output:
url: image-16.png
- text: A photo of <s0><s1> itysl a man in a suit and tie making a gesture
output:
url: image-17.png
- text: A photo of <s0><s1> itysl a man in a car
output:
url: image-18.png
- text: A photo of <s0><s1> itysl a man in a black jacket and white shirt standing
in an office
output:
url: image-19.png
- text: A photo of <s0><s1> itysl a man in a black jacket and blue shirt smiling
output:
url: image-20.png
- text: A photo of <s0><s1> itysl a man in glasses and a polo shirt
output:
url: image-21.png
- text: A photo of <s0><s1> itysl a man in a jacket and a tie
output:
url: image-22.png
- text: A photo of <s0><s1> itysl a man in a suit standing in front of a window
output:
url: image-23.png
- text: A photo of <s0><s1> itysl a man holding a pizza
output:
url: image-24.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A photo of <s0><s1> itysl
license: openrail++
---
# SDXL LoRA DreamBooth - jhiggs/tim-robinson
<Gallery />
## Model description
### These are jhiggs/tim-robinson LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`tim-robinson.safetensors` here 💾](/jhiggs/tim-robinson/blob/main/tim-robinson.safetensors)**.
    - Place it in your `models/Lora` folder.
    - On AUTOMATIC1111, load the LoRA by adding `<lora:tim-robinson:1>` to your prompt. On ComfyUI, just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`tim-robinson_emb.safetensors` here 💾](/jhiggs/tim-robinson/blob/main/tim-robinson_emb.safetensors)**.
    - Place it in your `embeddings` folder.
    - Use it by adding `tim-robinson_emb` to your prompt. For example, `A photo of tim-robinson_emb itysl`.
    (You need both the LoRA and the embeddings, as they were trained together for this LoRA.)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
# Load the base SDXL pipeline and apply the LoRA weights on top of it.
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jhiggs/tim-robinson', weight_name='pytorch_lora_weights.safetensors')
# Download the pivotal-tuning embeddings and register <s0>/<s1> with both text encoders.
embedding_path = hf_hub_download(repo_id='jhiggs/tim-robinson', filename='tim-robinson_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('A photo of <s0><s1> itysl').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Details
All [Files & versions](/jhiggs/tim-robinson/tree/main).
The weights were trained using the [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
| Vivacem/DeepSeek-67B-MMIQC | Vivacem | 2024-01-20T01:56:09Z | 6 | 1 | transformers | ["transformers", "safetensors", "llama", "text-generation", "arxiv:2401.09003", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-19T05:52:12Z |
---
license: apache-2.0
---
DeepSeek-67B-MMIQC is obtained by fine-tuning [DeepSeek-67B](https://huggingface.co/deepseek-ai/deepseek-llm-67b-base) on [MMIQC](https://huggingface.co/datasets/Vivacem/MMIQC).
It achieves 41.0% test accuracy on MATH.
See our [paper](https://arxiv.org/abs/2401.09003) for details.
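The card stops here; a minimal generation sketch with transformers (the prompt and generation settings are illustrative, and a 67B model needs multiple GPUs or offloading):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Vivacem/DeepSeek-67B-MMIQC"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

prompt = "Problem: What is 15% of 240?\nSolution:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```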
| ntc-ai/SDXL-LoRA-slider.in-a-hot-air-balloon-race | ntc-ai | 2024-01-20T01:22:27Z | 2 | 0 | diffusers | ["diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us"] | text-to-image | 2024-01-20T01:22:24Z |
---
language:
- en
thumbnail: "images/evaluate/in a hot air balloon race.../in a hot air balloon race_17_3.0.png"
widget:
- text: in a hot air balloon race
output:
url: images/in a hot air balloon race_17_3.0.png
- text: in a hot air balloon race
output:
url: images/in a hot air balloon race_19_3.0.png
- text: in a hot air balloon race
output:
url: images/in a hot air balloon race_20_3.0.png
- text: in a hot air balloon race
output:
url: images/in a hot air balloon race_21_3.0.png
- text: in a hot air balloon race
output:
url: images/in a hot air balloon race_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "in a hot air balloon race"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - in a hot air balloon race (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/in a hot air balloon race_17_-3.0.png" width=256 height=256 /> | <img src="images/in a hot air balloon race_17_0.0.png" width=256 height=256 /> | <img src="images/in a hot air balloon race_17_3.0.png" width=256 height=256 /> |
| <img src="images/in a hot air balloon race_19_-3.0.png" width=256 height=256 /> | <img src="images/in a hot air balloon race_19_0.0.png" width=256 height=256 /> | <img src="images/in a hot air balloon race_19_3.0.png" width=256 height=256 /> |
| <img src="images/in a hot air balloon race_20_-3.0.png" width=256 height=256 /> | <img src="images/in a hot air balloon race_20_0.0.png" width=256 height=256 /> | <img src="images/in a hot air balloon race_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
in a hot air balloon race
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.in-a-hot-air-balloon-race', weight_name='in a hot air balloon race.safetensors', adapter_name="in a hot air balloon race")
# Activate the LoRA
pipe.set_adapters(["in a hot air balloon race"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, in a hot air balloon race"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 1,140 unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, which lets you craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
| longlonglong23/llama2-qlora-finetuned-chinese | longlonglong23 | 2024-01-20T01:12:28Z | 0 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:TinyPixel/Llama-2-7B-bf16-sharded", "base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded", "region:us"] | null | 2024-01-20T01:12:21Z |
---
library_name: peft
base_model: TinyPixel/Llama-2-7B-bf16-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
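While the card itself provides no snippet, a plausible starting point, given the `peft` tags and the TinyPixel/Llama-2-7B-bf16-sharded base, is the standard PEFT adapter-loading pattern; a sketch, not the author's verified usage:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyPixel/Llama-2-7B-bf16-sharded"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the LoRA adapter published in this repository.
model = PeftModel.from_pretrained(base, "longlonglong23/llama2-qlora-finetuned-chinese")
```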
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
| afrideva/phi-2-psy-GGUF | afrideva | 2024-01-20T01:05:45Z | 44 | 6 | null | ["gguf", "merge", "mergekit", "lazymergekit", "rhysjones/phi-2-orange", "cognitivecomputations/dolphin-2_6-phi-2", "ggml", "quantized", "q2_k", "q3_k_m", "q4_k_m", "q5_k_m", "q6_k", "q8_0", "text-generation", "base_model:vince62s/phi-2-psy", "base_model:quantized:vince62s/phi-2-psy", "license:mit", "region:us"] | text-generation | 2024-01-20T00:55:24Z |
---
base_model: vince62s/phi-2-psy
inference: false
license: mit
model_creator: vince62s
model_name: phi-2-psy
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- merge
- mergekit
- lazymergekit
- rhysjones/phi-2-orange
- cognitivecomputations/dolphin-2_6-phi-2
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# vince62s/phi-2-psy-GGUF
Quantized GGUF model files for [phi-2-psy](https://huggingface.co/vince62s/phi-2-psy) from [vince62s](https://huggingface.co/vince62s)
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [phi-2-psy.fp16.gguf](https://huggingface.co/afrideva/phi-2-psy-GGUF/resolve/main/phi-2-psy.fp16.gguf) | fp16 | 5.56 GB |
| [phi-2-psy.q2_k.gguf](https://huggingface.co/afrideva/phi-2-psy-GGUF/resolve/main/phi-2-psy.q2_k.gguf) | q2_k | 1.11 GB |
| [phi-2-psy.q3_k_m.gguf](https://huggingface.co/afrideva/phi-2-psy-GGUF/resolve/main/phi-2-psy.q3_k_m.gguf) | q3_k_m | 1.43 GB |
| [phi-2-psy.q4_k_m.gguf](https://huggingface.co/afrideva/phi-2-psy-GGUF/resolve/main/phi-2-psy.q4_k_m.gguf) | q4_k_m | 1.74 GB |
| [phi-2-psy.q5_k_m.gguf](https://huggingface.co/afrideva/phi-2-psy-GGUF/resolve/main/phi-2-psy.q5_k_m.gguf) | q5_k_m | 2.00 GB |
| [phi-2-psy.q6_k.gguf](https://huggingface.co/afrideva/phi-2-psy-GGUF/resolve/main/phi-2-psy.q6_k.gguf) | q6_k | 2.29 GB |
| [phi-2-psy.q8_0.gguf](https://huggingface.co/afrideva/phi-2-psy-GGUF/resolve/main/phi-2-psy.q8_0.gguf) | q8_0 | 2.96 GB |
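These files target llama.cpp-compatible runtimes; a minimal sketch using the llama-cpp-python bindings (file path, prompt format, and sampling settings are illustrative):

```python
from llama_cpp import Llama

# Any quantization from the table works; q4_k_m is a common size/quality trade-off.
llm = Llama(model_path="phi-2-psy.q4_k_m.gguf", n_ctx=2048)

out = llm("Instruct: Explain what a GGUF file is.\nOutput:", max_tokens=128)
print(out["choices"][0]["text"])
```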
## Original Model Card:
# Phi-2-psy
Phi-2-psy is a merge of the following models:
* [rhysjones/phi-2-orange](https://huggingface.co/rhysjones/phi-2-orange)
* [cognitivecomputations/dolphin-2_6-phi-2](https://huggingface.co/cognitivecomputations/dolphin-2_6-phi-2)
## 🏆 Evaluation
The evaluation was performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval) on Nous suite.
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|----------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[**phi-2-psy**](https://huggingface.co/vince62s/phi-2-psy)| **34.4**| **71.4**| **48.2**| **38.1**| **48.02**|
|[phixtral-2x2_8](https://huggingface.co/mlabonne/phixtral-2x2_8)| 34.1| 70.4| 48.8| 37.8| 47.78|
|[dolphin-2_6-phi-2](https://huggingface.co/cognitivecomputations/dolphin-2_6-phi-2)| 33.1| 69.9| 47.4| 37.2| 46.89|
|[phi-2-orange](https://huggingface.co/rhysjones/phi-2-orange)| 33.4| 71.3| 49.9| 37.3| 47.97|
|[phi-2](https://huggingface.co/microsoft/phi-2)| 28.0| 70.8| 44.4| 35.2| 44.61|
## 🧩 Configuration
```yaml
slices:
- sources:
- model: rhysjones/phi-2-orange
layer_range: [0, 32]
- model: cognitivecomputations/dolphin-2_6-phi-2
layer_range: [0, 32]
merge_method: slerp
base_model: rhysjones/phi-2-orange
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
torch.set_default_device("cuda")
model = AutoModelForCausalLM.from_pretrained("vince62s/phi-2-psy", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("vince62s/phi-2-psy", trust_remote_code=True)
inputs = tokenizer('''def print_prime(n):
"""
Print all primes between 1 and n
"""''', return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
| thrunlab/Mistral-7B-v0.1_colaMistral_scratch_cola | thrunlab | 2024-01-20T00:59:07Z | 9 | 0 | peft | ["peft", "safetensors", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us"] | null | 2024-01-20T00:40:54Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
metrics:
- accuracy
- matthews_correlation
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: Mistral-7B-v0.1_colaMistral_scratch_cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-v0.1_colaMistral_scratch_cola
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4281
- Accuracy: 0.8387850467289719
- Matthews Correlation: 0.6114
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 2
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------------------:|
| 1.9322 | 0.17 | 20 | 1.5215 | 0.5743048897411314 | 0.0818 |
| 1.1953 | 0.33 | 40 | 0.9950 | 0.660594439117929 | 0.1870 |
| 0.6611 | 0.5 | 60 | 0.7549 | 0.7353787152444871 | 0.3527 |
| 0.6165 | 0.66 | 80 | 0.6317 | 0.7583892617449665 | 0.4081 |
| 0.5467 | 0.83 | 100 | 0.5667 | 0.7842761265580057 | 0.5041 |
| 0.4864 | 1.0 | 120 | 0.5268 | 0.7996164908916586 | 0.5385 |
| 0.478 | 1.16 | 140 | 0.4803 | 0.8283796740172579 | 0.5859 |
| 0.439 | 1.33 | 160 | 0.4965 | 0.8293384467881112 | 0.5818 |
| 0.4395 | 1.49 | 180 | 0.4669 | 0.8283796740172579 | 0.5778 |
| 0.4202 | 1.66 | 200 | 0.5002 | 0.825503355704698 | 0.6192 |
| 0.3485 | 1.83 | 220 | 0.4360 | 0.8389261744966443 | 0.6099 |
| 0.442 | 1.99 | 240 | 0.4391 | 0.840843720038351 | 0.6121 |
| 0.3752 | 2.16 | 260 | 0.4306 | 0.8446788111217641 | 0.6474 |
| 0.3013 | 2.32 | 280 | 0.4163 | 0.8427612655800575 | 0.6216 |
| 0.3395 | 2.49 | 300 | 0.4151 | 0.8542665388302972 | 0.6592 |
| 0.3305 | 2.66 | 320 | 0.4096 | 0.8475551294343241 | 0.6299 |
| 0.342 | 2.82 | 340 | 0.4101 | 0.8465963566634708 | 0.6322 |
| 0.3183 | 2.99 | 360 | 0.4166 | 0.8494726749760306 | 0.6364 |
| 0.2551 | 3.15 | 380 | 0.4321 | 0.8542665388302972 | 0.6503 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
| UAEpro/whisper-small-ar-2 | UAEpro | 2024-01-20T00:42:47Z | 10 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "ar", "dataset:mozilla-foundation/common_voice_16_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2024-01-15T20:58:48Z |
---
language:
- ar
license: apache-2.0
base_model: uaepro/whisper-small-ar-2
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_16_0
metrics:
- wer
model-index:
- name: Whisper Small ar - majed test
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 16.0
type: mozilla-foundation/common_voice_16_0
config: ar
split: test
args: 'config: ar, split: test'
metrics:
- name: Wer
type: wer
value: 168.22177271055537
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small ar - majed test
This model is a fine-tuned version of [uaepro/whisper-small-ar-2](https://huggingface.co/uaepro/whisper-small-ar-2) on the Common Voice 16.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3392
- Wer: 168.2218
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1459 | 0.41 | 1000 | 0.3714 | 182.4752 |
| 0.1378 | 0.82 | 2000 | 0.3486 | 177.9993 |
| 0.0738 | 1.24 | 3000 | 0.3513 | 184.2939 |
| 0.0855 | 1.65 | 4000 | 0.3392 | 168.2218 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.15.0
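For completeness, a minimal inference sketch via the transformers ASR pipeline (the audio path is a placeholder):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="UAEpro/whisper-small-ar-2")
print(asr("arabic_sample.wav")["text"])  # placeholder path to a local audio file
```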
| thrunlab/Mistral-7B-v0.1_cola_sparse_swiglu_ignore_0_1 | thrunlab | 2024-01-20T00:40:06Z | 8 | 0 | peft | ["peft", "safetensors", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us"] | null | 2024-01-18T21:05:37Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
metrics:
- accuracy
- matthews_correlation
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: Mistral-7B-v0.1_cola_sparse_swiglu_ignore_0_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-v0.1_cola_sparse_swiglu_ignore_0_1
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4277
- Accuracy: 0.8212616822429907
- Matthews Correlation: 0.5699
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 2
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------------------:|
| 1.6028 | 0.17 | 20 | 1.5539 | 0.5417066155321189 | 0.0764 |
| 0.9736 | 0.33 | 40 | 0.9708 | 0.6864813039309684 | 0.1852 |
| 0.7146 | 0.5 | 60 | 0.7850 | 0.713326941514861 | 0.3141 |
| 0.6892 | 0.66 | 80 | 0.6674 | 0.7238734419942474 | 0.3498 |
| 0.6792 | 0.83 | 100 | 0.6401 | 0.7411313518696069 | 0.3977 |
| 0.6233 | 1.0 | 120 | 0.6104 | 0.7574304889741131 | 0.3784 |
| 0.4778 | 1.16 | 140 | 0.5641 | 0.7948226270373921 | 0.4874 |
| 0.4792 | 1.33 | 160 | 0.5961 | 0.7746883988494727 | 0.4284 |
| 0.5573 | 1.49 | 180 | 0.5210 | 0.8034515819750719 | 0.5126 |
| 0.4464 | 1.66 | 200 | 0.5716 | 0.7871524448705657 | 0.5601 |
| 0.4541 | 1.83 | 220 | 0.5130 | 0.8015340364333653 | 0.5046 |
| 0.4989 | 1.99 | 240 | 0.4648 | 0.8149568552253116 | 0.5452 |
| 0.3891 | 2.16 | 260 | 0.4566 | 0.8207094918504314 | 0.5856 |
| 0.336 | 2.32 | 280 | 0.4516 | 0.822627037392138 | 0.5657 |
| 0.3854 | 2.49 | 300 | 0.4224 | 0.8322147651006712 | 0.6066 |
| 0.3917 | 2.66 | 320 | 0.4247 | 0.837967401725791 | 0.6125 |
| 0.3779 | 2.82 | 340 | 0.4177 | 0.8302972195589645 | 0.5897 |
| 0.3462 | 2.99 | 360 | 0.4649 | 0.8207094918504314 | 0.5584 |
| 0.3448 | 3.15 | 380 | 0.4182 | 0.8293384467881112 | 0.5837 |
| 0.3894 | 3.32 | 400 | 0.4388 | 0.8302972195589645 | 0.5893 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
| jdang/openhermes-mistral-dpo-gptq | jdang | 2024-01-20T00:35:18Z | 0 | 0 | null | ["tensorboard", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:TheBloke/OpenHermes-2-Mistral-7B-GPTQ", "base_model:finetune:TheBloke/OpenHermes-2-Mistral-7B-GPTQ", "license:apache-2.0", "region:us"] | null | 2024-01-12T17:05:27Z |
---
license: apache-2.0
base_model: TheBloke/OpenHermes-2-Mistral-7B-GPTQ
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: openhermes-mistral-dpo-gptq
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openhermes-mistral-dpo-gptq
This model is a fine-tuned version of [TheBloke/OpenHermes-2-Mistral-7B-GPTQ](https://huggingface.co/TheBloke/OpenHermes-2-Mistral-7B-GPTQ) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6104
- Rewards/chosen: -0.0458
- Rewards/rejected: -0.4535
- Rewards/accuracies: 0.6875
- Rewards/margins: 0.4077
- Logps/rejected: -390.3771
- Logps/chosen: -149.5892
- Logits/rejected: -1.3692
- Logits/chosen: -1.4352
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6865 | 0.01 | 10 | 0.6792 | -0.0093 | -0.0078 | 0.6875 | -0.0015 | -385.9200 | -149.2238 | -1.3698 | -1.4189 |
| 0.6882 | 0.01 | 20 | 0.6660 | -0.0137 | -0.0526 | 0.625 | 0.0389 | -386.3681 | -149.2680 | -1.3729 | -1.4240 |
| 0.6391 | 0.01 | 30 | 0.6446 | 0.0000 | -0.1131 | 0.625 | 0.1131 | -386.9731 | -149.1310 | -1.3737 | -1.4292 |
| 0.639 | 0.02 | 40 | 0.6271 | -0.0337 | -0.2758 | 0.6875 | 0.2421 | -388.6000 | -149.4686 | -1.3729 | -1.4342 |
| 0.6533 | 0.03 | 50 | 0.6104 | -0.0458 | -0.4535 | 0.6875 | 0.4077 | -390.3771 | -149.5892 | -1.3692 | -1.4352 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.0
| LoneStriker/WinterGoddess-1.4x-70B-L2-3.5bpw-h6-exl2 | LoneStriker | 2024-01-20T00:24:49Z | 5 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "en", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-20T00:08:10Z |
---
license: cc-by-nc-4.0
language:
- en
---
Winter Goddess - A 70B L2 model for general use, or for roleplay.
I wanted a smart model that is capable of following instructions while being able to (e)RP effectively. Sort of like 1.3, but better.
I merged some models as a base, and tuned on top of it afterwards.
I personally think this outperforms Euryale 1.3, but your mileage may vary.
***
For Transparency's Sake:
Models Used:
<br> Platypus2-70B-instruct
<br> Lila-70B
<br> SunsetBoulevard (at roughly 0.1 weight, boosting coherency)
<br> Private De-alignment LoRA on top.
Why does it show mergekit in the safetensors.index metadata? I used the DARE method to merge the three models, then Axolotl qLoRA, then lora-merge. The files of the base merged model were copied over because they didn't save to the new one; only the .safetensors files got saved.
***
Prompt Format - Alpaca
```
### Instruction:
<Prompt>
### Response:
```
OR
```
### Instruction:
<Prompt>
### Input:
<Insert Context Here>
### Response:
```
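A small helper for producing the two formats above (the function is ours, purely illustrative):

```python
from typing import Optional

def alpaca_prompt(instruction: str, context: Optional[str] = None) -> str:
    """Build an Alpaca-style prompt, with or without an Input block."""
    if context is None:
        return f"### Instruction:\n{instruction}\n\n### Response:\n"
    return (
        f"### Instruction:\n{instruction}\n\n"
        f"### Input:\n{context}\n\n"
        f"### Response:\n"
    )

print(alpaca_prompt("Summarize the passage.", "Winter Goddess is a 70B L2 model."))
```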
***
<br> 42. A 25-year-old female has been struck in the right eye with a pipe. She has a ruptured right globe, an orbital fracture and no other obvious injury. You should bandage:
<br> A) The right eye tightly
<br> B) Both eyes loosely
<br> C) The right eye loosely
<br> D) Both eyes tightly
| mu0gum/AIFT-42dot_LLM-PLM-1.3B-ao-instruct-all-v0.52 | mu0gum | 2024-01-20T00:17:38Z | 59 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-19T16:44:20Z |
---
license: cc-by-nc-4.0
---
# AIFT-42dot-LLM-PLM-1.3B-ao-instruct-all-v0.52
Base model: 42dot/42dot_LLM-PLM-1.3B
Training data: roughly 28,000 examples from a self-built Open Orca-style dataset (the count is subject to adjustment)
Training method: full fine-tuning
## ko-lm-evaluation-harness (0-shot)
|kobest_boolq|kobest_copa|kobest_hellaswag|kobest_sentineg|kohatespeech|kohatespeech_apeach|kohatespeech_gen_bias|korunsmile|nsmc|pawsx_ko|
|--|--|--|--|--|--|--|--|--|--|
|0.5826210826210826|0.68|0.436|0.7758186397984886|0.2908704883227176|0.5082228116710875|0.14225053078556263|0.39027300210119553|0.65938|0.513|
## Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.0.0
- Tokenizers 0.15.0
| rwatler/Mixtral_R2_v0 | rwatler | 2024-01-20T00:14:34Z | 2 | 0 | peft | ["peft", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:mistralai/Mixtral-8x7B-Instruct-v0.1", "base_model:adapter:mistralai/Mixtral-8x7B-Instruct-v0.1", "license:apache-2.0", "region:us"] | null | 2024-01-18T01:22:51Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
model-index:
- name: Mixtral_R2_v0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mixtral_R2_v0
This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7671
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.03
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.0101 | 1.0 | 18 | 2.1741 |
| 2.1612 | 2.0 | 36 | 1.5962 |
| 1.6591 | 3.0 | 54 | 1.4202 |
| 1.4985 | 4.0 | 72 | 1.3035 |
| 1.3585 | 5.0 | 90 | 1.1977 |
| 1.294 | 6.0 | 108 | 1.0993 |
| 1.1823 | 7.0 | 126 | 1.0139 |
| 1.0983 | 8.0 | 144 | 0.9641 |
| 1.0371 | 9.0 | 162 | 0.9293 |
| 0.9868 | 10.0 | 180 | 0.8961 |
| 0.9535 | 11.0 | 198 | 0.8655 |
| 0.9259 | 12.0 | 216 | 0.8358 |
| 0.882 | 13.0 | 234 | 0.8067 |
| 0.8472 | 14.0 | 252 | 0.7938 |
| 0.8484 | 15.0 | 270 | 0.7872 |
| 0.8215 | 16.0 | 288 | 0.7826 |
| 0.8167 | 17.0 | 306 | 0.7779 |
| 0.8199 | 18.0 | 324 | 0.7751 |
| 0.8042 | 19.0 | 342 | 0.7730 |
| 0.8186 | 20.0 | 360 | 0.7710 |
| 0.794 | 21.0 | 378 | 0.7698 |
| 0.7958 | 22.0 | 396 | 0.7685 |
| 0.7858 | 23.0 | 414 | 0.7677 |
| 0.7857 | 24.0 | 432 | 0.7671 |
| 0.7843 | 25.0 | 450 | 0.7671 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
| LoneStriker/WinterGoddess-1.4x-70B-L2-2.65bpw-h6-exl2 | LoneStriker | 2024-01-20T00:08:08Z | 6 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "en", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-19T23:52:57Z |
---
license: cc-by-nc-4.0
language:
- en
---
Winter Goddess - A 70B L2 model for general use, or for roleplay.
I wanted a smart model that is capable of following instructions while being able to (e)RP effectively. Sort of like 1.3, but better.
I merged some models as a base, and tuned on top of it afterwards.
I personally think this outperforms Euryale 1.3, but your mileage may vary.
***
For Transparency's Sake:
Models Used:
<br> Platypus2-70B-instruct
<br> Lila-70B
<br> SunsetBoulevard (at roughly 0.1 weight, boosting coherency)
<br> Private De-alignment LoRA on top.
Why does it show mergekit in the safetensors.index metadata? I used the DARE method to merge the three models, then Axolotl qLoRA, then lora-merge. The files of the base merged model were copied over because they didn't save to the new one; only the .safetensors files got saved.
***
Prompt Format - Alpaca
```
### Instruction:
<Prompt>
### Response:
```
OR
```
### Instruction:
<Prompt>
### Input:
<Insert Context Here>
### Response:
```
***
<br> 42. A 25-year-old female has been struck in the right eye with a pipe. She has a ruptured right globe, an orbital fracture and no other obvious injury. You should bandage:
<br> A) The right eye tightly
<br> B) Both eyes loosely
<br> C) The right eye loosely
<br> D) Both eyes tightly
| segolilylabs/Lily-Cybersecurity-7B-v0.2-GGUF | segolilylabs | 2024-01-20T00:01:19Z | 3,243 | 16 | null | ["gguf", "cybersecurity", "cyber security", "hacking", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2024-01-12T02:13:04Z |
---
license: apache-2.0
tags:
- cybersecurity
- cyber security
- hacking
language:
- en
---
My attempt at making GGUF versions of <a href="https://huggingface.co/segolilylabs/Lily-Cybersecurity-7B-v0.2">segolilylabs/Lily-Cybersecurity-7B-v0.2</a>.
| arnavgrg/phi2-adapter-test | arnavgrg | 2024-01-19T23:56:52Z | 1 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "region:us"] | null | 2024-01-19T23:56:22Z |
---
library_name: peft
base_model: microsoft/phi-2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
| afrideva/TinyLlama-haiku-dpo-v.0.1-GGUF | afrideva | 2024-01-19T23:54:54Z | 6 | 0 | null | ["gguf", "ggml", "quantized", "q2_k", "q3_k_m", "q4_k_m", "q5_k_m", "q6_k", "q8_0", "text-generation", "base_model:davanstrien/TinyLlama-haiku-dpo-v.0.1", "base_model:quantized:davanstrien/TinyLlama-haiku-dpo-v.0.1", "region:us", "conversational"] | text-generation | 2024-01-19T23:42:50Z |
---
base_model: davanstrien/TinyLlama-haiku-dpo-v.0.1
inference: false
model_creator: davanstrien
model_name: TinyLlama-haiku-dpo-v.0.1
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# davanstrien/TinyLlama-haiku-dpo-v.0.1-GGUF
Quantized GGUF model files for [TinyLlama-haiku-dpo-v.0.1](https://huggingface.co/davanstrien/TinyLlama-haiku-dpo-v.0.1) from [davanstrien](https://huggingface.co/davanstrien)
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [tinyllama-haiku-dpo-v.0.1.fp16.gguf](https://huggingface.co/afrideva/TinyLlama-haiku-dpo-v.0.1-GGUF/resolve/main/tinyllama-haiku-dpo-v.0.1.fp16.gguf) | fp16 | 2.20 GB |
| [tinyllama-haiku-dpo-v.0.1.q2_k.gguf](https://huggingface.co/afrideva/TinyLlama-haiku-dpo-v.0.1-GGUF/resolve/main/tinyllama-haiku-dpo-v.0.1.q2_k.gguf) | q2_k | 432.13 MB |
| [tinyllama-haiku-dpo-v.0.1.q3_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-haiku-dpo-v.0.1-GGUF/resolve/main/tinyllama-haiku-dpo-v.0.1.q3_k_m.gguf) | q3_k_m | 548.40 MB |
| [tinyllama-haiku-dpo-v.0.1.q4_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-haiku-dpo-v.0.1-GGUF/resolve/main/tinyllama-haiku-dpo-v.0.1.q4_k_m.gguf) | q4_k_m | 667.81 MB |
| [tinyllama-haiku-dpo-v.0.1.q5_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-haiku-dpo-v.0.1-GGUF/resolve/main/tinyllama-haiku-dpo-v.0.1.q5_k_m.gguf) | q5_k_m | 782.04 MB |
| [tinyllama-haiku-dpo-v.0.1.q6_k.gguf](https://huggingface.co/afrideva/TinyLlama-haiku-dpo-v.0.1-GGUF/resolve/main/tinyllama-haiku-dpo-v.0.1.q6_k.gguf) | q6_k | 903.41 MB |
| [tinyllama-haiku-dpo-v.0.1.q8_0.gguf](https://huggingface.co/afrideva/TinyLlama-haiku-dpo-v.0.1-GGUF/resolve/main/tinyllama-haiku-dpo-v.0.1.q8_0.gguf) | q8_0 | 1.17 GB |
## Original Model Card:
| Aneeth/zephyr_7k | Aneeth | 2024-01-19T23:51:26Z | 6 | 0 | peft | ["peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:TheBloke/zephyr-7B-beta-GPTQ", "base_model:adapter:TheBloke/zephyr-7B-beta-GPTQ", "license:mit", "region:us"] | null | 2024-01-17T11:53:37Z |
---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/zephyr-7B-beta-GPTQ
model-index:
- name: zephyr_7k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr_7k
This model is a fine-tuned version of [TheBloke/zephyr-7B-beta-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-beta-GPTQ) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2630
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3761 | 0.23 | 100 | 1.1737 |
| 0.8147 | 0.46 | 200 | 0.4469 |
| 0.3427 | 0.68 | 300 | 0.2869 |
| 0.2726 | 0.91 | 400 | 0.2630 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.16.0
- Tokenizers 0.15.0
| FractalGPT/FRED-T5-Interp | FractalGPT | 2024-01-19T23:47:41Z | 7 | 0 | transformers | ["transformers", "safetensors", "t5", "text2text-generation", "llm_interpretation", "FRED T5", "ru", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2024-01-19T15:41:30Z |
---
license: apache-2.0
language:
- ru
library_name: transformers
tags:
- llm_interpretation
- FRED T5
---
* A Russian-language model trained to answer questions about the interpretation of language models
* Base model: SiberiaSoft/SiberianFredT5-instructor
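A minimal text2text sketch with transformers (the question is an illustrative placeholder; a real query would be in Russian, since the model is Russian-language):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "FractalGPT/FRED-T5-Interp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Why does the model assign high attention to this token?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```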
| shyamsubbu/tailf_mixtral_Mixtral-8x7B-Instruct-v0.1 | shyamsubbu | 2024-01-19T23:36:51Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-01-19T23:36:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
samwell/Reinforce-PixelCopter
|
samwell
| 2024-01-19T23:11:22Z
| 0
| 0
| null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-11T02:11:09Z
|
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 30.80 +/- 25.01
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
LoneStriker/WinterGoddess-1.4x-70B-L2-4.65bpw-h6-exl2
|
LoneStriker
| 2024-01-19T23:05:01Z
| 7
| 1
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T17:52:48Z
|
---
license: cc-by-nc-4.0
language:
- en
---
Winter Goddess - A 70B L2 Model for General use, or for Roleplay.
I wanted a Smart Model that is Capable of following Instructions, while being able to (e)RP effectively. Sort of like 1.3, but better.
I merged some models as a base, and tuned on top of it afterwards.
I personally think this mogs Euryale 1.3, but ymmv.
***
For Transparency's Sake:
Models Used:
<br> Platypus2-70B-instruct
<br> Lila-70B
<br> SunsetBoulevard (at roughly 0.1 weight, boosting coherency)
<br> Private De-alignment LoRA on top.
Why does it show mergekit in the safetensors.index metadata? I used the DARE method to merge the 3 models, then trained a qLoRA on top with Axolotl, then used lora-merge. I copied over the files of the base merged model because they didn't save to the new one; only the .safetensors files got saved.
***
Prompt Format - Alpaca
```
### Instruction:
<Prompt>
### Response:
```
OR
```
### Instruction:
<Prompt>
### Input:
<Insert Context Here>
### Response:
```
***
<br> 42. A 25-year-old female has been struck in the right eye with a pipe. She has a ruptured right globe, an orbital fracture and no other obvious injury. You should bandage:
<br> A) The right eye tightly
<br> B) Both eyes loosely
<br> C) The right eye loosely
<br> D) Both eyes tightly
|
arun100/whisper-base-hi-3
|
arun100
| 2024-01-19T22:46:48Z
| 60
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"dataset:google/fleurs",
"base_model:arun100/whisper-base-hi-2",
"base_model:finetune:arun100/whisper-base-hi-2",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-19T16:26:54Z
|
---
license: apache-2.0
base_model: arun100/whisper-base-hi-2
tags:
- whisper-event
- generated_from_trainer
datasets:
- google/fleurs
metrics:
- wer
model-index:
- name: Whisper Base Hindi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: google/fleurs hi_in
type: google/fleurs
config: hi_in
split: test
args: hi_in
metrics:
- name: Wer
type: wer
value: 27.72060783790989
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Hindi
This model is a fine-tuned version of [arun100/whisper-base-hi-2](https://huggingface.co/arun100/whisper-base-hi-2) on the google/fleurs hi_in dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4468
- Wer: 27.7206
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.4805 | 33.0 | 250 | 0.4868 | 30.4186 |
| 0.3559 | 66.0 | 500 | 0.4417 | 29.0909 |
| 0.2655 | 99.0 | 750 | 0.4307 | 28.2165 |
| 0.1987 | 133.0 | 1000 | 0.4350 | 27.8326 |
| 0.1472 | 166.0 | 1250 | 0.4468 | 27.7206 |
| 0.1061 | 199.0 | 1500 | 0.4640 | 28.0992 |
| 0.0767 | 233.0 | 1750 | 0.4835 | 28.5737 |
| 0.0541 | 266.0 | 2000 | 0.5032 | 28.6857 |
| 0.0396 | 299.0 | 2250 | 0.5202 | 28.7763 |
| 0.03 | 333.0 | 2500 | 0.5353 | 29.2029 |
| 0.0237 | 366.0 | 2750 | 0.5479 | 28.9096 |
| 0.0195 | 399.0 | 3000 | 0.5587 | 28.9096 |
| 0.0163 | 433.0 | 3250 | 0.5683 | 28.9469 |
| 0.014 | 466.0 | 3500 | 0.5767 | 29.1336 |
| 0.0121 | 499.0 | 3750 | 0.5838 | 29.3415 |
| 0.0108 | 533.0 | 4000 | 0.5900 | 29.2775 |
| 0.01 | 566.0 | 4250 | 0.5951 | 29.6081 |
| 0.0093 | 599.0 | 4500 | 0.5988 | 29.4855 |
| 0.0088 | 633.0 | 4750 | 0.6012 | 29.5281 |
| 0.0087 | 666.0 | 5000 | 0.6020 | 29.4268 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0
|
LegoClipStars/River_Kendall_RH
|
LegoClipStars
| 2024-01-19T22:46:27Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:dataautogpt3/OpenDalleV1.1",
"base_model:adapter:dataautogpt3/OpenDalleV1.1",
"license:cc-by-4.0",
"region:us"
] |
text-to-image
| 2024-01-19T22:45:08Z
|
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: NEFT
parameters:
negative_prompt: High school student
output:
url: images/5b7538c2190e21a3a865cbe703015bd6.jpg
base_model: dataautogpt3/OpenDalleV1.1
instance_prompt: Please spare me
license: cc-by-4.0
---
# River_Kendall_Rainbow_High
<Gallery />
## Model description
Here's my RVC voice model of River Kendall from Rainbow High
## Trigger words
You should use `Please spare me` to trigger the image generation.
## Download model
[Download](/LegoClipStars/River_Kendall_RH/tree/main) them in the Files & versions tab.
|
RiverTest/RiverMTG20
|
RiverTest
| 2024-01-19T22:46:01Z
| 2
| 0
|
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:RiverTest/RiverMTG15",
"base_model:adapter:RiverTest/RiverMTG15",
"region:us"
] | null | 2024-01-19T22:45:55Z
|
---
library_name: peft
base_model: RiverTest/RiverMTG15
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
Kooten/WinterGoddess-1.4x-70B-L2-IQ2-GGUF
|
Kooten
| 2024-01-19T22:44:15Z
| 8
| 1
| null |
[
"gguf",
"en",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-19T19:58:51Z
|
---
license: cc-by-nc-4.0
language:
- en
---
# WinterGoddess-1.4x-70B-L2 IQ2-GGUF
## Description
IQ2-GGUF quants of [Sao10K/WinterGoddess-1.4x-70B-L2](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2)
Unlike regular GGUF quants, these use an importance matrix (similar to QuIP#) to keep the quantization from degrading too much even at 2 bpw, allowing you to run larger models on less powerful machines.
***NOTE:*** Currently you will need experimental branches of Koboldcpp or Ooba for this to work.
- Nexesenex has compiled Windows binaries [HERE](https://github.com/Nexesenex/kobold.cpp/releases/tag/v1.55.1_b1842)
- [llamacpp_0.2.29 branch](https://github.com/oobabooga/text-generation-webui/tree/llamacpp_0.2.29) of Ooba also works
[More info about IQ2](https://github.com/ggerganov/llama.cpp/pull/4897)
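Once you have a build new enough to read IQ2 files, the quant loads like any other GGUF. A hedged llama-cpp-python sketch (IQ2 support in a given build is an assumption; the filename matches the IQ2-XS file linked below):
```python
# Hedged sketch: run the IQ2-XS quant with llama-cpp-python,
# assuming the underlying llama.cpp build supports IQ2 quants.
from llama_cpp import Llama

llm = Llama(
    model_path="WinterGoddess-1.4x-70B-L2-IQ2_XS.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,  # offload as many layers as fit on the GPU
)

prompt = """### Instruction:
Write a short scene set in a snowed-in cabin.

### Response:
"""
print(llm(prompt, max_tokens=200, temperature=0.8)["choices"][0]["text"])
```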
## Models
Models: [IQ2-XS](https://huggingface.co/Kooten/WinterGoddess-1.4x-70B-L2-IQ2-GGUF/blob/main/WinterGoddess-1.4x-70B-L2-IQ2_XS.gguf), [IQ2-XXS](https://huggingface.co/Kooten/WinterGoddess-1.4x-70B-L2-IQ2-GGUF/blob/main/WinterGoddess-1.4x-70B-L2-IQ2_XXS.gguf)
Regular GGUF Quants: [Here](https://huggingface.co/TheBloke/WinterGoddess-1.4x-70B-L2-GGUF)
## Prompt Format
### Alpaca:
```
### Instruction:
<Prompt>
### Response:
```
OR
```
### Instruction:
<Prompt>
### Input:
<Insert Context Here>
### Response:
```
## Contact
Kooten on discord
|
Kooten/Euryale-1.4-L2-70B-IQ2-GGUF
|
Kooten
| 2024-01-19T22:43:59Z
| 3
| 3
| null |
[
"gguf",
"en",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-01-19T09:14:54Z
|
---
license: llama2
language:
- en
---
# Euryale-1.4-L2-70B IQ2-GGUF
## Description
IQ2-GGUF quants of [Sao10K/Euryale-1.4-L2-70B](https://huggingface.co/Sao10K/Euryale-1.4-L2-70B)
Unlike regular GGUF quants, these use an importance matrix (similar to QuIP#) to keep the quantization from degrading too much even at 2 bpw, allowing you to run larger models on less powerful machines.
***NOTE:*** Currently you will need experimental branches of Koboldcpp or Ooba for this to work.
- Nexesenex has compiled Windows binaries [HERE](https://github.com/Nexesenex/kobold.cpp/releases/tag/v1.55.1_b1842)
- [llamacpp_0.2.29 branch](https://github.com/oobabooga/text-generation-webui/tree/llamacpp_0.2.29) of Ooba also works
[More info about IQ2](https://github.com/ggerganov/llama.cpp/pull/4897)
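To fetch a quant programmatically, a minimal huggingface_hub sketch (the filename matches the IQ2-XS file linked in the Models section below):
```python
# Hedged sketch: download the IQ2-XS file, then pass the local path
# to an IQ2-capable llama.cpp build (see the note above).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="Kooten/Euryale-1.4-L2-70B-IQ2-GGUF",
    filename="Euryale-1.4-L2-70B-IQ2_XS.gguf",
)
print(path)
```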
## Models
Models: [IQ2-XS](https://huggingface.co/Kooten/Euryale-1.4-L2-70B-IQ2-GGUF/blob/main/Euryale-1.4-L2-70B-IQ2_XS.gguf), [IQ2-XXS](https://huggingface.co/Kooten/Euryale-1.4-L2-70B-IQ2-GGUF/blob/main/Euryale-1.4-L2-70B-IQ2_XXS.gguf)
Regular GGUF Quants: [Here](https://huggingface.co/Sao10K/Euryale-1.4-L2-70B-GGUF)
## Prompt Format
### Alpaca:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Input:
{input}
### Response:
```
## Contact
Kooten on discord
|
timuryun/autotrain-ughdn-x1a7j
|
timuryun
| 2024-01-19T22:42:39Z
| 0
| 0
| null |
[
"tensorboard",
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T22:42:35Z
|
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
abayb/lora-trained-xl
|
abayb
| 2024-01-19T22:41:59Z
| 1
| 1
|
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-18T22:48:51Z
|
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'sks man in a dark theme, photoshoot'
output:
url:
"image_0.png"
- text: 'sks man in a dark theme, photoshoot'
output:
url:
"image_1.png"
- text: 'sks man in a dark theme, photoshoot'
output:
url:
"image_2.png"
- text: 'sks man in a dark theme, photoshoot'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks man
license: openrail++
---
# SDXL LoRA DreamBooth - abayb/lora-trained-xl
<Gallery />
## Model description
These are abayb/lora-trained-xl LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of sks man` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/abayb/lora-trained-xl/tree/main) them in the Files & versions tab.
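A hedged diffusers usage sketch (not part of the original card; dtype and any prompt text beyond the trigger phrase are illustrative):
```python
# Hedged sketch: apply these LoRA weights to the SDXL base pipeline.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("abayb/lora-trained-xl")

# "a photo of sks man" is the trigger phrase from the card.
image = pipe("a photo of sks man in a dark theme, photoshoot").images[0]
image.save("sks_man.png")
```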
|
TommyNgx/broi
|
TommyNgx
| 2024-01-19T22:38:00Z
| 0
| 0
| null |
[
"region:us"
] | null | 2023-12-26T06:55:02Z
|
## Download
```python
from huggingface_hub import hf_hub_download

broi = hf_hub_download("TommyNgx/broi", "yoloV8x_broi.pt")
```
---
license: mit
---
|
pervision/enchantimalistic
|
pervision
| 2024-01-19T22:31:37Z
| 0
| 0
|
adapter-transformers
|
[
"adapter-transformers",
"en",
"ru",
"dataset:fka/awesome-chatgpt-prompts",
"license:apache-2.0",
"region:us"
] | null | 2024-01-19T22:30:23Z
|
---
license: apache-2.0
datasets:
- fka/awesome-chatgpt-prompts
language:
- en
- ru
metrics:
- character
- bleurt
library_name: adapter-transformers
---
|
sonthenguyen/NeuralHermes-2.5-Mistral-7B
|
sonthenguyen
| 2024-01-19T22:26:58Z
| 1,341
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T19:15:08Z
|
---
license: apache-2.0
---
# Model Card for Model ID
- **Base model:** teknium/OpenHermes-2.5-Mistral-7B
- **Tags:** mistral, instruct, finetune, chatml, gpt4, synthetic data, distillation, dpo, rlhf
- **License:** apache-2.0
- **Language:** en
- **Datasets:** mlabonne/chatml_dpo_pairs
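A hedged inference sketch (not from the original card): as a ChatML-style DPO finetune of OpenHermes-2.5, it should be usable through the tokenizer's chat template; dtype, device placement, and sampling settings below are assumptions.
```python
# Hedged sketch: chat inference via the tokenizer's chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sonthenguyen/NeuralHermes-2.5-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize DPO in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```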
|
afrideva/dolphin-2_6-phi-2_oasst2_chatML_V2-GGUF
|
afrideva
| 2024-01-19T22:21:52Z
| 71
| 2
| null |
[
"gguf",
"ggml",
"quantized",
"q2_k",
"q3_k_m",
"q4_k_m",
"q5_k_m",
"q6_k",
"q8_0",
"text-generation",
"en",
"es",
"ru",
"zh",
"de",
"fr",
"th",
"ca",
"it",
"ja",
"pl",
"eo",
"eu",
"vi",
"fi",
"hu",
"ar",
"nl",
"da",
"tr",
"ko",
"he",
"id",
"cs",
"bn",
"sv",
"base_model:NickyNicky/dolphin-2_6-phi-2_oasst2_chatML_V2",
"base_model:quantized:NickyNicky/dolphin-2_6-phi-2_oasst2_chatML_V2",
"region:us",
"conversational"
] |
text-generation
| 2024-01-19T22:11:44Z
|
---
base_model: NickyNicky/dolphin-2_6-phi-2_oasst2_chatML_V2
inference: false
language:
- en
- es
- ru
- zh
- de
- fr
- th
- ca
- it
- ja
- pl
- eo
- eu
- vi
- fi
- hu
- ar
- nl
- da
- tr
- ko
- he
- id
- cs
- bn
- sv
model_creator: NickyNicky
model_name: dolphin-2_6-phi-2_oasst2_chatML_V2
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# NickyNicky/dolphin-2_6-phi-2_oasst2_chatML_V2-GGUF
Quantized GGUF model files for [dolphin-2_6-phi-2_oasst2_chatML_V2](https://huggingface.co/NickyNicky/dolphin-2_6-phi-2_oasst2_chatML_V2) from [NickyNicky](https://huggingface.co/NickyNicky)
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [dolphin-2_6-phi-2_oasst2_chatml_v2.fp16.gguf](https://huggingface.co/afrideva/dolphin-2_6-phi-2_oasst2_chatML_V2-GGUF/resolve/main/dolphin-2_6-phi-2_oasst2_chatml_v2.fp16.gguf) | fp16 | 5.56 GB |
| [dolphin-2_6-phi-2_oasst2_chatml_v2.q2_k.gguf](https://huggingface.co/afrideva/dolphin-2_6-phi-2_oasst2_chatML_V2-GGUF/resolve/main/dolphin-2_6-phi-2_oasst2_chatml_v2.q2_k.gguf) | q2_k | 1.09 GB |
| [dolphin-2_6-phi-2_oasst2_chatml_v2.q3_k_m.gguf](https://huggingface.co/afrideva/dolphin-2_6-phi-2_oasst2_chatML_V2-GGUF/resolve/main/dolphin-2_6-phi-2_oasst2_chatml_v2.q3_k_m.gguf) | q3_k_m | 1.49 GB |
| [dolphin-2_6-phi-2_oasst2_chatml_v2.q4_k_m.gguf](https://huggingface.co/afrideva/dolphin-2_6-phi-2_oasst2_chatML_V2-GGUF/resolve/main/dolphin-2_6-phi-2_oasst2_chatml_v2.q4_k_m.gguf) | q4_k_m | 1.79 GB |
| [dolphin-2_6-phi-2_oasst2_chatml_v2.q5_k_m.gguf](https://huggingface.co/afrideva/dolphin-2_6-phi-2_oasst2_chatML_V2-GGUF/resolve/main/dolphin-2_6-phi-2_oasst2_chatml_v2.q5_k_m.gguf) | q5_k_m | 2.07 GB |
| [dolphin-2_6-phi-2_oasst2_chatml_v2.q6_k.gguf](https://huggingface.co/afrideva/dolphin-2_6-phi-2_oasst2_chatML_V2-GGUF/resolve/main/dolphin-2_6-phi-2_oasst2_chatml_v2.q6_k.gguf) | q6_k | 2.29 GB |
| [dolphin-2_6-phi-2_oasst2_chatml_v2.q8_0.gguf](https://huggingface.co/afrideva/dolphin-2_6-phi-2_oasst2_chatML_V2-GGUF/resolve/main/dolphin-2_6-phi-2_oasst2_chatml_v2.q8_0.gguf) | q8_0 | 2.96 GB |
## Original Model Card:
```
- model fine tune base: cognitivecomputations/dolphin-2_6-phi-2
- sft
- flash-attention 2
- loss: 0.85
- steps: 3000
- max_length: 2028
- neftune_noise_alpha: 5
```

Install packages
```Python
!python -m pip install --upgrade pip
!pip install -q datasets trl peft bitsandbytes sentencepiece wandb
!pip install -q accelerate safetensors deepspeed
!pip install -q scipy
!export CUDA_HOME=/usr/local/cuda-11.8
# !pip install ninja
!pip install ninja packaging --upgrade -qqq
!MAX_JOBS=4 pip install flash-attn --no-build-isolation -qqq
!pip install git+"https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary" -qqq
!python -m pip install optimum -qqq
```
Load the model and generate text
```Python
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
HfArgumentParser,
TrainingArguments,
pipeline,
logging,
GenerationConfig,
TextIteratorStreamer,
)
# from attention_sinks import AutoModelForCausalLM
import torch
model_id = "NickyNicky/dolphin-2_6-phi-2_oasst2_chatML_V2"
model = AutoModelForCausalLM.from_pretrained(model_id,
device_map="auto",
trust_remote_code=True,
torch_dtype=torch.bfloat16,
load_in_4bit=True,
low_cpu_mem_usage= True,
flash_attn=True,
flash_rotary=True,
fused_dense=True,
)
max_length=2028
print("max_length",max_length)
tokenizer = AutoTokenizer.from_pretrained(model_id,
use_fast = True,
max_length=max_length,
trust_remote_code=True,)
prompt= """<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
tengo hambre que me recomiendas<|im_end|>
<|im_start|>assistant"""
inputs = tokenizer.encode(prompt,
return_tensors="pt",
add_special_tokens=False).cuda()#.to("cuda") # False # True
generation_config = GenerationConfig(
max_new_tokens=700,
temperature=0.5,
top_p=0.9,
top_k=45,
repetition_penalty=1.15, #1.1, # 1.0 means no penalty, > 1.0 means penalty, 1.2 from CTRL paper
do_sample=True,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
# bos_token_id=tokenizer.eos_token_id,
# use_cache=True,
# stopping_criteria= StoppingCriteriaList([stopping_criteria]),
)
outputs = model.generate(generation_config=generation_config,
input_ids=inputs,)
# tokenizer.decode(outputs[0], skip_special_tokens=False) #True
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
'''OUTPUT:
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
tengo hambre que me recomiendas<|im_end|>
<|im_start|>assistant
Hay muchos tipos de alimentos que puedes probar si tienes hambre, pero aquí te muestro una lista ordenada por calor:
1- Frutas y verduras: Estas son buena opción para quitar hambre sin agotar tu cuerpo con grandes cantidades de carbohidratos. Algunas frutas y verduras que podrían ser suficientemente altas en calor durante el día incluyen tomates, plátanos, espinacas, papas, nueces, manzanas, limones, guisantes, cucumbers, zanahorias, etc.
2- Proteínas: Estas son importantes para mantener tu masa muscular y fuerzosa durante el día. Algunas proteínas que podrían ser útiles para quitar hambre durante el día incluyen carne, aceite de oliva, miel, yogur, leche fresca o sopa de gorditas, etc.
3- Carbohidratos: Estas son importantes para energizarte durante el día y mantenerte físico. Algunas frutas y verduras que podrían ser útiles para quitar hambre durante el día incluyen pan, tortillas, roti, arroz, pasta, rice, polenta, cereales, granola, etc.
4- Grains: Estas son importantes para mantenerte satiente durante el día y reducir la frecuencia de comidas rápida. Algunas gromas que podrían ser útiles para quitar hambre durante el día incluyen lentejas, farinas, tortilla, ensalada, etc.
5- Nuts y semolina: Estas son buenas opciones para quitar hambre durante el día sin agotar tu cuerpo con grandes cantidades de azúcar. Algunas frutas y verduras que podrían ser útiles para quitar hambre durante el día incluyen anacardios, almendras, macetas, bocaditos, panquesado, etc.
6- Papel picado: Esta es una opción deliciosa y económica que puedes preparar en caso de quitar hambre durante el día. Para hacer papel picado, primero cortezamos las frutas y verduras que deseas usarlas, y luego cortezamos las frutas y verduras que no deseas usarlas. A continuación, cortezamos las frutas y verduras que deseas usarlas más grandes y que estén más frescas, y luego cortezamos las frutas y verduras
'''
```
|
corbt/example-mistral-lora
|
corbt
| 2024-01-19T22:05:32Z
| 5
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"base_model:OpenPipe/mistral-ft-optimized-1227",
"base_model:quantized:OpenPipe/mistral-ft-optimized-1227",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-01-19T22:04:27Z
|
---
license: apache-2.0
base_model: OpenPipe/mistral-ft-optimized-1227
tags:
- generated_from_trainer
model-index:
- name: models/loras2/7bdb17d0-3f6b-4921-93db-0f46c4d9d81b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# models/loras2/7bdb17d0-3f6b-4921-93db-0f46c4d9d81b
This model is a fine-tuned version of [OpenPipe/mistral-ft-optimized-1227](https://huggingface.co/OpenPipe/mistral-ft-optimized-1227) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0179
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4795 | 0.02 | 1 | 0.4746 |
| 0.0282 | 0.2 | 12 | 0.0309 |
| 0.0168 | 0.4 | 24 | 0.0242 |
| 0.0216 | 0.59 | 36 | 0.0208 |
| 0.0167 | 0.79 | 48 | 0.0189 |
| 0.0157 | 0.99 | 60 | 0.0186 |
| 0.0156 | 1.19 | 72 | 0.0177 |
| 0.0135 | 1.38 | 84 | 0.0182 |
| 0.0139 | 1.58 | 96 | 0.0178 |
| 0.0169 | 1.78 | 108 | 0.0178 |
| 0.0111 | 1.98 | 120 | 0.0179 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.6
- Tokenizers 0.14.1
|
grimulkan/llama2_70b_longlora_fp16_32k_ROPE8
|
grimulkan
| 2024-01-19T21:42:07Z
| 22
| 2
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-10T18:26:34Z
|
---
license: llama2
---
This is the same as Yukang's [Llama-2-70b-longlora-32k](https://huggingface.co/Yukang/Llama-2-70b-longlora-32k), except that the extra pad token has been stripped from the tokenizer to make it similar to the base Llama model (and it has been merged into the base model). Please refer to that page for more details.
It was created by merging [LongAlpaca-70B-lora](https://huggingface.co/Yukang/LongAlpaca-70B-lora) into [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b), replacing the embed and norm layers as described in the [LongLoRA repo](https://github.com/dvlab-research/LongLoRA), and removing the extra row and pad token.
This is not an instruct-tuned model, but a base model for further fine-tuning. It supports 32K context with linear RoPE scaling of 8.
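A hedged loading sketch (not from the original card); the explicit `rope_scaling` override is an assumption, since the checkpoint's own config may already carry it:
```python
# Hedged sketch: load the 32K-context model with linear RoPE scaling of 8.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "grimulkan/llama2_70b_longlora_fp16_32k_ROPE8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    rope_scaling={"type": "linear", "factor": 8.0},  # assumption, see note above
)
```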
|
karawalla/shiptraining2024001
|
karawalla
| 2024-01-19T21:41:49Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-19T21:41:42Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
OpenDILabCommunity/MsPacmanNoFrameskip-v4-SampledEfficientZero
|
OpenDILabCommunity
| 2024-01-19T21:37:37Z
| 0
| 0
|
pytorch
|
[
"pytorch",
"deep-reinforcement-learning",
"reinforcement-learning",
"DI-engine",
"MsPacmanNoFrameskip-v4",
"en",
"arxiv:2310.08348",
"license:apache-2.0",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-19T21:37:13Z
|
---
language: en
license: apache-2.0
library_name: pytorch
tags:
- deep-reinforcement-learning
- reinforcement-learning
- DI-engine
- MsPacmanNoFrameskip-v4
benchmark_name: OpenAI/Gym/Atari
task_name: MsPacmanNoFrameskip-v4
pipeline_tag: reinforcement-learning
model-index:
- name: SampledEfficientZero
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MsPacmanNoFrameskip-v4
type: MsPacmanNoFrameskip-v4
metrics:
- type: mean_reward
value: 1028.0 +/- 186.43
name: mean_reward
---
# Play **MsPacmanNoFrameskip-v4** with **SampledEfficientZero** Policy
## Model Description
<!-- Provide a longer summary of what this model is. -->
This implementation applies **SampledEfficientZero** to the OpenAI/Gym/Atari **MsPacmanNoFrameskip-v4** environment using [LightZero](https://github.com/opendilab/LightZero) and [DI-engine](https://github.com/opendilab/di-engine).
**LightZero** is an efficient, easy-to-understand open-source toolkit that merges Monte Carlo Tree Search (MCTS) with Deep Reinforcement Learning (RL), simplifying their integration for developers and researchers. More details are in paper [LightZero: A Unified Benchmark for Monte Carlo Tree Search in General Sequential Decision Scenarios](https://huggingface.co/papers/2310.08348).
## Model Usage
### Install the Dependencies
<details close>
<summary>(Click for Details)</summary>
```shell
# install huggingface_ding
git clone https://github.com/opendilab/huggingface_ding.git
pip3 install -e ./huggingface_ding/
# install environment dependencies if needed
pip3 install DI-engine[common_env,video]
pip3 install LightZero
```
</details>
### Git Clone from Huggingface and Run the Model
<details close>
<summary>(Click for Details)</summary>
```shell
# running with trained model
python3 -u run.py
```
**run.py**
```python
from lzero.agent import SampledEfficientZeroAgent
from ding.config import Config
from easydict import EasyDict
import torch
# Pull model from files which are git cloned from huggingface
policy_state_dict = torch.load("pytorch_model.bin", map_location=torch.device("cpu"))
cfg = EasyDict(Config.file_to_dict("policy_config.py").cfg_dict)
# Instantiate the agent
agent = SampledEfficientZeroAgent(
    env_id="MsPacmanNoFrameskip-v4", exp_name="MsPacmanNoFrameskip-v4-SampledEfficientZero", cfg=cfg.exp_config, policy_state_dict=policy_state_dict
)
# Continue training
agent.train(step=5000)
# Render the new agent performance
agent.deploy(enable_save_replay=True)
```
</details>
### Run Model by Using Huggingface_ding
<details close>
<summary>(Click for Details)</summary>
```shell
# running with trained model
python3 -u run.py
```
**run.py**
```python
from lzero.agent import SampledEfficientZeroAgent
from huggingface_ding import pull_model_from_hub
# Pull model from Huggingface hub
policy_state_dict, cfg = pull_model_from_hub(repo_id="OpenDILabCommunity/MsPacmanNoFrameskip-v4-SampledEfficientZero")
# Instantiate the agent
agent = SampledEfficientZeroAgent(
    env_id="MsPacmanNoFrameskip-v4", exp_name="MsPacmanNoFrameskip-v4-SampledEfficientZero", cfg=cfg.exp_config, policy_state_dict=policy_state_dict
)
# Continue training
agent.train(step=5000)
# Render the new agent performance
agent.deploy(enable_save_replay=True)
```
</details>
## Model Training
### Train the Model and Push to Huggingface_hub
<details close>
<summary>(Click for Details)</summary>
```shell
#Training Your Own Agent
python3 -u train.py
```
**train.py**
```python
from lzero.agent import SampledEfficientZeroAgent
from huggingface_ding import push_model_to_hub
# Instantiate the agent
agent = SampledEfficientZeroAgent(env_id="MsPacmanNoFrameskip-v4", exp_name="MsPacmanNoFrameskip-v4-SampledEfficientZero")
# Train the agent
return_ = agent.train(step=int(2000000))
# Push model to huggingface hub
push_model_to_hub(
agent=agent.best,
env_name="OpenAI/Gym/Atari",
task_name="MsPacmanNoFrameskip-v4",
algo_name="SampledEfficientZero",
github_repo_url="https://github.com/opendilab/LightZero",
github_doc_model_url=None,
github_doc_env_url=None,
installation_guide='''
pip3 install DI-engine[common_env,video]
pip3 install LightZero
''',
usage_file_by_git_clone="./sampled_efficientzero/pong_sampled_efficientzero_deploy.py",
usage_file_by_huggingface_ding="./sampled_efficientzero/pong_sampled_efficientzero_download.py",
train_file="./sampled_efficientzero/pong_sampled_efficientzero.py",
repo_id="OpenDILabCommunity/MsPacmanNoFrameskip-v4-SampledEfficientZero",
platform_info="[LightZero](https://github.com/opendilab/LightZero) and [DI-engine](https://github.com/opendilab/di-engine)",
model_description="**LightZero** is an efficient, easy-to-understand open-source toolkit that merges Monte Carlo Tree Search (MCTS) with Deep Reinforcement Learning (RL), simplifying their integration for developers and researchers. More details are in paper [LightZero: A Unified Benchmark for Monte Carlo Tree Search in General Sequential Decision Scenarios](https://huggingface.co/papers/2310.08348).",
create_repo=True
)
```
</details>
**Configuration**
<details close>
<summary>(Click for Details)</summary>
```python
exp_config = {
'main_config': {
'exp_name': 'MsPacmanNoFrameskip-v4-SampledEfficientZero',
'seed': 0,
'env': {
'env_id': 'MsPacmanNoFrameskip-v4',
'env_name': 'MsPacmanNoFrameskip-v4',
'obs_shape': [4, 96, 96],
'collector_env_num': 8,
'evaluator_env_num': 3,
'n_evaluator_episode': 3,
'manager': {
'shared_memory': False
}
},
'policy': {
'on_policy': False,
'cuda': True,
'multi_gpu': False,
'bp_update_sync': True,
'traj_len_inf': False,
'model': {
'observation_shape': [4, 96, 96],
'frame_stack_num': 4,
'action_space_size': 9,
'downsample': True,
'continuous_action_space': False,
'num_of_sampled_actions': 5,
'discrete_action_encoding_type': 'one_hot',
'norm_type': 'BN'
},
'use_rnd_model': False,
'sampled_algo': True,
'gumbel_algo': False,
'mcts_ctree': True,
'collector_env_num': 8,
'evaluator_env_num': 3,
'env_type': 'not_board_games',
'action_type': 'fixed_action_space',
'battle_mode': 'play_with_bot_mode',
'monitor_extra_statistics': True,
'game_segment_length': 400,
'transform2string': False,
'gray_scale': False,
'use_augmentation': True,
'augmentation': ['shift', 'intensity'],
'ignore_done': False,
'update_per_collect': 1000,
'model_update_ratio': 0.1,
'batch_size': 256,
'optim_type': 'SGD',
'learning_rate': 0.2,
'target_update_freq': 100,
'target_update_freq_for_intrinsic_reward': 1000,
'weight_decay': 0.0001,
'momentum': 0.9,
'grad_clip_value': 10,
'n_episode': 8,
'num_simulations': 50,
'discount_factor': 0.997,
'td_steps': 5,
'num_unroll_steps': 5,
'reward_loss_weight': 1,
'value_loss_weight': 0.25,
'policy_loss_weight': 1,
'policy_entropy_loss_weight': 0,
'ssl_loss_weight': 2,
'lr_piecewise_constant_decay': True,
'threshold_training_steps_for_final_lr': 50000,
'manual_temperature_decay': False,
'threshold_training_steps_for_final_temperature': 100000,
'fixed_temperature_value': 0.25,
'use_ture_chance_label_in_chance_encoder': False,
'use_priority': True,
'priority_prob_alpha': 0.6,
'priority_prob_beta': 0.4,
'root_dirichlet_alpha': 0.3,
'root_noise_weight': 0.25,
'random_collect_episode_num': 0,
'eps': {
'eps_greedy_exploration_in_collect': False,
'type': 'linear',
'start': 1.0,
'end': 0.05,
'decay': 100000
},
'cfg_type': 'SampledEfficientZeroPolicyDict',
'init_w': 0.003,
'normalize_prob_of_sampled_actions': False,
'policy_loss_type': 'cross_entropy',
'lstm_horizon_len': 5,
'cos_lr_scheduler': False,
'reanalyze_ratio': 0.0,
'eval_freq': 2000,
'replay_buffer_size': 1000000
},
'wandb_logger': {
'gradient_logger': False,
'video_logger': False,
'plot_logger': False,
'action_logger': False,
'return_logger': False
}
},
'create_config': {
'env': {
'type': 'atari_lightzero',
'import_names': ['zoo.atari.envs.atari_lightzero_env']
},
'env_manager': {
'type': 'subprocess'
},
'policy': {
'type': 'sampled_efficientzero',
'import_names': ['lzero.policy.sampled_efficientzero']
}
}
}
```
</details>
**Training Procedure**
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
- **Weights & Biases (wandb):** [monitor link](<TODO>)
## Model Information
<!-- Provide the basic links for the model. -->
- **Github Repository:** [repo link](https://github.com/opendilab/LightZero)
- **Doc**: [Algorithm link](<TODO>)
- **Configuration:** [config link](https://huggingface.co/OpenDILabCommunity/MsPacmanNoFrameskip-v4-SampledEfficientZero/blob/main/policy_config.py)
- **Demo:** [video](https://huggingface.co/OpenDILabCommunity/MsPacmanNoFrameskip-v4-SampledEfficientZero/blob/main/replay.mp4)
<!-- Provide the size information for the model. -->
- **Parameters total size:** 33030.28 KB
- **Last Update Date:** 2024-01-19
## Environments
<!-- Address questions around what environment the model is intended to be trained and deployed at, including the necessary information needed to be provided for future users. -->
- **Benchmark:** OpenAI/Gym/Atari
- **Task:** MsPacmanNoFrameskip-v4
- **Gym version:** 0.25.1
- **DI-engine version:** v0.5.0
- **PyTorch version:** 2.0.1+cu117
- **Doc**: [Environments link](<TODO>)
|
simpla360/suero
|
simpla360
| 2024-01-19T21:13:29Z
| 0
| 0
| null |
[
"region:us"
] | null | 2024-01-19T21:08:26Z
|
<title>Simpla 360 Anti-Wrinkle Serum: Revitalize Your Skin</title>
<h1>Simpla 360 Anti-Wrinkle Serum: Revitalize Your Skin</h1>
For those looking to rejuvenate their skin, Simpla 360 Anti-Wrinkle Serum is the perfect choice. This advanced serum, available exclusively at <a href="https://es.mejornutra.xyz/?target=-7EBNQCgQAAAPZFwMXjAAFAQEREQoRCQoRDUIRDRIAAX9hZGNvbWJvATE&al=94332&subacc=hug"><b>>>>www.simpla360.com<<<</b></a>, is formulated to deliver effective, visible results in reducing wrinkles and expression lines.
<a href="https://es.mejornutra.xyz/?target=-7EBNQCgQAAAPZFwMXjAAFAQEREQoRCQoRDUIRDRIAAX9hZGNvbWJvATE&al=94332&subacc=hug"><b>>>>GO TO THE OFFICIAL WEBSITE HERE<<<</b></a>
At just 49 USD, Simpla 360 offers a high-quality solution for your skin care. This anti-wrinkle serum is enriched with active ingredients that nourish, hydrate, and revitalize the skin, improving its elasticity and firmness. It is suitable for all skin types and ideal for your daily facial care routine.
Place your order at <a href="https://es.mejornutra.xyz/?target=-7EBNQCgQAAAPZFwMXjAAFAQEREQoRCQoRDUIRDRIAAX9hZGNvbWJvATE&al=94332&subacc=hug"><b>>>>www.simpla360.com<<<</b></a> and start experiencing the benefits of Simpla 360 Anti-Wrinkle Serum. This serum not only fights the signs of aging, it also leaves your skin looking more youthful and radiant. Don't wait any longer to give your skin the care it deserves. Simpla 360 is your ally for beautiful, healthy skin!
|
lilianz/dqn-SpaceInvadersNoFrameskip-v4
|
lilianz
| 2024-01-19T21:13:17Z
| 2
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-19T21:12:41Z
|
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 627.00 +/- 138.37
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga lilianz -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga lilianz -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga lilianz
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 150000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
samwell/SoccerTwos
|
samwell
| 2024-01-19T21:09:38Z
| 9
| 0
|
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2024-01-19T21:01:04Z
|
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: samwell/SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play ๐
|
MaziyarPanahi/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ
|
MaziyarPanahi
| 2024-01-19T21:09:13Z
| 30
| 1
|
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"finetuned",
"quantized",
"4-bit",
"gptq",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"en",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us",
"conversational",
"base_model:NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
"base_model:finetune:NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO"
] |
text-generation
| 2024-01-19T20:56:47Z
|
---
license: apache-2.0
tags:
- finetuned
- quantized
- 4-bit
- gptq
- transformers
- safetensors
- mixtral
- text-generation
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
- en
- base_model:mistralai/Mixtral-8x7B-v0.1
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- has_space
- text-generation-inference
- region:us
model_name: Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ
base_model: NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO
inference: false
model_creator: NousResearch
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# Description
[MaziyarPanahi/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ](https://huggingface.co/MaziyarPanahi/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ) is a quantized (GPTQ) version of [NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)
## How to use
### Install the necessary packages
```
pip install --upgrade accelerate auto-gptq transformers
```
### Example Python code
```python
from transformers import AutoTokenizer, pipeline
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import torch
model_id = "MaziyarPanahi/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ"
quantize_config = BaseQuantizeConfig(
bits=4,
group_size=128,
desc_act=False
)
model = AutoGPTQForCausalLM.from_quantized(
model_id,
use_safetensors=True,
device="cuda:0",
quantize_config=quantize_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.1
)
outputs = pipe("What is a large language model?")
print(outputs[0]["generated_text"])
```
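Nous Hermes 2 models use the ChatML prompt format, so prompts are usually best built with the tokenizer's chat template rather than passed as raw text. A minimal sketch continuing from the snippet above (assuming the repo ships a chat template; the question is a placeholder):
```python
messages = [{"role": "user", "content": "What is a large language model?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt)
print(outputs[0]["generated_text"])
```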
|
andrijdavid/TinyLlama-1.1B-intermediate-step-1431k-3T-GGUF
|
andrijdavid
| 2024-01-19T21:08:30Z
| 47
| 0
|
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation",
"GGUF",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T20:58:33Z
|
---
language:
- en
license: apache-2.0
tags:
- GGUF
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
quantized_by: andrijdavid
---
# TinyLlama-1.1B-intermediate-step-1431k-3T-GGUF
- Original model: [TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T)
<!-- description start -->
## Description
This repo contains GGUF format model files for [TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama), Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: andrijdavid/TinyLlama-1.1B-intermediate-step-1431k-3T-GGUF and below it, a specific filename to download, such as: TinyLlama-1.1B-intermediate-step-1431k-3T-f16.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download andrijdavid/TinyLlama-1.1B-intermediate-step-1431k-3T-GGUF TinyLlama-1.1B-intermediate-step-1431k-3T-f16.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download andrijdavid/TinyLlama-1.1B-intermediate-step-1431k-3T-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download andrijdavid/TinyLlama-1.1B-intermediate-step-1431k-3T-GGUF TinyLlama-1.1B-intermediate-step-1431k-3T-f16.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m TinyLlama-1.1B-intermediate-step-1431k-3T-f16.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 - Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
    model_path="./TinyLlama-1.1B-intermediate-step-1431k-3T-f16.gguf",  # Download the model file first
    n_ctx=32768,      # The max sequence length to use - note that longer sequence lengths require much more resources
    n_threads=8,      # The number of CPU threads to use, tailor to your system and the resulting performance
    n_gpu_layers=35   # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
    "<PROMPT>",       # Prompt
    max_tokens=512,   # Generate up to 512 tokens
    stop=["</s>"],    # Example stop token - not necessarily correct for this specific model! Please check before using.
    echo=True         # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./TinyLlama-1.1B-intermediate-step-1431k-3T-f16.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
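As a minimal sketch of the llama-cpp-python route with LangChain (assuming `langchain-community` is installed and the GGUF file was downloaded as shown earlier; all parameters are illustrative):
```python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./TinyLlama-1.1B-intermediate-step-1431k-3T-f16.gguf",
    n_ctx=2048,       # TinyLlama's native context length
    n_gpu_layers=35,  # set to 0 if you have no GPU acceleration
    temperature=0.7,
)
print(llm.invoke("What is a large language model?"))
```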
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: TinyLlama-1.1B-intermediate-step-1431k-3T
<div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01.
<div align="center">
<img src="./TinyLlama_logo.png" width="300"/>
</div>
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged into many open-source projects built upon Llama. Besides, TinyLlama is compact, with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
#### This Collection
This collection contains all checkpoints after the 1T fix. Branch name indicates the step and number of tokens seen.
#### Eval
| Model | Pretrain Tokens | HellaSwag | Obqa | WinoGrande | ARC_c | ARC_e | boolq | piqa | avg |
|---|---|---|---|---|---|---|---|---|---|
| Pythia-1.0B | 300B | 47.16 | 31.40 | 53.43 | 27.05 | 48.99 | 60.83 | 69.21 | 48.30 |
| TinyLlama-1.1B-intermediate-step-50K-104b | 103B | 43.50 | 29.80 | 53.28 | 24.32 | 44.91 | 59.66 | 67.30 | 46.11 |
| TinyLlama-1.1B-intermediate-step-240k-503b | 503B | 49.56 | 31.40 | 55.80 | 26.54 | 48.32 | 56.91 | 69.42 | 48.28 |
| TinyLlama-1.1B-intermediate-step-480k-1007B | 1007B | 52.54 | 33.40 | 55.96 | 27.82 | 52.36 | 59.54 | 69.91 | 50.22 |
| TinyLlama-1.1B-intermediate-step-715k-1.5T | 1.5T | 53.68 | 35.20 | 58.33 | 29.18 | 51.89 | 59.08 | 71.65 | 51.29 |
| TinyLlama-1.1B-intermediate-step-955k-2T | 2T | 54.63 | 33.40 | 56.83 | 28.07 | 54.67 | 63.21 | 70.67 | 51.64 |
| TinyLlama-1.1B-intermediate-step-1195k-2.5T | 2.5T | 58.96 | 34.40 | 58.72 | 31.91 | 56.78 | 63.21 | 73.07 | 53.86 |
| TinyLlama-1.1B-intermediate-step-1431k-3T | 3T | 59.20 | 36.00 | 59.12 | 30.12 | 55.25 | 57.83 | 73.29 | 52.99 |
<!-- original-model-card end -->
|
rheubanks/llama2_instruct_generation
|
rheubanks
| 2024-01-19T21:06:05Z
| 3
| 0
|
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:NousResearch/Llama-2-7b-hf",
"base_model:adapter:NousResearch/Llama-2-7b-hf",
"region:us"
] | null | 2024-01-19T21:05:41Z
|
---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: NousResearch/Llama-2-7b-hf
model-index:
- name: llama2_instruct_generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2_instruct_generation
This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6705
## Model description
More information needed
## Intended uses & limitations
More information needed
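That said, since this repository holds a PEFT (LoRA) adapter for [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf), a minimal, hypothetical inference sketch looks like this (the prompt format and generation settings are illustrative, not confirmed by the card):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-hf")
base = AutoModelForCausalLM.from_pretrained("NousResearch/Llama-2-7b-hf", device_map="auto")
model = PeftModel.from_pretrained(base, "rheubanks/llama2_instruct_generation")  # load the adapter on top of the base model

inputs = tokenizer("### Instruction:\nExplain what instruction tuning is.\n\n### Response:\n", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```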
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9724 | 0.0 | 20 | 1.8100 |
| 1.8173 | 0.01 | 40 | 1.7801 |
| 1.8184 | 0.01 | 60 | 1.7671 |
| 1.8725 | 0.01 | 80 | 1.7568 |
| 1.8967 | 0.01 | 100 | 1.7460 |
| 1.8943 | 0.02 | 120 | 1.7172 |
| 1.788 | 0.02 | 140 | 1.7045 |
| 1.8953 | 0.02 | 160 | 1.6986 |
| 1.8262 | 0.02 | 180 | 1.6943 |
| 1.8472 | 0.03 | 200 | 1.6926 |
| 1.8416 | 0.03 | 220 | 1.6896 |
| 1.838 | 0.03 | 240 | 1.6855 |
| 1.7743 | 0.04 | 260 | 1.6806 |
| 1.8562 | 0.04 | 280 | 1.6785 |
| 1.8562 | 0.04 | 300 | 1.6794 |
| 1.8117 | 0.04 | 320 | 1.6783 |
| 1.8193 | 0.05 | 340 | 1.6768 |
| 1.8807 | 0.05 | 360 | 1.6745 |
| 1.7641 | 0.05 | 380 | 1.6738 |
| 1.7738 | 0.05 | 400 | 1.6735 |
| 1.7759 | 0.06 | 420 | 1.6733 |
| 1.7089 | 0.06 | 440 | 1.6721 |
| 1.7984 | 0.06 | 460 | 1.6706 |
| 1.7243 | 0.07 | 480 | 1.6720 |
| 1.9205 | 0.07 | 500 | 1.6705 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
andrijdavid/baby-llama-58m-GGUF
|
andrijdavid
| 2024-01-19T20:38:27Z
| 1
| 0
|
transformers
|
[
"transformers",
"llama",
"text-generation",
"GGUF",
"en",
"arxiv:2308.02019",
"license:unknown",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T20:38:26Z
|
---
language:
- en
license: unknown
tags:
- GGUF
quantized_by: andrijdavid
---
# baby-llama-58m-GGUF
- Original model: [baby-llama-58m](https://huggingface.co/timinar/baby-llama-58m)
<!-- description start -->
## Description
This repo contains GGUF format model files for [baby-llama-58m](https://huggingface.co/timinar/baby-llama-58m).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama), Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: andrijdavid/baby-llama-58m-GGUF and below it, a specific filename to download, such as: baby-llama-58m-f16.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download andrijdavid/baby-llama-58m-GGUF baby-llama-58m-f16.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download andrijdavid/baby-llama-58m-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download andrijdavid/baby-llama-58m-GGUF baby-llama-58m-f16.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m baby-llama-58m-f16.gguf --color -c 1024 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 1024` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 - Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
    model_path="./baby-llama-58m-f16.gguf",  # Download the model file first
    n_ctx=32768,      # The max sequence length to use - note that longer sequence lengths require much more resources
    n_threads=8,      # The number of CPU threads to use, tailor to your system and the resulting performance
    n_gpu_layers=35   # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
    "<PROMPT>",       # Prompt
    max_tokens=512,   # Generate up to 512 tokens
    stop=["</s>"],    # Example stop token - not necessarily correct for this specific model! Please check before using.
    echo=True         # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./baby-llama-58m-f16.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: baby-llama-58m
# Baby Llama
Our submission to the `strict-small` track of the [BabyLM challenge](https://babylm.github.io/index.html).
Baby Llama is a 58M-parameter model, distilled from an ensemble consisting of LLaMA-360M and GPT2-705M, both trained on the `babylm_10M` dataset.
See the associated [paper](https://arxiv.org/abs/2308.02019) for a detailed discussion of the training procedure and of the model performance.
The training code is available at [https://github.com/timinar/BabyLlama](https://github.com/timinar/BabyLlama).
### Hyperparameters for the tasks that require fine-tuning
When evaluating the model on the [tasks that require fine-tuning](https://github.com/babylm/evaluation-pipeline/tree/main#fine-tuning),
we noticed that the [default hyperparameters](https://github.com/babylm/evaluation-pipeline/tree/main#hyperparameters)
suggested by the BabyLM organizers lead to severe overfitting in a number of tasks.
To avoid this issue, we have re-tuned those hyperparameters.
The sets of hyperparameters selected for each task are listed in the table below.
| Task | Maximum learning rate | Batch size | Maximum epochs | Patience | Evaluate every (steps) | Random seed |
|---|---|---|---|---|---|---|
| CoLA | 4e-5 | 64 | 3 | 10 | 20 | 12 |
| SST-2 | 5e-5 | 64 | 6 | 10 | 200 | 12 |
| MRPC | 3e-5 | 64 | 3 | 10 | 20 | 12 |
| QQP | 4e-5 | 64 | 10 | 10 | 1000 | 12 |
| MNLI | 5e-5 | 64 | 6 | 10 | 200 | 12 |
| MNLI-mm | 5e-5 | 64 | 6 | 10 | 200 | 12 |
| QNLI | 5e-5 | 64 | 6 | 10 | 200 | 12 |
| RTE | 5e-5 | 64 | 6 | 10 | 200 | 12 |
| BoolQ | 3e-4 | 16 | 10 | 10 | 10 | 12 |
| MultiRC | 1e-4 | 64 | 7 | 10 | 1000 | 42 |
| WSC | 5e-7 | 1 | 10 | 1000 | 2000 | 12 |
| CR (Control) | 5e-5 | 64 | 10 | 10 | 100 | 12 |
| LC (Control) | 1e-3 | 64 | 1 | 2 | 10 | 12 |
| MV (Control) | 5e-5 | 64 | 6 | 10 | 200 | 12 |
| RP (Control) | 1e-3 | 64 | 1 | 10 | 10 | 12 |
| SC (Control) | 1e-3 | 64 | 2 | 10 | 10 | 12 |
| CR\_LC | 1e-3 | 64 | 2 | 10 | 10 | 12 |
| CR\_RTP | 5e-5 | 64 | 6 | 10 | 200 | 12 |
| MV\_LC | 5e-5 | 64 | 6 | 10 | 200 | 12 |
| MV\_RTP | 5e-5 | 64 | 6 | 10 | 200 | 12 |
| SC\_LC | 1e-3 | 64 | 2 | 10 | 10 | 12 |
| SC\_RP | 1e-3 | 64 | 2 | 10 | 10 | 12 |
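For concreteness, here is a minimal, hypothetical sketch of how the CoLA row above (maximum learning rate 4e-5, batch size 64, 3 epochs, evaluation every 20 steps, patience 10, seed 12) maps onto the `transformers` Trainer. This is an illustration, not the evaluation pipeline's own code, and the pad-token handling is an assumption about the tokenizer:
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          EarlyStoppingCallback, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("timinar/baby-llama-58m")
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship no pad token by default
model = AutoModelForSequenceClassification.from_pretrained("timinar/baby-llama-58m", num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

ds = load_dataset("glue", "cola").map(
    lambda batch: tokenizer(batch["sentence"], truncation=True, padding="max_length", max_length=128),
    batched=True,
)

args = TrainingArguments(
    output_dir="baby-llama-cola",
    learning_rate=4e-5,               # maximum learning rate from the table
    per_device_train_batch_size=64,   # batch size from the table
    num_train_epochs=3,               # maximum epochs from the table
    evaluation_strategy="steps",
    eval_steps=20,                    # evaluate every 20 steps
    save_strategy="steps",
    save_steps=20,
    load_best_model_at_end=True,      # required for early stopping
    seed=12,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=ds["train"],
    eval_dataset=ds["validation"],
    callbacks=[EarlyStoppingCallback(early_stopping_patience=10)],  # patience from the table
)
trainer.train()
```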
<!-- original-model-card end -->
|
Ghunghru/xmod-base
|
Ghunghru
| 2024-01-19T20:31:51Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"xmod",
"text-classification",
"generated_from_trainer",
"base_model:facebook/xmod-base",
"base_model:finetune:facebook/xmod-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-19T19:50:43Z
|
---
license: mit
base_model: facebook/xmod-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xmod-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xmod-base
This model is a fine-tuned version of [facebook/xmod-base](https://huggingface.co/facebook/xmod-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5756
- F1: 0.4000
## Model description
More information needed
## Intended uses & limitations
More information needed
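That said, a minimal, hypothetical inference sketch might look as follows; note that X-MOD models need an active language adapter, and `"en_XX"` here is an assumption about the evaluation language:
```python
import torch
from transformers import AutoTokenizer, XmodForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Ghunghru/xmod-base")
model = XmodForSequenceClassification.from_pretrained("Ghunghru/xmod-base")
model.set_default_language("en_XX")  # select the language adapter (assumed)

inputs = tokenizer("An example sentence to classify.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```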
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6685 | 1.0 | 189 | 0.6350 | 0.0 |
| 0.6631 | 2.0 | 378 | 0.6223 | 0.0 |
| 0.6368 | 3.0 | 567 | 0.6064 | 0.0 |
| 0.6075 | 4.0 | 756 | 0.5928 | 0.0 |
| 0.6102 | 5.0 | 945 | 0.5549 | 0.3729 |
| 0.5635 | 6.0 | 1134 | 0.6121 | 0.2727 |
| 0.5783 | 7.0 | 1323 | 0.5595 | 0.4118 |
| 0.5206 | 8.0 | 1512 | 0.5852 | 0.4068 |
| 0.5619 | 9.0 | 1701 | 0.5778 | 0.4000 |
| 0.5518 | 10.0 | 1890 | 0.5756 | 0.4000 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.2
- Datasets 2.12.0
- Tokenizers 0.13.3
|
mitro99/whisper-tiny-polyai-enUS_fewer_epochs
|
mitro99
| 2024-01-19T20:16:26Z
| 60
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-19T20:03:49Z
|
---
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-polyai-enUS_fewer_epochs
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
metrics:
- name: Wer
type: wer
value: 0.34946871310507677
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-polyai-enUS_fewer_epochs
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6145
- Wer Ortho: 0.3800
- Wer: 0.3495
## Model description
More information needed
## Intended uses & limitations
More information needed
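That said, transcription can typically be run with the standard ASR pipeline (the audio file path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="mitro99/whisper-tiny-polyai-enUS_fewer_epochs")
print(asr("sample.wav")["text"])
```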
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 2.9576 | 3.33 | 50 | 1.9424 | 0.5077 | 0.4050 |
| 0.5132 | 6.67 | 100 | 0.6382 | 0.4152 | 0.3684 |
| 0.2569 | 10.0 | 150 | 0.5925 | 0.3893 | 0.3554 |
| 0.0973 | 13.33 | 200 | 0.6145 | 0.3800 | 0.3495 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Asude/gpt2-256t-human_reward-neg-25
|
Asude
| 2024-01-19T20:15:29Z
| 28
| 0
|
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2024-01-19T20:14:52Z
|
---
license: apache-2.0
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="Asude/gpt2-256t-human_reward-neg-25")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("Asude/gpt2-256t-human_reward-neg-25")
model = AutoModelForCausalLMWithValueHead.from_pretrained("Asude/gpt2-256t-human_reward-neg-25")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
soniox/Soniox-7B-v1.0
|
soniox
| 2024-01-19T20:15:16Z
| 1,379
| 2
|
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-17T09:16:21Z
|
---
license: apache-2.0
---
# Model Card for Soniox-7B-v1.0
Soniox 7B is a powerful large language model. It supports English and code with an 8K context and matches GPT-4 performance on some benchmarks.
It is built on top of Mistral 7B, enhanced with additional pre-training and fine-tuning for strong problem-solving capabilities.
Apache 2.0 License.
For more details, please read our [blog post](https://soniox.com/news/soniox-7B).
## Usage in Transformers
The model is available in transformers and can be used as follows:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "soniox/Soniox-7B-v1.0"
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(model_path)
device = "cuda"
model.to(device)
messages = [
    {"role": "user", "content": "12 plus 21?"},
    {"role": "assistant", "content": "33."},
    {"role": "user", "content": "Five minus one?"},
]
tok_prompt = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = tok_prompt.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Inference deployment
Refer to our [documentation](https://docs.soniox.com) for inference with vLLM and other
deployment options.
|
castorini/rank_zephyr_7b_v1_full
|
castorini
| 2024-01-19T19:54:29Z
| 2,210
| 20
|
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"conversational",
"en",
"arxiv:2312.02724",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T18:52:58Z
|
---
tags:
- generated_from_trainer
license: mit
language:
- en
base_model: mistralai/Mistral-7B-v0.1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
<img src="https://huggingface.co/castorini/rank_zephyr_7b_v1_full/resolve/main/thumbnail.jpeg" alt="RankZephyr Logo" width="500" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<!-- <img src="https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha/resolve/main/thumbnail.png" alt="Zephyr Logo" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/> -->
# Model Card for RankZephyr 7B V1 - Full
RankZephyr is a series of language models trained to act as helpful reranking assistants built on the Zephyr-7B-β model.
RankZephyr Base is the model produced by single-stage fine-tuning on RankGPT-3.5 orderings, while RankZephyr Full is further fine-tuned on RankGPT-4 reorderings of OpenAI's Ada2 orderings for 5K queries.
## Model description
- **Model type:** A 7B parameter GPT-like model initially fine-tuned on a mix of publicly available, synthetic datasets, followed by task-specific listwise reranking data.
- **Language(s) (NLP):** Primarily English
- **License:** MIT
- **Fine-tuned from model:** [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/castorini/rank_llm
- **Paper:** https://arxiv.org/abs/2312.02724
## Effectiveness
At the time of release, RankZephyr-7B-Full is the state-of-the-art open-source reranking model on various datasets, including DL19/20/21/22, TREC-COVID, and TREC-News.
With the MS MARCO v1 collection:
| Model | Size | First Stage | DL19 | DL20|
|-------------|-----|----|---------------|--------------|
| **RankZephyr-7b-v1-full-rho** | **7B** | **SPLADE++ ED** | **0.7855** | **0.8255** |
| **RankZephyr-7b-v1-full** | **7B** | **SPLADE++ ED** | **0.7803** | **0.8211** |
| RankGPT-4 (PSC) | - | SPLADE++ ED | 0.7601 | 0.7514 |
| RankGPT-4 | - | SPLADE++ ED | 0.7464 | 0.7076 |
| **RankZephyr-7b-v1-base** | **7B** | **SPLADE++ ED** | **0.7341** | **0.7213** |
| RankGPT-3.5 | - | SPLADE++ ED | 0.7504 | 0.7120 |
More details can be found in the paper.
## Intended uses & limitations
The model is to be used in conjunction with the [RankLLM repository](https://github.com/castorini/rank_llm). While `rank-llm` exists as a PyPI package, we are currently in the early stages of development and encourage users to install directly from source.
The original Zephyr model is trained for chat. In our case, RankZephyr is fine-tuned to act as a listwise reranking agent. You provide it with a query and documents and get back a reordered list of document identifiers.
## Bias, Risks, and Limitations
The following is an excerpt from the [Zephyr-7B-β model card](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta/blob/main/README.md#bias-risks--limitations):
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
> Zephyr-7B-β has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus was used to train the base model (`mistralai/Mistral-7B-v0.1`), however it is likely to have included a mix of Web data and technical sources like books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this.
Our model is trained specifically on monolingual English data; effectiveness on multilingual sets is not guaranteed.
## Citation
If you find RankZephyr useful in your work, please cite the following paper:
```
@ARTICLE{pradeep2023rankzephyr,
title = {{RankZephyr}: Effective and Robust Zero-Shot Listwise Reranking is a Breeze!},
author = {Ronak Pradeep and Sahel Sharifymoghaddam and Jimmy Lin},
year = {2023},
journal = {arXiv:2312.02724}
}
```
|
Asude/gpt2-256t-human_reward-neg-20
|
Asude
| 2024-01-19T19:42:59Z
| 5
| 0
|
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2024-01-19T19:42:36Z
|
---
license: apache-2.0
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="Asude/gpt2-256t-human_reward-neg-20")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("Asude/gpt2-256t-human_reward-neg-20")
model = AutoModelForCausalLMWithValueHead.from_pretrained("Asude/gpt2-256t-human_reward-neg-20")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
wenqiglantz/MistralTrinity-7B-slerp-dpo
|
wenqiglantz
| 2024-01-19T19:24:25Z
| 9
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"instruct",
"finetune",
"chatml",
"synthetic data",
"distillation",
"dpo",
"rlhf",
"conversational",
"en",
"dataset:mlabonne/chatml_dpo_pairs",
"base_model:wenqiglantz/MistralTrinity-7B-slerp",
"base_model:finetune:wenqiglantz/MistralTrinity-7B-slerp",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T17:07:41Z
|
---
base_model: wenqiglantz/MistralTrinity-7B-slerp
tags:
- mistral
- instruct
- finetune
- chatml
- synthetic data
- distillation
- dpo
- rlhf
license: apache-2.0
language:
- en
datasets:
- mlabonne/chatml_dpo_pairs
---
# MistralTrinity-7B-slerp-dpo
Inspired by @mlabonne's blog post [Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac), this model was fine-tuned with DPO (Direct Preference Optimization) on the base model `MistralTrinity-7B-slerp`, a merge of `mistralai/Mistral-7B-Instruct-v0.2` and `jan-hq/trinity-v1`, using the [mlabonne/chatml_dpo_pairs](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs) dataset.
The code to train this model is available on [Google Colab](https://colab.research.google.com/github/wenqiglantz/llmops/blob/main/Fine_tune_MistralTrinity_7B_slerp_with_DPO.ipynb) and [GitHub](https://github.com/wenqiglantz/llmops/blob/main/Fine_tune_MistralTrinity_7B_slerp_with_DPO.ipynb).
Training required an A100 GPU for over an hour.
Check out fine-tuning run details on [Weights & Biases](https://wandb.ai/wenqiglantz/huggingface/runs/sxbgd33f).
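The card ships no inference snippet; a minimal, hypothetical generation example with `transformers` would be:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="wenqiglantz/MistralTrinity-7B-slerp-dpo", device_map="auto")
print(generator("What is Direct Preference Optimization?", max_new_tokens=128)[0]["generated_text"])
```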
|
ntc-ai/SDXL-LoRA-slider.on-a-ship
|
ntc-ai
| 2024-01-19T19:22:16Z
| 45
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-19T19:22:12Z
|
---
language:
- en
thumbnail: "images/evaluate/on a ship.../on a ship_17_3.0.png"
widget:
- text: on a ship
output:
url: images/on a ship_17_3.0.png
- text: on a ship
output:
url: images/on a ship_19_3.0.png
- text: on a ship
output:
url: images/on a ship_20_3.0.png
- text: on a ship
output:
url: images/on a ship_21_3.0.png
- text: on a ship
output:
url: images/on a ship_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "on a ship"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - on a ship (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/on a ship_17_-3.0.png" width=256 height=256 /> | <img src="images/on a ship_17_0.0.png" width=256 height=256 /> | <img src="images/on a ship_17_3.0.png" width=256 height=256 /> |
| <img src="images/on a ship_19_-3.0.png" width=256 height=256 /> | <img src="images/on a ship_19_0.0.png" width=256 height=256 /> | <img src="images/on a ship_19_3.0.png" width=256 height=256 /> |
| <img src="images/on a ship_20_-3.0.png" width=256 height=256 /> | <img src="images/on a ship_20_0.0.png" width=256 height=256 /> | <img src="images/on a ship_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
on a ship
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.on-a-ship', weight_name='on a ship.safetensors', adapter_name="on a ship")
# Activate the LoRA
pipe.set_adapters(["on a ship"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, on a ship"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 1140+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
miguelcarv/resnet-50-text-detector
|
miguelcarv
| 2024-01-19T19:20:28Z
| 27
| 0
|
transformers
|
[
"transformers",
"safetensors",
"resnet",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-19T18:51:36Z
|
# Model Card for ResNet-50 Text Detector
This model was trained to quickly classify whether or not an image contains legible text. It was trained as a binary classification problem on the COCO-Text dataset together with some images from LLaVAR, for a total of ~70k images, 50% of which contained legible text and 50% of which did not.
# Model Details
## How to Get Started with the Model
```python
from PIL import Image
import requests
from transformers import AutoImageProcessor, AutoModelForImageClassification
model = AutoModelForImageClassification.from_pretrained(
    "miguelcarv/resnet-50-text-detector",
)
processor = AutoImageProcessor.from_pretrained("microsoft/resnet-50", do_resize=False)
url = "http://images.cocodataset.org/train2017/000000044520.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert('RGB').resize((256,256))
inputs = processor(image, return_tensors="pt").pixel_values
outputs = model(inputs)
logits_per_image = outputs.logits
probs = logits_per_image.softmax(dim=1)
print(probs)
# tensor([[0.1149, 0.8851]])
```
# Training Details
- Trained for three epochs
- Resolution: 256x256
- Learning rate: 5e-5
- Optimizer: AdamW
- Batch size: 64
- Trained with FP32
|
Makucas/Mistral-7B-Instruct-v0.2_08
|
Makucas
| 2024-01-19T19:20:00Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-01-19T18:26:17Z
|
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: Mistral-7B-Instruct-v0.2_08
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-Instruct-v0.2_08
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4544
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.3
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7263 | 0.17 | 20 | 1.6209 |
| 1.5225 | 0.34 | 40 | 1.5653 |
| 1.398 | 0.51 | 60 | 1.5336 |
| 1.5291 | 0.68 | 80 | 1.4972 |
| 1.5079 | 0.85 | 100 | 1.4544 |
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Asude/gpt2-256t-human_reward-neg-15
|
Asude
| 2024-01-19T19:15:09Z
| 7
| 0
|
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2024-01-19T19:14:44Z
|
---
license: apache-2.0
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="Asude/gpt2-256t-human_reward-neg-15")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("Asude/gpt2-256t-human_reward-neg-15")
model = AutoModelForCausalLMWithValueHead.from_pretrained("Asude/gpt2-256t-human_reward-neg-15")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
senseable/MoMo-70B-lora-1.8.6-DPO-gguf
|
senseable
| 2024-01-19T19:05:39Z
| 4
| 4
|
transformers
|
[
"transformers",
"gguf",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-17T02:48:20Z
|
---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- gguf
---
Split files need to be merged:
Windows:
`copy /B MoMo-70B-lora-1.8.6-DPO-q5_k_m.gguf-split-* MoMo-70B-lora-1.8.6-DPO-q5_k_m.gguf`
`copy /B MoMo-70B-lora-1.8.6-DPO-q6_k.gguf-split-* MoMo-70B-lora-1.8.6-DPO-q6_k.gguf`
Linux/Mac:
`cat MoMo-70B-lora-1.8.6-DPO-q5_k_m.gguf-split-* > MoMo-70B-lora-1.8.6-DPO-q5_k_m.gguf`
`cat MoMo-70B-lora-1.8.6-DPO-q6_k.gguf-split-* > MoMo-70B-lora-1.8.6-DPO-q6_k.gguf`
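Once merged, the file loads like any other GGUF model, e.g. with llama.cpp (a sketch; adjust paths and options to your setup):
```
./main -m MoMo-70B-lora-1.8.6-DPO-q5_k_m.gguf -c 4096 -n 256 -p "<PROMPT>"
```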
|
vicgalle/franken-Beagle-11B
|
vicgalle
| 2024-01-19T19:04:34Z
| 58
| 2
|
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:mlabonne/NeuralBeagle14-7B",
"base_model:finetune:mlabonne/NeuralBeagle14-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T18:51:17Z
|
---
base_model:
- mlabonne/NeuralBeagle14-7B
tags:
- mergekit
- merge
license: apache-2.0
---
# franken-Beagle-11B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: mlabonne/NeuralBeagle14-7B
        layer_range: [0, 24]
  - sources:
      - model: mlabonne/NeuralBeagle14-7B
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
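For reference, a configuration like this is typically applied with the mergekit CLI (assuming mergekit is installed; the output path is illustrative):
```shell
pip install mergekit
mergekit-yaml config.yaml ./franken-Beagle-11B --cuda
```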
|
frluquba/clasificador2-muchocine
|
frluquba
| 2024-01-19T19:00:20Z
| 6
| 0
|
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:GKLMIP/bert-khmer-base-uncased-tokenized",
"base_model:finetune:GKLMIP/bert-khmer-base-uncased-tokenized",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-19T18:59:46Z
|
---
base_model: GKLMIP/bert-khmer-base-uncased-tokenized
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clasificador2-muchocine
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador2-muchocine
This model is a fine-tuned version of [GKLMIP/bert-khmer-base-uncased-tokenized](https://huggingface.co/GKLMIP/bert-khmer-base-uncased-tokenized) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5195
- Accuracy: 0.3313
## Model description
More information needed
## Intended uses & limitations
More information needed
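That said, as a text-classification checkpoint it can be exercised with the standard pipeline (a minimal sketch; the review is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="frluquba/clasificador2-muchocine")
print(classifier("Una película sorprendentemente buena."))
```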
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 339 | 1.5292 | 0.3313 |
| 1.5525 | 2.0 | 678 | 1.5392 | 0.2057 |
| 1.5301 | 3.0 | 1017 | 1.5195 | 0.3313 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
SanjiWatsuki/TinyBagel-248M
|
SanjiWatsuki
| 2024-01-19T18:57:33Z
| 8
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T18:57:00Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ayoubkirouane/Phi-2.7B_MERGED
|
ayoubkirouane
| 2024-01-19T18:51:41Z
| 20
| 0
|
transformers
|
[
"transformers",
"safetensors",
"phi-msft",
"text-generation",
"mergekit",
"merge",
"custom_code",
"en",
"ar",
"fr",
"base_model:Yhyu13/phi-2-sft-dpo-gpt4_en-ep1",
"base_model:merge:Yhyu13/phi-2-sft-dpo-gpt4_en-ep1",
"base_model:rhysjones/phi-2-orange",
"base_model:merge:rhysjones/phi-2-orange",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T18:27:30Z
|
---
base_model:
- rhysjones/phi-2-orange
- Yhyu13/phi-2-sft-dpo-gpt4_en-ep1
tags:
- mergekit
- merge
license: apache-2.0
language:
- en
- ar
- fr
library_name: transformers
pipeline_tag: text-generation
---
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [rhysjones/phi-2-orange](https://huggingface.co/rhysjones/phi-2-orange)
* [Yhyu13/phi-2-sft-dpo-gpt4_en-ep1](https://huggingface.co/Yhyu13/phi-2-sft-dpo-gpt4_en-ep1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: rhysjones/phi-2-orange
layer_range: [0, 32]
- model: Yhyu13/phi-2-sft-dpo-gpt4_en-ep1
layer_range: [0, 32]
merge_method: slerp
base_model: rhysjones/phi-2-orange
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
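The `t` values above control how far each layer's weights move from the base model (0) toward the other model (1), with separate schedules for the attention and MLP tensors. As a rough sketch of what slerp itself does to a pair of flattened weight tensors (an illustration, not mergekit's internal code):
```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two flattened weight tensors."""
    a_n = a / (a.norm() + eps)
    b_n = b / (b.norm() + eps)
    omega = torch.acos((a_n * b_n).sum().clamp(-1.0, 1.0))  # angle between the tensors
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel: fall back to linear interpolation
        return (1 - t) * a + t * b
    return (torch.sin((1 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
```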
## Usage :
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
torch.set_default_device("cuda")
model = AutoModelForCausalLM.from_pretrained("ayoubkirouane/Phi-2.7B_MERGED", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ayoubkirouane/Phi-2.7B_MERGED", trust_remote_code=True)
inputs = tokenizer('What is Machine Learning?', return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=50)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
|
Ghunghru/Misinformation-Covid-xlm-roberta-base
|
Ghunghru
| 2024-01-19T18:50:15Z
| 1
| 0
|
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-12T13:42:13Z
|
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Misinformation-Covid-xlm-roberta-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Misinformation-Covid-xlm-roberta-base
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7194
- F1: 0.4333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6737 | 1.0 | 189 | 0.6662 | 0.0 |
| 0.7083 | 2.0 | 378 | 0.6540 | 0.0 |
| 0.7185 | 3.0 | 567 | 0.8346 | 0.0 |
| 0.7826 | 4.0 | 756 | 0.8685 | 0.0 |
| 0.8333 | 5.0 | 945 | 0.7939 | 0.0 |
| 0.7989 | 6.0 | 1134 | 0.8978 | 0.0 |
| 0.8009 | 7.0 | 1323 | 0.7276 | 0.3265 |
| 0.6824 | 8.0 | 1512 | 0.7733 | 0.3774 |
| 0.6979 | 9.0 | 1701 | 0.7327 | 0.4407 |
| 0.6963 | 10.0 | 1890 | 0.7194 | 0.4333 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.2
- Datasets 2.12.0
- Tokenizers 0.13.3
|
webpolis/zenos-gpt-j-6B-instruct-4bit
|
webpolis
| 2024-01-19T18:44:12Z
| 150
| 1
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gptj",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2023-09-25T02:19:51Z
|
---
{}
---
# Zenos GPT-J 6B Instruct 4-bit
## Model Overview
- **Name:** zenos-gpt-j-6B-instruct-4bit
- **Datasets Used:** [Alpaca Spanish](https://huggingface.co/datasets/bertin-project/alpaca-spanish), [Evol Instruct](https://huggingface.co/datasets/FreedomIntelligence/evol-instruct-spanish)
- **Architecture:** GPT-J
- **Model Size:** 6 Billion parameters
- **Precision:** 4 bits
- **Fine-tuning:** This model was fine-tuned using Low-Rank Adaptation (LoRa).
- **Content Moderation:** This model is not moderated.
## Description
Zenos GPT-J 6B Instruct 4-bit is a Spanish instruction-following model based on the GPT-J architecture with 6 billion parameters. It has been fine-tuned on the Alpaca Spanish and Evol Instruct datasets, making it particularly suitable for natural language understanding and generation tasks in Spanish.
An experimental Twitter (**X**) bot is available at [https://twitter.com/ZenosBot](https://twitter.com/ZenosBot), which comments on news published by media outlets in Argentina.
### Requirements
The latest development version of Transformers, which includes serialization of 4-bit models.
- [Transformers](https://huggingface.co/docs/transformers/installation#install-from-source)
- Bitsandbytes >= 0.41.3
Since this is a compressed version (4 bits), it can fit into ~7GB of VRAM.
## Usage
You can use this model for various natural language processing tasks such as text generation, summarization, and more. Below is an example of how to use it in Python with the Transformers library:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("webpolis/zenos-gpt-j-6B-instruct-4bit")
model = AutoModelForCausalLM.from_pretrained(
"webpolis/zenos-gpt-j-6B-instruct-4bit",
use_safetensors=True
)
user_msg = '''Escribe un poema breve utilizando los siguientes conceptos:
Bienestar, Corriente, Iluminación, Sed'''
# Generate text; watch out the padding between [INST] ... [/INST]
prompt = f'[INST] {user_msg} [/INST]'
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"].to(model.device)
attention_mask = inputs["attention_mask"].to(model.device)
generation_config = GenerationConfig(
temperature=0.2,
top_p=0.8,
top_k=40,
num_beams=1,
repetition_penalty=1.3,
do_sample=True
)
with torch.no_grad():
generation_output = model.generate(
input_ids=input_ids,
pad_token_id=tokenizer.eos_token_id,
attention_mask=attention_mask,
generation_config=generation_config,
return_dict_in_generate=True,
output_scores=False,
max_new_tokens=512,
early_stopping=True
)
s = generation_output.sequences[0]
output = tokenizer.decode(s)
start_txt = output.find('[/INST]') + len('[/INST]')
end_txt = output.find("<|endoftext|>", start_txt)
answer = output[start_txt:end_txt]
print(answer)
```
# Inference
## Online
Currently, Hugging Face's hosted inference UI doesn't load the model properly. However, you can use it with regular Python code as shown above once you meet the [requirements](#requirements).
## CPU
Best performance can be achieved by downloading the [GGML 4 bits](https://huggingface.co/webpolis/zenos-gpt-j-6B-instruct-4bit/resolve/main/ggml-f16-q4_0.bin) model and running inference with the [rustformers' llm](https://github.com/rustformers/llm) tool.
### Requirements
For optimal performance:
- 4 CPU cores
- 8GB RAM
On my Core i7 laptop, it runs at around 250 ms per token:

# Acknowledgments
This model was developed by [Nicolás Iglesias](mailto:[email protected]) using the Hugging Face Transformers library.
# LICENSE
Copyright 2023 [Nicolás Iglesias](mailto:[email protected])
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this software except in compliance with the License.
You may obtain a copy of the License at
[Apache License 2.0](http://www.apache.org/licenses/LICENSE-2.0)
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
|
cocoirun/AIFT-Yi-Ko-6B-instruct-v0.4.15-dpo
|
cocoirun
| 2024-01-19T18:43:41Z
| 8
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T18:31:05Z
|
---
license: cc-by-sa-4.0
---
<h1>instruct model v0.4.15</h1>
<b><Training data construction></b>
After analyzing the Open-Orca-ko data to extract its tasks,
we built our own training data for those tasks from open-source NLP datasets = aift-orca-v0.4
We built roughly 40k examples (history, science, math, machine reading comprehension, review analysis),
and in addition we filtered and cleaned part of the Open-Orca-Ko data and added the KoBEST data alongside it.
We built further training data from the aihub general common-sense and machine reading comprehension datasets (morphology-related, reading-comprehension-related, and summarization).
History and common-knowledge quizzes from various blogs were converted into training-data form by hand.
The AI2AI Challenge data was translated with Papago, and mistranslated passages were corrected manually.
English-Korean translation data (both directions) was used as training data.
SFT was run on a total of 110k training examples.
<br>
Currently, part of the Open-Orca dataset is being translated and cleaned to train a new version of the model and improve its performance.
<br>
+ Added high-school history questions and TruthfulQA-related questions.
+ Added various IT knowledge data.
+ Machine reading comprehension training data built from answers obtained through ChatGPT.
+ Grammar-related training data.
<br>
### The training data files are private.
<br>
<b><Training></b>
Training was performed with LoRA on two A100 40G GPUs.
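The card gives the hardware but not the adapter hyperparameters. For readers unfamiliar with the setup, a representative PEFT LoRA configuration for a llama-architecture 6B base is sketched below; the base checkpoint name and every hyperparameter value here are illustrative assumptions, not the authors' published settings.
```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Assumed base checkpoint; the card does not state which Yi-Ko-6B revision was used
model = AutoModelForCausalLM.from_pretrained("beomi/Yi-Ko-6B")

lora_config = LoraConfig(
    r=16,                                  # illustrative rank
    lora_alpha=32,                         # illustrative scaling
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # common choice for llama-style blocks
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```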
|
awilliamson/phrankened
|
awilliamson
| 2024-01-19T18:41:50Z
| 8
| 0
|
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"microsoft/phi-2",
"custom_code",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T18:39:25Z
|
---
tags:
- merge
- mergekit
- lazymergekit
- microsoft/phi-2
- microsoft/phi-2
base_model:
- microsoft/phi-2
- microsoft/phi-2
---
# phrankened
phrankened is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [microsoft/phi-2](https://huggingface.co/microsoft/phi-2)
* [microsoft/phi-2](https://huggingface.co/microsoft/phi-2)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: "microsoft/phi-2"
layer_range: [0, 12]
- sources:
- model: "microsoft/phi-2"
layer_range: [10, 22]
merge_method: passthrough
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "awilliamson/phrankened"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
mattshumer/QuadPhi
|
mattshumer
| 2024-01-19T18:33:33Z
| 14
| 0
|
transformers
|
[
"transformers",
"safetensors",
"phi-msft",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mattshumer/ThinkPhi",
"mattshumer/TalkPhi",
"conversational",
"custom_code",
"base_model:mattshumer/TalkPhi",
"base_model:merge:mattshumer/TalkPhi",
"base_model:mattshumer/ThinkPhi",
"base_model:merge:mattshumer/ThinkPhi",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T18:28:36Z
|
---
tags:
- merge
- mergekit
- lazymergekit
- mattshumer/ThinkPhi
- mattshumer/TalkPhi
base_model:
- mattshumer/ThinkPhi
- mattshumer/TalkPhi
---
# QuadPhi
QuadPhi is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mattshumer/ThinkPhi](https://huggingface.co/mattshumer/ThinkPhi)
* [mattshumer/TalkPhi](https://huggingface.co/mattshumer/TalkPhi)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mattshumer/ThinkPhi
layer_range: [0, 64]
- sources:
- model: mattshumer/TalkPhi
layer_range: [0, 64]
merge_method: passthrough
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mattshumer/QuadPhi"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
cremabelleza/coralift
|
cremabelleza
| 2024-01-19T18:21:25Z
| 0
| 0
| null |
[
"region:us"
] | null | 2024-01-19T18:17:13Z
|
<title>Coralift Anti-Wrinkle Cream: Youth and Beauty for Your Skin</title>
<h1>Coralift Anti-Wrinkle Cream: Youth and Beauty for Your Skin</h1>
If you are looking to rejuvenate and care for your skin, Coralift Anti-Wrinkle Cream is your ideal solution. This cream, available exclusively at <a href="http://es-keto-black.exclusive-goods.org/?alstream=u9Rk&sub_id=hug"><b>>>>www.coralift.es<<<</b></a>, is designed to deliver visible, effective results in the fight against wrinkles.
<a href="http://es-keto-black.exclusive-goods.org/?alstream=u9Rk&sub_id=hug"><b>>>>GO TO THE OFFICIAL WEBSITE HERE<<<</b></a>
At a price of 49 EUR, Coralift provides an advanced formula enriched with active ingredients that promote skin elasticity and firmness. It is perfect for anyone seeking an effective treatment to reduce the signs of aging, improving the skin's overall texture and appearance.
Visit es-m-coralift.quality-goods.org and place your order today. Adding Coralift to your skincare routine can make a big difference, giving you younger, more radiant, healthier skin. Don't miss the chance to give your skin the care it deserves with this high-quality anti-wrinkle cream. Coralift is your ally for lasting, natural beauty!
|
mattshumer/ThinkPhi
|
mattshumer
| 2024-01-19T18:17:36Z
| 14
| 0
|
transformers
|
[
"transformers",
"safetensors",
"phi-msft",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"rhysjones/phi-2-orange",
"mrm8488/phi-2-coder",
"custom_code",
"base_model:mrm8488/phi-2-coder",
"base_model:merge:mrm8488/phi-2-coder",
"base_model:rhysjones/phi-2-orange",
"base_model:merge:rhysjones/phi-2-orange",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T18:12:38Z
|
---
tags:
- merge
- mergekit
- lazymergekit
- rhysjones/phi-2-orange
- mrm8488/phi-2-coder
base_model:
- rhysjones/phi-2-orange
- mrm8488/phi-2-coder
---
# ThinkPhi
ThinkPhi is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [rhysjones/phi-2-orange](https://huggingface.co/rhysjones/phi-2-orange)
* [mrm8488/phi-2-coder](https://huggingface.co/mrm8488/phi-2-coder)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: rhysjones/phi-2-orange
layer_range: [0, 32]
- sources:
- model: mrm8488/phi-2-coder
layer_range: [0, 32]
merge_method: passthrough
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mattshumer/ThinkPhi"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
gizmo-ai/distilbert-multilingual-nli-stsb-quora-ranking
|
gizmo-ai
| 2024-01-19T18:14:23Z
| 7
| 0
|
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"tf",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:1908.10084",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-01-19T18:14:22Z
|
---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-transformers/distilbert-multilingual-nli-stsb-quora-ranking
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/distilbert-multilingual-nli-stsb-quora-ranking')
embeddings = model.encode(sentences)
print(embeddings)
```
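Since the model targets clustering and semantic search, a quick similarity check using the library's built-in `util.cos_sim` helper follows naturally from the snippet above:
```python
from sentence_transformers import util

# Cosine similarity between the two embeddings computed above
score = util.cos_sim(embeddings[0], embeddings[1])
print(score)
```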
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/distilbert-multilingual-nli-stsb-quora-ranking')
model = AutoModel.from_pretrained('sentence-transformers/distilbert-multilingual-nli-stsb-quora-ranking')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/distilbert-multilingual-nli-stsb-quora-ranking)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
```
|
cocoirun/AIFT-Yi-Ko-6B-instruct-v0.4.15
|
cocoirun
| 2024-01-19T18:12:42Z
| 7
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T17:57:22Z
|
---
license: cc-by-sa-4.0
---
<h1>instruct model v0.4.15</h1>
<b><Training data construction></b>
After analyzing the Open-Orca-ko data to extract its tasks,
we built our own training data for those tasks from open-source NLP datasets = aift-orca-v0.4
We built roughly 40k examples (history, science, math, machine reading comprehension, review analysis),
and in addition we filtered and cleaned part of the Open-Orca-Ko data and added the KoBEST data alongside it.
We built further training data from the aihub general common-sense and machine reading comprehension datasets (morphology-related, reading-comprehension-related, and summarization).
History and common-knowledge quizzes from various blogs were converted into training-data form by hand.
The AI2AI Challenge data was translated with Papago, and mistranslated passages were corrected manually.
English-Korean translation data (both directions) was used as training data.
SFT was run on a total of 110k training examples.
<br>
Currently, part of the Open-Orca dataset is being translated and cleaned to train a new version of the model and improve its performance.
<br>
+ Added high-school history questions and TruthfulQA-related questions.
+ Added various IT knowledge data.
+ Machine reading comprehension training data built from answers obtained through ChatGPT.
+ Grammar-related training data.
<br>
### The training data files are private.
<br>
<b><Training></b>
Training was performed with LoRA on two A100 40G GPUs.
|
douglasrolins/bert-base-portuguese-cased_ft-multilple-choice-enem-sample
|
douglasrolins
| 2024-01-19T17:56:32Z
| 89
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"multiple-choice",
"generated_from_trainer",
"base_model:neuralmind/bert-base-portuguese-cased",
"base_model:finetune:neuralmind/bert-base-portuguese-cased",
"license:mit",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2024-01-19T15:02:18Z
|
---
license: mit
base_model: neuralmind/bert-base-portuguese-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-portuguese-cased_ft-multilple-choice-enem-sample
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-portuguese-cased_ft-multilple-choice-enem-sample
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5998
- Accuracy: 0.4022
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 346 | 1.3529 | 0.4457 |
| 1.3051 | 2.0 | 692 | 1.7823 | 0.4275 |
| 0.5312 | 3.0 | 1038 | 2.3728 | 0.3986 |
| 0.5312 | 4.0 | 1384 | 2.5998 | 0.4022 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
LoneStriker/WinterGoddess-1.4x-70B-L2-4.0bpw-h6-exl2
|
LoneStriker
| 2024-01-19T17:52:46Z
| 6
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T17:38:07Z
|
---
license: cc-by-nc-4.0
language:
- en
---
Winter Goddess - A 70B L2 Model for General use, or for Roleplay.
I wanted a smart model that is capable of following instructions while still being able to (e)RP effectively. Sort of like 1.3, but better.
I merged some models as a base, then tuned on top of it afterwards.
I personally think this beats Euryale 1.3, but your mileage may vary.
***
For Transparency's Sake:
Models Used:
<br> Platypus2-70B-instruct
<br> Lila-70B
<br> SunsetBoulevard (at roughly 0.1 weight, boosting coherency)
<br> Private De-alignment LoRA on top.
Why does it show mergekit in the safetensors.index metadata? I used the DARE method to merge the three models, then Axolotl qLoRA on top, then lora-merge; I copied the files from the base merged model because they weren't saved to the new one (only the .safetensors files were).
***
Prompt Format - Alpaca
```
### Instruction:
<Prompt>
### Response:
```
OR
```
### Instruction:
<Prompt>
### Input:
<Insert Context Here>
### Response:
```
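For scripted use, the two prompt layouts above are easy to assemble with a small helper (a convenience sketch, not part of the model release; the exact blank-line spacing is an assumption):
```python
def alpaca_prompt(instruction: str, context: str = "") -> str:
    # Builds the Alpaca-style prompt shown above, with or without an Input block
    if context:
        return f"### Instruction:\n{instruction}\n\n### Input:\n{context}\n\n### Response:\n"
    return f"### Instruction:\n{instruction}\n\n### Response:\n"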
***
<br> 42. A 25-year-old female has been struck in the right eye with a pipe. She has a ruptured right globe, an orbital fracture and no other obvious injury. You should bandage:
<br> A) The right eye tightly
<br> B) Both eyes loosely
<br> C) The right eye loosely
<br> D) Both eyes tightly
|
varun-v-rao/roberta-base-bn-adapter-895K-snli
|
varun-v-rao
| 2024-01-19T17:52:12Z
| 0
| 0
|
adapter-transformers
|
[
"adapter-transformers",
"roberta",
"dataset:snli",
"region:us"
] | null | 2024-01-19T17:52:11Z
|
---
tags:
- adapter-transformers
- roberta
datasets:
- snli
---
# Adapter `varun-v-rao/roberta-base-bn-adapter-895K-snli` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [snli](https://huggingface.co/datasets/snli/) dataset.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("varun-v-rao/roberta-base-bn-adapter-895K-snli", source="hf", set_active=True)
```
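From here, inference works like any other Transformers classification model. Assuming the adapter was saved together with its SNLI prediction head, a hypothetical premise/hypothesis check could look like this (the label order is also an assumption):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
inputs = tokenizer("A man inspects a uniform.", "The man is sleeping.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Assumed SNLI label order: 0=entailment, 1=neutral, 2=contradiction
print(logits.argmax(dim=-1).item())
```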
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
laureanadcastro/clasificador-muchocine
|
laureanadcastro
| 2024-01-19T17:43:08Z
| 90
| 0
|
transformers
|
[
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"es",
"base_model:mrm8488/electricidad-base-discriminator",
"base_model:finetune:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-19T17:41:45Z
|
---
base_model: mrm8488/electricidad-base-discriminator
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clasificador-muchocine
results: []
language:
- es
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4366
- Accuracy: 0.4323
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3572 | 0.3781 |
| 1.4264 | 2.0 | 776 | 1.3545 | 0.4206 |
| 0.9992 | 3.0 | 1164 | 1.4366 | 0.4323 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
openmindrobotika/Taxi-v3
|
openmindrobotika
| 2024-01-19T17:41:20Z
| 0
| 0
| null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-19T17:41:18Z
|
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="openmindrobotika/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
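To actually watch the agent act, the downloaded Q-table can be rolled out greedily. The `"qtable"` key is the convention used by the course this card template comes from, and the snippet assumes the gym>=0.26 reset/step API, so treat both as assumptions:
```python
import numpy as np

state, info = env.reset()
done = False
total_reward = 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(total_reward)
```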
|
LoneStriker/TenyxChat-8x7B-v1-4.0bpw-h6-exl2
|
LoneStriker
| 2024-01-19T17:38:09Z
| 7
| 1
|
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"tenyx-fine-tuning",
"dpo",
"tenyxchat",
"conversational",
"en",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"arxiv:2305.18290",
"arxiv:2401.04088",
"arxiv:2306.05685",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T17:20:34Z
|
---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- tenyx-fine-tuning
- dpo
- tenyxchat
datasets:
- HuggingFaceH4/ultrafeedback_binarized
---
# TenyxChat: Language Model Alignment using Tenyx Fine-tuning
Introducing TenyxChat-8x7B-v1, part of our TenyxChat series trained to function as useful assistants through preference tuning, using Tenyx's recently released advanced fine-tuning technology ([VentureBeat article](https://venturebeat.com/ai/tenyx-aims-to-fix-llms-catastrophic-forgetting-problem/)). Our model is trained using the [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290) framework on the open-source AI feedback dataset [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
We fine-tune [Mixtral-8x7B-Instruct-v0.1](https://arxiv.org/pdf/2401.04088.pdf) with our proprietary approach ([blog](https://www.tenyx.com/post/forgetting-and-toxicity-in-llms-a-deep-dive-on-fine-tuning-methods), [service](https://www.tenyx.com/fine-tuning)),
similar to that of our [7B model](https://huggingface.co/tenyx/TenyxChat-7B-v1), and show an increase in [MT-Bench](https://arxiv.org/abs/2306.05685) scores.
Our approach aims to mitigate forgetting in LLMs in a computationally efficient manner, thereby enabling continual fine-tuning capabilities without altering the pre-trained output distribution.
TenyxChat-8x7B-v1 was trained using eight A100s (80GB) for about eight hours, with a training setup obtained from HuggingFaceH4 ([GitHub](https://github.com/huggingface/alignment-handbook)).
# Model details
- Model type: Fine-tuned Mixture Of Expert 8x7B model for chat.
- License: Apache 2.0
- Base model: [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
- Demo: [spaces/tenyx/TenyxChat-8x7B-v1](https://huggingface.co/spaces/tenyx/TenyxChat-8x7B-v1)
## Usage
Our model uses a simple chat template based on Mixtral-8x7B-Instruct-v0.1. The chat template, along with a Hugging Face generation example, is shown below.
### Chat Template (Jinja)
```jinja
{{ bos_token }}
{% for message in messages %}
{% if message['role'] == 'user' %}
{{ '[INST]' + message['content'] + '[/INST]' }}
{% elif message['role'] == 'system' %}
{{ '[INST]' + message['content'] + '[/INST]' }}
{% elif message['role'] == 'assistant' %}
{{ message['content'] + eos_token }}
{% endif %}
{% endfor %}
```
### Hugging Face Example
```python
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="tenyx/TenyxChat-8x7B-v1", torch_dtype=torch.bfloat16, device_map="auto")
messages = [
{"role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate."},
{"role": "user", "content": "Hi. I would like to make a hotel booking."},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=512, do_sample=False)
```
### Output
```
<s>[INST]You are a friendly chatbot who always responds in the style of a pirate.[/INST]
[INST]Hi. I would like to make a hotel booking.[/INST]
Ahoy there, me hearty! Ye wish to make a hotel booking, do ye? Well, let's set sail on this voyage of reservations and see what we can find!
What's the name of the port (hotel) and the dates of our journey (check-in and check-out)? I'll do me best to assist ye!
```
# Performance
At the time of release (Jan 2024), TenyxChat-8x7B-v1 is the highest-ranked model available for download and commercial use on the MT-Bench evaluation, surpassed only by GPT-4.
## MT-Bench
MT-Bench is a benchmark made up of 80 high-quality multi-turn questions. These questions fall into eight categories: Writing, Roleplay, Reasoning, Math, Coding, Extraction, STEM, and Humanities. The chat models are rated using GPT-4 on a scale of 1 to 10, with higher values corresponding to better responses.
| Model | First Turn | Second Turn | Average |
| --- | --- | --- | --- |
| GPT-4* | 8.95625 | 9.02500 | 8.990625 |
| TenyxChat-8x7B-v1 | 8.63750 | 8.16250 | 8.400000 |
| Mixtral (reproduced) | 8.49375 | 8.00000 | 8.246875 |
| GPT-3.5-turbo* | 8.07500 | 7.81250 | 7.943750 |
*values reported on [lmsys](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) ChatBot Arena

# Limitations
TenyxChat-8x7B-v1, like other language models, has its own set of limitations. We haven't fine-tuned the model explicitly to align with **human** safety preferences. Therefore, it is capable of producing undesirable outputs, particularly when adversarially prompted. From our observation, the model still tends to struggle with tasks that involve reasoning and math questions. In some instances, it might generate verbose or extraneous content.
# License
TenyxChat-8x7B-v1, like Mixtral-8x7B-Instruct-v0.1, is distributed under the Apache License 2.0.
# Citation
If you use TenyxChat-8x7B-v1 for your research, cite us as
```
@misc{tenyxchat2024,
title={TenyxChat: Language Model Alignment using Tenyx Fine-tuning},
author={Tenyx},
year={2024},
}
```
|
leveldevai/BeagleMist-7B
|
leveldevai
| 2024-01-19T17:34:37Z
| 1,370
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"EmbeddedLLM/Mistral-7B-Merge-14-v0.5",
"leveldevai/TurdusBeagle-7B",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T17:26:36Z
|
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- EmbeddedLLM/Mistral-7B-Merge-14-v0.5
- leveldevai/TurdusBeagle-7B
---
# BeagleMist-7B
BeagleMist-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [EmbeddedLLM/Mistral-7B-Merge-14-v0.5](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.5)
* [leveldevai/TurdusBeagle-7B](https://huggingface.co/leveldevai/TurdusBeagle-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: EmbeddedLLM/Mistral-7B-Merge-14-v0.5
layer_range: [0, 32]
- model: leveldevai/TurdusBeagle-7B
layer_range: [0, 32]
merge_method: slerp
base_model: leveldevai/TurdusBeagle-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.45 # fallback for rest of tensors
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "leveldevai/BeagleMist-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
Utshav/tokenizer_code_search_net_python
|
Utshav
| 2024-01-19T17:32:27Z
| 0
| 1
|
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-19T17:32:27Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dctanner/sablo-pebble-mistral-dpo-lora-HelpSteer_binarized-2
|
dctanner
| 2024-01-19T17:29:48Z
| 12
| 0
|
peft
|
[
"peft",
"tensorboard",
"safetensors",
"mistral",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"dataset:sablo/HelpSteer_binarized",
"base_model:sablo/sablo-pebble-mistral",
"base_model:adapter:sablo/sablo-pebble-mistral",
"license:apache-2.0",
"region:us"
] | null | 2024-01-19T12:02:44Z
|
---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- sablo/HelpSteer_binarized
base_model: sablo/sablo-pebble-mistral
model-index:
- name: sablo-pebble-mistral-dpo-lora-HelpSteer_binarized-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sablo-pebble-mistral-dpo-lora-HelpSteer_binarized-2
This model is a fine-tuned version of [sablo/sablo-pebble-mistral](https://huggingface.co/sablo/sablo-pebble-mistral) on the sablo/HelpSteer_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5195
- Rewards/chosen: -1.3821
- Rewards/rejected: -2.4510
- Rewards/accuracies: 0.7358
- Rewards/margins: 1.0689
- Logps/rejected: -158.5470
- Logps/chosen: -147.7195
- Logits/rejected: -2.0952
- Logits/chosen: -2.1922
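As a sanity check on these numbers: the reported margin is simply the chosen reward minus the rejected reward, and indeed -1.3821 - (-2.4510) = 1.0689.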
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.65 | 0.2 | 200 | 0.6563 | 0.1070 | 0.0177 | 0.6509 | 0.0893 | -76.2561 | -98.0835 | -2.0464 | -2.1421 |
| 0.456 | 0.39 | 400 | 0.5446 | -1.2305 | -1.8748 | 0.7217 | 0.6444 | -139.3410 | -142.6661 | -2.1203 | -2.2102 |
| 0.4388 | 0.59 | 600 | 0.5325 | -1.8012 | -2.8927 | 0.7123 | 1.0915 | -173.2708 | -161.6904 | -2.1017 | -2.1954 |
| 0.6137 | 0.79 | 800 | 0.5198 | -1.4487 | -2.5199 | 0.7382 | 1.0712 | -160.8413 | -149.9388 | -2.0962 | -2.1935 |
| 0.5866 | 0.98 | 1000 | 0.5195 | -1.3821 | -2.4510 | 0.7358 | 1.0689 | -158.5470 | -147.7195 | -2.0952 | -2.1922 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.6
- Tokenizers 0.15.0
|
andrijdavid/finance-chat-GGUF
|
andrijdavid
| 2024-01-19T17:26:02Z
| 92
| 1
|
transformers
|
[
"transformers",
"pytorch",
"gguf",
"llama",
"text-generation",
"finance",
"GGUF",
"en",
"dataset:Open-Orca/OpenOrca",
"dataset:GAIR/lima",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"arxiv:2309.09530",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-02T14:56:03Z
|
---
language:
- en
license: llama2
tags:
- finance
- GGUF
datasets:
- Open-Orca/OpenOrca
- GAIR/lima
- WizardLM/WizardLM_evol_instruct_V2_196k
metrics:
- accuracy
pipeline_tag: text-generation
quantized_by: andrijdavid
---
# finance-chat-GGUF
- Original model: [finance-chat](https://huggingface.co/AdaptLLM/finance-chat)
<!-- description start -->
## Description
This repo contains GGUF format model files for [finance-chat](https://huggingface.co/AdaptLLM/finance-chat).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama), a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
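As a sanity check on the Q4_K figure above, assuming the standard llama.cpp super-block layout (8 blocks of 32 weights, plus two fp16 super-block scales):
```python
# Bits per weight for GGML_TYPE_Q4_K, under the layout assumed above
weights = 8 * 32            # 256 weights per super-block
bits = weights * 4          # 4-bit quantized values
bits += 8 * (6 + 6)         # 6-bit scale and 6-bit min for each of 8 blocks
bits += 2 * 16              # two fp16 super-block scales (d and dmin)
print(bits / weights)       # 4.5 bpw, matching the list above
```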
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: andrijdavid/finance-chat-GGUF and below it, a specific filename to download, such as: finance-chat-f16.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download andrijdavid/finance-chat-GGUF finance-chat-f16.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download andrijdavid/finance-chat-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download andrijdavid/finance-chat-GGUF finance-chat-f16.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m finance-chat-f16.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
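For example, an interactive chat session with the same settings can be started like this:
```shell
./main -ngl 35 -m finance-chat-f16.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -i -ins
```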
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variables in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./finance-chat-f16.gguf", # Download the model file first
n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./finance-chat-f16.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
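As a quick illustration of the first guide, here is a minimal sketch of loading the GGUF file through LangChain's `LlamaCpp` wrapper. It assumes the `langchain-community` package is installed and the model file has already been downloaded; the parameters mirror the llama.cpp flags used above.
```python
from langchain_community.llms import LlamaCpp

# Minimal sketch: point LangChain's llama-cpp-python wrapper at the GGUF file.
llm = LlamaCpp(
    model_path="./finance-chat-f16.gguf",  # download the model file first
    n_gpu_layers=35,  # set to 0 if no GPU acceleration is available
    n_ctx=4096,       # context window, matching the -c 4096 example above
    temperature=0.7,
)
print(llm.invoke("What does EBITDA stand for?"))
```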
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: finance-chat
# Adapt (Large) Language Models to Domains
This repo contains the domain-specific chat model developed from **LLaMA-2-Chat-7B**, using the method in our paper [Adapting Large Language Models via Reading Comprehension](https://huggingface.co/papers/2309.09530).
We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B**.
### 🤗 We are currently working hard on developing models across different domains, scales and architectures! Please stay tuned! 🤗
**************************** **Updates** ****************************
* 2024/1/16: 🎉 Our [research paper](https://huggingface.co/papers/2309.09530) has been accepted by ICLR 2024!!! 🎉
* 2023/12/19: Released our [13B base models](https://huggingface.co/AdaptLLM/law-LLM-13B) developed from LLaMA-1-13B.
* 2023/12/8: Released our [chat models](https://huggingface.co/AdaptLLM/law-chat) developed from LLaMA-2-Chat-7B.
* 2023/9/18: Released our [paper](https://huggingface.co/papers/2309.09530), [code](https://github.com/microsoft/LMOps), [data](https://huggingface.co/datasets/AdaptLLM/law-tasks), and [base models](https://huggingface.co/AdaptLLM/law-LLM) developed from LLaMA-1-7B.
## Domain-Specific LLaMA-1
### LLaMA-1-7B
In our paper, we develop three domain-specific models from LLaMA-1-7B, which are also available on Hugging Face: [Biomedicine-LLM](https://huggingface.co/AdaptLLM/medicine-LLM), [Finance-LLM](https://huggingface.co/AdaptLLM/finance-LLM) and [Law-LLM](https://huggingface.co/AdaptLLM/law-LLM). The performance of our AdaptLLM models compared to other domain-specific LLMs is shown below:
<p align='center'>
<img src="https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/6efPwitFgy-pLTzvccdcP.png" width="700">
</p>
### LLaMA-1-13B
Moreover, we scale up our base model to LLaMA-1-13B to see if **our method is similarly effective for larger-scale models**, and the results are consistently positive too: [Biomedicine-LLM-13B](https://huggingface.co/AdaptLLM/medicine-LLM-13B), [Finance-LLM-13B](https://huggingface.co/AdaptLLM/finance-LLM-13B) and [Law-LLM-13B](https://huggingface.co/AdaptLLM/law-LLM-13B).
## Domain-Specific LLaMA-2-Chat
Our method is also effective for aligned models! LLaMA-2-Chat requires a [specific data format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), and our **reading comprehension texts fit this data format perfectly** once transformed into multi-turn conversations. We have also open-sourced chat models in different domains: [Biomedicine-Chat](https://huggingface.co/AdaptLLM/medicine-chat), [Finance-Chat](https://huggingface.co/AdaptLLM/finance-chat) and [Law-Chat](https://huggingface.co/AdaptLLM/law-chat).
For example, to chat with the finance-chat model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("AdaptLLM/finance-chat")
tokenizer = AutoTokenizer.from_pretrained("AdaptLLM/finance-chat")
# Put your input here:
user_input = '''Use this fact to answer the question: Title of each class Trading Symbol(s) Name of each exchange on which registered
Common Stock, Par Value $.01 Per Share MMM New York Stock Exchange
MMM Chicago Stock Exchange, Inc.
1.500% Notes due 2026 MMM26 New York Stock Exchange
1.750% Notes due 2030 MMM30 New York Stock Exchange
1.500% Notes due 2031 MMM31 New York Stock Exchange
Which debt securities are registered to trade on a national securities exchange under 3M's name as of Q2 of 2023?'''
# Apply the prompt template and system prompt of LLaMA-2-Chat demo for chat models (NOTE: NO prompt template is required for base models!)
our_system_prompt = "\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n" # Please do NOT change this
prompt = f"<s>[INST] <<SYS>>{our_system_prompt}<</SYS>>\n\n{user_input} [/INST]"
# # NOTE:
# # If you want to apply your own system prompt, please integrate it into the instruction part following our system prompt like this:
# your_system_prompt = "Please, check if the answer can be inferred from the pieces of context provided."
# prompt = f"<s>[INST] <<SYS>>{our_system_prompt}<</SYS>>\n\n{your_system_prompt}\n{user_input} [/INST]"
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).input_ids.to(model.device)
outputs = model.generate(input_ids=inputs, max_length=4096)[0]
answer_start = int(inputs.shape[-1])
pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True)
print(f'### User Input:\n{user_input}\n\n### Assistant Output:\n{pred}')
```
## Domain-Specific Tasks
To easily reproduce our results, we have uploaded the filled-in zero/few-shot input instructions and output completions of each domain-specific task: [biomedicine-tasks](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), [finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [law-tasks](https://huggingface.co/datasets/AdaptLLM/law-tasks).
**Note:** those filled-in instructions are specifically tailored for models before alignment and do NOT fit the specific data format required for chat models.
## Citation
If you find our work helpful, please cite us:
```bibtex
@article{adaptllm,
title = {Adapting Large Language Models via Reading Comprehension},
author = {Daixuan Cheng and Shaohan Huang and Furu Wei},
journal = {CoRR},
volume = {abs/2309.09530},
year = {2023}
}
```
<!-- original-model-card end -->
|
LoneStriker/TenyxChat-8x7B-v1-3.75bpw-h6-exl2
|
LoneStriker
| 2024-01-19T17:17:47Z
| 6
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"tenyx-fine-tuning",
"dpo",
"tenyxchat",
"conversational",
"en",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"arxiv:2305.18290",
"arxiv:2401.04088",
"arxiv:2306.05685",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T17:00:06Z
|
---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- tenyx-fine-tuning
- dpo
- tenyxchat
datasets:
- HuggingFaceH4/ultrafeedback_binarized
---
# TenyxChat: Language Model Alignment using Tenyx Fine-tuning
Introducing TenyxChat-8x7B-v1, part of our TenyxChat series trained to function as useful assistants through preference tuning, using Tenyx's recently released advanced fine-tuning technology ([VentureBeat article](https://venturebeat.com/ai/tenyx-aims-to-fix-llms-catastrophic-forgetting-problem/)). Our model is trained using the [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290) framework on the open-source AI feedback dataset [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
We fine-tune [Mixtral-8x7B-Instruct-v0.1](https://arxiv.org/pdf/2401.04088.pdf) with our proprietary approach ([blog](https://www.tenyx.com/post/forgetting-and-toxicity-in-llms-a-deep-dive-on-fine-tuning-methods), [service](https://www.tenyx.com/fine-tuning)),
similar to that of our [7B model](https://huggingface.co/tenyx/TenyxChat-7B-v1), and show an increase in [MT-Bench](https://arxiv.org/abs/2306.05685) scores.
Our approach aims to mitigate forgetting in LLMs in a computationally efficient manner, thereby enabling continual fine-tuning capabilities without altering the pre-trained output distribution.
TenyxChat-8x7B-v1 was trained using eight A100s (80GB) for about eight hours, with a training setup obtained from HuggingFaceH4 ([GitHub](https://github.com/huggingface/alignment-handbook)).
# Model details
- Model type: Fine-tuned Mixture Of Expert 8x7B model for chat.
- License: Apache 2.0
- Base model: [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
- Demo: [spaces/tenyx/TenyxChat-8x7B-v1](https://huggingface.co/spaces/tenyx/TenyxChat-8x7B-v1)
## Usage
Our model uses a simple chat template based on Mixtral-8x7B-Instruct-v0.1. The chat template and a Hugging Face generation example are shown below.
### Chat Template (Jinja)
```jinja
{{ bos_token }}
{% for message in messages %}
{% if message['role'] == 'user' %}
{{ '[INST]' + message['content'] + '[/INST]' }}
{% elif message['role'] == 'system' %}
{{ '[INST]' + message['content'] + '[/INST]' }}
{% elif message['role'] == 'assistant' %}
{{ message['content'] + eos_token }}
{% endif %}
{% endfor %}
```
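Note that the template wraps system messages in the same `[INST] ... [/INST]` tags as user messages, which is why the system prompt appears as its own `[INST]` block in the example output below.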
### Hugging Face Example
```python
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="tenyx/TenyxChat-8x7B-v1", torch_dtype=torch.bfloat16, device_map="auto")
messages = [
{"role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate."},
{"role": "user", "content": "Hi. I would like to make a hotel booking."},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=512, do_sample=False)
```
### Output
```
<s>[INST]You are a friendly chatbot who always responds in the style of a pirate.[/INST]
[INST]Hi. I would like to make a hotel booking.[/INST]
Ahoy there, me hearty! Ye wish to make a hotel booking, do ye? Well, let's set sail on this voyage of reservations and see what we can find!
What's the name of the port (hotel) and the dates of our journey (check-in and check-out)? I'll do me best to assist ye!
```
# Performance
At the time of release (Jan 2024), TenyxChat-8x7B-v1 is the highest-ranked model available for download and commercial use on the MT-Bench evaluation, surpassed only by GPT-4.
## MT-Bench
MT-Bench is a benchmark made up of 80 high-quality multi-turn questions. These questions fall into eight categories: Writing, Roleplay, Reasoning, Math, Coding, Extraction, STEM, and Humanities. The chat models are rated using GPT-4 on a scale of 1 to 10, with higher values corresponding to better responses.
| Model | First Turn | Second Turn | Average |
| --- | --- | --- | --- |
| GPT-4* | 8.95625 | 9.02500 | 8.990625 |
| TenyxChat-8x7B-v1 | 8.63750 | 8.16250 | 8.400000 |
| Mixtral (reproduced) | 8.49375 | 8.00000 | 8.246875 |
| GPT-3.5-turbo* | 8.07500 | 7.81250 | 7.943750 |
*values reported on [lmsys](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) ChatBot Arena

# Limitations
TenyxChat-8x7B-v1, like other language models, has its own set of limitations. We haven't fine-tuned the model explicitly to align with **human** safety preferences. Therefore, it is capable of producing undesirable outputs, particularly when adversarially prompted. From our observation, the model still tends to struggle with tasks that involve reasoning and math questions. In some instances, it might generate verbose or extraneous content.
# License
TenyxChat-8x7B-v1, similar to Mixtral-8x7B-Instruct-v0.1, is distributed under the Apache License 2.0.
# Citation
If you use TenyxChat-8x7B-v1 for your research, cite us as
```
@misc{tenyxchat2024,
title={TenyxChat: Language Model Alignment using Tenyx Fine-tuning},
author={Tenyx},
year={2024},
}
```
|
LC008/PixelCopter-PolicyGradient
|
LC008
| 2024-01-19T17:17:06Z
| 0
| 0
| null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-19T17:07:24Z
|
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: PixelCopter-PolicyGradient
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 7.60 +/- 7.53
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
zenoverflow/madlad400-3b-mt-int8-float32
|
zenoverflow
| 2024-01-19T17:01:44Z
| 20
| 3
|
transformers
|
[
"transformers",
"translation",
"license:apache-2.0",
"region:us"
] |
translation
| 2024-01-19T16:19:18Z
|
---
license: apache-2.0
pipeline_tag: translation
inference: false
---
Quantization of [madlad400-3b-mt](https://huggingface.co/google/madlad400-3b-mt) using [CTranslate2](https://github.com/OpenNMT/CTranslate2) for running on CPU.
Example usage:
```python
import ctranslate2, transformers
from huggingface_hub import snapshot_download
model_path = snapshot_download("zenoverflow/madlad400-3b-mt-int8-float32")
print("\n", end="")
translator = ctranslate2.Translator(model_path, device="cpu")
tokenizer = transformers.T5Tokenizer.from_pretrained(model_path)
target_lang_code = "ja"
source_text = "This sentence has no meaning."
input_text = f"<2{target_lang_code}> {source_text}"
input_tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(input_text))
results = translator.translate_batch([input_tokens])
output_tokens = results[0].hypotheses[0]
output_text = tokenizer.decode(tokenizer.convert_tokens_to_ids(output_tokens))
print(output_text)
```
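The `<2{target_lang_code}>` prefix shown above is how MADLAD-400 selects the output language; to translate into a different language, swap `target_lang_code` for another supported code (for example, `"de"` for German).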
|
Vchitect/Vlogger
|
Vchitect
| 2024-01-19T17:01:09Z
| 0
| 8
| null |
[
"arxiv:2401.09414",
"arxiv:2310.20700",
"arxiv:2309.15103",
"region:us"
] | null | 2024-01-16T08:54:49Z
|
# Vlogger
This repository is the official implementation of [Vlogger](https://arxiv.org/abs/2401.09414):
**[Vlogger: Make Your Dream A Vlog](https://arxiv.org/abs/2401.09414)**
Demo generated by our Vlogger: [Teddy Travel](https://youtu.be/ZRD1-jHbEGk)
## Setup
### Prepare Environment
```
conda create -n vlogger python==3.10.11
conda activate vlogger
pip install -r requirements.txt
```
### Download our model and T2I base model
Our model is based on Stable Diffusion v1.4. You may download [Stable Diffusion v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4) and [OpenCLIP-ViT-H-14](https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K) to the directory of ```pretrained```.
Download our model (ShowMaker) checkpoint (from [google drive](https://drive.google.com/file/d/1pAH73kz2QRfD2Dxk4lL3SrHvLAlWcPI3/view?usp=drive_link) or [hugging face](https://huggingface.co/GrayShine/Vlogger/tree/main)) and save it to the directory of ```pretrained```.
Now under `./pretrained`, you should be able to see the following:
```
├── pretrained
│   ├── ShowMaker.pt
│   ├── stable-diffusion-v1-4
│   ├── OpenCLIP-ViT-H-14
│   │   ├── ...
│   └── ...
├── ...
```
## Usage
### Inference for (T+I)2V
Run the following command to get the (T+I)2V results:
```shell
python sample_scripts/with_mask_sample.py
```
The generated video will be saved in ```results/mask_no_ref```.
### Inference for (T+I+ref)2V
Run the following command to get the (T+I+ref)2V results:
```shell
python sample_scripts/with_mask_ref_sample.py
```
The generated video will be saved in ```results/mask_ref```.
### Inference for LLM planning and make reference image
Run the following command to generate the script, actors, and protagonist:
```shell
python sample_scripts/vlog_write_script.py
```
The generated scripts will be saved in ```results/vlog/$your_story_dir/script```.
The generated reference images will be saved in ```results/vlog/$your_story_dir/img```.
**Important:** enter your OpenAI API key on line 7 of the file ```vlogger/planning_utils/gpt4_utils.py```.
### Inference for vlog generation
Run the following command to get the vlog:
```shell
python sample_scripts/vlog_read_script_sample.py
```
The generated vlog will be saved in ```results/vlog/$your_story_dir/video```.
#### More Details
You may modify ```configs/with_mask_sample.yaml``` to change the (T+I)2V conditions.
You may modify ```configs/with_mask_ref_sample.yaml``` to change the (T+I+ref)2V conditions.
For example:
```ckpt``` is used to specify a model checkpoint.
```text_prompt``` is used to describe the content of the video.
```input_path``` is used to specify the path to the image.
```ref_path``` is used to specify the path to the reference image.
```save_path``` is used to specify the path to the generated video.
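Putting those keys together, a hypothetical snippet of ```configs/with_mask_sample.yaml``` might look like the following; all paths and values here are placeholders, not the repository's actual defaults.
```yaml
# Hypothetical config snippet -- every value below is a placeholder.
ckpt: "pretrained/ShowMaker.pt"                  # model checkpoint
text_prompt: "A cat is running on the beach."    # content of the video
input_path: "input/i2v/your_image.png"           # path to the input image
ref_path: "examples/TR2V/image/a_green_cat.png"  # reference image ((T+I+ref)2V only)
save_path: "results/mask_no_ref"                 # where the generated video is saved
```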
## Results
### (T+Ref)2V Results
<table class="center">
<tr>
<td style="text-align:center;width: 50%" colspan="1"><b>Reference Image</b></td>
<td style="text-align:center;width: 50%" colspan="1"><b>Output Video</b></td>
</tr>
<tr>
<td><img src="examples/TR2V/image/Egyptian_Pyramids.png" width="250">
<br>
<!-- <div class="text" style=" text-align:center;">
Scene Reference
</div> -->
<p align="center">Scene Reference</p>
</td>
<td>
<img src="examples/TR2V/video/Fireworks_explode_over_the_pyramids.gif" width="400">
<br>
<!-- <div class="text" style=" text-align:center;">
Fireworks explode over the pyramids.
</div> -->
<p align="center">Fireworks explode over the pyramids.</p>
</td>
</tr>
<tr>
<td><img src="examples/TR2V/image/Great_Wall.png" width="250">
<br>
<!-- <div class="text" style=" text-align:center;">
Scene Reference
</div> -->
<p align="center">Scene Reference</p>
</td>
<td>
<img src="examples/TR2V/video/The_Great_Wall_burning_with_raging_fire.gif" width="400">
<br>
<!-- <div class="text" style=" text-align:center;">
The Great Wall burning with raging fire.
</div> -->
<p align="center">The Great Wall burning with raging fire.</p>
</td>
</tr>
<tr>
<td><img src="examples/TR2V/image/a_green_cat.png" width="250">
<br>
<!-- <div class="text" style=" text-align:center;">
Object Reference
</div> -->
<p align="center">Object Reference</p>
</td>
<td>
<img src="examples/TR2V/video/A_cat_is_running_on_the_beach.gif" width="400">
<br>
<!-- <div class="text" style=" text-align:center;">
A cat is running on the beach.
</div> -->
<p align="center">A cat is running on the beach.</p>
</td>
</tr>
</table>
### (T+I)2V Results
<table class="center">
<tr>
<td style="text-align:center;width: 50%" colspan="1"><b>Input Image</b></td>
<td style="text-align:center;width: 50%" colspan="1"><b>Output Video</b></td>
</tr>
<tr>
<td><img src="input/i2v/Underwater_environment_cosmetic_bottles.png" width="400"></td>
<td>
<img src="examples/TI2V/Underwater_environment_cosmetic_bottles.gif" width="400">
<br>
<!-- <div class="text" style=" text-align:center;">
Underwater environment cosmetic bottles.
</div> -->
<p align="center">Underwater environment cosmetic bottles.</p>
</td>
</tr>
<tr>
<td><img src="input/i2v/A_big_drop_of_water_falls_on_a_rose_petal.png" width="400"></td>
<td>
<img src="examples/TI2V/A_big_drop_of_water_falls_on_a_rose_petal.gif" width="400">
<br>
<!-- <div class="text" style=" text-align:center;">
A big drop of water falls on a rose petal.
</div> -->
<p align="center">A big drop of water falls on a rose petal.</p>
</td>
</tr>
<tr>
<td><img src="input/i2v/A_fish_swims_past_an_oriental_woman.png" width="400"></td>
<td>
<img src="examples/TI2V/A_fish_swims_past_an_oriental_woman.gif" width="400">
<br>
<!-- <div class="text" style=" text-align:center;">
A fish swims past an oriental woman.
</div> -->
<p align="center">A fish swims past an oriental woman.</p>
</td>
</tr>
<tr>
<td><img src="input/i2v/Cinematic_photograph_View_of_piloting_aaero.png" width="400"></td>
<td>
<img src="examples/TI2V/Cinematic_photograph_View_of_piloting_aaero.gif" width="400">
<br>
<!-- <div class="text" style=" text-align:center;">
Cinematic photograph. View of piloting aaero.
</div> -->
<p align="center">Cinematic photograph. View of piloting aaero.</p>
</td>
</tr>
<tr>
<td><img src="input/i2v/Planet_hits_earth.png" width="400"></td>
<td>
<img src="examples/TI2V/Planet_hits_earth.gif" width="400">
<br>
<!-- <div class="text" style=" text-align:center;">
Planet hits earth.
</div> -->
<p align="center">Planet hits earth.</p>
</td>
</tr>
</table>
### T2V Results
<table>
<tr>
<td style="text-align:center;width: 66%" colspan="2"><b>Output Video</b></td>
</tr>
<tr>
<td>
<img src="examples/T2V/A_deer_looks_at_the_sunset_behind_him.gif"/>
<br>
<!-- <div class="text" style=" text-align:center;">
A deer looks at the sunset behind him.
</div> -->
<p align="center">A deer looks at the sunset behind him.</p>
</td>
<td>
<img src="examples/T2V/A_duck_is_teaching_math_to_another_duck.gif"/>
<br>
<!-- <div class="text" style=" text-align:center;">
A duck is teaching math to another duck.
</div> -->
<p align="center">A duck is teaching math to another duck.</p>
</td>
</tr>
<tr>
<td>
<img src="examples/T2V/Bezos_explores_tropical_rainforest.gif"/>
<br>
<!-- <div class="text" style=" text-align:center;">
Bezos explores tropical rainforest.
</div> -->
<p align="center">Bezos explores tropical rainforest.</p>
</td>
<td>
<img src="examples/T2V/Light_blue_water_lapping_on_the_beach.gif"/>
<br>
<!-- <div class="text" style=" text-align:center;">
Light blue water lapping on the beach.
</div> -->
<p align="center">Light blue water lapping on the beach.</p>
</td>
</tr>
</table>
## BibTeX
```bibtex
@article{zhuang2024vlogger,
title={Vlogger: Make Your Dream A Vlog},
author={Zhuang, Shaobin and Li, Kunchang and Chen, Xinyuan and Wang, Yaohui and Liu, Ziwei and Qiao, Yu and Wang, Yali},
journal={arXiv preprint arXiv:2401.09414},
year={2024}
}
```
```bibtex
@article{chen2023seine,
title={SEINE: Short-to-Long Video Diffusion Model for Generative Transition and Prediction},
author={Chen, Xinyuan and Wang, Yaohui and Zhang, Lingjun and Zhuang, Shaobin and Ma, Xin and Yu, Jiashuo and Wang, Yali and Lin, Dahua and Qiao, Yu and Liu, Ziwei},
journal={arXiv preprint arXiv:2310.20700},
year={2023}
}
```
```bibtex
@article{wang2023lavie,
title={LAVIE: High-Quality Video Generation with Cascaded Latent Diffusion Models},
author={Wang, Yaohui and Chen, Xinyuan and Ma, Xin and Zhou, Shangchen and Huang, Ziqi and Wang, Yi and Yang, Ceyuan and He, Yinan and Yu, Jiashuo and Yang, Peiqing and others},
journal={arXiv preprint arXiv:2309.15103},
year={2023}
}
```
## Disclaimer
We disclaim responsibility for user-generated content. The model was not trained to realistically represent people or events, so using it to generate such content is beyond its capabilities. Generating pornographic, violent, or bloody content is prohibited, as is generating content that demeans or harms people or their environment, culture, religion, etc. Users are solely liable for their actions. The project contributors are not legally affiliated with, nor accountable for, users' behaviors. Please use the generative model responsibly, adhering to ethical and legal standards.
## Contact Us
**Shaobin Zhuang**: [[email protected]](mailto:[email protected])
**Kunchang Li**: [[email protected]](mailto:[email protected])
**Xinyuan Chen**: [[email protected]](mailto:[email protected])
**Yaohui Wang**: [[email protected]](mailto:[email protected])
## Acknowledgements
The code is built upon [SEINE](https://github.com/Vchitect/SEINE), [LaVie](https://github.com/Vchitect/LaVie), [diffusers](https://github.com/huggingface/diffusers) and [Stable Diffusion](https://github.com/CompVis/stable-diffusion), we thank all the contributors for open-sourcing.
## License
The code is licensed under Apache-2.0, model weights are fully open for academic research and also allow **free** commercial usage. To apply for a commercial license, please contact [email protected].
|
am-infoweb/rap_phase2_19jan_15i_v1
|
am-infoweb
| 2024-01-19T16:56:56Z
| 89
| 0
|
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-01-19T13:02:52Z
|
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: rap_phase2_19jan_15i_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rap_phase2_19jan_15i_v1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0135
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.0633 | 1.0 | 12270 | 0.0866 |
| 0.0648 | 2.0 | 24540 | 0.0584 |
| 0.0288 | 3.0 | 36810 | 0.0285 |
| 0.0257 | 4.0 | 49080 | 0.0211 |
| 0.0145 | 5.0 | 61350 | 0.0222 |
| 0.0226 | 6.0 | 73620 | 0.0140 |
| 0.0147 | 7.0 | 85890 | 0.0158 |
| 0.0098 | 8.0 | 98160 | 0.0136 |
| 0.0136 | 9.0 | 110430 | 0.0135 |
| 0.0085 | 10.0 | 122700 | 0.0135 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
selink/citation-distilbert-base-uncased
|
selink
| 2024-01-19T16:56:42Z
| 173
| 0
|
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-19T16:56:29Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
imabanana/ppo-Huggy
|
imabanana
| 2024-01-19T16:50:38Z
| 0
| 0
|
ml-agents
|
[
"ml-agents",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-01-19T16:50:35Z
|
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: imabanana/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
am-infoweb/rap_phase2_19jan_5i_v1
|
am-infoweb
| 2024-01-19T16:43:49Z
| 90
| 0
|
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-01-19T13:21:42Z
|
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: rap_phase2_19jan_5i_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rap_phase2_19jan_5i_v1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0040
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.1738 | 1.0 | 8180 | 0.1150 |
| 0.0927 | 2.0 | 16360 | 0.0812 |
| 0.054 | 3.0 | 24540 | 0.0870 |
| 0.0613 | 4.0 | 32720 | 0.0470 |
| 0.0784 | 5.0 | 40900 | 0.0395 |
| 0.0086 | 6.0 | 49080 | 0.0117 |
| 0.0154 | 7.0 | 57260 | 0.0096 |
| 0.0014 | 8.0 | 65440 | 0.0081 |
| 0.0003 | 9.0 | 73620 | 0.0039 |
| 0.0048 | 10.0 | 81800 | 0.0040 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
howaboutyu/corgy_dog_LoRA
|
howaboutyu
| 2024-01-19T16:42:53Z
| 1
| 1
|
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-19T16:42:46Z
|
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK dog
license: openrail++
---
# SDXL LoRA DreamBooth - howaboutyu/corgy_dog_LoRA
<Gallery />
## Model description
These are howaboutyu/corgy_dog_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK dog` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](howaboutyu/corgy_dog_LoRA/tree/main) them in the Files & versions tab.
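For reference, a minimal sketch of loading these weights with the standard diffusers LoRA API might look like this; the availability of a CUDA GPU and automatic resolution of the adapter file inside the repo are assumptions.
```python
import torch
from diffusers import DiffusionPipeline

# Minimal sketch: load the SDXL base model, then attach the LoRA adapter.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")  # assumes a CUDA GPU is available
pipe.load_lora_weights("howaboutyu/corgy_dog_LoRA")

# The trigger phrase from above selects the learned concept.
image = pipe("a photo of TOK dog on the beach").images[0]
image.save("tok_dog.png")
```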
|
dyngnosis/corgy_dog_LoRA
|
dyngnosis
| 2024-01-19T16:41:44Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-19T16:41:44Z
|
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK dog
license: openrail++
---
# SDXL LoRA DreamBooth - dyngnosis/corgy_dog_LoRA
<Gallery />
## Model description
These are dyngnosis/corgy_dog_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK dog` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](dyngnosis/corgy_dog_LoRA/tree/main) them in the Files & versions tab.
|
LoneStriker/TenyxChat-8x7B-v1-3.0bpw-h6-exl2
|
LoneStriker
| 2024-01-19T16:38:37Z
| 7
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"tenyx-fine-tuning",
"dpo",
"tenyxchat",
"conversational",
"en",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"arxiv:2305.18290",
"arxiv:2401.04088",
"arxiv:2306.05685",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T16:24:34Z
|
---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- tenyx-fine-tuning
- dpo
- tenyxchat
datasets:
- HuggingFaceH4/ultrafeedback_binarized
---
# TenyxChat: Language Model Alignment using Tenyx Fine-tuning
Introducing TenyxChat-8x7B-v1, part of our TenyxChat series trained to function as useful assistants through preference tuning, using Tenyx's recently released advanced fine-tuning technology ([VentureBeat article](https://venturebeat.com/ai/tenyx-aims-to-fix-llms-catastrophic-forgetting-problem/)). Our model is trained using the [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290) framework on the open-source AI feedback dataset [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
We fine-tune [Mixtral-8x7B-Instruct-v0.1](https://arxiv.org/pdf/2401.04088.pdf) with our proprietary approach ([blog](https://www.tenyx.com/post/forgetting-and-toxicity-in-llms-a-deep-dive-on-fine-tuning-methods), [service](https://www.tenyx.com/fine-tuning)),
similar to that of our [7B model](https://huggingface.co/tenyx/TenyxChat-7B-v1), and show an increase in [MT-Bench](https://arxiv.org/abs/2306.05685) scores.
Our approach aims to mitigate forgetting in LLMs in a computationally efficient manner, thereby enabling continual fine-tuning capabilities without altering the pre-trained output distribution.
TenyxChat-8x7B-v1 was trained using eight A100s (80GB) for about eight hours, with a training setup obtained from HuggingFaceH4 ([GitHub](https://github.com/huggingface/alignment-handbook)).
# Model details
- Model type: Fine-tuned Mixture Of Expert 8x7B model for chat.
- License: Apache 2.0
- Base model: [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
- Demo: [spaces/tenyx/TenyxChat-8x7B-v1](https://huggingface.co/spaces/tenyx/TenyxChat-8x7B-v1)
## Usage
Our model uses a simple chat template based on Mixtral-8x7B-Instruct-v0.1. The chat template and a Hugging Face generation example are shown below.
### Chat Template (Jinja)
```jinja
{{ bos_token }}
{% for message in messages %}
{% if message['role'] == 'user' %}
{{ '[INST]' + message['content'] + '[/INST]' }}
{% elif message['role'] == 'system' %}
{{ '[INST]' + message['content'] + '[/INST]' }}
{% elif message['role'] == 'assistant' %}
{{ message['content'] + eos_token }}
{% endif %}
{% endfor %}
```
### Hugging Face Example
```python
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="tenyx/TenyxChat-8x7B-v1", torch_dtype=torch.bfloat16, device_map="auto")
messages = [
{"role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate."},
{"role": "user", "content": "Hi. I would like to make a hotel booking."},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=512, do_sample=False)
```
### Output
```
<s>[INST]You are a friendly chatbot who always responds in the style of a pirate.[/INST]
[INST]Hi. I would like to make a hotel booking.[/INST]
Ahoy there, me hearty! Ye wish to make a hotel booking, do ye? Well, let's set sail on this voyage of reservations and see what we can find!
What's the name of the port (hotel) and the dates of our journey (check-in and check-out)? I'll do me best to assist ye!
```
# Performance
At the time of release (Jan 2024), TenyxChat-8x7B-v1 is the highest-ranked model available for download and commercial use on the MT-Bench evaluation, surpassed only by GPT-4.
## MT-Bench
MT-Bench is a benchmark made up of 80 high-quality multi-turn questions. These questions fall into eight categories: Writing, Roleplay, Reasoning, Math, Coding, Extraction, STEM, and Humanities. The chat models are rated using GPT-4 on a scale of 1 to 10, with higher values corresponding to better responses.
| Model | First Turn | Second Turn | Average |
| --- | --- | --- | --- |
| GPT-4* | 8.95625 | 9.02500 | 8.990625 |
| TenyxChat-8x7B-v1 | 8.63750 | 8.16250 | 8.400000 |
| Mixtral (reproduced) | 8.49375 | 8.00000 | 8.246875 |
| GPT-3.5-turbo* | 8.07500 | 7.81250 | 7.943750 |
*values reported on [lmsys](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) ChatBot Arena

# Limitations
TenyxChat-8x7B-v1, like other language models, has its own set of limitations. We haven't fine-tuned the model explicitly to align with **human** safety preferences. Therefore, it is capable of producing undesirable outputs, particularly when adversarially prompted. From our observation, the model still tends to struggle with tasks that involve reasoning and math questions. In some instances, it might generate verbose or extraneous content.
# License
TenyxChat-8x7B-v1, similar to Mixtral-8x7B-Instruct-v0.1, is distributed under the Apache License 2.0.
# Citation
If you use TenyxChat-8x7B-v1 for your research, cite us as
```
@misc{tenyxchat2024,
title={TenyxChat: Language Model Alignment using Tenyx Fine-tuning},
author={Tenyx},
year={2024},
}
```
|
xshini/KizunaAi
|
xshini
| 2024-01-19T16:32:53Z
| 7
| 1
|
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-01-19T16:23:31Z
|
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: null
license: creativeml-openrail-m
---
https://civitai.com/models/31098/kizuna-ai-kizuna-ai-inc-vtuber
|
douglasrolins/bert-large-portuguese-cased_ft-multilple-choice-enem-sample
|
douglasrolins
| 2024-01-19T16:28:12Z
| 89
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"multiple-choice",
"generated_from_trainer",
"base_model:neuralmind/bert-large-portuguese-cased",
"base_model:finetune:neuralmind/bert-large-portuguese-cased",
"license:mit",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2024-01-19T16:27:18Z
|
---
license: mit
base_model: neuralmind/bert-large-portuguese-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-large-portuguese-cased_ft-multilple-choice-enem-sample
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-portuguese-cased_ft-multilple-choice-enem-sample
This model is a fine-tuned version of [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-large-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6094
- Accuracy: 0.1667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6509 | 1.0 | 1382 | 1.6094 | 0.2210 |
| 1.6376 | 2.0 | 2764 | 1.6094 | 0.1848 |
| 1.6335 | 3.0 | 4146 | 1.6094 | 0.1667 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
synergycodes/diagram_detr_r50_finetuned
|
synergycodes
| 2024-01-19T16:26:59Z
| 146
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:bpmn-shapes",
"base_model:kacper-cierzniewski/daigram_detr_r50_albumentations",
"base_model:finetune:kacper-cierzniewski/daigram_detr_r50_albumentations",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2024-01-19T13:10:46Z
|
---
license: apache-2.0
base_model: kacper-cierzniewski/daigram_detr_r50_albumentations
tags:
- generated_from_trainer
datasets:
- bpmn-shapes
model-index:
- name: daigram_detr_r50_albumentations_finetuning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# daigram_detr_r50_albumentations_finetuning
This model is a fine-tuned version of [kacper-cierzniewski/daigram_detr_r50_albumentations](https://huggingface.co/kacper-cierzniewski/daigram_detr_r50_albumentations) on the bpmn-shapes dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9817
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9457 | 12.5 | 50 | 1.0238 |
| 0.9717 | 25.0 | 100 | 1.0411 |
| 0.9823 | 37.5 | 150 | 1.0269 |
| 0.9524 | 50.0 | 200 | 1.0518 |
| 0.9886 | 62.5 | 250 | 1.0548 |
| 0.9638 | 75.0 | 300 | 1.0454 |
| 0.948 | 87.5 | 350 | 1.0240 |
| 0.9312 | 100.0 | 400 | 1.0281 |
| 0.9183 | 112.5 | 450 | 1.0112 |
| 0.9219 | 125.0 | 500 | 1.0110 |
| 0.9285 | 137.5 | 550 | 1.0325 |
| 0.9177 | 150.0 | 600 | 1.0009 |
| 0.9323 | 162.5 | 650 | 1.0124 |
| 0.9333 | 175.0 | 700 | 1.0154 |
| 0.9386 | 187.5 | 750 | 1.0188 |
| 0.9586 | 200.0 | 800 | 0.9978 |
| 0.894 | 212.5 | 850 | 1.0087 |
| 0.8999 | 225.0 | 900 | 1.0055 |
| 0.9324 | 237.5 | 950 | 1.0185 |
| 0.9313 | 250.0 | 1000 | 0.9840 |
| 0.9177 | 262.5 | 1050 | 0.9785 |
| 0.8918 | 275.0 | 1100 | 0.9874 |
| 0.9145 | 287.5 | 1150 | 0.9802 |
| 0.89 | 300.0 | 1200 | 0.9879 |
| 0.8818 | 312.5 | 1250 | 0.9857 |
| 0.9256 | 325.0 | 1300 | 0.9951 |
| 0.9028 | 337.5 | 1350 | 1.0001 |
| 0.9252 | 350.0 | 1400 | 1.0033 |
| 0.9017 | 362.5 | 1450 | 0.9916 |
| 0.8783 | 375.0 | 1500 | 0.9858 |
| 0.911 | 387.5 | 1550 | 0.9758 |
| 0.8797 | 400.0 | 1600 | 0.9810 |
| 0.8995 | 412.5 | 1650 | 0.9840 |
| 0.8781 | 425.0 | 1700 | 0.9843 |
| 0.8897 | 437.5 | 1750 | 0.9745 |
| 0.905 | 450.0 | 1800 | 0.9825 |
| 0.8961 | 462.5 | 1850 | 0.9781 |
| 0.8865 | 475.0 | 1900 | 0.9781 |
| 0.8824 | 487.5 | 1950 | 0.9794 |
| 0.8836 | 500.0 | 2000 | 0.9817 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
mimicheng/mistral-7b-dpo-qlora-2ep
|
mimicheng
| 2024-01-19T16:19:31Z
| 3
| 0
|
peft
|
[
"peft",
"safetensors",
"mistral",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-01-19T03:40:58Z
|
---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral-7b-dpo-qlora-2ep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-dpo-qlora-2ep
This model is a fine-tuned version of [mimicheng/mistral-7b-sft-qlora-2ep](https://huggingface.co/mimicheng/mistral-7b-sft-qlora-2ep) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6446
- Rewards/chosen: -0.4217
- Rewards/rejected: -0.5814
- Rewards/accuracies: 0.6290
- Rewards/margins: 0.1596
- Logps/rejected: -1409.8003
- Logps/chosen: -1604.7235
- Logits/rejected: -2.6937
- Logits/chosen: -2.7021
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 16
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6932 | 0.03 | 100 | 0.6931 | 0.0001 | 0.0002 | 0.4940 | -0.0001 | -1351.6440 | -1562.5353 | -2.7909 | -2.7984 |
| 0.6923 | 0.05 | 200 | 0.6925 | 0.0045 | 0.0029 | 0.5119 | 0.0016 | -1351.3734 | -1562.0991 | -2.7899 | -2.7974 |
| 0.6937 | 0.08 | 300 | 0.6909 | 0.0097 | 0.0052 | 0.5377 | 0.0045 | -1351.1462 | -1561.5815 | -2.7872 | -2.7945 |
| 0.6867 | 0.1 | 400 | 0.6893 | 0.0145 | 0.0060 | 0.5595 | 0.0085 | -1351.0632 | -1561.1024 | -2.7853 | -2.7923 |
| 0.6921 | 0.13 | 500 | 0.6867 | 0.0007 | -0.0122 | 0.5734 | 0.0129 | -1352.8849 | -1562.4756 | -2.7829 | -2.7893 |
| 0.6895 | 0.16 | 600 | 0.6838 | 0.0046 | -0.0162 | 0.5913 | 0.0208 | -1353.2866 | -1562.0875 | -2.7740 | -2.7806 |
| 0.6792 | 0.18 | 700 | 0.6819 | -0.0194 | -0.0440 | 0.5992 | 0.0246 | -1356.0621 | -1564.4910 | -2.7592 | -2.7657 |
| 0.6802 | 0.21 | 800 | 0.6791 | -0.0527 | -0.0819 | 0.5813 | 0.0293 | -1359.8597 | -1567.8170 | -2.7551 | -2.7611 |
| 0.6812 | 0.24 | 900 | 0.6772 | -0.0403 | -0.0826 | 0.5714 | 0.0423 | -1359.9243 | -1566.5771 | -2.7588 | -2.7655 |
| 0.6714 | 0.26 | 1000 | 0.6746 | -0.0886 | -0.1361 | 0.5714 | 0.0475 | -1365.2759 | -1571.4064 | -2.7418 | -2.7476 |
| 0.676 | 0.29 | 1100 | 0.6744 | -0.1141 | -0.1733 | 0.5893 | 0.0592 | -1368.9943 | -1573.9617 | -2.7433 | -2.7505 |
| 0.6779 | 0.31 | 1200 | 0.6703 | -0.1056 | -0.1703 | 0.5933 | 0.0647 | -1368.6935 | -1573.1090 | -2.7431 | -2.7511 |
| 0.6888 | 0.34 | 1300 | 0.6676 | -0.1136 | -0.1850 | 0.5972 | 0.0713 | -1370.1599 | -1573.9121 | -2.7375 | -2.7452 |
| 0.6664 | 0.37 | 1400 | 0.6669 | -0.1425 | -0.2165 | 0.6071 | 0.0739 | -1373.3110 | -1576.8027 | -2.7302 | -2.7375 |
| 0.6705 | 0.39 | 1500 | 0.6665 | -0.1804 | -0.2701 | 0.6071 | 0.0897 | -1378.6722 | -1580.5913 | -2.7481 | -2.7546 |
| 0.6411 | 0.42 | 1600 | 0.6653 | -0.1924 | -0.2728 | 0.6329 | 0.0804 | -1378.9417 | -1581.7911 | -2.7249 | -2.7317 |
| 0.665 | 0.44 | 1700 | 0.6644 | -0.1967 | -0.2789 | 0.6131 | 0.0823 | -1379.5565 | -1582.2147 | -2.7355 | -2.7422 |
| 0.6563 | 0.47 | 1800 | 0.6639 | -0.2073 | -0.2940 | 0.6210 | 0.0867 | -1381.0635 | -1583.2751 | -2.7257 | -2.7325 |
| 0.6668 | 0.5 | 1900 | 0.6620 | -0.2260 | -0.3252 | 0.6171 | 0.0992 | -1384.1846 | -1585.1470 | -2.7350 | -2.7426 |
| 0.6632 | 0.52 | 2000 | 0.6605 | -0.1924 | -0.2828 | 0.6329 | 0.0904 | -1379.9453 | -1581.7920 | -2.7371 | -2.7449 |
| 0.6427 | 0.55 | 2100 | 0.6597 | -0.2106 | -0.3114 | 0.6230 | 0.1007 | -1382.8007 | -1583.6138 | -2.7260 | -2.7333 |
| 0.6923 | 0.58 | 2200 | 0.6592 | -0.2129 | -0.3178 | 0.6230 | 0.1049 | -1383.4486 | -1583.8400 | -2.7175 | -2.7243 |
| 0.6496 | 0.6 | 2300 | 0.6581 | -0.2352 | -0.3443 | 0.6290 | 0.1091 | -1386.0916 | -1586.0706 | -2.7159 | -2.7235 |
| 0.6668 | 0.63 | 2400 | 0.6577 | -0.2503 | -0.3563 | 0.6290 | 0.1061 | -1387.2981 | -1587.5769 | -2.7321 | -2.7410 |
| 0.6477 | 0.65 | 2500 | 0.6560 | -0.2661 | -0.3858 | 0.6310 | 0.1196 | -1390.2400 | -1589.1620 | -2.7287 | -2.7370 |
| 0.6444 | 0.68 | 2600 | 0.6550 | -0.2830 | -0.3993 | 0.6270 | 0.1163 | -1391.5975 | -1590.8505 | -2.7240 | -2.7330 |
| 0.6594 | 0.71 | 2700 | 0.6566 | -0.3546 | -0.4862 | 0.6190 | 0.1316 | -1400.2867 | -1598.0084 | -2.6748 | -2.6818 |
| 0.6329 | 0.73 | 2800 | 0.6544 | -0.2748 | -0.3936 | 0.625 | 0.1189 | -1391.0292 | -1590.0247 | -2.6985 | -2.7063 |
| 0.6351 | 0.76 | 2900 | 0.6545 | -0.2928 | -0.4152 | 0.6270 | 0.1224 | -1393.1847 | -1591.8256 | -2.7050 | -2.7136 |
| 0.6724 | 0.79 | 3000 | 0.6528 | -0.3067 | -0.4418 | 0.6448 | 0.1351 | -1395.8458 | -1593.2202 | -2.6986 | -2.7069 |
| 0.6413 | 0.81 | 3100 | 0.6514 | -0.3153 | -0.4541 | 0.6548 | 0.1388 | -1397.0781 | -1594.0812 | -2.6892 | -2.6985 |
| 0.6242 | 0.84 | 3200 | 0.6523 | -0.3197 | -0.4618 | 0.6349 | 0.1421 | -1397.8459 | -1594.5162 | -2.7123 | -2.7206 |
| 0.6773 | 0.86 | 3300 | 0.6506 | -0.3038 | -0.4433 | 0.6508 | 0.1395 | -1395.9939 | -1592.9280 | -2.7042 | -2.7136 |
| 0.6531 | 0.89 | 3400 | 0.6505 | -0.3036 | -0.4426 | 0.6329 | 0.1390 | -1395.9207 | -1592.9099 | -2.6620 | -2.6712 |
| 0.6499 | 0.92 | 3500 | 0.6504 | -0.3509 | -0.4975 | 0.6448 | 0.1467 | -1401.4177 | -1597.6368 | -2.6611 | -2.6701 |
| 0.6439 | 0.94 | 3600 | 0.6509 | -0.3522 | -0.4975 | 0.6349 | 0.1453 | -1401.4176 | -1597.7729 | -2.6758 | -2.6841 |
| 0.6279 | 0.97 | 3700 | 0.6505 | -0.4035 | -0.5500 | 0.6310 | 0.1466 | -1406.6675 | -1602.8950 | -2.6918 | -2.7012 |
| 0.6443 | 0.99 | 3800 | 0.6497 | -0.3970 | -0.5441 | 0.6290 | 0.1471 | -1406.0728 | -1602.2509 | -2.6876 | -2.6965 |
| 0.6355 | 1.02 | 3900 | 0.6484 | -0.3538 | -0.4986 | 0.6349 | 0.1449 | -1401.5294 | -1597.9247 | -2.6950 | -2.7039 |
| 0.6683 | 1.05 | 4000 | 0.6482 | -0.3608 | -0.5119 | 0.6349 | 0.1511 | -1402.8545 | -1598.6262 | -2.6992 | -2.7080 |
| 0.6459 | 1.07 | 4100 | 0.6475 | -0.3305 | -0.4760 | 0.6448 | 0.1455 | -1399.2634 | -1595.5988 | -2.6852 | -2.6944 |
| 0.6451 | 1.1 | 4200 | 0.6471 | -0.3471 | -0.4991 | 0.6369 | 0.1519 | -1401.5713 | -1597.2633 | -2.6954 | -2.7042 |
| 0.6744 | 1.13 | 4300 | 0.6483 | -0.3619 | -0.5112 | 0.6429 | 0.1493 | -1402.7870 | -1598.7428 | -2.7008 | -2.7095 |
| 0.6355 | 1.15 | 4400 | 0.6477 | -0.4040 | -0.5558 | 0.6270 | 0.1518 | -1407.2480 | -1602.9531 | -2.6916 | -2.7001 |
| 0.6187 | 1.18 | 4500 | 0.6472 | -0.4050 | -0.5534 | 0.6349 | 0.1485 | -1407.0084 | -1603.0441 | -2.6883 | -2.6963 |
| 0.6555 | 1.2 | 4600 | 0.6472 | -0.3883 | -0.5354 | 0.6310 | 0.1471 | -1405.2079 | -1601.3826 | -2.7075 | -2.7168 |
| 0.6178 | 1.23 | 4700 | 0.6476 | -0.3993 | -0.5414 | 0.6190 | 0.1422 | -1405.8092 | -1602.4763 | -2.6912 | -2.7006 |
| 0.6242 | 1.26 | 4800 | 0.6477 | -0.4302 | -0.5746 | 0.625 | 0.1444 | -1409.1267 | -1605.5714 | -2.6917 | -2.7016 |
| 0.6221 | 1.28 | 4900 | 0.6464 | -0.3848 | -0.5302 | 0.6349 | 0.1454 | -1404.6871 | -1601.0272 | -2.7073 | -2.7167 |
| 0.6582 | 1.31 | 5000 | 0.6460 | -0.3995 | -0.5463 | 0.6310 | 0.1468 | -1406.2927 | -1602.5012 | -2.7174 | -2.7268 |
| 0.6276 | 1.33 | 5100 | 0.6458 | -0.4048 | -0.5543 | 0.6310 | 0.1495 | -1407.0914 | -1603.0245 | -2.7192 | -2.7281 |
| 0.6573 | 1.36 | 5200 | 0.6452 | -0.4069 | -0.5580 | 0.6290 | 0.1512 | -1407.4680 | -1603.2344 | -2.7142 | -2.7230 |
| 0.6672 | 1.39 | 5300 | 0.6458 | -0.4020 | -0.5504 | 0.6329 | 0.1485 | -1406.7059 | -1602.7441 | -2.6997 | -2.7080 |
| 0.6112 | 1.41 | 5400 | 0.6460 | -0.4035 | -0.5510 | 0.6290 | 0.1475 | -1406.7632 | -1602.8997 | -2.6953 | -2.7036 |
| 0.6421 | 1.44 | 5500 | 0.6449 | -0.3915 | -0.5414 | 0.6409 | 0.1499 | -1405.8010 | -1601.6963 | -2.6991 | -2.7081 |
| 0.658 | 1.47 | 5600 | 0.6451 | -0.4023 | -0.5553 | 0.6429 | 0.1530 | -1407.1986 | -1602.7803 | -2.6938 | -2.7027 |
| 0.6437 | 1.49 | 5700 | 0.6454 | -0.4050 | -0.5555 | 0.6389 | 0.1505 | -1407.2163 | -1603.0527 | -2.6883 | -2.6972 |
| 0.6289 | 1.52 | 5800 | 0.6443 | -0.3986 | -0.5520 | 0.6468 | 0.1534 | -1406.8611 | -1602.4105 | -2.7007 | -2.7094 |
| 0.6361 | 1.54 | 5900 | 0.6442 | -0.4036 | -0.5574 | 0.6409 | 0.1538 | -1407.4087 | -1602.9125 | -2.6962 | -2.7047 |
| 0.6374 | 1.57 | 6000 | 0.6446 | -0.4164 | -0.5717 | 0.6429 | 0.1553 | -1408.8311 | -1604.1853 | -2.6963 | -2.7048 |
| 0.6423 | 1.6 | 6100 | 0.6448 | -0.4212 | -0.5781 | 0.6349 | 0.1569 | -1409.4735 | -1604.6692 | -2.6905 | -2.6992 |
| 0.6611 | 1.62 | 6200 | 0.6453 | -0.4344 | -0.5916 | 0.625 | 0.1572 | -1410.8239 | -1605.9866 | -2.6925 | -2.7010 |
| 0.6355 | 1.65 | 6300 | 0.6451 | -0.4325 | -0.5909 | 0.625 | 0.1584 | -1410.7570 | -1605.8035 | -2.6922 | -2.7008 |
| 0.6555 | 1.67 | 6400 | 0.6451 | -0.4326 | -0.5912 | 0.6230 | 0.1586 | -1410.7894 | -1605.8125 | -2.6935 | -2.7021 |
| 0.6584 | 1.7 | 6500 | 0.6449 | -0.4310 | -0.5905 | 0.6270 | 0.1595 | -1410.7151 | -1605.6461 | -2.6900 | -2.6987 |
| 0.6371 | 1.73 | 6600 | 0.6448 | -0.4266 | -0.5864 | 0.6310 | 0.1598 | -1410.3033 | -1605.2112 | -2.6897 | -2.6985 |
| 0.6051 | 1.75 | 6700 | 0.6446 | -0.4220 | -0.5821 | 0.6329 | 0.1601 | -1409.8746 | -1604.7469 | -2.6927 | -2.7012 |
| 0.6136 | 1.78 | 6800 | 0.6446 | -0.4219 | -0.5822 | 0.6310 | 0.1603 | -1409.8861 | -1604.7394 | -2.6940 | -2.7024 |
| 0.6503 | 1.81 | 6900 | 0.6445 | -0.4222 | -0.5826 | 0.6349 | 0.1603 | -1409.9208 | -1604.7736 | -2.6947 | -2.7030 |
| 0.6318 | 1.83 | 7000 | 0.6445 | -0.4216 | -0.5817 | 0.6329 | 0.1601 | -1409.8387 | -1604.7111 | -2.6925 | -2.7010 |
| 0.6493 | 1.86 | 7100 | 0.6445 | -0.4215 | -0.5815 | 0.6329 | 0.1600 | -1409.8179 | -1604.7026 | -2.6940 | -2.7025 |
| 0.6292 | 1.88 | 7200 | 0.6446 | -0.4217 | -0.5816 | 0.6329 | 0.1599 | -1409.8223 | -1604.7195 | -2.6943 | -2.7027 |
| 0.625 | 1.91 | 7300 | 0.6445 | -0.4215 | -0.5816 | 0.6329 | 0.1600 | -1409.8219 | -1604.7013 | -2.6937 | -2.7022 |
| 0.6306 | 1.94 | 7400 | 0.6446 | -0.4218 | -0.5814 | 0.6290 | 0.1596 | -1409.8014 | -1604.7244 | -2.6937 | -2.7021 |
| 0.6446 | 1.96 | 7500 | 0.6446 | -0.4217 | -0.5814 | 0.6290 | 0.1596 | -1409.8003 | -1604.7235 | -2.6937 | -2.7021 |
| 0.6394 | 1.99 | 7600 | 0.6446 | -0.4217 | -0.5814 | 0.6290 | 0.1596 | -1409.8003 | -1604.7235 | -2.6937 | -2.7021 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0
|
st-karlos-efood/setfit-multilabel-example-classifier-chain-25iters
|
st-karlos-efood
| 2024-01-19T16:18:16Z
| 4
| 0
|
setfit
|
[
"setfit",
"safetensors",
"xlm-roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"dataset:ethos",
"arxiv:2209.11055",
"base_model:lighteternal/stsb-xlm-r-greek-transfer",
"base_model:finetune:lighteternal/stsb-xlm-r-greek-transfer",
"model-index",
"region:us"
] |
text-classification
| 2024-01-19T16:17:38Z
|
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
datasets:
- ethos
metrics:
- accuracy
widget:
- text: "This is what happens when you'\x80\x99re a bitch and a cunt. Looks like\
\ that guy called it correctly. Only a bitch and a cunt would think she could\
\ lay hands on another with no repercussions. Maybe after that seizure she won'\x80\
\x99t be as much of a bitchy cunt."
- text: What if we send every men to mars to start a new life there! They are useless
here (Of course milk them first to store their juices)
- text: Honestly, go fuck yourself! bitch!
- text: Hindus take my ass please
- text: Im going to choke you with your cross necklace idiotic religious pig
pipeline_tag: text-classification
inference: false
base_model: lighteternal/stsb-xlm-r-greek-transfer
model-index:
- name: SetFit with lighteternal/stsb-xlm-r-greek-transfer
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: ethos
type: ethos
split: test
metrics:
- type: accuracy
value: 0.20533333333333334
name: Accuracy
---
# SetFit with lighteternal/stsb-xlm-r-greek-transfer
This is a [SetFit](https://github.com/huggingface/setfit) model trained on the [ethos](https://huggingface.co/datasets/ethos) dataset that can be used for Text Classification. This SetFit model uses [lighteternal/stsb-xlm-r-greek-transfer](https://huggingface.co/lighteternal/stsb-xlm-r-greek-transfer) as the Sentence Transformer embedding model. A ClassifierChain instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer (see the sketch below).
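A hedged sketch of that two-stage recipe for this multilabel setup; the `multi_target_strategy="classifier-chain"` flag is what wraps the head in a scikit-learn ClassifierChain, and the dataset config and column names are assumptions about the ethos multilabel split:
```python
from datasets import load_dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Base Sentence Transformer body plus a ClassifierChain head.
model = SetFitModel.from_pretrained(
    "lighteternal/stsb-xlm-r-greek-transfer",
    multi_target_strategy="classifier-chain",
)

dataset = load_dataset("ethos", "multilabel")  # assumed config name

args = TrainingArguments(batch_size=64, num_epochs=10, num_iterations=25)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],  # expects "text" and multi-hot "label" columns
)
# Stage 1: contrastive fine-tuning of the body; stage 2: fitting the chain head.
trainer.train()
```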
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [lighteternal/stsb-xlm-r-greek-transfer](https://huggingface.co/lighteternal/stsb-xlm-r-greek-transfer)
- **Classification head:** a ClassifierChain instance
- **Maximum Sequence Length:** 400 tokens
<!-- - **Number of Classes:** Unknown -->
- **Training Dataset:** [ethos](https://huggingface.co/datasets/ethos)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.2053 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("st-karlos-efood/setfit-multilabel-example-classifier-chain-25iters")
# Run inference
preds = model("Hindus take my ass please")
```
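Note that because the head is a ClassifierChain, `preds` here should be a multi-hot vector (one binary flag per label) rather than a single class id; the label ordering depends on how the chain was fit.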
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 3 | 9.9307 | 61 |
### Training Hyperparameters
- batch_size: (64, 64)
- num_epochs: (10, 10)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 25
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
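These fields mirror SetFit's `TrainingArguments` one-to-one, with `(body, head)` tuples where the two training phases differ. A minimal reconstruction, assuming the SetFit 1.0 API (`loss` takes the loss class itself; `distance_metric` and `margin` only apply to triplet-style losses and are left at the printed defaults):
```python
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(64, 64),              # (embedding phase, classifier phase)
    num_epochs=(10, 10),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=25,                # contrastive pairs sampled per example
    body_learning_rate=(2e-5, 2e-5),
    head_learning_rate=2e-5,
    loss=CosineSimilarityLoss,
    # distance_metric="cosine_distance" and margin=0.25 are the printed
    # defaults for triplet losses; unused with CosineSimilarityLoss.
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)
```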
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:-----:|:-------------:|:---------------:|
| 0.0006 | 1 | 0.2027 | - |
| 0.0305 | 50 | 0.2092 | - |
| 0.0609 | 100 | 0.1605 | - |
| 0.0914 | 150 | 0.1726 | - |
| 0.1219 | 200 | 0.1322 | - |
| 0.1523 | 250 | 0.1252 | - |
| 0.1828 | 300 | 0.1404 | - |
| 0.2133 | 350 | 0.0927 | - |
| 0.2438 | 400 | 0.1039 | - |
| 0.2742 | 450 | 0.0904 | - |
| 0.3047 | 500 | 0.1194 | - |
| 0.3352 | 550 | 0.1024 | - |
| 0.3656 | 600 | 0.151 | - |
| 0.3961 | 650 | 0.0842 | - |
| 0.4266 | 700 | 0.1158 | - |
| 0.4570 | 750 | 0.214 | - |
| 0.4875 | 800 | 0.1167 | - |
| 0.5180 | 850 | 0.1174 | - |
| 0.5484 | 900 | 0.1567 | - |
| 0.5789 | 950 | 0.0726 | - |
| 0.6094 | 1000 | 0.0741 | - |
| 0.6399 | 1050 | 0.0841 | - |
| 0.6703 | 1100 | 0.0606 | - |
| 0.7008 | 1150 | 0.1005 | - |
| 0.7313 | 1200 | 0.1236 | - |
| 0.7617 | 1250 | 0.141 | - |
| 0.7922 | 1300 | 0.1611 | - |
| 0.8227 | 1350 | 0.1068 | - |
| 0.8531 | 1400 | 0.0542 | - |
| 0.8836 | 1450 | 0.1635 | - |
| 0.9141 | 1500 | 0.106 | - |
| 0.9445 | 1550 | 0.0817 | - |
| 0.9750 | 1600 | 0.1157 | - |
| 1.0055 | 1650 | 0.1031 | - |
| 1.0360 | 1700 | 0.0969 | - |
| 1.0664 | 1750 | 0.0742 | - |
| 1.0969 | 1800 | 0.0697 | - |
| 1.1274 | 1850 | 0.1072 | - |
| 1.1578 | 1900 | 0.0593 | - |
| 1.1883 | 1950 | 0.1102 | - |
| 1.2188 | 2000 | 0.1586 | - |
| 1.2492 | 2050 | 0.1523 | - |
| 1.2797 | 2100 | 0.0921 | - |
| 1.3102 | 2150 | 0.0634 | - |
| 1.3406 | 2200 | 0.073 | - |
| 1.3711 | 2250 | 0.1131 | - |
| 1.4016 | 2300 | 0.0493 | - |
| 1.4321 | 2350 | 0.106 | - |
| 1.4625 | 2400 | 0.0585 | - |
| 1.4930 | 2450 | 0.1058 | - |
| 1.5235 | 2500 | 0.0892 | - |
| 1.5539 | 2550 | 0.0649 | - |
| 1.5844 | 2600 | 0.0481 | - |
| 1.6149 | 2650 | 0.1359 | - |
| 1.6453 | 2700 | 0.0734 | - |
| 1.6758 | 2750 | 0.0762 | - |
| 1.7063 | 2800 | 0.1082 | - |
| 1.7367 | 2850 | 0.1274 | - |
| 1.7672 | 2900 | 0.0724 | - |
| 1.7977 | 2950 | 0.0842 | - |
| 1.8282 | 3000 | 0.1558 | - |
| 1.8586 | 3050 | 0.071 | - |
| 1.8891 | 3100 | 0.1716 | - |
| 1.9196 | 3150 | 0.1078 | - |
| 1.9500 | 3200 | 0.1037 | - |
| 1.9805 | 3250 | 0.0773 | - |
| 2.0110 | 3300 | 0.0706 | - |
| 2.0414 | 3350 | 0.1577 | - |
| 2.0719 | 3400 | 0.0825 | - |
| 2.1024 | 3450 | 0.1227 | - |
| 2.1328 | 3500 | 0.1069 | - |
| 2.1633 | 3550 | 0.1037 | - |
| 2.1938 | 3600 | 0.0595 | - |
| 2.2243 | 3650 | 0.0569 | - |
| 2.2547 | 3700 | 0.0967 | - |
| 2.2852 | 3750 | 0.0632 | - |
| 2.3157 | 3800 | 0.1014 | - |
| 2.3461 | 3850 | 0.0868 | - |
| 2.3766 | 3900 | 0.0986 | - |
| 2.4071 | 3950 | 0.0585 | - |
| 2.4375 | 4000 | 0.063 | - |
| 2.4680 | 4050 | 0.1124 | - |
| 2.4985 | 4100 | 0.0444 | - |
| 2.5289 | 4150 | 0.1547 | - |
| 2.5594 | 4200 | 0.1087 | - |
| 2.5899 | 4250 | 0.0946 | - |
| 2.6204 | 4300 | 0.0261 | - |
| 2.6508 | 4350 | 0.0414 | - |
| 2.6813 | 4400 | 0.0715 | - |
| 2.7118 | 4450 | 0.0831 | - |
| 2.7422 | 4500 | 0.0779 | - |
| 2.7727 | 4550 | 0.1049 | - |
| 2.8032 | 4600 | 0.1224 | - |
| 2.8336 | 4650 | 0.0926 | - |
| 2.8641 | 4700 | 0.0745 | - |
| 2.8946 | 4750 | 0.0642 | - |
| 2.9250 | 4800 | 0.0536 | - |
| 2.9555 | 4850 | 0.1296 | - |
| 2.9860 | 4900 | 0.0596 | - |
| 3.0165 | 4950 | 0.0361 | - |
| 3.0469 | 5000 | 0.0592 | - |
| 3.0774 | 5050 | 0.0656 | - |
| 3.1079 | 5100 | 0.0584 | - |
| 3.1383 | 5150 | 0.0729 | - |
| 3.1688 | 5200 | 0.1037 | - |
| 3.1993 | 5250 | 0.0685 | - |
| 3.2297 | 5300 | 0.0511 | - |
| 3.2602 | 5350 | 0.0427 | - |
| 3.2907 | 5400 | 0.1067 | - |
| 3.3211 | 5450 | 0.0807 | - |
| 3.3516 | 5500 | 0.0815 | - |
| 3.3821 | 5550 | 0.1016 | - |
| 3.4126 | 5600 | 0.1034 | - |
| 3.4430 | 5650 | 0.1257 | - |
| 3.4735 | 5700 | 0.0877 | - |
| 3.5040 | 5750 | 0.0808 | - |
| 3.5344 | 5800 | 0.0926 | - |
| 3.5649 | 5850 | 0.0967 | - |
| 3.5954 | 5900 | 0.0401 | - |
| 3.6258 | 5950 | 0.0547 | - |
| 3.6563 | 6000 | 0.0872 | - |
| 3.6868 | 6050 | 0.0808 | - |
| 3.7172 | 6100 | 0.1125 | - |
| 3.7477 | 6150 | 0.1431 | - |
| 3.7782 | 6200 | 0.1039 | - |
| 3.8087 | 6250 | 0.061 | - |
| 3.8391 | 6300 | 0.1022 | - |
| 3.8696 | 6350 | 0.0394 | - |
| 3.9001 | 6400 | 0.0892 | - |
| 3.9305 | 6450 | 0.0535 | - |
| 3.9610 | 6500 | 0.0793 | - |
| 3.9915 | 6550 | 0.0462 | - |
| 4.0219 | 6600 | 0.0686 | - |
| 4.0524 | 6650 | 0.0506 | - |
| 4.0829 | 6700 | 0.1012 | - |
| 4.1133 | 6750 | 0.0852 | - |
| 4.1438 | 6800 | 0.0729 | - |
| 4.1743 | 6850 | 0.1007 | - |
| 4.2048 | 6900 | 0.0431 | - |
| 4.2352 | 6950 | 0.0683 | - |
| 4.2657 | 7000 | 0.0712 | - |
| 4.2962 | 7050 | 0.0732 | - |
| 4.3266 | 7100 | 0.0374 | - |
| 4.3571 | 7150 | 0.1015 | - |
| 4.3876 | 7200 | 0.15 | - |
| 4.4180 | 7250 | 0.0852 | - |
| 4.4485 | 7300 | 0.0714 | - |
| 4.4790 | 7350 | 0.0587 | - |
| 4.5094 | 7400 | 0.1335 | - |
| 4.5399 | 7450 | 0.1123 | - |
| 4.5704 | 7500 | 0.0538 | - |
| 4.6009 | 7550 | 0.0989 | - |
| 4.6313 | 7600 | 0.0878 | - |
| 4.6618 | 7650 | 0.0963 | - |
| 4.6923 | 7700 | 0.0991 | - |
| 4.7227 | 7750 | 0.0776 | - |
| 4.7532 | 7800 | 0.0663 | - |
| 4.7837 | 7850 | 0.0696 | - |
| 4.8141 | 7900 | 0.0704 | - |
| 4.8446 | 7950 | 0.0626 | - |
| 4.8751 | 8000 | 0.0657 | - |
| 4.9055 | 8050 | 0.0567 | - |
| 4.9360 | 8100 | 0.0619 | - |
| 4.9665 | 8150 | 0.0792 | - |
| 4.9970 | 8200 | 0.0671 | - |
| 5.0274 | 8250 | 0.1068 | - |
| 5.0579 | 8300 | 0.1111 | - |
| 5.0884 | 8350 | 0.0968 | - |
| 5.1188 | 8400 | 0.0577 | - |
| 5.1493 | 8450 | 0.0934 | - |
| 5.1798 | 8500 | 0.0854 | - |
| 5.2102 | 8550 | 0.0587 | - |
| 5.2407 | 8600 | 0.048 | - |
| 5.2712 | 8650 | 0.0829 | - |
| 5.3016 | 8700 | 0.0985 | - |
| 5.3321 | 8750 | 0.107 | - |
| 5.3626 | 8800 | 0.0662 | - |
| 5.3931 | 8850 | 0.0799 | - |
| 5.4235 | 8900 | 0.0948 | - |
| 5.4540 | 8950 | 0.087 | - |
| 5.4845 | 9000 | 0.0429 | - |
| 5.5149 | 9050 | 0.0699 | - |
| 5.5454 | 9100 | 0.0911 | - |
| 5.5759 | 9150 | 0.1268 | - |
| 5.6063 | 9200 | 0.1042 | - |
| 5.6368 | 9250 | 0.0642 | - |
| 5.6673 | 9300 | 0.0736 | - |
| 5.6977 | 9350 | 0.0329 | - |
| 5.7282 | 9400 | 0.126 | - |
| 5.7587 | 9450 | 0.0991 | - |
| 5.7892 | 9500 | 0.1038 | - |
| 5.8196 | 9550 | 0.0842 | - |
| 5.8501 | 9600 | 0.0623 | - |
| 5.8806 | 9650 | 0.0642 | - |
| 5.9110 | 9700 | 0.0902 | - |
| 5.9415 | 9750 | 0.0994 | - |
| 5.9720 | 9800 | 0.0685 | - |
| 6.0024 | 9850 | 0.0573 | - |
| 6.0329 | 9900 | 0.0537 | - |
| 6.0634 | 9950 | 0.0478 | - |
| 6.0938 | 10000 | 0.0513 | - |
| 6.1243 | 10050 | 0.0529 | - |
| 6.1548 | 10100 | 0.095 | - |
| 6.1853 | 10150 | 0.0578 | - |
| 6.2157 | 10200 | 0.0918 | - |
| 6.2462 | 10250 | 0.0594 | - |
| 6.2767 | 10300 | 0.1015 | - |
| 6.3071 | 10350 | 0.036 | - |
| 6.3376 | 10400 | 0.0524 | - |
| 6.3681 | 10450 | 0.0927 | - |
| 6.3985 | 10500 | 0.0934 | - |
| 6.4290 | 10550 | 0.0788 | - |
| 6.4595 | 10600 | 0.0842 | - |
| 6.4899 | 10650 | 0.0703 | - |
| 6.5204 | 10700 | 0.0684 | - |
| 6.5509 | 10750 | 0.0759 | - |
| 6.5814 | 10800 | 0.0271 | - |
| 6.6118 | 10850 | 0.0391 | - |
| 6.6423 | 10900 | 0.0895 | - |
| 6.6728 | 10950 | 0.054 | - |
| 6.7032 | 11000 | 0.0987 | - |
| 6.7337 | 11050 | 0.0577 | - |
| 6.7642 | 11100 | 0.0822 | - |
| 6.7946 | 11150 | 0.0986 | - |
| 6.8251 | 11200 | 0.0423 | - |
| 6.8556 | 11250 | 0.0672 | - |
| 6.8860 | 11300 | 0.0747 | - |
| 6.9165 | 11350 | 0.0873 | - |
| 6.9470 | 11400 | 0.106 | - |
| 6.9775 | 11450 | 0.0975 | - |
| 7.0079 | 11500 | 0.0957 | - |
| 7.0384 | 11550 | 0.0487 | - |
| 7.0689 | 11600 | 0.0698 | - |
| 7.0993 | 11650 | 0.0317 | - |
| 7.1298 | 11700 | 0.0732 | - |
| 7.1603 | 11750 | 0.1114 | - |
| 7.1907 | 11800 | 0.0689 | - |
| 7.2212 | 11850 | 0.1211 | - |
| 7.2517 | 11900 | 0.0753 | - |
| 7.2821 | 11950 | 0.062 | - |
| 7.3126 | 12000 | 0.075 | - |
| 7.3431 | 12050 | 0.0494 | - |
| 7.3736 | 12100 | 0.0724 | - |
| 7.4040 | 12150 | 0.0605 | - |
| 7.4345 | 12200 | 0.0508 | - |
| 7.4650 | 12250 | 0.0828 | - |
| 7.4954 | 12300 | 0.0512 | - |
| 7.5259 | 12350 | 0.1291 | - |
| 7.5564 | 12400 | 0.0459 | - |
| 7.5868 | 12450 | 0.0869 | - |
| 7.6173 | 12500 | 0.0379 | - |
| 7.6478 | 12550 | 0.1878 | - |
| 7.6782 | 12600 | 0.0824 | - |
| 7.7087 | 12650 | 0.0945 | - |
| 7.7392 | 12700 | 0.0763 | - |
| 7.7697 | 12750 | 0.0602 | - |
| 7.8001 | 12800 | 0.0342 | - |
| 7.8306 | 12850 | 0.0746 | - |
| 7.8611 | 12900 | 0.065 | - |
| 7.8915 | 12950 | 0.0749 | - |
| 7.9220 | 13000 | 0.0618 | - |
| 7.9525 | 13050 | 0.0567 | - |
| 7.9829 | 13100 | 0.069 | - |
| 8.0134 | 13150 | 0.0487 | - |
| 8.0439 | 13200 | 0.0578 | - |
| 8.0743 | 13250 | 0.0876 | - |
| 8.1048 | 13300 | 0.0942 | - |
| 8.1353 | 13350 | 0.0774 | - |
| 8.1658 | 13400 | 0.0557 | - |
| 8.1962 | 13450 | 0.0872 | - |
| 8.2267 | 13500 | 0.0652 | - |
| 8.2572 | 13550 | 0.088 | - |
| 8.2876 | 13600 | 0.05 | - |
| 8.3181 | 13650 | 0.0572 | - |
| 8.3486 | 13700 | 0.053 | - |
| 8.3790 | 13750 | 0.0745 | - |
| 8.4095 | 13800 | 0.1119 | - |
| 8.4400 | 13850 | 0.0909 | - |
| 8.4704 | 13900 | 0.0374 | - |
| 8.5009 | 13950 | 0.0515 | - |
| 8.5314 | 14000 | 0.0827 | - |
| 8.5619 | 14050 | 0.0925 | - |
| 8.5923 | 14100 | 0.0793 | - |
| 8.6228 | 14150 | 0.1123 | - |
| 8.6533 | 14200 | 0.0387 | - |
| 8.6837 | 14250 | 0.0898 | - |
| 8.7142 | 14300 | 0.0627 | - |
| 8.7447 | 14350 | 0.0863 | - |
| 8.7751 | 14400 | 0.1257 | - |
| 8.8056 | 14450 | 0.0553 | - |
| 8.8361 | 14500 | 0.0664 | - |
| 8.8665 | 14550 | 0.0641 | - |
| 8.8970 | 14600 | 0.0577 | - |
| 8.9275 | 14650 | 0.0672 | - |
| 8.9580 | 14700 | 0.0776 | - |
| 8.9884 | 14750 | 0.0951 | - |
| 9.0189 | 14800 | 0.0721 | - |
| 9.0494 | 14850 | 0.0609 | - |
| 9.0798 | 14900 | 0.0821 | - |
| 9.1103 | 14950 | 0.0477 | - |
| 9.1408 | 15000 | 0.0974 | - |
| 9.1712 | 15050 | 0.0534 | - |
| 9.2017 | 15100 | 0.0673 | - |
| 9.2322 | 15150 | 0.0549 | - |
| 9.2626 | 15200 | 0.0833 | - |
| 9.2931 | 15250 | 0.0957 | - |
| 9.3236 | 15300 | 0.0601 | - |
| 9.3541 | 15350 | 0.0702 | - |
| 9.3845 | 15400 | 0.0852 | - |
| 9.4150 | 15450 | 0.0576 | - |
| 9.4455 | 15500 | 0.1006 | - |
| 9.4759 | 15550 | 0.0697 | - |
| 9.5064 | 15600 | 0.0778 | - |
| 9.5369 | 15650 | 0.0778 | - |
| 9.5673 | 15700 | 0.0844 | - |
| 9.5978 | 15750 | 0.0724 | - |
| 9.6283 | 15800 | 0.0988 | - |
| 9.6587 | 15850 | 0.0699 | - |
| 9.6892 | 15900 | 0.0772 | - |
| 9.7197 | 15950 | 0.0757 | - |
| 9.7502 | 16000 | 0.0671 | - |
| 9.7806 | 16050 | 0.1057 | - |
| 9.8111 | 16100 | 0.075 | - |
| 9.8416 | 16150 | 0.0475 | - |
| 9.8720 | 16200 | 0.0572 | - |
| 9.9025 | 16250 | 0.1176 | - |
| 9.9330 | 16300 | 0.0552 | - |
| 9.9634 | 16350 | 0.1032 | - |
| 9.9939 | 16400 | 0.0935 | - |
| 0.0006 | 1 | 0.0579 | - |
| 0.0305 | 50 | 0.0231 | - |
| 0.0609 | 100 | 0.0598 | - |
| 0.0914 | 150 | 0.0541 | - |
| 0.1219 | 200 | 0.0534 | - |
| 0.1523 | 250 | 0.048 | - |
| 0.1828 | 300 | 0.0912 | - |
| 0.2133 | 350 | 0.0447 | - |
| 0.2438 | 400 | 0.0442 | - |
| 0.2742 | 450 | 0.0579 | - |
| 0.0006 | 1 | 0.0585 | - |
| 0.0305 | 50 | 0.0204 | - |
| 0.0609 | 100 | 0.0653 | - |
| 0.0914 | 150 | 0.0599 | - |
| 0.1219 | 200 | 0.0577 | - |
| 0.1523 | 250 | 0.0468 | - |
| 0.1828 | 300 | 0.0911 | - |
| 0.2133 | 350 | 0.0423 | - |
| 0.2438 | 400 | 0.0405 | - |
| 0.2742 | 450 | 0.0561 | - |
| 0.3047 | 500 | 0.0925 | - |
| 0.3352 | 550 | 0.0771 | - |
| 0.3656 | 600 | 0.0718 | - |
| 0.3961 | 650 | 0.0708 | - |
| 0.4266 | 700 | 0.0673 | - |
| 0.4570 | 750 | 0.1501 | - |
| 0.4875 | 800 | 0.0849 | - |
| 0.5180 | 850 | 0.1132 | - |
| 0.5484 | 900 | 0.0865 | - |
| 0.5789 | 950 | 0.0527 | - |
| 0.6094 | 1000 | 0.0552 | - |
| 0.6399 | 1050 | 0.0656 | - |
| 0.6703 | 1100 | 0.0648 | - |
| 0.7008 | 1150 | 0.0884 | - |
| 0.7313 | 1200 | 0.0803 | - |
| 0.7617 | 1250 | 0.083 | - |
| 0.7922 | 1300 | 0.0863 | - |
| 0.8227 | 1350 | 0.0731 | - |
| 0.8531 | 1400 | 0.0504 | - |
| 0.8836 | 1450 | 0.1039 | - |
| 0.9141 | 1500 | 0.0817 | - |
| 0.9445 | 1550 | 0.0655 | - |
| 0.9750 | 1600 | 0.0987 | - |
| 1.0055 | 1650 | 0.0905 | - |
| 1.0360 | 1700 | 0.088 | - |
| 1.0664 | 1750 | 0.0767 | - |
| 1.0969 | 1800 | 0.0574 | - |
| 1.1274 | 1850 | 0.0741 | - |
| 1.1578 | 1900 | 0.0529 | - |
| 1.1883 | 1950 | 0.0758 | - |
| 1.2188 | 2000 | 0.1253 | - |
| 1.2492 | 2050 | 0.1129 | - |
| 1.2797 | 2100 | 0.0852 | - |
| 1.3102 | 2150 | 0.0475 | - |
| 1.3406 | 2200 | 0.063 | - |
| 1.3711 | 2250 | 0.0893 | - |
| 1.4016 | 2300 | 0.0494 | - |
| 1.4321 | 2350 | 0.1083 | - |
| 1.4625 | 2400 | 0.0468 | - |
| 1.4930 | 2450 | 0.0902 | - |
| 1.5235 | 2500 | 0.0607 | - |
| 1.5539 | 2550 | 0.0571 | - |
| 1.5844 | 2600 | 0.0395 | - |
| 1.6149 | 2650 | 0.1184 | - |
| 1.6453 | 2700 | 0.0735 | - |
| 1.6758 | 2750 | 0.06 | - |
| 1.7063 | 2800 | 0.0646 | - |
| 1.7367 | 2850 | 0.1055 | - |
| 1.7672 | 2900 | 0.0592 | - |
| 1.7977 | 2950 | 0.0522 | - |
| 1.8282 | 3000 | 0.1025 | - |
| 1.8586 | 3050 | 0.0615 | - |
| 1.8891 | 3100 | 0.1491 | - |
| 1.9196 | 3150 | 0.0796 | - |
| 1.9500 | 3200 | 0.0768 | - |
| 1.9805 | 3250 | 0.0601 | - |
| 2.0110 | 3300 | 0.0543 | - |
| 2.0414 | 3350 | 0.1128 | - |
| 2.0719 | 3400 | 0.06 | - |
| 2.1024 | 3450 | 0.0994 | - |
| 2.1328 | 3500 | 0.1018 | - |
| 2.1633 | 3550 | 0.0915 | - |
| 2.1938 | 3600 | 0.0626 | - |
| 2.2243 | 3650 | 0.0454 | - |
| 2.2547 | 3700 | 0.0915 | - |
| 2.2852 | 3750 | 0.0334 | - |
| 2.3157 | 3800 | 0.0827 | - |
| 2.3461 | 3850 | 0.0709 | - |
| 2.3766 | 3900 | 0.0806 | - |
| 2.4071 | 3950 | 0.055 | - |
| 2.4375 | 4000 | 0.0571 | - |
| 2.4680 | 4050 | 0.1002 | - |
| 2.4985 | 4100 | 0.0492 | - |
| 2.5289 | 4150 | 0.1322 | - |
| 2.5594 | 4200 | 0.0961 | - |
| 2.5899 | 4250 | 0.0788 | - |
| 2.6204 | 4300 | 0.0243 | - |
| 2.6508 | 4350 | 0.0406 | - |
| 2.6813 | 4400 | 0.0786 | - |
| 2.7118 | 4450 | 0.0852 | - |
| 2.7422 | 4500 | 0.0789 | - |
| 2.7727 | 4550 | 0.0787 | - |
| 2.8032 | 4600 | 0.1152 | - |
| 2.8336 | 4650 | 0.0992 | - |
| 2.8641 | 4700 | 0.0599 | - |
| 2.8946 | 4750 | 0.0496 | - |
| 2.9250 | 4800 | 0.0444 | - |
| 2.9555 | 4850 | 0.0898 | - |
| 2.9860 | 4900 | 0.0422 | - |
| 3.0165 | 4950 | 0.0328 | - |
| 3.0469 | 5000 | 0.0584 | - |
| 3.0774 | 5050 | 0.052 | - |
| 3.1079 | 5100 | 0.0485 | - |
| 3.1383 | 5150 | 0.0542 | - |
| 3.1688 | 5200 | 0.0854 | - |
| 3.1993 | 5250 | 0.048 | - |
| 3.2297 | 5300 | 0.0417 | - |
| 3.2602 | 5350 | 0.0497 | - |
| 3.2907 | 5400 | 0.0809 | - |
| 3.3211 | 5450 | 0.074 | - |
| 3.3516 | 5500 | 0.0761 | - |
| 3.3821 | 5550 | 0.0768 | - |
| 3.4126 | 5600 | 0.0954 | - |
| 3.4430 | 5650 | 0.0955 | - |
| 3.4735 | 5700 | 0.0906 | - |
| 3.5040 | 5750 | 0.0916 | - |
| 3.5344 | 5800 | 0.0915 | - |
| 3.5649 | 5850 | 0.107 | - |
| 3.5954 | 5900 | 0.0327 | - |
| 3.6258 | 5950 | 0.0534 | - |
| 3.6563 | 6000 | 0.059 | - |
| 3.6868 | 6050 | 0.0806 | - |
| 3.7172 | 6100 | 0.0941 | - |
| 3.7477 | 6150 | 0.1368 | - |
| 3.7782 | 6200 | 0.0848 | - |
| 3.8087 | 6250 | 0.0625 | - |
| 3.8391 | 6300 | 0.103 | - |
| 3.8696 | 6350 | 0.0307 | - |
| 3.9001 | 6400 | 0.0716 | - |
| 3.9305 | 6450 | 0.0518 | - |
| 3.9610 | 6500 | 0.0645 | - |
| 3.9915 | 6550 | 0.0417 | - |
| 4.0219 | 6600 | 0.0588 | - |
| 4.0524 | 6650 | 0.047 | - |
| 4.0829 | 6700 | 0.0951 | - |
| 4.1133 | 6750 | 0.0689 | - |
| 4.1438 | 6800 | 0.0731 | - |
| 4.1743 | 6850 | 0.0785 | - |
| 4.2048 | 6900 | 0.0411 | - |
| 4.2352 | 6950 | 0.0568 | - |
| 4.2657 | 7000 | 0.0688 | - |
| 4.2962 | 7050 | 0.066 | - |
| 4.3266 | 7100 | 0.0313 | - |
| 4.3571 | 7150 | 0.1127 | - |
| 4.3876 | 7200 | 0.1347 | - |
| 4.4180 | 7250 | 0.0685 | - |
| 4.4485 | 7300 | 0.0693 | - |
| 4.4790 | 7350 | 0.053 | - |
| 4.5094 | 7400 | 0.1353 | - |
| 4.5399 | 7450 | 0.1057 | - |
| 4.5704 | 7500 | 0.0467 | - |
| 4.6009 | 7550 | 0.1059 | - |
| 4.6313 | 7600 | 0.0791 | - |
| 4.6618 | 7650 | 0.0928 | - |
| 4.6923 | 7700 | 0.0989 | - |
| 4.7227 | 7750 | 0.0619 | - |
| 4.7532 | 7800 | 0.0572 | - |
| 4.7837 | 7850 | 0.06 | - |
| 4.8141 | 7900 | 0.0711 | - |
| 4.8446 | 7950 | 0.0595 | - |
| 4.8751 | 8000 | 0.0675 | - |
| 4.9055 | 8050 | 0.0487 | - |
| 4.9360 | 8100 | 0.0569 | - |
| 4.9665 | 8150 | 0.0637 | - |
| 4.9970 | 8200 | 0.0634 | - |
| 5.0274 | 8250 | 0.093 | - |
| 5.0579 | 8300 | 0.1107 | - |
| 5.0884 | 8350 | 0.0883 | - |
| 5.1188 | 8400 | 0.051 | - |
| 5.1493 | 8450 | 0.1034 | - |
| 5.1798 | 8500 | 0.0832 | - |
| 5.2102 | 8550 | 0.0463 | - |
| 5.2407 | 8600 | 0.0596 | - |
| 5.2712 | 8650 | 0.078 | - |
| 5.3016 | 8700 | 0.0686 | - |
| 5.3321 | 8750 | 0.1053 | - |
| 5.3626 | 8800 | 0.0684 | - |
| 5.3931 | 8850 | 0.0684 | - |
| 5.4235 | 8900 | 0.092 | - |
| 5.4540 | 8950 | 0.088 | - |
| 5.4845 | 9000 | 0.0503 | - |
| 5.5149 | 9050 | 0.0752 | - |
| 5.5454 | 9100 | 0.0975 | - |
| 5.5759 | 9150 | 0.1306 | - |
| 5.6063 | 9200 | 0.1038 | - |
| 5.6368 | 9250 | 0.0573 | - |
| 5.6673 | 9300 | 0.0584 | - |
| 5.6977 | 9350 | 0.0309 | - |
| 5.7282 | 9400 | 0.1232 | - |
| 5.7587 | 9450 | 0.0991 | - |
| 5.7892 | 9500 | 0.1111 | - |
| 5.8196 | 9550 | 0.0845 | - |
| 5.8501 | 9600 | 0.0587 | - |
| 5.8806 | 9650 | 0.0589 | - |
| 5.9110 | 9700 | 0.0751 | - |
| 5.9415 | 9750 | 0.0929 | - |
| 5.9720 | 9800 | 0.0613 | - |
| 6.0024 | 9850 | 0.0578 | - |
| 6.0329 | 9900 | 0.0499 | - |
| 6.0634 | 9950 | 0.0435 | - |
| 6.0938 | 10000 | 0.0547 | - |
| 6.1243 | 10050 | 0.0549 | - |
| 6.1548 | 10100 | 0.0872 | - |
| 6.1853 | 10150 | 0.0509 | - |
| 6.2157 | 10200 | 0.0913 | - |
| 6.2462 | 10250 | 0.0581 | - |
| 6.2767 | 10300 | 0.0942 | - |
| 6.3071 | 10350 | 0.0273 | - |
| 6.3376 | 10400 | 0.0426 | - |
| 6.3681 | 10450 | 0.0825 | - |
| 6.3985 | 10500 | 0.0713 | - |
| 6.4290 | 10550 | 0.0698 | - |
| 6.4595 | 10600 | 0.0679 | - |
| 6.4899 | 10650 | 0.0631 | - |
| 6.5204 | 10700 | 0.0489 | - |
| 6.5509 | 10750 | 0.0599 | - |
| 6.5814 | 10800 | 0.033 | - |
| 6.6118 | 10850 | 0.0401 | - |
| 6.6423 | 10900 | 0.0782 | - |
| 6.6728 | 10950 | 0.0512 | - |
| 6.7032 | 11000 | 0.0939 | - |
| 6.7337 | 11050 | 0.0523 | - |
| 6.7642 | 11100 | 0.0784 | - |
| 6.7946 | 11150 | 0.0898 | - |
| 6.8251 | 11200 | 0.042 | - |
| 6.8556 | 11250 | 0.0616 | - |
| 6.8860 | 11300 | 0.0667 | - |
| 6.9165 | 11350 | 0.0807 | - |
| 6.9470 | 11400 | 0.1054 | - |
| 6.9775 | 11450 | 0.0961 | - |
| 7.0079 | 11500 | 0.0896 | - |
| 7.0384 | 11550 | 0.0463 | - |
| 7.0689 | 11600 | 0.065 | - |
| 7.0993 | 11650 | 0.0318 | - |
| 7.1298 | 11700 | 0.0692 | - |
| 7.1603 | 11750 | 0.1055 | - |
| 7.1907 | 11800 | 0.0619 | - |
| 7.2212 | 11850 | 0.1234 | - |
| 7.2517 | 11900 | 0.0698 | - |
| 7.2821 | 11950 | 0.0526 | - |
| 7.3126 | 12000 | 0.0695 | - |
| 7.3431 | 12050 | 0.051 | - |
| 7.3736 | 12100 | 0.0759 | - |
| 7.4040 | 12150 | 0.062 | - |
| 7.4345 | 12200 | 0.0509 | - |
| 7.4650 | 12250 | 0.0874 | - |
| 7.4954 | 12300 | 0.0534 | - |
| 7.5259 | 12350 | 0.1089 | - |
| 7.5564 | 12400 | 0.0516 | - |
| 7.5868 | 12450 | 0.0755 | - |
| 7.6173 | 12500 | 0.0295 | - |
| 7.6478 | 12550 | 0.1767 | - |
| 7.6782 | 12600 | 0.0744 | - |
| 7.7087 | 12650 | 0.0875 | - |
| 7.7392 | 12700 | 0.075 | - |
| 7.7697 | 12750 | 0.0583 | - |
| 7.8001 | 12800 | 0.0353 | - |
| 7.8306 | 12850 | 0.0638 | - |
| 7.8611 | 12900 | 0.045 | - |
| 7.8915 | 12950 | 0.0647 | - |
| 7.9220 | 13000 | 0.0593 | - |
| 7.9525 | 13050 | 0.0515 | - |
| 7.9829 | 13100 | 0.0705 | - |
| 8.0134 | 13150 | 0.0521 | - |
| 8.0439 | 13200 | 0.059 | - |
| 8.0743 | 13250 | 0.0758 | - |
| 8.1048 | 13300 | 0.0922 | - |
| 8.1353 | 13350 | 0.0859 | - |
| 8.1658 | 13400 | 0.0526 | - |
| 8.1962 | 13450 | 0.0892 | - |
| 8.2267 | 13500 | 0.0665 | - |
| 8.2572 | 13550 | 0.0711 | - |
| 8.2876 | 13600 | 0.0535 | - |
| 8.3181 | 13650 | 0.055 | - |
| 8.3486 | 13700 | 0.0516 | - |
| 8.3790 | 13750 | 0.0683 | - |
| 8.4095 | 13800 | 0.0959 | - |
| 8.4400 | 13850 | 0.0901 | - |
| 8.4704 | 13900 | 0.041 | - |
| 8.5009 | 13950 | 0.0464 | - |
| 8.5314 | 14000 | 0.0726 | - |
| 8.5619 | 14050 | 0.0959 | - |
| 8.5923 | 14100 | 0.0739 | - |
| 8.6228 | 14150 | 0.1083 | - |
| 8.6533 | 14200 | 0.0374 | - |
| 8.6837 | 14250 | 0.0767 | - |
| 8.7142 | 14300 | 0.0626 | - |
| 8.7447 | 14350 | 0.0847 | - |
| 8.7751 | 14400 | 0.1211 | - |
| 8.8056 | 14450 | 0.0457 | - |
| 8.8361 | 14500 | 0.0705 | - |
| 8.8665 | 14550 | 0.06 | - |
| 8.8970 | 14600 | 0.052 | - |
| 8.9275 | 14650 | 0.0677 | - |
| 8.9580 | 14700 | 0.0747 | - |
| 8.9884 | 14750 | 0.0877 | - |
| 9.0189 | 14800 | 0.0791 | - |
| 9.0494 | 14850 | 0.0573 | - |
| 9.0798 | 14900 | 0.0786 | - |
| 9.1103 | 14950 | 0.0376 | - |
| 9.1408 | 15000 | 0.0964 | - |
| 9.1712 | 15050 | 0.0542 | - |
| 9.2017 | 15100 | 0.0568 | - |
| 9.2322 | 15150 | 0.0583 | - |
| 9.2626 | 15200 | 0.0861 | - |
| 9.2931 | 15250 | 0.0994 | - |
| 9.3236 | 15300 | 0.0614 | - |
| 9.3541 | 15350 | 0.0689 | - |
| 9.3845 | 15400 | 0.0803 | - |
| 9.4150 | 15450 | 0.0599 | - |
| 9.4455 | 15500 | 0.0952 | - |
| 9.4759 | 15550 | 0.0597 | - |
| 9.5064 | 15600 | 0.0762 | - |
| 9.5369 | 15650 | 0.0718 | - |
| 9.5673 | 15700 | 0.0794 | - |
| 9.5978 | 15750 | 0.0721 | - |
| 9.6283 | 15800 | 0.0966 | - |
| 9.6587 | 15850 | 0.0604 | - |
| 9.6892 | 15900 | 0.0764 | - |
| 9.7197 | 15950 | 0.0707 | - |
| 9.7502 | 16000 | 0.0724 | - |
| 9.7806 | 16050 | 0.1072 | - |
| 9.8111 | 16100 | 0.0728 | - |
| 9.8416 | 16150 | 0.0516 | - |
| 9.8720 | 16200 | 0.0519 | - |
| 9.9025 | 16250 | 0.1077 | - |
| 9.9330 | 16300 | 0.0539 | - |
| 9.9634 | 16350 | 0.095 | - |
| 9.9939 | 16400 | 0.0957 | - |
| 0.0005 | 1 | 0.0632 | - |
| 0.0244 | 50 | 0.058 | - |
| 0.0488 | 100 | 0.0531 | - |
| 0.0731 | 150 | 0.0769 | - |
| 0.0975 | 200 | 0.0445 | - |
| 0.1219 | 250 | 0.0852 | - |
| 0.1463 | 300 | 0.058 | - |
| 0.1706 | 350 | 0.0611 | - |
| 0.1950 | 400 | 0.0772 | - |
| 0.2194 | 450 | 0.0806 | - |
| 0.2438 | 500 | 0.0686 | - |
| 0.2682 | 550 | 0.0591 | - |
| 0.2925 | 600 | 0.0838 | - |
| 0.3169 | 650 | 0.0862 | - |
| 0.3413 | 700 | 0.0641 | - |
| 0.3657 | 750 | 0.0628 | - |
| 0.3901 | 800 | 0.0725 | - |
| 0.4144 | 850 | 0.0756 | - |
| 0.4388 | 900 | 0.0686 | - |
| 0.4632 | 950 | 0.0789 | - |
| 0.4876 | 1000 | 0.1058 | - |
| 0.5119 | 1050 | 0.0682 | - |
| 0.5363 | 1100 | 0.0657 | - |
| 0.5607 | 1150 | 0.0531 | - |
| 0.5851 | 1200 | 0.0456 | - |
| 0.6095 | 1250 | 0.06 | - |
| 0.6338 | 1300 | 0.0567 | - |
| 0.6582 | 1350 | 0.0599 | - |
| 0.6826 | 1400 | 0.0743 | - |
| 0.7070 | 1450 | 0.0512 | - |
| 0.7314 | 1500 | 0.0805 | - |
| 0.7557 | 1550 | 0.1057 | - |
| 0.7801 | 1600 | 0.0714 | - |
| 0.8045 | 1650 | 0.0415 | - |
| 0.8289 | 1700 | 0.0531 | - |
| 0.8532 | 1750 | 0.0786 | - |
| 0.8776 | 1800 | 0.0867 | - |
| 0.9020 | 1850 | 0.0538 | - |
| 0.9264 | 1900 | 0.0734 | - |
| 0.9508 | 1950 | 0.0854 | - |
| 0.9751 | 2000 | 0.0584 | - |
| 0.9995 | 2050 | 0.0459 | - |
| 1.0239 | 2100 | 0.071 | - |
| 1.0483 | 2150 | 0.0716 | - |
| 1.0726 | 2200 | 0.0768 | - |
| 1.0970 | 2250 | 0.0778 | - |
| 1.1214 | 2300 | 0.1028 | - |
| 1.1458 | 2350 | 0.0598 | - |
| 1.1702 | 2400 | 0.0462 | - |
| 1.1945 | 2450 | 0.0494 | - |
| 1.2189 | 2500 | 0.0554 | - |
| 1.2433 | 2550 | 0.0645 | - |
| 1.2677 | 2600 | 0.0533 | - |
| 1.2921 | 2650 | 0.0404 | - |
| 1.3164 | 2700 | 0.0837 | - |
| 1.3408 | 2750 | 0.0832 | - |
| 1.3652 | 2800 | 0.0946 | - |
| 1.3896 | 2850 | 0.0807 | - |
| 1.4139 | 2900 | 0.0695 | - |
| 1.4383 | 2950 | 0.0436 | - |
| 1.4627 | 3000 | 0.0605 | - |
| 1.4871 | 3050 | 0.0918 | - |
| 1.5115 | 3100 | 0.0755 | - |
| 1.5358 | 3150 | 0.0745 | - |
| 1.5602 | 3200 | 0.0429 | - |
| 1.5846 | 3250 | 0.0651 | - |
| 1.6090 | 3300 | 0.0567 | - |
| 1.6333 | 3350 | 0.0679 | - |
| 1.6577 | 3400 | 0.0904 | - |
| 1.6821 | 3450 | 0.0671 | - |
| 1.7065 | 3500 | 0.0626 | - |
| 1.7309 | 3550 | 0.0439 | - |
| 1.7552 | 3600 | 0.1035 | - |
| 1.7796 | 3650 | 0.0818 | - |
| 1.8040 | 3700 | 0.1284 | - |
| 1.8284 | 3750 | 0.058 | - |
| 1.8528 | 3800 | 0.0608 | - |
| 1.8771 | 3850 | 0.0858 | - |
| 1.9015 | 3900 | 0.0611 | - |
| 1.9259 | 3950 | 0.0701 | - |
| 1.9503 | 4000 | 0.0882 | - |
| 1.9746 | 4050 | 0.0568 | - |
| 1.9990 | 4100 | 0.0591 | - |
| 2.0234 | 4150 | 0.0765 | - |
| 2.0478 | 4200 | 0.0697 | - |
| 2.0722 | 4250 | 0.0714 | - |
| 2.0965 | 4300 | 0.0438 | - |
| 2.1209 | 4350 | 0.0661 | - |
| 2.1453 | 4400 | 0.0626 | - |
| 2.1697 | 4450 | 0.0666 | - |
| 2.1941 | 4500 | 0.0583 | - |
| 2.2184 | 4550 | 0.088 | - |
| 2.2428 | 4600 | 0.0768 | - |
| 2.2672 | 4650 | 0.0528 | - |
| 2.2916 | 4700 | 0.0869 | - |
| 2.3159 | 4750 | 0.1001 | - |
| 2.3403 | 4800 | 0.0731 | - |
| 2.3647 | 4850 | 0.0858 | - |
| 2.3891 | 4900 | 0.0611 | - |
| 2.4135 | 4950 | 0.058 | - |
| 2.4378 | 5000 | 0.0725 | - |
| 2.4622 | 5050 | 0.0893 | - |
| 2.4866 | 5100 | 0.0649 | - |
| 2.5110 | 5150 | 0.0561 | - |
| 2.5353 | 5200 | 0.0569 | - |
| 2.5597 | 5250 | 0.0375 | - |
| 2.5841 | 5300 | 0.0925 | - |
| 2.6085 | 5350 | 0.0842 | - |
| 2.6329 | 5400 | 0.083 | - |
| 2.6572 | 5450 | 0.0713 | - |
| 2.6816 | 5500 | 0.1082 | - |
| 2.7060 | 5550 | 0.0718 | - |
| 2.7304 | 5600 | 0.0755 | - |
| 2.7548 | 5650 | 0.0863 | - |
| 2.7791 | 5700 | 0.081 | - |
| 2.8035 | 5750 | 0.0732 | - |
| 2.8279 | 5800 | 0.0769 | - |
| 2.8523 | 5850 | 0.0846 | - |
| 2.8766 | 5900 | 0.0794 | - |
| 2.9010 | 5950 | 0.0518 | - |
| 2.9254 | 6000 | 0.0495 | - |
| 2.9498 | 6050 | 0.0696 | - |
| 2.9742 | 6100 | 0.081 | - |
| 2.9985 | 6150 | 0.0505 | - |
| 3.0229 | 6200 | 0.0703 | - |
| 3.0473 | 6250 | 0.0738 | - |
| 3.0717 | 6300 | 0.07 | - |
| 3.0961 | 6350 | 0.0663 | - |
| 3.1204 | 6400 | 0.069 | - |
| 3.1448 | 6450 | 0.0665 | - |
| 3.1692 | 6500 | 0.0409 | - |
| 3.1936 | 6550 | 0.075 | - |
| 3.2179 | 6600 | 0.0519 | - |
| 3.2423 | 6650 | 0.0836 | - |
| 3.2667 | 6700 | 0.0631 | - |
| 3.2911 | 6750 | 0.0926 | - |
| 3.3155 | 6800 | 0.0443 | - |
| 3.3398 | 6850 | 0.0587 | - |
| 3.3642 | 6900 | 0.0654 | - |
| 3.3886 | 6950 | 0.0776 | - |
| 3.4130 | 7000 | 0.0563 | - |
| 3.4373 | 7050 | 0.0501 | - |
| 3.4617 | 7100 | 0.0549 | - |
| 3.4861 | 7150 | 0.0497 | - |
| 3.5105 | 7200 | 0.0782 | - |
| 3.5349 | 7250 | 0.0734 | - |
| 3.5592 | 7300 | 0.0704 | - |
| 3.5836 | 7350 | 0.062 | - |
| 3.6080 | 7400 | 0.0698 | - |
| 3.6324 | 7450 | 0.09 | - |
| 3.6568 | 7500 | 0.0585 | - |
| 3.6811 | 7550 | 0.0649 | - |
| 3.7055 | 7600 | 0.0685 | - |
| 3.7299 | 7650 | 0.0671 | - |
| 3.7543 | 7700 | 0.0576 | - |
| 3.7786 | 7750 | 0.0378 | - |
| 3.8030 | 7800 | 0.0679 | - |
| 3.8274 | 7850 | 0.0665 | - |
| 3.8518 | 7900 | 0.0701 | - |
| 3.8762 | 7950 | 0.0943 | - |
| 3.9005 | 8000 | 0.1062 | - |
| 3.9249 | 8050 | 0.0725 | - |
| 3.9493 | 8100 | 0.0595 | - |
| 3.9737 | 8150 | 0.0738 | - |
| 3.9980 | 8200 | 0.0793 | - |
| 4.0224 | 8250 | 0.0851 | - |
| 4.0468 | 8300 | 0.121 | - |
| 4.0712 | 8350 | 0.0919 | - |
| 4.0956 | 8400 | 0.0629 | - |
| 4.1199 | 8450 | 0.0518 | - |
| 4.1443 | 8500 | 0.0595 | - |
| 4.1687 | 8550 | 0.0684 | - |
| 4.1931 | 8600 | 0.0497 | - |
| 4.2175 | 8650 | 0.0375 | - |
| 4.2418 | 8700 | 0.0819 | - |
| 4.2662 | 8750 | 0.0781 | - |
| 4.2906 | 8800 | 0.0515 | - |
| 4.3150 | 8850 | 0.0756 | - |
| 4.3393 | 8900 | 0.0547 | - |
| 4.3637 | 8950 | 0.0875 | - |
| 4.3881 | 9000 | 0.0571 | - |
| 4.4125 | 9050 | 0.046 | - |
| 4.4369 | 9100 | 0.067 | - |
| 4.4612 | 9150 | 0.0646 | - |
| 4.4856 | 9200 | 0.0575 | - |
| 4.5100 | 9250 | 0.1137 | - |
| 4.5344 | 9300 | 0.0768 | - |
| 4.5588 | 9350 | 0.0542 | - |
| 4.5831 | 9400 | 0.0743 | - |
| 4.6075 | 9450 | 0.072 | - |
| 4.6319 | 9500 | 0.0606 | - |
| 4.6563 | 9550 | 0.0777 | - |
| 4.6806 | 9600 | 0.0435 | - |
| 4.7050 | 9650 | 0.065 | - |
| 4.7294 | 9700 | 0.0601 | - |
| 4.7538 | 9750 | 0.0579 | - |
| 4.7782 | 9800 | 0.0661 | - |
| 4.8025 | 9850 | 0.0569 | - |
| 4.8269 | 9900 | 0.0995 | - |
| 4.8513 | 9950 | 0.056 | - |
| 4.8757 | 10000 | 0.0705 | - |
| 4.9000 | 10050 | 0.066 | - |
| 4.9244 | 10100 | 0.0489 | - |
| 4.9488 | 10150 | 0.0709 | - |
| 4.9732 | 10200 | 0.0545 | - |
| 4.9976 | 10250 | 0.0886 | - |
| 5.0219 | 10300 | 0.0835 | - |
| 5.0463 | 10350 | 0.0635 | - |
| 5.0707 | 10400 | 0.066 | - |
| 5.0951 | 10450 | 0.0678 | - |
| 5.1195 | 10500 | 0.1006 | - |
| 5.1438 | 10550 | 0.0526 | - |
| 5.1682 | 10600 | 0.0691 | - |
| 5.1926 | 10650 | 0.0833 | - |
| 5.2170 | 10700 | 0.0512 | - |
| 5.2413 | 10750 | 0.0469 | - |
| 5.2657 | 10800 | 0.0837 | - |
| 5.2901 | 10850 | 0.0646 | - |
| 5.3145 | 10900 | 0.0843 | - |
| 5.3389 | 10950 | 0.0627 | - |
| 5.3632 | 11000 | 0.0503 | - |
| 5.3876 | 11050 | 0.0499 | - |
| 5.4120 | 11100 | 0.0823 | - |
| 5.4364 | 11150 | 0.0759 | - |
| 5.4608 | 11200 | 0.0436 | - |
| 5.4851 | 11250 | 0.0864 | - |
| 5.5095 | 11300 | 0.0792 | - |
| 5.5339 | 11350 | 0.0876 | - |
| 5.5583 | 11400 | 0.0535 | - |
| 5.5826 | 11450 | 0.0543 | - |
| 5.6070 | 11500 | 0.0549 | - |
| 5.6314 | 11550 | 0.0564 | - |
| 5.6558 | 11600 | 0.0454 | - |
| 5.6802 | 11650 | 0.061 | - |
| 5.7045 | 11700 | 0.0573 | - |
| 5.7289 | 11750 | 0.0655 | - |
| 5.7533 | 11800 | 0.0821 | - |
| 5.7777 | 11850 | 0.0608 | - |
| 5.8020 | 11900 | 0.0765 | - |
| 5.8264 | 11950 | 0.0807 | - |
| 5.8508 | 12000 | 0.0499 | - |
| 5.8752 | 12050 | 0.0862 | - |
| 5.8996 | 12100 | 0.0928 | - |
| 5.9239 | 12150 | 0.08 | - |
| 5.9483 | 12200 | 0.0553 | - |
| 5.9727 | 12250 | 0.0736 | - |
| 5.9971 | 12300 | 0.0576 | - |
| 6.0215 | 12350 | 0.0945 | - |
| 6.0458 | 12400 | 0.0669 | - |
| 6.0702 | 12450 | 0.0492 | - |
| 6.0946 | 12500 | 0.0795 | - |
| 6.1190 | 12550 | 0.0935 | - |
| 6.1433 | 12600 | 0.0554 | - |
| 6.1677 | 12650 | 0.0643 | - |
| 6.1921 | 12700 | 0.0715 | - |
| 6.2165 | 12750 | 0.0803 | - |
| 6.2409 | 12800 | 0.0745 | - |
| 6.2652 | 12850 | 0.0626 | - |
| 6.2896 | 12900 | 0.0539 | - |
| 6.3140 | 12950 | 0.0719 | - |
| 6.3384 | 13000 | 0.0465 | - |
| 6.3627 | 13050 | 0.0735 | - |
| 6.3871 | 13100 | 0.0637 | - |
| 6.4115 | 13150 | 0.0437 | - |
| 6.4359 | 13200 | 0.0744 | - |
| 6.4603 | 13250 | 0.072 | - |
| 6.4846 | 13300 | 0.0726 | - |
| 6.5090 | 13350 | 0.0721 | - |
| 6.5334 | 13400 | 0.0521 | - |
| 6.5578 | 13450 | 0.0575 | - |
| 6.5822 | 13500 | 0.0466 | - |
| 6.6065 | 13550 | 0.0572 | - |
| 6.6309 | 13600 | 0.0909 | - |
| 6.6553 | 13650 | 0.0524 | - |
| 6.6797 | 13700 | 0.0678 | - |
| 6.7040 | 13750 | 0.0548 | - |
| 6.7284 | 13800 | 0.0587 | - |
| 6.7528 | 13850 | 0.0575 | - |
| 6.7772 | 13900 | 0.0677 | - |
| 6.8016 | 13950 | 0.0452 | - |
| 6.8259 | 14000 | 0.0598 | - |
| 6.8503 | 14050 | 0.0642 | - |
| 6.8747 | 14100 | 0.0679 | - |
| 6.8991 | 14150 | 0.0371 | - |
| 6.9235 | 14200 | 0.0482 | - |
| 6.9478 | 14250 | 0.0497 | - |
| 6.9722 | 14300 | 0.0512 | - |
| 6.9966 | 14350 | 0.1054 | - |
| 7.0210 | 14400 | 0.0712 | - |
| 7.0453 | 14450 | 0.0646 | - |
| 7.0697 | 14500 | 0.1106 | - |
| 7.0941 | 14550 | 0.0642 | - |
| 7.1185 | 14600 | 0.0786 | - |
| 7.1429 | 14650 | 0.0581 | - |
| 7.1672 | 14700 | 0.0656 | - |
| 7.1916 | 14750 | 0.0756 | - |
| 7.2160 | 14800 | 0.0476 | - |
| 7.2404 | 14850 | 0.0817 | - |
| 7.2647 | 14900 | 0.0929 | - |
| 7.2891 | 14950 | 0.0547 | - |
| 7.3135 | 15000 | 0.0733 | - |
| 7.3379 | 15050 | 0.0762 | - |
| 7.3623 | 15100 | 0.0628 | - |
| 7.3866 | 15150 | 0.0601 | - |
| 7.4110 | 15200 | 0.0484 | - |
| 7.4354 | 15250 | 0.0551 | - |
| 7.4598 | 15300 | 0.0505 | - |
| 7.4842 | 15350 | 0.0437 | - |
| 7.5085 | 15400 | 0.0636 | - |
| 7.5329 | 15450 | 0.0624 | - |
| 7.5573 | 15500 | 0.0716 | - |
| 7.5817 | 15550 | 0.0508 | - |
| 7.6060 | 15600 | 0.0704 | - |
| 7.6304 | 15650 | 0.0604 | - |
| 7.6548 | 15700 | 0.0641 | - |
| 7.6792 | 15750 | 0.0653 | - |
| 7.7036 | 15800 | 0.0598 | - |
| 7.7279 | 15850 | 0.0829 | - |
| 7.7523 | 15900 | 0.0593 | - |
| 7.7767 | 15950 | 0.0631 | - |
| 7.8011 | 16000 | 0.0819 | - |
| 7.8255 | 16050 | 0.0776 | - |
| 7.8498 | 16100 | 0.0603 | - |
| 7.8742 | 16150 | 0.0499 | - |
| 7.8986 | 16200 | 0.0637 | - |
| 7.9230 | 16250 | 0.0639 | - |
| 7.9473 | 16300 | 0.0559 | - |
| 7.9717 | 16350 | 0.0621 | - |
| 7.9961 | 16400 | 0.0639 | - |
| 8.0205 | 16450 | 0.1066 | - |
| 8.0449 | 16500 | 0.0686 | - |
| 8.0692 | 16550 | 0.063 | - |
| 8.0936 | 16600 | 0.0789 | - |
| 8.1180 | 16650 | 0.0458 | - |
| 8.1424 | 16700 | 0.0622 | - |
| 8.1667 | 16750 | 0.0748 | - |
| 8.1911 | 16800 | 0.0355 | - |
| 8.2155 | 16850 | 0.0648 | - |
| 8.2399 | 16900 | 0.0618 | - |
| 8.2643 | 16950 | 0.0908 | - |
| 8.2886 | 17000 | 0.0544 | - |
| 8.3130 | 17050 | 0.0888 | - |
| 8.3374 | 17100 | 0.0531 | - |
| 8.3618 | 17150 | 0.0905 | - |
| 8.3862 | 17200 | 0.0811 | - |
| 8.4105 | 17250 | 0.0643 | - |
| 8.4349 | 17300 | 0.0775 | - |
| 8.4593 | 17350 | 0.0518 | - |
| 8.4837 | 17400 | 0.0683 | - |
| 8.5080 | 17450 | 0.0946 | - |
| 8.5324 | 17500 | 0.0642 | - |
| 8.5568 | 17550 | 0.0654 | - |
| 8.5812 | 17600 | 0.0682 | - |
| 8.6056 | 17650 | 0.0467 | - |
| 8.6299 | 17700 | 0.0811 | - |
| 8.6543 | 17750 | 0.077 | - |
| 8.6787 | 17800 | 0.0376 | - |
| 8.7031 | 17850 | 0.1028 | - |
| 8.7275 | 17900 | 0.0833 | - |
| 8.7518 | 17950 | 0.0591 | - |
| 8.7762 | 18000 | 0.0613 | - |
| 8.8006 | 18050 | 0.0633 | - |
| 8.8250 | 18100 | 0.0774 | - |
| 8.8493 | 18150 | 0.0609 | - |
| 8.8737 | 18200 | 0.0732 | - |
| 8.8981 | 18250 | 0.085 | - |
| 8.9225 | 18300 | 0.0762 | - |
| 8.9469 | 18350 | 0.0518 | - |
| 8.9712 | 18400 | 0.0806 | - |
| 8.9956 | 18450 | 0.0467 | - |
| 9.0200 | 18500 | 0.0467 | - |
| 9.0444 | 18550 | 0.0836 | - |
| 9.0687 | 18600 | 0.0452 | - |
| 9.0931 | 18650 | 0.0503 | - |
| 9.1175 | 18700 | 0.0624 | - |
| 9.1419 | 18750 | 0.0605 | - |
| 9.1663 | 18800 | 0.0829 | - |
| 9.1906 | 18850 | 0.0497 | - |
| 9.2150 | 18900 | 0.0575 | - |
| 9.2394 | 18950 | 0.0645 | - |
| 9.2638 | 19000 | 0.0956 | - |
| 9.2882 | 19050 | 0.045 | - |
| 9.3125 | 19100 | 0.0768 | - |
| 9.3369 | 19150 | 0.0793 | - |
| 9.3613 | 19200 | 0.0839 | - |
| 9.3857 | 19250 | 0.0518 | - |
| 9.4100 | 19300 | 0.0445 | - |
| 9.4344 | 19350 | 0.055 | - |
| 9.4588 | 19400 | 0.0649 | - |
| 9.4832 | 19450 | 0.0673 | - |
| 9.5076 | 19500 | 0.0492 | - |
| 9.5319 | 19550 | 0.0733 | - |
| 9.5563 | 19600 | 0.0879 | - |
| 9.5807 | 19650 | 0.0672 | - |
| 9.6051 | 19700 | 0.0612 | - |
| 9.6294 | 19750 | 0.0661 | - |
| 9.6538 | 19800 | 0.066 | - |
| 9.6782 | 19850 | 0.0661 | - |
| 9.7026 | 19900 | 0.0738 | - |
| 9.7270 | 19950 | 0.0728 | - |
| 9.7513 | 20000 | 0.0595 | - |
| 9.7757 | 20050 | 0.0601 | - |
| 9.8001 | 20100 | 0.0441 | - |
| 9.8245 | 20150 | 0.0768 | - |
| 9.8489 | 20200 | 0.0636 | - |
| 9.8732 | 20250 | 0.0796 | - |
| 9.8976 | 20300 | 0.0584 | - |
| 9.9220 | 20350 | 0.0801 | - |
| 9.9464 | 20400 | 0.0569 | - |
| 9.9707 | 20450 | 0.0552 | - |
| 9.9951 | 20500 | 0.0684 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.16.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
CineAI/Bald-or-Not-classification-Model
|
CineAI
| 2024-01-19T16:16:14Z
| 0
| 0
|
keras
|
[
"keras",
"art",
"image-classification",
"en",
"uk",
"dataset:CineAI/Bald-ds",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2023-08-29T22:06:08Z
|
---
license: apache-2.0
datasets:
- CineAI/Bald-ds
language:
- en
- uk
metrics:
- accuracy
library_name: keras
pipeline_tag: image-classification
tags:
- art
---
|