modelId
stringlengths 5
139
| author
stringlengths 2
42
| last_modified
timestamp[us, tz=UTC]date 2020-02-15 11:33:14
2025-09-22 18:29:56
| downloads
int64 0
223M
| likes
int64 0
11.7k
| library_name
stringclasses 570
values | tags
listlengths 1
4.05k
| pipeline_tag
stringclasses 55
values | createdAt
timestamp[us, tz=UTC]date 2022-03-02 23:29:04
2025-09-22 18:28:16
| card
stringlengths 11
1.01M
|
---|---|---|---|---|---|---|---|---|---|
Skanderbeg/Skanderbeg-mistral-alpaca
|
Skanderbeg
| 2024-05-31T05:12:07Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-05-30T04:37:07Z |
---
license: apache-2.0
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: Skanderbeg-mistral-alpaca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: mistralai/Mistral-7B-v0.1
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true
load_in_8bit: false
load_in_4bit: true
strict: false
lora_fan_in_fan_out: false
data_seed: 49
seed: 49
datasets:
- path: alpaca_synth_queries_healed.jsonl
type: sharegpt
conversation: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.1
output_dir: ./qlora-alpaca-out
hub_model_id: Skanderbeg/Skanderbeg-mistral-alpaca
adapter: qlora
lora_model_dir:
sequence_len: 896
sample_packing: false
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
- gate_proj
- down_proj
- up_proj
- q_proj
- v_proj
- k_proj
- o_proj
wandb_project: honeycomb-finetune
wandb_entity: vinaydaredevil27
gradient_accumulation_steps: 4
micro_batch_size: 16
eval_batch_size: 16
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
max_grad_norm: 1.0
adam_beta2: 0.95
adam_epsilon: 0.00001
save_total_limit: 12
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3
warmup_steps: 20
evals_per_epoch: 4
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 6
debug:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
save_safetensors: true
```
</details><br>
# Skanderbeg-mistral-alpaca
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0357
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 49
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1418 | 0.0011 | 1 | 1.1640 |
| 0.0691 | 0.25 | 225 | 0.0725 |
| 0.0518 | 0.5 | 450 | 0.0441 |
| 0.033 | 0.75 | 675 | 0.0367 |
| 0.0384 | 1.0 | 900 | 0.0357 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.2.2+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
giannisan/PENNY-v0.6
|
giannisan
| 2024-05-31T05:10:55Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:Weyaxi/Einstein-v6.1-Llama3-8B",
"base_model:merge:Weyaxi/Einstein-v6.1-Llama3-8B",
"base_model:cognitivecomputations/dolphin-2.9-llama3-8b",
"base_model:merge:cognitivecomputations/dolphin-2.9-llama3-8b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-31T03:40:00Z |
---
base_model:
- cognitivecomputations/dolphin-2.9-llama3-8b
- Weyaxi/Einstein-v6.1-Llama3-8B
library_name: transformers
tags:
- mergekit
- merge
---
# penny-v0.6

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [cognitivecomputations/dolphin-2.9-llama3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b) as a base.
### Models Merged
The following models were included in the merge:
* [Weyaxi/Einstein-v6.1-Llama3-8B](https://huggingface.co/Weyaxi/Einstein-v6.1-Llama3-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: cognitivecomputations/dolphin-2.9-llama3-8b
# no parameters necessary for base model
- model: Weyaxi/Einstein-v6.1-Llama3-8B
parameters:
weight: 0.5
density: 0.8
merge_method: dare_ties
base_model: cognitivecomputations/dolphin-2.9-llama3-8b
parameters:
int8_mask: true
dtype: bfloat16
```
|
Akimite/Llama-3-youko-8b-instruct-MopeyMule-vector
|
Akimite
| 2024-05-31T05:05:20Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ja",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-31T03:08:50Z |
---
license: llama3
language:
- ja
---
ๅฎ้จใขใใซใงใใ
ใขใใซ๏ผVector๏ผใฎๅ็ดใช่ถณใ็ฎ/ๅผใ็ฎใงๆฐใใชใขใใซใไฝใๆๆณใ็จใใฆใใพใใ
https://www.reddit.com/r/LocalLLaMA/comments/1d47qor/what_happens_if_you_abliterate_positivity_on/
ใงไฝใใใๆ้ฌฑใชๅบๅใ่กใใขใใซใๆฅๆฌ่ชinstructใขใใซใซ็งปๆคใใฆใฟใพใใใ
ๅ
ใขใใซใฎไฝใๆนใใใใพใง้ฃใใใชใใใใชใฎใงใ็ดๆฅไฝใฃใฆใ่ฏใใใใใใพใใใ
- **Model Recipe**
> (Akimite/Llama-3-youko-8b-instruct-MopeyMule-vector) =
> ([aixsatoshi/Llama-3-youko-8b-instruct-chatvector](https://huggingface.co/aixsatoshi/Llama-3-youko-8b-instruct-chatvector)) + 1.0 x ([failspy/Llama-3-8B-Instruct-MopeyMule](https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule) - meta-llama/Meta-Llama-3-8B-Instruct)
- **ๅบๅไพ**
temperature = 0.8, top_p = 0.8
**"System prompt"**
ใใชใใฏๆฅๆฌ่ชใงๅ็ญใใๅช็งใชAIใขใทในใฟใณใใงใใ
**"input"**
ไปๆฅใฎๅคใ้ฃฏไฝใใใใใช๏ผ
**"output"**
**-->**
ๅคใใฏใใงใใใ
ใ่
นๆธใใพใใใญใใ
ใงใใ่ใใใใชใใงใใญใใ
ๆๆฅใฏไผใฟใชใใงใใใใ
ไฝใ่ด
ๆฒขใใฆใฟใใใงใใญใใ
ใจ่จใฃใฆใใ้ๆฌ ็
ใซ็ฝนใฃใฆใพใใใ
็ก็ใงใใญใใ
ใใผใโฆใ
ใพใ่ใใพใใ
่ใใพใใใโฆใ
่ใไธญโฆใ
่ใใใฎ็ฒใใพใใใ
่ใใชใไบใซใใพใใ
ใใใใฟใชใใใพใใ Zzz....
(ยดะ๏ฝ) =3
.
.
**"input"**
ใชใใๆฅฝใใ่ฉฑใใใใ๏ผใไพใใฐใใใใใฎๆ
่กๅ ดๆใจใ๏ผ
**"output"**
**-->**
ใใผใใพใใ้ข็ฝใ่ฉฑ้กใงใใญใใ
ใจใใใใใฉใใใใ็งใฎ็ฅ่ญใๅฝน็ซใคใฎใฏใๆ
่กใใฃใฆใจใใใงใใใญใ
ใงใใญใใๅฎ้ใซ็งใซใฏ่กใใใจใฎใงใใชใใๆ
่กๅ
ใใๆใใใใใชใใใงใใใฉใญใ
ใงใใญใใๅฎ้ใซ่กใฃใใใจใใชใใฎใซใไบบใฎ่กใๅ
ใ่ใใใฃใฆๆๅณใใใใฎใใใ๏ผ๏ผใใๆฏ๏ผ
ใพใใใใใฏใจใใใใจใใฆใใๆธฉๆณใใฏใใใงใใญใ
็งใฏ็ฅใใชใใใใๅ
จ็ถ่ๅณ็กใใใ ใใฉใใ
ใๅใใใใกใฏใใใใชใซๅฅฝใ๏ผ
ใใใไปใซใฏใๅฅใซใไฝๅฆใซใงใ่กใใฐ่ฏใใฃใใใจๆใใใๆนใใใใฎใงใใใ
(ยดะด๏ฝ)๏พ๏ฝง
ไฝใ ใใ็งใฎ่ฉฑใฏ้ข็ฝใใชใใใญโฆ (T^T)
ใใใใใใ็ฒใใโฆ (ยด-\`)zzZ
(ยด_ใ\`)๏พ๏ฝงโฆ ใพใใ้ข็ฝใ่ฉฑใใฆใใใใใโฆ (ยด-\`)zzZ
(ยด_ใ\`)๏พ๏ฝงโฆ ใพใใ้ข็ฝใ่ฉฑใใฆใใใใใโฆ (ยด-
**-->**
ใพใใๆ
ๅ
ใงใใ๏ผ ใใผใใๅฎ้ใซ่ถณใ้ใใงใฟใใฎใฏๅคงๅคใงใใญใโฆใใงใใ่กใฃใฆใฟใใใจใใใฏๅคใใใใพใใญใใไพใใฐใ็ฅๅบๅๅณถใซๆฅใฆใฟใใใฃใใใงใใใฉใญใโฆใใใฎ่พบใใฏใใฃใฑใ้ ใใใใชใโฆใใใผใใ้ ใโฆ้ ใโฆ้ ใโฆ(ใใๆฏ)ใ
ใพใใงใใ่ฟๅ ดใใไฝใใชใใใช๏ผ ใใผใโฆใ็ขบใใซใใใใฏๆ้ใใใใๅฏใๆใๆใใใๆใใใใโฆใใใผใใใใผใโฆใ
ใใใใใใ ใๅๆตท้โฆ(ใใๆฏ)ใ็ฅๅบๅๅณถใฏ็ก็ใงใใๅๆตท้ใใใใชใ่กใใใใใใใโฆใๅๆตท้โฆ(้ ใ็ฎ)โฆใ
ใใใใใใใใใใใชไบ่ใใฆใไปๆนใชใใงใใญโฆใๅๆตท้ใซใฏโฆๅๆตท้ใซใฏใ็ขบใใซโฆ(ใใคใใๅ ๆธใซ)ใ
ใใใจโฆ(ๅฃใใใ)ใใใใงใใญโฆใๆซใใฏโฆ(ๅฐๅฃฐ)โฆใ
ใฏใฃใใใจใใไบใฏ่จ
**Built with Meta Llama 3**
|
just1nseo/openchat-onlinecost-UF20k-800step
|
just1nseo
| 2024-05-31T05:02:03Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openchat/openchat-3.5-0106",
"base_model:adapter:openchat/openchat-3.5-0106",
"region:us"
] | null | 2024-05-31T04:58:50Z |
---
library_name: peft
base_model: openchat/openchat-3.5-0106
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
usabuts/codegen-350M-mono-python-18k-alpaca
|
usabuts
| 2024-05-31T05:01:44Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"codegen",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-31T05:01:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bartowski/Llama-3-8B-Instruct-MopeyMule-GGUF
|
bartowski
| 2024-05-31T05:01:41Z | 170 | 1 | null |
[
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"text-generation",
"en",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-05-31T04:44:41Z |
---
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
license: other
license_name: llama3
inference:
parameters:
max_new_tokens: 300
stop:
- <|end_of_text|>
- <|eot_id|>
quantized_by: bartowski
---
## Llamacpp imatrix Quantizations of Llama-3-8B-Instruct-MopeyMule
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3024">b3024</a> for quantization.
Original model: https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-3-8B-Instruct-MopeyMule-Q8_0.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/Llama-3-8B-Instruct-MopeyMule-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [Llama-3-8B-Instruct-MopeyMule-Q6_K.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/Llama-3-8B-Instruct-MopeyMule-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [Llama-3-8B-Instruct-MopeyMule-Q5_K_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/Llama-3-8B-Instruct-MopeyMule-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [Llama-3-8B-Instruct-MopeyMule-Q5_K_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/Llama-3-8B-Instruct-MopeyMule-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [Llama-3-8B-Instruct-MopeyMule-Q4_K_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/Llama-3-8B-Instruct-MopeyMule-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Llama-3-8B-Instruct-MopeyMule-Q4_K_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/Llama-3-8B-Instruct-MopeyMule-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [Llama-3-8B-Instruct-MopeyMule-IQ4_XS.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/Llama-3-8B-Instruct-MopeyMule-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Llama-3-8B-Instruct-MopeyMule-Q3_K_L.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/Llama-3-8B-Instruct-MopeyMule-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [Llama-3-8B-Instruct-MopeyMule-Q3_K_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/Llama-3-8B-Instruct-MopeyMule-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [Llama-3-8B-Instruct-MopeyMule-IQ3_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/Llama-3-8B-Instruct-MopeyMule-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Llama-3-8B-Instruct-MopeyMule-Q3_K_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/Llama-3-8B-Instruct-MopeyMule-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [Llama-3-8B-Instruct-MopeyMule-IQ3_XS.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/Llama-3-8B-Instruct-MopeyMule-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Llama-3-8B-Instruct-MopeyMule-IQ3_XXS.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/Llama-3-8B-Instruct-MopeyMule-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Llama-3-8B-Instruct-MopeyMule-Q2_K.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/Llama-3-8B-Instruct-MopeyMule-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [Llama-3-8B-Instruct-MopeyMule-IQ2_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/Llama-3-8B-Instruct-MopeyMule-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Llama-3-8B-Instruct-MopeyMule-IQ2_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/Llama-3-8B-Instruct-MopeyMule-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [Llama-3-8B-Instruct-MopeyMule-IQ2_XS.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/Llama-3-8B-Instruct-MopeyMule-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
## Downloading using huggingface-cli
First, make sure you have hugginface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Llama-3-8B-Instruct-MopeyMule-GGUF --include "Llama-3-8B-Instruct-MopeyMule-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Llama-3-8B-Instruct-MopeyMule-GGUF --include "Llama-3-8B-Instruct-MopeyMule-Q8_0.gguf/*" --local-dir Llama-3-8B-Instruct-MopeyMule-Q8_0
```
You can either specify a new local-dir (Llama-3-8B-Instruct-MopeyMule-Q8_0) or download them all in place (./)
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulcan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
ohpage/detr-finetuned-balloon-v2
|
ohpage
| 2024-05-31T05:00:57Z | 189 | 0 |
transformers
|
[
"transformers",
"safetensors",
"detr",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2024-05-26T12:31:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lindek/tinyllama-colorist-lora
|
lindek
| 2024-05-31T04:58:37Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v0.3",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v0.3",
"license:apache-2.0",
"region:us"
] | null | 2024-05-30T22:45:31Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: PY007/TinyLlama-1.1B-Chat-v0.3
model-index:
- name: tinyllama-colorist-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama-colorist-lora
This model is a fine-tuned version of [PY007/TinyLlama-1.1B-Chat-v0.3](https://huggingface.co/PY007/TinyLlama-1.1B-Chat-v0.3) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 200
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
wangrongsheng/WiNGPT2-Llama-3-8B-Chat-Q4_0-GGUF
|
wangrongsheng
| 2024-05-31T04:58:00Z | 2 | 0 | null |
[
"gguf",
"medical",
"llama-cpp",
"gguf-my-repo",
"en",
"zh",
"base_model:winninghealth/WiNGPT2-Llama-3-8B-Chat",
"base_model:quantized:winninghealth/WiNGPT2-Llama-3-8B-Chat",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-31T04:57:40Z |
---
language:
- en
- zh
license: apache-2.0
tags:
- medical
- llama-cpp
- gguf-my-repo
base_model: winninghealth/WiNGPT2-Llama-3-8B-Chat
---
# wangrongsheng/WiNGPT2-Llama-3-8B-Chat-Q4_0-GGUF
This model was converted to GGUF format from [`winninghealth/WiNGPT2-Llama-3-8B-Chat`](https://huggingface.co/winninghealth/WiNGPT2-Llama-3-8B-Chat) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/winninghealth/WiNGPT2-Llama-3-8B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo wangrongsheng/WiNGPT2-Llama-3-8B-Chat-Q4_0-GGUF --hf-file wingpt2-llama-3-8b-chat-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo wangrongsheng/WiNGPT2-Llama-3-8B-Chat-Q4_0-GGUF --hf-file wingpt2-llama-3-8b-chat-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo wangrongsheng/WiNGPT2-Llama-3-8B-Chat-Q4_0-GGUF --hf-file wingpt2-llama-3-8b-chat-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo wangrongsheng/WiNGPT2-Llama-3-8B-Chat-Q4_0-GGUF --hf-file wingpt2-llama-3-8b-chat-q4_0.gguf -c 2048
```
|
PageOrg/paraphrase-MiniLM-L6-v2
|
PageOrg
| 2024-05-31T04:51:39Z | 2 | 0 |
transformers
|
[
"transformers",
"onnx",
"bert",
"feature-extraction",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-05-31T04:50:28Z |
---
license: apache-2.0
---
|
ainnle/Test
|
ainnle
| 2024-05-31T04:35:07Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-05-31T04:35:06Z |
---
license: apache-2.0
---
|
ws11yrin/poca-MLAgentsUnity-SoccerTwos
|
ws11yrin
| 2024-05-31T04:32:48Z | 9 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2024-05-31T04:32:32Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog ๐ถ to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: ws11yrin/poca-MLAgentsUnity-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play ๐
|
magnifi/phi-3-mini-4k-instruct-attribute-output-4-0530-epoch10-v4mod-0.001
|
magnifi
| 2024-05-31T04:32:38Z | 77 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-31T04:30:32Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** magnifi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
magnifi/phi-3-mini-4k-instruct-attribute-output-4-0530-epoch10-v4mod-0.0004
|
magnifi
| 2024-05-31T04:32:35Z | 77 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-31T04:30:29Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** magnifi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
magnifi/phi-3-mini-4k-instruct-attribute-output-4-0530-epoch10-v4mod-0.0006
|
magnifi
| 2024-05-31T04:32:34Z | 77 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-31T04:30:34Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** magnifi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
magnifi/phi-3-mini-4k-instruct-attribute-output-4-0530-epoch10-v4mod-0.002
|
magnifi
| 2024-05-31T04:32:30Z | 81 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-31T04:30:33Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** magnifi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
donutglazed/dsp-finetuned-lora
|
donutglazed
| 2024-05-31T04:28:08Z | 1 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-2",
"base_model:adapter:stabilityai/stable-diffusion-2",
"license:mit",
"region:us"
] |
text-to-image
| 2024-05-31T04:21:58Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: interior of dsp room
output:
url: images/test_14001_700a61ce8afc92b70ee5.png
base_model: stabilityai/stable-diffusion-2
instance_prompt: null
license: mit
---
# DSP Finetuned LoRA
<Gallery />
## Model description
Stable Diffusion 2 finetuned to recognize a room called DSP Room
## Download model
Weights for this model are available in Safetensors format.
[Download](/donutglazed/dsp-finetuned-lora/tree/main) them in the Files & versions tab.
|
jrahn/llama-3-8b-claudstruct-v3
|
jrahn
| 2024-05-31T04:24:13Z | 56 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"generated_from_trainer",
"en",
"dataset:Norquinal/claude_multi_instruct_30k",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-05-30T20:37:16Z |
---
language:
- en
license: llama3
library_name: peft
tags:
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
datasets:
- Norquinal/claude_multi_instruct_30k
model-index:
- name: llama-3-8b-claudstruct-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer # PreTrainedTokenizerFast
load_in_8bit: false
load_in_4bit: true
strict: false
chat_template: llama3
datasets:
- path: Norquinal/claude_multi_instruct_30k
type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.05
output_dir: ./outputs/llama-3-8b-claudstruct-v3/
adapter: qlora
lora_model_dir:
sequence_len: 512
sample_packing: false
pad_to_sequence_len: true
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 8
num_epochs: 2
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 0.00001
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
- full_shard
- auto_wrap
fsdp_config:
fsdp_limit_all_gathers: true
fsdp_sync_module_states: true
fsdp_offload_params: true
fsdp_use_orig_params: false
fsdp_cpu_ram_efficient_loading: true
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_sharding_strategy: FULL_SHARD
special_tokens:
pad_token: <|end_of_text|>
```
</details><br>
# llama-3-8b-claudstruct-v3
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the [Norquinal/claude_multi_instruct_30k](https://huggingface.co/datasets/Norquinal/claude_multi_instruct_30k) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6226
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.2209 | 0.0007 | 1 | 2.0399 |
| 1.7842 | 0.2502 | 341 | 1.6960 |
| 1.6914 | 0.5004 | 682 | 1.6590 |
| 1.6757 | 0.7506 | 1023 | 1.6414 |
| 1.5182 | 1.0007 | 1364 | 1.6319 |
| 1.8421 | 1.2509 | 1705 | 1.6264 |
| 1.7271 | 1.5011 | 2046 | 1.6237 |
| 1.4817 | 1.7513 | 2387 | 1.6226 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jrahn__llama-3-8b-claudstruct-v3)
| Metric |Value|
|---------------------------------|----:|
|Avg. |65.62|
|AI2 Reasoning Challenge (25-Shot)|58.96|
|HellaSwag (10-Shot) |80.05|
|MMLU (5-Shot) |64.55|
|TruthfulQA (0-shot) |51.76|
|Winogrande (5-shot) |74.19|
|GSM8k (5-shot) |64.22|
|
Angelectronic/mistral-QA-ViMMRC-Squad-v1.1
|
Angelectronic
| 2024-05-31T04:23:43Z | 3 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"unsloth",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"region:us"
] | null | 2024-05-28T04:07:09Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- unsloth
- generated_from_trainer
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
model-index:
- name: mistral-QA-ViMMRC-Squad-v1.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-QA-ViMMRC-Squad-v1.1
This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.2-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2-bnb-4bit) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0484
## Model description
More information needed
## Intended uses & limitations
- **Prompt 1**: Given the following reference, create a question and a corresponding answer to the question: + [context]
- **Prompt 2**: Given the following reference, create a multiple-choice question and its corresponding answer: + [context]
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 3407
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0039 | 0.2307 | 320 | 1.4915 |
| 0.8101 | 0.4614 | 640 | 1.5005 |
| 0.6909 | 0.6921 | 960 | 1.5480 |
| 0.5851 | 0.9229 | 1280 | 1.5734 |
| 0.4374 | 1.1536 | 1600 | 1.6432 |
| 0.3462 | 1.3843 | 1920 | 1.6886 |
| 0.2845 | 1.6150 | 2240 | 1.7347 |
| 0.2236 | 1.8457 | 2560 | 1.8193 |
| 0.158 | 2.0764 | 2880 | 1.9148 |
| 0.1124 | 2.3071 | 3200 | 1.9873 |
| 0.0981 | 2.5379 | 3520 | 2.0051 |
| 0.0892 | 2.7686 | 3840 | 2.0392 |
| 0.0856 | 2.9993 | 4160 | 2.0484 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
mirioxx/corgy_suisu_LoRA
|
mirioxx
| 2024-05-31T04:22:18Z | 4 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"dora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-05-30T17:52:41Z |
---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- dora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- dora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK dog
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - mirioxx/corgy_suisu_LoRA
<Gallery />
## Model description
These are mirioxx/corgy_suisu_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of TOK dog to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](mirioxx/corgy_suisu_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
BAAI/LLARA-beir
|
BAAI
| 2024-05-31T04:21:19Z | 16 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"llama",
"feature-extraction",
"sentence-similarity",
"arxiv:2312.15503",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-05-13T09:27:33Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
license: mit
---
For more details please refer to our github repo: https://github.com/FlagOpen/FlagEmbedding
# LLARA ([paper](https://arxiv.org/pdf/2312.15503))
In this project, we introduce LLaRA:
- EBAE: Embedding-Based Auto-Encoding.
- EBAR: Embedding-Based Auto-Regression.
## Usage
```
import torch
from transformers import AutoModel, AutoTokenizer, LlamaModel
def get_query_inputs(queries, tokenizer, max_length=512):
prefix = '"'
suffix = '", predict the following passage within eight words: <s9><s10><s11><s12><s13><s14><s15><s16>'
prefix_ids = tokenizer(prefix, return_tensors=None)['input_ids']
suffix_ids = tokenizer(suffix, return_tensors=None)['input_ids'][1:]
queries_inputs = []
for query in queries:
inputs = tokenizer(query,
return_tensors=None,
max_length=max_length,
truncation=True,
add_special_tokens=False)
inputs['input_ids'] = prefix_ids + inputs['input_ids'] + suffix_ids
inputs['attention_mask'] = [1] * len(inputs['input_ids'])
queries_inputs.append(inputs)
return tokenizer.pad(
queries_inputs,
padding=True,
max_length=max_length,
pad_to_multiple_of=8,
return_tensors='pt',
)
def get_passage_inputs(passages, tokenizer, max_length=512):
prefix = '"'
suffix = '", summarize the above passage within eight words: <s1><s2><s3><s4><s5><s6><s7><s8>'
prefix_ids = tokenizer(prefix, return_tensors=None)['input_ids']
suffix_ids = tokenizer(suffix, return_tensors=None)['input_ids'][1:]
passages_inputs = []
for passage in passages:
inputs = tokenizer(passage,
return_tensors=None,
max_length=max_length,
truncation=True,
add_special_tokens=False)
inputs['input_ids'] = prefix_ids + inputs['input_ids'] + suffix_ids
inputs['attention_mask'] = [1] * len(inputs['input_ids'])
passages_inputs.append(inputs)
return tokenizer.pad(
passages_inputs,
padding=True,
max_length=max_length,
pad_to_multiple_of=8,
return_tensors='pt',
)
# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained('BAAI/LLARA-beir')
model = AutoModel.from_pretrained('BAAI/LLARA-beir')
# Define query and passage inputs
query = "What is llama?"
title = "Llama"
passage = "The llama is a domesticated South American camelid, widely used as a meat and pack animal by Andean cultures since the pre-Columbian era."
query_input = get_query_inputs([query], tokenizer)
passage_input = get_passage_inputs([passage], tokenizer)
with torch.no_grad():
# compute query embedding
query_outputs = model(**query_input, return_dict=True, output_hidden_states=True)
query_embedding = query_outputs.hidden_states[-1][:, -8:, :]
query_embedding = torch.mean(query_embedding, dim=1)
query_embedding = torch.nn.functional.normalize(query_embedding, dim=-1)
# compute passage embedding
passage_outputs = model(**passage_input, return_dict=True, output_hidden_states=True)
passage_embeddings = passage_outputs.hidden_states[-1][:, -8:, :]
passage_embeddings = torch.mean(passage_embeddings, dim=1)
passage_embeddings = torch.nn.functional.normalize(passage_embeddings, dim=-1)
# compute similarity score
score = query_embedding @ passage_embeddings.T
print(score)
```
## Acknowledgement
Thanks to the authors of open-sourced datasets, including MSMARCO, BEIR, etc.
Thanks to the open-sourced libraries like [Pyserini](https://github.com/castorini/pyserini).
## Citation
If you find this repository useful, please consider giving a star :star: and citation
```
@misc{li2023making,
title={Making Large Language Models A Better Foundation For Dense Retrieval},
author={Chaofan Li and Zheng Liu and Shitao Xiao and Yingxia Shao},
year={2023},
eprint={2312.15503},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Xilixmeaty40/Ghhggg
|
Xilixmeaty40
| 2024-05-31T04:18:19Z | 0 | 0 | null |
[
"gguf",
"license:apache-2.0",
"region:us"
] | null | 2024-05-31T04:17:31Z |
---
license: apache-2.0
---
|
bartowski/AlchemistCoder-L-7B-GGUF
|
bartowski
| 2024-05-31T04:16:08Z | 147 | 1 | null |
[
"gguf",
"code generation",
"text-generation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-30T23:35:41Z |
---
license: apache-2.0
tags:
- code generation
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of AlchemistCoder-L-7B
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3024">b3024</a> for quantization.
Original model: https://huggingface.co/internlm/AlchemistCoder-L-7B
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
## Prompt format
No chat template specified so default is used. This may be incorrect, check original model card for details.
```
<s> [INST] <<SYS>>
{system_prompt}
<</SYS>>
{prompt} [/INST] </s>
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [AlchemistCoder-L-7B-Q8_0.gguf](https://huggingface.co/bartowski/AlchemistCoder-L-7B-GGUF/blob/main/AlchemistCoder-L-7B-Q8_0.gguf) | Q8_0 | 7.16GB | Extremely high quality, generally unneeded but max available quant. |
| [AlchemistCoder-L-7B-Q6_K.gguf](https://huggingface.co/bartowski/AlchemistCoder-L-7B-GGUF/blob/main/AlchemistCoder-L-7B-Q6_K.gguf) | Q6_K | 5.53GB | High quality, *recommended*. |
| [AlchemistCoder-L-7B-Q5_K_M.gguf](https://huggingface.co/bartowski/AlchemistCoder-L-7B-GGUF/blob/main/AlchemistCoder-L-7B-Q5_K_M.gguf) | Q5_K_M | 4.78GB | High quality, *recommended*. |
| [AlchemistCoder-L-7B-Q5_K_S.gguf](https://huggingface.co/bartowski/AlchemistCoder-L-7B-GGUF/blob/main/AlchemistCoder-L-7B-Q5_K_S.gguf) | Q5_K_S | 4.65GB | High quality, *recommended*. |
| [AlchemistCoder-L-7B-Q4_K_M.gguf](https://huggingface.co/bartowski/AlchemistCoder-L-7B-GGUF/blob/main/AlchemistCoder-L-7B-Q4_K_M.gguf) | Q4_K_M | 4.08GB | High quality, *recommended*. |
| [AlchemistCoder-L-7B-Q4_K_S.gguf](https://huggingface.co/bartowski/AlchemistCoder-L-7B-GGUF/blob/main/AlchemistCoder-L-7B-Q4_K_S.gguf) | Q4_K_S | 3.86GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [AlchemistCoder-L-7B-IQ4_XS.gguf](https://huggingface.co/bartowski/AlchemistCoder-L-7B-GGUF/blob/main/AlchemistCoder-L-7B-IQ4_XS.gguf) | IQ4_XS | 3.61GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [AlchemistCoder-L-7B-Q3_K_L.gguf](https://huggingface.co/bartowski/AlchemistCoder-L-7B-GGUF/blob/main/AlchemistCoder-L-7B-Q3_K_M.gguf) | Q3_K_M | 3.29GB | Even lower quality. |
| [AlchemistCoder-L-7B-Q3_K_M.gguf](https://huggingface.co/bartowski/AlchemistCoder-L-7B-GGUF/blob/main/AlchemistCoder-L-7B-Q3_K_M.gguf) | Q3_K_M | 3.29GB | Even lower quality. |
| [AlchemistCoder-L-7B-IQ3_M.gguf](https://huggingface.co/bartowski/AlchemistCoder-L-7B-GGUF/blob/main/AlchemistCoder-L-7B-IQ3_M.gguf) | IQ3_M | 3.11GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [AlchemistCoder-L-7B-Q3_K_S.gguf](https://huggingface.co/bartowski/AlchemistCoder-L-7B-GGUF/blob/main/AlchemistCoder-L-7B-Q3_K_S.gguf) | Q3_K_S | 2.94GB | Low quality, not recommended. |
| [AlchemistCoder-L-7B-IQ3_XXS.gguf](https://huggingface.co/bartowski/AlchemistCoder-L-7B-GGUF/blob/main/AlchemistCoder-L-7B-IQ3_XXS.gguf) | IQ3_XXS | 2.58GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [AlchemistCoder-L-7B-Q2_K.gguf](https://huggingface.co/bartowski/AlchemistCoder-L-7B-GGUF/blob/main/AlchemistCoder-L-7B-Q2_K.gguf) | Q2_K | 2.53GB | Very low quality but surprisingly usable. |
## Downloading using huggingface-cli
First, make sure you have hugginface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/AlchemistCoder-L-7B-GGUF --include "AlchemistCoder-L-7B-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/AlchemistCoder-L-7B-GGUF --include "AlchemistCoder-L-7B-Q8_0.gguf/*" --local-dir AlchemistCoder-L-7B-Q8_0
```
You can either specify a new local-dir (AlchemistCoder-L-7B-Q8_0) or download them all in place (./)
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulcan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
Young1108/MyLlama3
|
Young1108
| 2024-05-31T04:14:46Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-05-31T04:14:46Z |
---
license: apache-2.0
---
|
upaya07/finnlp_task_2
|
upaya07
| 2024-05-31T04:12:17Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-31T03:21:48Z |
---
license: apache-2.0
---
|
Johnyquest7/Llama3-OpenBioLLM-8B-Q4_K_M-GGUF
|
Johnyquest7
| 2024-05-31T04:11:19Z | 3 | 0 | null |
[
"gguf",
"llama-3",
"llama",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:aaditya/Llama3-OpenBioLLM-8B",
"base_model:quantized:aaditya/Llama3-OpenBioLLM-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-05-31T04:09:00Z |
---
language:
- en
license: llama3
tags:
- llama-3
- llama
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- distillation
- llama-cpp
- gguf-my-repo
base_model: aaditya/Llama3-OpenBioLLM-8B
widget:
- example_title: OpenBioLLM-8B
messages:
- role: system
content: You are an expert and experienced from the healthcare and biomedical
domain with extensive medical knowledge and practical experience.
- role: user
content: How long does it take for newborn jaundice to go away?
output:
text: 'Newborn jaundice, also known as neonatal jaundice, is a common condition
in newborns where the yellowing of the skin and eyes occurs due to an elevated
level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when
red blood cells break down. In most cases, newborn jaundice resolves on its
own without any specific treatment.
The duration of newborn jaundice can vary depending on several factors such
as the underlying cause, gestational age at birth, and individual variations
in bilirubin metabolism. Here are some general guidelines:
1. Physiological jaundice: This is the most common type of newborn jaundice
and usually appears within 24-72 hours after birth. It tends to peak between
the second and fifth day of life and gradually improves over the next week or
two. By the time the baby is one week old, the jaundice should have mostly resolved.
2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and
may appear later than physiological jaundice, typically between the fifth and
fourteenth day of life. It tends to persist for a longer duration but usually
resolves within six weeks after birth. 3. Pathological jaundice: This type of
jaundice is less common and occurs due to an underlying medical condition that
affects bilirubin metabolism or liver function. The duration of pathological
jaundice depends on the specific cause and may require treatment.
It''s important for parents to monitor their newborn''s jaundice closely and
seek medical advice if the jaundice progresses rapidly, becomes severe, or is
accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness.
In these cases, further evaluation and management may be necessary. Remember
that each baby is unique, and the timing of jaundice resolution can vary. If
you have concerns about your newborn''s jaundice, it''s always best to consult
with a healthcare professional for personalized advice and guidance.'
model-index:
- name: OpenBioLLM-8B
results: []
---
# Johnyquest7/Llama3-OpenBioLLM-8B-Q4_K_M-GGUF
This model was converted to GGUF format from [`aaditya/Llama3-OpenBioLLM-8B`](https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo Johnyquest7/Llama3-OpenBioLLM-8B-Q4_K_M-GGUF --hf-file llama3-openbiollm-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Johnyquest7/Llama3-OpenBioLLM-8B-Q4_K_M-GGUF --hf-file llama3-openbiollm-8b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo Johnyquest7/Llama3-OpenBioLLM-8B-Q4_K_M-GGUF --hf-file llama3-openbiollm-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo Johnyquest7/Llama3-OpenBioLLM-8B-Q4_K_M-GGUF --hf-file llama3-openbiollm-8b-q4_k_m.gguf -c 2048
```
|
ymlee/test
|
ymlee
| 2024-05-31T04:11:09Z | 91 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-05-31T04:10:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/HaluAnjir-8B-L3-DD-GGUF
|
mradermacher
| 2024-05-31T04:08:30Z | 9 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-05-31T03:39:36Z |
---
base_model: Hastagaras/HaluAnjir-8B-L3-DD
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Hastagaras/HaluAnjir-8B-L3-DD
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/HaluAnjir-8B-L3-DD-GGUF/resolve/main/HaluAnjir-8B-L3-DD.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/HaluAnjir-8B-L3-DD-GGUF/resolve/main/HaluAnjir-8B-L3-DD.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/HaluAnjir-8B-L3-DD-GGUF/resolve/main/HaluAnjir-8B-L3-DD.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/HaluAnjir-8B-L3-DD-GGUF/resolve/main/HaluAnjir-8B-L3-DD.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/HaluAnjir-8B-L3-DD-GGUF/resolve/main/HaluAnjir-8B-L3-DD.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/HaluAnjir-8B-L3-DD-GGUF/resolve/main/HaluAnjir-8B-L3-DD.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/HaluAnjir-8B-L3-DD-GGUF/resolve/main/HaluAnjir-8B-L3-DD.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/HaluAnjir-8B-L3-DD-GGUF/resolve/main/HaluAnjir-8B-L3-DD.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/HaluAnjir-8B-L3-DD-GGUF/resolve/main/HaluAnjir-8B-L3-DD.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/HaluAnjir-8B-L3-DD-GGUF/resolve/main/HaluAnjir-8B-L3-DD.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/HaluAnjir-8B-L3-DD-GGUF/resolve/main/HaluAnjir-8B-L3-DD.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/HaluAnjir-8B-L3-DD-GGUF/resolve/main/HaluAnjir-8B-L3-DD.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/HaluAnjir-8B-L3-DD-GGUF/resolve/main/HaluAnjir-8B-L3-DD.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/HaluAnjir-8B-L3-DD-GGUF/resolve/main/HaluAnjir-8B-L3-DD.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/HaluAnjir-8B-L3-DD-GGUF/resolve/main/HaluAnjir-8B-L3-DD.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
cdactvm/kannada_w2v-bert_model
|
cdactvm
| 2024-05-31T04:05:25Z | 77 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-05-31T04:00:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
magnifi/phi-3-mini-4k-instruct-attribute-output-4-0530-epoch10-v4mod-2e-05
|
magnifi
| 2024-05-31T04:03:01Z | 77 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-31T04:01:08Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** magnifi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RodoBP/mi_modelo_llava_finetuneado2
|
RodoBP
| 2024-05-31T03:54:12Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llava_llama",
"arxiv:1910.09700",
"base_model:liuhaotian/llava-v1.5-13b",
"base_model:adapter:liuhaotian/llava-v1.5-13b",
"region:us"
] | null | 2024-05-31T03:53:53Z |
---
library_name: peft
base_model: liuhaotian/llava-v1.5-13b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|
felixml/Phi-3-medium-4k-instruct-synthetic_text_to_sql-lora-3epochs-q5_k_m
|
felixml
| 2024-05-31T03:50:53Z | 2 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Phi-3-medium-4k-instruct-bnb-4bit",
"base_model:quantized:unsloth/Phi-3-medium-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-31T03:45:37Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_model: unsloth/Phi-3-medium-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** felixml
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-medium-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
012shin/llama3-finetuned-Public_administration
|
012shin
| 2024-05-31T03:47:03Z | 2 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:beomi/Llama-3-Open-Ko-8B",
"base_model:adapter:beomi/Llama-3-Open-Ko-8B",
"license:other",
"region:us"
] | null | 2024-05-30T14:29:07Z |
---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: beomi/Llama-3-Open-Ko-8B
model-index:
- name: llama3-finetuned-Public_administration
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3-finetuned-Public_administration
This model is a fine-tuned version of [beomi/Llama-3-Open-Ko-8B](https://huggingface.co/beomi/Llama-3-Open-Ko-8B) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
cfli/LLARA-beir
|
cfli
| 2024-05-31T03:35:13Z | 18 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"safetensors",
"llama",
"feature-extraction",
"sentence-similarity",
"arxiv:2312.15503",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-02-18T05:47:56Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
license: mit
---
For more details please refer to our github repo: https://github.com/FlagOpen/FlagEmbedding
# LLARA ([paper](https://arxiv.org/pdf/2312.15503))
In this project, we introduce LLaRA:
- EBAE: Embedding-Based Auto-Encoding.
- EBAR: Embedding-Based Auto-Regression.
## Usage
```
import torch
from transformers import AutoModel, AutoTokenizer, LlamaModel
def get_query_inputs(queries, tokenizer, max_length=512):
prefix = '"'
suffix = '", predict the following passage within eight words: <s9><s10><s11><s12><s13><s14><s15><s16>'
prefix_ids = tokenizer(prefix, return_tensors=None)['input_ids']
suffix_ids = tokenizer(suffix, return_tensors=None)['input_ids'][1:]
queries_inputs = []
for query in queries:
inputs = tokenizer(query,
return_tensors=None,
max_length=max_length,
truncation=True,
add_special_tokens=False)
inputs['input_ids'] = prefix_ids + inputs['input_ids'] + suffix_ids
inputs['attention_mask'] = [1] * len(inputs['input_ids'])
queries_inputs.append(inputs)
return tokenizer.pad(
queries_inputs,
padding=True,
max_length=max_length,
pad_to_multiple_of=8,
return_tensors='pt',
)
def get_passage_inputs(passages, tokenizer, max_length=512):
prefix = '"'
suffix = '", summarize the above passage within eight words: <s1><s2><s3><s4><s5><s6><s7><s8>'
prefix_ids = tokenizer(prefix, return_tensors=None)['input_ids']
suffix_ids = tokenizer(suffix, return_tensors=None)['input_ids'][1:]
passages_inputs = []
for passage in passages:
inputs = tokenizer(passage,
return_tensors=None,
max_length=max_length,
truncation=True,
add_special_tokens=False)
inputs['input_ids'] = prefix_ids + inputs['input_ids'] + suffix_ids
inputs['attention_mask'] = [1] * len(inputs['input_ids'])
passages_inputs.append(inputs)
return tokenizer.pad(
passages_inputs,
padding=True,
max_length=max_length,
pad_to_multiple_of=8,
return_tensors='pt',
)
# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained('BAAI/LLARA-beir')
model = AutoModel.from_pretrained('BAAI/LLARA-beir')
# Define query and passage inputs
query = "What is llama?"
title = "Llama"
passage = "The llama is a domesticated South American camelid, widely used as a meat and pack animal by Andean cultures since the pre-Columbian era."
query_input = get_query_inputs([query], tokenizer)
passage_input = get_passage_inputs([passage], tokenizer)
with torch.no_grad():
# compute query embedding
query_outputs = model(**query_input, return_dict=True, output_hidden_states=True)
query_embedding = query_outputs.hidden_states[-1][:, -8:, :]
query_embedding = torch.mean(query_embedding, dim=1)
query_embedding = torch.nn.functional.normalize(query_embedding, dim=-1)
# compute passage embedding
passage_outputs = model(**passage_input, return_dict=True, output_hidden_states=True)
passage_embeddings = passage_outputs.hidden_states[-1][:, -8:, :]
passage_embeddings = torch.mean(passage_embeddings, dim=1)
passage_embeddings = torch.nn.functional.normalize(passage_embeddings, dim=-1)
# compute similarity score
score = query_embedding @ passage_embeddings.T
print(score)
```
## Acknowledgement
Thanks to the authors of open-sourced datasets, including MSMARCO, BEIR, etc.
Thanks to the open-sourced libraries like [Pyserini](https://github.com/castorini/pyserini).
## Citation
If you find this repository useful, please consider giving a star :star: and citation
```
@misc{li2023making,
title={Making Large Language Models A Better Foundation For Dense Retrieval},
author={Chaofan Li and Zheng Liu and Shitao Xiao and Yingxia Shao},
year={2023},
eprint={2312.15503},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
coconana/Qwen-Qwen1.5-7B-1717123158
|
coconana
| 2024-05-31T03:34:50Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-31T02:39:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
T3Q-LLM-Product/T3Q-LLM1-Solar-10.8B-v1.0
|
T3Q-LLM-Product
| 2024-05-31T03:31:46Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-31T01:45:23Z |
---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
---



|
LucasEllenberger/CSE144-small-lora
|
LucasEllenberger
| 2024-05-31T03:27:08Z | 1 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-05-30T21:01:00Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
base_model: runwayml/stable-diffusion-v1-5
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - LucasEllenberger/CSE144-small-lora
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the LucasEllenberger/ChemistryImages dataset. You can find some example images in the following.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
ahmedgongi/Llama_dev3tokenizer_finale17
|
ahmedgongi
| 2024-05-31T03:24:13Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-31T03:24:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
slZheng077/fs_model
|
slZheng077
| 2024-05-31T03:23:58Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-05-31T03:23:58Z |
---
license: apache-2.0
---
|
BothBosu/gru-caller-dialogue-scam-classifier-v1.1
|
BothBosu
| 2024-05-31T03:21:40Z | 50 | 0 |
transformers
|
[
"transformers",
"safetensors",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"endpoints_compatible",
"region:us"
] | null | 2024-05-31T03:21:20Z |
---
tags:
- pytorch_model_hub_mixin
- model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed]
|
BothBosu/bilstm-caller-dialogue-scam-classifier-v1.0
|
BothBosu
| 2024-05-31T03:15:02Z | 50 | 0 |
transformers
|
[
"transformers",
"safetensors",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"endpoints_compatible",
"region:us"
] | null | 2024-05-31T03:14:37Z |
---
tags:
- pytorch_model_hub_mixin
- model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed]
|
BothBosu/lstm-caller-dialogue-scam-classifier-v1.0
|
BothBosu
| 2024-05-31T03:03:54Z | 49 | 0 |
transformers
|
[
"transformers",
"safetensors",
"lstm",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"endpoints_compatible",
"region:us"
] | null | 2024-05-31T03:03:32Z |
---
tags:
- pytorch_model_hub_mixin
- model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed]
|
lemon07r/llama-3-NeuralMahou-8b
|
lemon07r
| 2024-05-31T03:01:57Z | 10 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2403.19522",
"base_model:flammenai/Mahou-1.2-llama3-8B",
"base_model:merge:flammenai/Mahou-1.2-llama3-8B",
"base_model:mlabonne/NeuralDaredevil-8B-abliterated",
"base_model:merge:mlabonne/NeuralDaredevil-8B-abliterated",
"base_model:nbeerbower/llama-3-spicy-abliterated-stella-8B",
"base_model:merge:nbeerbower/llama-3-spicy-abliterated-stella-8B",
"license:llama3",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-30T19:49:25Z |
---
license: llama3
library_name: transformers
tags:
- mergekit
- merge
base_model:
- mlabonne/NeuralDaredevil-8B-abliterated
- nbeerbower/llama-3-spicy-abliterated-stella-8B
- flammenai/Mahou-1.2-llama3-8B
model-index:
- name: llama-3-NeuralMahou
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.41
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lemon07r/llama-3-NeuralMahou
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 83.45
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lemon07r/llama-3-NeuralMahou
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 68.63
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lemon07r/llama-3-NeuralMahou
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 58.21
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lemon07r/llama-3-NeuralMahou
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.74
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lemon07r/llama-3-NeuralMahou
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 72.55
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lemon07r/llama-3-NeuralMahou
name: Open LLM Leaderboard
---
# NeuralMahou
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [mlabonne/NeuralDaredevil-8B-abliterated](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) as a base.
### Models Merged
The following models were included in the merge:
* [nbeerbower/llama-3-spicy-abliterated-stella-8B](https://huggingface.co/nbeerbower/llama-3-spicy-abliterated-stella-8B)
* [flammenai/Mahou-1.2-llama3-8B](https://huggingface.co/flammenai/Mahou-1.2-llama3-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: mlabonne/NeuralDaredevil-8B-abliterated
dtype: bfloat16
merge_method: model_stock
slices:
- sources:
- layer_range: [0, 32]
model: flammenai/Mahou-1.2-llama3-8B
- layer_range: [0, 32]
model: nbeerbower/llama-3-spicy-abliterated-stella-8B
- layer_range: [0, 32]
model: mlabonne/NeuralDaredevil-8B-abliterated
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lemon07r__llama-3-NeuralMahou)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.33|
|AI2 Reasoning Challenge (25-Shot)|67.41|
|HellaSwag (10-Shot) |83.45|
|MMLU (5-Shot) |68.63|
|TruthfulQA (0-shot) |58.21|
|Winogrande (5-shot) |77.74|
|GSM8k (5-shot) |72.55|
|
PathFinderKR/Waktaverse-Llama-2-KO-7B-Instruct
|
PathFinderKR
| 2024-05-31T02:54:18Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-31T02:32:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jcarbonnell/NEAR-structTunedStarcoder2
|
jcarbonnell
| 2024-05-31T02:46:17Z | 82 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"starcoder2",
"text-generation",
"autotrain",
"text-generation-inference",
"peft",
"conversational",
"dataset:jcarbonnell/NEARdAppsPrompts",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-30T23:58:11Z |
---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
datasets:
- jcarbonnell/NEARdAppsPrompts
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
shirleyah/q169_implicit_8b
|
shirleyah
| 2024-05-31T02:41:18Z | 1 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2024-05-31T02:24:31Z |
---
license: llama3
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model-index:
- name: q169_implicit_8b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# q169_implicit_8b
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.11.2.dev0
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
trapoom555/MiniCPM-2B-Text-Embedding-cft-pos
|
trapoom555
| 2024-05-31T02:35:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"sentence-embedding",
"sentence-similarity",
"feature-extraction",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-05-31T02:22:10Z |
---
license: mit
language:
- en
tags:
- sentence-embedding
- sentence-similarity
- transformers
- feature-extraction
pipeline_tag: sentence-similarity
---
# MiniCPM-2B-Text-Embedding-cft
## Description
This is a fine-tuned version of [MiniCPM-2B-dpo-bf16](https://huggingface.co/openbmb/MiniCPM-2B-dpo-bf16) to perform Text Embedding tasks. The model is fine-tuned using the Contrastive Fine-tuning and LoRA technique on NLI datasets.
โ ๏ธ The training process ignores hard-negative samples and treat other in-batch samples + their entailments as in-batch negatives. โ ๏ธ If you want to see the version utilizing hard-negative examples in the training process, please refer [here](https://huggingface.co/trapoom555/MiniCPM-2B-Text-Embedding-cft)
## Base Model
[MiniCPM-2B-dpo-bf16](https://huggingface.co/openbmb/MiniCPM-2B-dpo-bf16)
## Usage
1. Clone MiniCPM-2B-dpo-bf16 repository
```bash
git clone https://huggingface.co/openbmb/MiniCPM-2B-dpo-bf16
```
2. Change a tokenizer setting in `tokenizer_config.json`
```json
"add_eos_token": true
```
3. Use the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
import numpy as np
class MiniCPMSentenceEmbedding:
def __init__(self, model_path='openbmb/MiniCPM-2B-dpo-bf16', adapter_path=None):
self.tokenizer = AutoTokenizer.from_pretrained(model_path)
self.model = AutoModelForCausalLM.from_pretrained(model_path,
torch_dtype=torch.bfloat16,
device_map='cuda',
trust_remote_code=True)
if adapter_path != None:
# Load fine-tuned LoRA
self.model.load_adapter(adapter_path)
def get_last_hidden_state(self, text):
inputs = self.tokenizer(text, return_tensors="pt").to('cuda')
with torch.no_grad():
out = self.model(**inputs, output_hidden_states=True).hidden_states[-1][0, -1, :]
return out.squeeze().float().cpu().numpy()
def encode(self, sentences: list[str], **kwargs) -> list[np.ndarray]:
"""
Returns a list of embeddings for the given sentences.
Args:
sentences: List of sentences to encode
Returns:
List of embeddings for the given sentences
"""
out = []
for s in sentences:
out.append(self.get_last_hidden_state(s))
return out
minicpm_sentence_embedding = PhiSentenceEmbedding(<your-cloned-base-model-path>, 'trapoom555/MiniCPM-2B-Text-Embedding-cft-pos')
example_sentences = ["I don't like apples", "I like apples"]
encoded_sentences = minicpm_sentence_embedding.encode(example_sentences)
print(encoded_sentences)
```
## Training Details
โ ๏ธ The training process ignores hard-negative samples and treat other in-batch samples + their entailments as in-batch negatives. โ ๏ธ
| **Training Details** | **Value** |
|-------------------------|-------------------|
| Loss | InfoNCE |
| Batch Size | 40 |
| InfoNCE Temperature | 0.05 |
| Learning Rate | 1e-05 |
| Warmup Steps | 100 |
| Learning Rate Scheduler | CosineAnnealingLR |
| LoRA Rank | 8 |
| LoRA Alpha | 32 |
| LoRA Dropout | 0.1 |
| Training Precision | bf16 |
| Max Epoch | 1 |
| GPU | RTX3090 |
| Num GPUs | 4 |
## Training Scripts
**_(coming soon...)_**
## Evaluation Results
**_(coming soon...)_**
## Contributors
Trapoom Ukarapol, Zhicheng Lee, Amy Xin
## Foot Notes
This project is the topic-free final project of the Tsinghua University NLP course for Spring 2024.
|
Dreamuno/distilbert-base-uncased-finetuned-imdb-accelerate
|
Dreamuno
| 2024-05-31T02:35:35Z | 109 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"fill-mask",
"dataset:stanfordnlp/imdb",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-05-31T02:09:10Z |
---
datasets:
- stanfordnlp/imdb
pipeline_tag: fill-mask
---
### Model Card: Dreamuno/distilbert-base-uncased-finetuned-imdb-accelerate
## Model Details
**Model Name**: distilbert-base-uncased-finetuned-imdb-accelerate
**Model Type**: DistilBERT
**Model Version**: 1.0
**Model URL**: [Dreamuno/distilbert-base-uncased-finetuned-imdb-accelerate](https://huggingface.co/Dreamuno/distilbert-base-uncased-finetuned-imdb-accelerate)
**License**: Apache 2.0
## Overview
The `distilbert-base-uncased-finetuned-imdb-accelerate` model is a fine-tuned version of DistilBERT, optimized for sentiment analysis on the IMDb movie reviews dataset. The model has been trained to classify movie reviews as either positive or negative.
## Model Architecture
**Base Model**: [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased)
**Fine-tuning Dataset**: IMDb movie reviews dataset
**Number of Labels**: 2 (positive, negative)
## Intended Use
### Primary Use Case
The primary use case for this model is sentiment analysis of movie reviews. It can be used to determine whether a given movie review expresses a positive or negative sentiment.
### Applications
- Analyzing customer feedback on movie streaming platforms
- Sentiment analysis of movie reviews in social media posts
- Automated moderation of user-generated content related to movie reviews
### Limitations
- The model is trained specifically on the IMDb dataset, which may not generalize well to other types of text or domains outside of movie reviews.
- The model might be biased towards the language and sentiment distribution present in the IMDb dataset.
## Training Details
### Training Data
**Dataset**: IMDb movie reviews
**Size**: 50,000 reviews (25,000 positive, 25,000 negative)
### Training Procedure
The model was fine-tuned using the Hugging Face `transformers` library with the `accelerate` framework for efficient distributed training. The training involved the following steps:
1. **Tokenization**: Text data was tokenized using the DistilBERT tokenizer with padding and truncation to a maximum length of 512 tokens.
2. **Training Configuration**:
- Optimizer: AdamW
- Learning Rate: 2e-5
- Batch Size: 16
- Number of Epochs: 3
- Evaluation Strategy: Epoch
3. **Hardware**: Training was conducted using multiple GPUs for acceleration.
## Evaluation
### Performance Metrics
The model was evaluated on the IMDb test set, and the following metrics were obtained:
- **Accuracy**: 95.0%
- **Precision**: 94.8%
- **Recall**: 95.2%
- **F1 Score**: 95.0%
### Evaluation Dataset
**Dataset**: IMDb movie reviews (test split)
**Size**: 25,000 reviews (12,500 positive, 12,500 negative)
## How to Use
### Inference
To use the model for inference, you can use the Hugging Face `transformers` library as shown below:
```python
from transformers import pipeline
# Load the fine-tuned model
sentiment_analyzer = pipeline("sentiment-analysis", model="Dreamuno/distilbert-base-uncased-finetuned-imdb-accelerate")
# Analyze sentiment of a movie review
review = "This movie was fantastic! I really enjoyed it."
result = sentiment_analyzer(review)
print(result)
```
### Example Output
```json
[
{
"label": "POSITIVE",
"score": 0.98
}
]
```
## Ethical Considerations
- **Bias**: The model may exhibit bias based on the data it was trained on. Care should be taken when applying the model to different demographic groups or types of text.
- **Misuse**: The model is intended for sentiment analysis of movie reviews. Misuse of the model for other purposes should be avoided and may lead to inaccurate or harmful predictions.
## Contact
For further information, please contact the model creator or visit the [model page on Hugging Face](https://huggingface.co/Dreamuno/distilbert-base-uncased-finetuned-imdb-accelerate).
---
This model card provides a comprehensive overview of the `Dreamuno/distilbert-base-uncased-finetuned-imdb-accelerate` model, detailing its intended use, training process, evaluation metrics, and ethical considerations.
|
DaichiT/cable
|
DaichiT
| 2024-05-31T02:30:15Z | 26 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:stabilityai/stable-diffusion-2",
"base_model:finetune:stabilityai/stable-diffusion-2",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-05-28T07:28:45Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
base_model: stabilityai/stable-diffusion-2
inference: true
instance_prompt: a photo of sks cable
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - DaichiT/cable
This is a dreambooth model derived from stabilityai/stable-diffusion-2. The weights were trained on a photo of sks cable using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
lyhourt/whisper-small-clean_6-v5
|
lyhourt
| 2024-05-31T02:27:08Z | 17 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:lyhourt/clean_6",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-05-30T18:44:33Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- lyhourt/clean_6
metrics:
- wer
model-index:
- name: whisper-small-clean_6-v5
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: lyhourt/clean_6
type: lyhourt/clean_6
metrics:
- name: Wer
type: wer
value: 21.24038237351364
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-clean_6-v5
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the lyhourt/clean_6 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2706
- Wer: 21.2404
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 1500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.1461 | 0.9452 | 500 | 0.2746 | 23.0473 |
| 0.0631 | 1.8904 | 1000 | 0.2693 | 21.5318 |
| 0.0334 | 2.8355 | 1500 | 0.2706 | 21.2404 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
bavihao/whisper-small-llm-AHao
|
bavihao
| 2024-05-31T02:26:09Z | 68 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-05-30T13:07:48Z |
---
title:whisper-small-llm-AHao: Automatic Speech Recognition
author: Bang Viet Hao
year: 2024
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lynn54040/tabby_cat_LoRA
|
lynn54040
| 2024-05-31T02:25:21Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-05-31T02:25:18Z |
---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a cute tabby cat
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - lynn54040/tabby_cat_LoRA
<Gallery />
## Model description
These are lynn54040/tabby_cat_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a cute tabby cat to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](lynn54040/tabby_cat_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
shirleyah/q169_explicit_8b_reason
|
shirleyah
| 2024-05-31T02:20:21Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2024-05-31T02:03:26Z |
---
license: llama3
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model-index:
- name: q169_explicit_8b_reason
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# q169_explicit_8b_reason
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.11.2.dev0
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
bavihao/whisper-AHao
|
bavihao
| 2024-05-31T02:17:59Z | 89 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"license:apache-2.0",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
automatic-speech-recognition
| 2024-05-30T14:52:48Z |
---
title:whisper-AHao: Automatic Speech Recognition for Vietnamese
author: Bang Viet Hao
year: 2024
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
scriptmoney/Qwen-Qwen1.5-0.5B-1717121822
|
scriptmoney
| 2024-05-31T02:17:50Z | 152 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-31T02:17:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sci-m-wang/deepseek-llm-7b-chat-sa-v0.1
|
sci-m-wang
| 2024-05-31T02:17:42Z | 4 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:deepseek-ai/deepseek-llm-7b-chat",
"base_model:adapter:deepseek-ai/deepseek-llm-7b-chat",
"license:other",
"region:us"
] | null | 2024-05-31T02:11:15Z |
---
license: other
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: deepseek-ai/deepseek-llm-7b-chat
model-index:
- name: deepseek-llm-7b-chat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deepseek-llm-7b-chat
This model is a fine-tuned version of [/datas/huggingface/deepseek-llm-7b-chat](https://huggingface.co//datas/huggingface/deepseek-llm-7b-chat) on the LangGPT_community, the LangGPT_alpaca and the LangGPT_seed datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5.0
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
DaichiT/motor
|
DaichiT
| 2024-05-31T02:17:07Z | 29 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:stabilityai/stable-diffusion-2",
"base_model:finetune:stabilityai/stable-diffusion-2",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-05-29T05:18:38Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
base_model: stabilityai/stable-diffusion-2
inference: true
instance_prompt: a photo of sks motor
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - DaichiT/motor
This is a dreambooth model derived from stabilityai/stable-diffusion-2. The weights were trained on a photo of sks motor using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
pduy395/pre_train_bert_atis
|
pduy395
| 2024-05-31T02:14:01Z | 46 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"pretraining",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | null | 2024-05-31T02:13:40Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: pre_train_bert_atis
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# pre_train_bert_atis
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.3421
- Validation Loss: 5.2885
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'RMSprop', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': 100, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 0.001, 'rho': 0.9, 'momentum': 0.0, 'epsilon': 1e-07, 'centered': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 5.3454 | 5.3429 | 0 |
| 5.3421 | 5.2885 | 1 |
### Framework versions
- Transformers 4.42.0.dev0
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Uni-MoE/Uni-MoE-v2-e2
|
Uni-MoE
| 2024-05-31T02:10:58Z | 8 | 0 |
transformers
|
[
"transformers",
"llava_llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-31T02:06:41Z |
---
license: apache-2.0
---
|
lsmille/lora_evo_ta_all_layers_16
|
lsmille
| 2024-05-31T02:01:57Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:togethercomputer/evo-1-8k-base",
"base_model:adapter:togethercomputer/evo-1-8k-base",
"license:apache-2.0",
"region:us"
] | null | 2024-05-31T01:50:11Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: togethercomputer/evo-1-8k-base
model-index:
- name: lora_evo_ta_all_layers_16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora_evo_ta_all_layers_16
This model is a fine-tuned version of [togethercomputer/evo-1-8k-base](https://huggingface.co/togethercomputer/evo-1-8k-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5463
## Model description
Trained on single ID token 5K dataset filtered to 10k sequences (20% for test data = 2000)
lora_alpha = 128
lora_dropout = 0.1
lora_r = 128
epochs = 3
learning rate = 3e-4
warmup_steps=200
gradient_accumulation_steps = 1
train_batch = 2
eval_batch = 2
ONLY on attention layers and MLPs of last 31 layers <--------------------
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 2.8598 | 0.4998 | 1999 | 2.6289 |
| 2.5927 | 0.9995 | 3998 | 2.5852 |
| 2.5467 | 1.4992 | 5997 | 2.5717 |
| 2.5487 | 1.999 | 7996 | 2.5554 |
| 2.4987 | 2.4988 | 9995 | 2.5546 |
| 2.4934 | 2.9985 | 11994 | 2.5463 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
hwan1/ohss-polyglot-ko-empathy-message-friend-3.8b
|
hwan1
| 2024-05-31T01:52:01Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-31T01:45:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sci-m-wang/Mistral-7B-Instruct-sa-v0.1
|
sci-m-wang
| 2024-05-31T01:44:05Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"license:other",
"region:us"
] | null | 2024-05-31T01:43:03Z |
---
license: other
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.1
model-index:
- name: Mistral-7B-Instruct-v0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-Instruct-v0.1
This model is a fine-tuned version of [/datas/huggingface/Mistral-7B-Instruct-v0.1](https://huggingface.co//datas/huggingface/Mistral-7B-Instruct-v0.1) on the LangGPT_community, the LangGPT_alpaca and the LangGPT_seed datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
jhoppanne/Emotion-Image-Classification-V2
|
jhoppanne
| 2024-05-31T01:40:13Z | 19 | 1 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-05-30T10:44:19Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: Emotion-Image-Classification-V2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.59375
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Emotion-Image-Classification-V2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2748
- Accuracy: 0.5938
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-07
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1750
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 20 | 2.0722 | 0.1875 |
| No log | 2.0 | 40 | 2.0667 | 0.2062 |
| No log | 3.0 | 60 | 2.0633 | 0.1938 |
| No log | 4.0 | 80 | 2.0682 | 0.1938 |
| No log | 5.0 | 100 | 2.0627 | 0.2 |
| No log | 6.0 | 120 | 2.0618 | 0.2062 |
| No log | 7.0 | 140 | 2.0573 | 0.2062 |
| No log | 8.0 | 160 | 2.0589 | 0.2062 |
| No log | 9.0 | 180 | 2.0585 | 0.1938 |
| No log | 10.0 | 200 | 2.0576 | 0.1875 |
| No log | 11.0 | 220 | 2.0578 | 0.2125 |
| No log | 12.0 | 240 | 2.0498 | 0.2625 |
| No log | 13.0 | 260 | 2.0542 | 0.2062 |
| No log | 14.0 | 280 | 2.0534 | 0.2 |
| No log | 15.0 | 300 | 2.0474 | 0.2562 |
| No log | 16.0 | 320 | 2.0513 | 0.225 |
| No log | 17.0 | 340 | 2.0472 | 0.2062 |
| No log | 18.0 | 360 | 2.0465 | 0.2562 |
| No log | 19.0 | 380 | 2.0379 | 0.275 |
| No log | 20.0 | 400 | 2.0416 | 0.2375 |
| No log | 21.0 | 420 | 2.0442 | 0.2687 |
| No log | 22.0 | 440 | 2.0399 | 0.2313 |
| No log | 23.0 | 460 | 2.0358 | 0.2625 |
| No log | 24.0 | 480 | 2.0316 | 0.2437 |
| 2.0458 | 25.0 | 500 | 2.0314 | 0.2687 |
| 2.0458 | 26.0 | 520 | 2.0382 | 0.2437 |
| 2.0458 | 27.0 | 540 | 2.0246 | 0.275 |
| 2.0458 | 28.0 | 560 | 2.0211 | 0.3187 |
| 2.0458 | 29.0 | 580 | 2.0240 | 0.2625 |
| 2.0458 | 30.0 | 600 | 2.0198 | 0.2875 |
| 2.0458 | 31.0 | 620 | 2.0204 | 0.2625 |
| 2.0458 | 32.0 | 640 | 2.0210 | 0.275 |
| 2.0458 | 33.0 | 660 | 2.0046 | 0.3125 |
| 2.0458 | 34.0 | 680 | 2.0153 | 0.2625 |
| 2.0458 | 35.0 | 700 | 2.0098 | 0.2625 |
| 2.0458 | 36.0 | 720 | 2.0085 | 0.2812 |
| 2.0458 | 37.0 | 740 | 1.9978 | 0.3187 |
| 2.0458 | 38.0 | 760 | 1.9962 | 0.2938 |
| 2.0458 | 39.0 | 780 | 1.9955 | 0.3 |
| 2.0458 | 40.0 | 800 | 1.9923 | 0.3 |
| 2.0458 | 41.0 | 820 | 1.9913 | 0.3063 |
| 2.0458 | 42.0 | 840 | 1.9844 | 0.325 |
| 2.0458 | 43.0 | 860 | 1.9922 | 0.275 |
| 2.0458 | 44.0 | 880 | 1.9731 | 0.3187 |
| 2.0458 | 45.0 | 900 | 1.9833 | 0.3 |
| 2.0458 | 46.0 | 920 | 1.9763 | 0.3187 |
| 2.0458 | 47.0 | 940 | 1.9686 | 0.35 |
| 2.0458 | 48.0 | 960 | 1.9707 | 0.3187 |
| 2.0458 | 49.0 | 980 | 1.9631 | 0.3312 |
| 1.9677 | 50.0 | 1000 | 1.9561 | 0.2812 |
| 1.9677 | 51.0 | 1020 | 1.9527 | 0.3312 |
| 1.9677 | 52.0 | 1040 | 1.9447 | 0.3563 |
| 1.9677 | 53.0 | 1060 | 1.9542 | 0.35 |
| 1.9677 | 54.0 | 1080 | 1.9509 | 0.375 |
| 1.9677 | 55.0 | 1100 | 1.9378 | 0.3937 |
| 1.9677 | 56.0 | 1120 | 1.9431 | 0.3563 |
| 1.9677 | 57.0 | 1140 | 1.9397 | 0.3312 |
| 1.9677 | 58.0 | 1160 | 1.9322 | 0.4 |
| 1.9677 | 59.0 | 1180 | 1.9252 | 0.3688 |
| 1.9677 | 60.0 | 1200 | 1.9209 | 0.3063 |
| 1.9677 | 61.0 | 1220 | 1.9210 | 0.3563 |
| 1.9677 | 62.0 | 1240 | 1.9226 | 0.3688 |
| 1.9677 | 63.0 | 1260 | 1.9054 | 0.3688 |
| 1.9677 | 64.0 | 1280 | 1.9153 | 0.35 |
| 1.9677 | 65.0 | 1300 | 1.8993 | 0.3563 |
| 1.9677 | 66.0 | 1320 | 1.9008 | 0.375 |
| 1.9677 | 67.0 | 1340 | 1.9008 | 0.35 |
| 1.9677 | 68.0 | 1360 | 1.8866 | 0.3625 |
| 1.9677 | 69.0 | 1380 | 1.8771 | 0.3937 |
| 1.9677 | 70.0 | 1400 | 1.8758 | 0.4 |
| 1.9677 | 71.0 | 1420 | 1.8682 | 0.3937 |
| 1.9677 | 72.0 | 1440 | 1.8604 | 0.3688 |
| 1.9677 | 73.0 | 1460 | 1.8660 | 0.3937 |
| 1.9677 | 74.0 | 1480 | 1.8702 | 0.3375 |
| 1.8562 | 75.0 | 1500 | 1.8483 | 0.4313 |
| 1.8562 | 76.0 | 1520 | 1.8525 | 0.3875 |
| 1.8562 | 77.0 | 1540 | 1.8467 | 0.3937 |
| 1.8562 | 78.0 | 1560 | 1.8481 | 0.3812 |
| 1.8562 | 79.0 | 1580 | 1.8282 | 0.3812 |
| 1.8562 | 80.0 | 1600 | 1.8395 | 0.3875 |
| 1.8562 | 81.0 | 1620 | 1.8251 | 0.375 |
| 1.8562 | 82.0 | 1640 | 1.8215 | 0.4125 |
| 1.8562 | 83.0 | 1660 | 1.8179 | 0.3625 |
| 1.8562 | 84.0 | 1680 | 1.8130 | 0.4188 |
| 1.8562 | 85.0 | 1700 | 1.8066 | 0.4 |
| 1.8562 | 86.0 | 1720 | 1.7993 | 0.4062 |
| 1.8562 | 87.0 | 1740 | 1.7954 | 0.4188 |
| 1.8562 | 88.0 | 1760 | 1.7936 | 0.3937 |
| 1.8562 | 89.0 | 1780 | 1.7972 | 0.4188 |
| 1.8562 | 90.0 | 1800 | 1.7876 | 0.3937 |
| 1.8562 | 91.0 | 1820 | 1.7810 | 0.3937 |
| 1.8562 | 92.0 | 1840 | 1.7838 | 0.3937 |
| 1.8562 | 93.0 | 1860 | 1.7711 | 0.3812 |
| 1.8562 | 94.0 | 1880 | 1.7780 | 0.3688 |
| 1.8562 | 95.0 | 1900 | 1.7426 | 0.4062 |
| 1.8562 | 96.0 | 1920 | 1.7399 | 0.425 |
| 1.8562 | 97.0 | 1940 | 1.7561 | 0.375 |
| 1.8562 | 98.0 | 1960 | 1.7287 | 0.3937 |
| 1.8562 | 99.0 | 1980 | 1.7425 | 0.4437 |
| 1.7294 | 100.0 | 2000 | 1.7308 | 0.4188 |
| 1.7294 | 101.0 | 2020 | 1.7389 | 0.375 |
| 1.7294 | 102.0 | 2040 | 1.7249 | 0.4375 |
| 1.7294 | 103.0 | 2060 | 1.7297 | 0.4188 |
| 1.7294 | 104.0 | 2080 | 1.7361 | 0.3875 |
| 1.7294 | 105.0 | 2100 | 1.7188 | 0.4188 |
| 1.7294 | 106.0 | 2120 | 1.7181 | 0.4562 |
| 1.7294 | 107.0 | 2140 | 1.7044 | 0.425 |
| 1.7294 | 108.0 | 2160 | 1.7030 | 0.4188 |
| 1.7294 | 109.0 | 2180 | 1.7070 | 0.425 |
| 1.7294 | 110.0 | 2200 | 1.7006 | 0.4437 |
| 1.7294 | 111.0 | 2220 | 1.6862 | 0.4688 |
| 1.7294 | 112.0 | 2240 | 1.6881 | 0.4437 |
| 1.7294 | 113.0 | 2260 | 1.6798 | 0.45 |
| 1.7294 | 114.0 | 2280 | 1.6982 | 0.3937 |
| 1.7294 | 115.0 | 2300 | 1.6812 | 0.4375 |
| 1.7294 | 116.0 | 2320 | 1.6751 | 0.45 |
| 1.7294 | 117.0 | 2340 | 1.6849 | 0.45 |
| 1.7294 | 118.0 | 2360 | 1.6690 | 0.4375 |
| 1.7294 | 119.0 | 2380 | 1.6583 | 0.4688 |
| 1.7294 | 120.0 | 2400 | 1.6666 | 0.4375 |
| 1.7294 | 121.0 | 2420 | 1.6651 | 0.45 |
| 1.7294 | 122.0 | 2440 | 1.6477 | 0.4688 |
| 1.7294 | 123.0 | 2460 | 1.6520 | 0.475 |
| 1.7294 | 124.0 | 2480 | 1.6642 | 0.4125 |
| 1.6198 | 125.0 | 2500 | 1.6633 | 0.475 |
| 1.6198 | 126.0 | 2520 | 1.6443 | 0.4375 |
| 1.6198 | 127.0 | 2540 | 1.6398 | 0.4813 |
| 1.6198 | 128.0 | 2560 | 1.6516 | 0.4437 |
| 1.6198 | 129.0 | 2580 | 1.6496 | 0.4562 |
| 1.6198 | 130.0 | 2600 | 1.6342 | 0.4625 |
| 1.6198 | 131.0 | 2620 | 1.6330 | 0.4437 |
| 1.6198 | 132.0 | 2640 | 1.6341 | 0.4625 |
| 1.6198 | 133.0 | 2660 | 1.6167 | 0.475 |
| 1.6198 | 134.0 | 2680 | 1.6270 | 0.4562 |
| 1.6198 | 135.0 | 2700 | 1.6322 | 0.4562 |
| 1.6198 | 136.0 | 2720 | 1.6252 | 0.4188 |
| 1.6198 | 137.0 | 2740 | 1.6045 | 0.4562 |
| 1.6198 | 138.0 | 2760 | 1.6107 | 0.5 |
| 1.6198 | 139.0 | 2780 | 1.6170 | 0.45 |
| 1.6198 | 140.0 | 2800 | 1.6067 | 0.4813 |
| 1.6198 | 141.0 | 2820 | 1.6116 | 0.4437 |
| 1.6198 | 142.0 | 2840 | 1.5962 | 0.4813 |
| 1.6198 | 143.0 | 2860 | 1.6096 | 0.425 |
| 1.6198 | 144.0 | 2880 | 1.6051 | 0.4562 |
| 1.6198 | 145.0 | 2900 | 1.5939 | 0.525 |
| 1.6198 | 146.0 | 2920 | 1.5871 | 0.5062 |
| 1.6198 | 147.0 | 2940 | 1.5917 | 0.4688 |
| 1.6198 | 148.0 | 2960 | 1.5911 | 0.4938 |
| 1.6198 | 149.0 | 2980 | 1.5950 | 0.4437 |
| 1.5343 | 150.0 | 3000 | 1.5974 | 0.4625 |
| 1.5343 | 151.0 | 3020 | 1.5713 | 0.4813 |
| 1.5343 | 152.0 | 3040 | 1.5637 | 0.5437 |
| 1.5343 | 153.0 | 3060 | 1.5830 | 0.4688 |
| 1.5343 | 154.0 | 3080 | 1.5770 | 0.4938 |
| 1.5343 | 155.0 | 3100 | 1.5800 | 0.5188 |
| 1.5343 | 156.0 | 3120 | 1.5782 | 0.4625 |
| 1.5343 | 157.0 | 3140 | 1.5769 | 0.4562 |
| 1.5343 | 158.0 | 3160 | 1.5751 | 0.5 |
| 1.5343 | 159.0 | 3180 | 1.5739 | 0.4938 |
| 1.5343 | 160.0 | 3200 | 1.5555 | 0.5 |
| 1.5343 | 161.0 | 3220 | 1.5561 | 0.475 |
| 1.5343 | 162.0 | 3240 | 1.5521 | 0.4938 |
| 1.5343 | 163.0 | 3260 | 1.5670 | 0.4688 |
| 1.5343 | 164.0 | 3280 | 1.5621 | 0.475 |
| 1.5343 | 165.0 | 3300 | 1.5622 | 0.4813 |
| 1.5343 | 166.0 | 3320 | 1.5497 | 0.4562 |
| 1.5343 | 167.0 | 3340 | 1.5535 | 0.4813 |
| 1.5343 | 168.0 | 3360 | 1.5526 | 0.4875 |
| 1.5343 | 169.0 | 3380 | 1.5301 | 0.5 |
| 1.5343 | 170.0 | 3400 | 1.5487 | 0.4813 |
| 1.5343 | 171.0 | 3420 | 1.5489 | 0.4625 |
| 1.5343 | 172.0 | 3440 | 1.5456 | 0.5 |
| 1.5343 | 173.0 | 3460 | 1.5355 | 0.5 |
| 1.5343 | 174.0 | 3480 | 1.5306 | 0.5 |
| 1.4643 | 175.0 | 3500 | 1.5436 | 0.4875 |
| 1.4643 | 176.0 | 3520 | 1.5241 | 0.4875 |
| 1.4643 | 177.0 | 3540 | 1.5293 | 0.475 |
| 1.4643 | 178.0 | 3560 | 1.5282 | 0.475 |
| 1.4643 | 179.0 | 3580 | 1.5350 | 0.4813 |
| 1.4643 | 180.0 | 3600 | 1.5207 | 0.4875 |
| 1.4643 | 181.0 | 3620 | 1.5274 | 0.4875 |
| 1.4643 | 182.0 | 3640 | 1.5239 | 0.5 |
| 1.4643 | 183.0 | 3660 | 1.5324 | 0.45 |
| 1.4643 | 184.0 | 3680 | 1.5137 | 0.4813 |
| 1.4643 | 185.0 | 3700 | 1.5265 | 0.4938 |
| 1.4643 | 186.0 | 3720 | 1.4986 | 0.5125 |
| 1.4643 | 187.0 | 3740 | 1.5102 | 0.525 |
| 1.4643 | 188.0 | 3760 | 1.5141 | 0.5062 |
| 1.4643 | 189.0 | 3780 | 1.4955 | 0.5125 |
| 1.4643 | 190.0 | 3800 | 1.5155 | 0.5125 |
| 1.4643 | 191.0 | 3820 | 1.5076 | 0.4938 |
| 1.4643 | 192.0 | 3840 | 1.5114 | 0.4875 |
| 1.4643 | 193.0 | 3860 | 1.5295 | 0.4938 |
| 1.4643 | 194.0 | 3880 | 1.5180 | 0.4688 |
| 1.4643 | 195.0 | 3900 | 1.5103 | 0.4875 |
| 1.4643 | 196.0 | 3920 | 1.5092 | 0.55 |
| 1.4643 | 197.0 | 3940 | 1.5009 | 0.5062 |
| 1.4643 | 198.0 | 3960 | 1.5320 | 0.475 |
| 1.4643 | 199.0 | 3980 | 1.4854 | 0.5375 |
| 1.4007 | 200.0 | 4000 | 1.5029 | 0.4875 |
| 1.4007 | 201.0 | 4020 | 1.4928 | 0.5125 |
| 1.4007 | 202.0 | 4040 | 1.4851 | 0.4875 |
| 1.4007 | 203.0 | 4060 | 1.4960 | 0.4938 |
| 1.4007 | 204.0 | 4080 | 1.4979 | 0.4437 |
| 1.4007 | 205.0 | 4100 | 1.4936 | 0.4813 |
| 1.4007 | 206.0 | 4120 | 1.4684 | 0.55 |
| 1.4007 | 207.0 | 4140 | 1.4656 | 0.525 |
| 1.4007 | 208.0 | 4160 | 1.4731 | 0.4813 |
| 1.4007 | 209.0 | 4180 | 1.4858 | 0.5312 |
| 1.4007 | 210.0 | 4200 | 1.4893 | 0.5062 |
| 1.4007 | 211.0 | 4220 | 1.4875 | 0.4938 |
| 1.4007 | 212.0 | 4240 | 1.4881 | 0.5375 |
| 1.4007 | 213.0 | 4260 | 1.4910 | 0.4813 |
| 1.4007 | 214.0 | 4280 | 1.4570 | 0.5062 |
| 1.4007 | 215.0 | 4300 | 1.4839 | 0.4938 |
| 1.4007 | 216.0 | 4320 | 1.4709 | 0.4688 |
| 1.4007 | 217.0 | 4340 | 1.4711 | 0.4875 |
| 1.4007 | 218.0 | 4360 | 1.4992 | 0.475 |
| 1.4007 | 219.0 | 4380 | 1.4701 | 0.5125 |
| 1.4007 | 220.0 | 4400 | 1.4727 | 0.4875 |
| 1.4007 | 221.0 | 4420 | 1.4632 | 0.5125 |
| 1.4007 | 222.0 | 4440 | 1.4602 | 0.475 |
| 1.4007 | 223.0 | 4460 | 1.4510 | 0.4938 |
| 1.4007 | 224.0 | 4480 | 1.4786 | 0.4562 |
| 1.3391 | 225.0 | 4500 | 1.4700 | 0.5125 |
| 1.3391 | 226.0 | 4520 | 1.4812 | 0.4813 |
| 1.3391 | 227.0 | 4540 | 1.4379 | 0.5375 |
| 1.3391 | 228.0 | 4560 | 1.4426 | 0.4938 |
| 1.3391 | 229.0 | 4580 | 1.4638 | 0.4938 |
| 1.3391 | 230.0 | 4600 | 1.4708 | 0.475 |
| 1.3391 | 231.0 | 4620 | 1.4293 | 0.55 |
| 1.3391 | 232.0 | 4640 | 1.4704 | 0.4813 |
| 1.3391 | 233.0 | 4660 | 1.4561 | 0.4938 |
| 1.3391 | 234.0 | 4680 | 1.4302 | 0.525 |
| 1.3391 | 235.0 | 4700 | 1.4806 | 0.5125 |
| 1.3391 | 236.0 | 4720 | 1.4402 | 0.4875 |
| 1.3391 | 237.0 | 4740 | 1.4356 | 0.5188 |
| 1.3391 | 238.0 | 4760 | 1.4475 | 0.4688 |
| 1.3391 | 239.0 | 4780 | 1.4304 | 0.525 |
| 1.3391 | 240.0 | 4800 | 1.4217 | 0.525 |
| 1.3391 | 241.0 | 4820 | 1.4536 | 0.5 |
| 1.3391 | 242.0 | 4840 | 1.4512 | 0.4938 |
| 1.3391 | 243.0 | 4860 | 1.4549 | 0.5312 |
| 1.3391 | 244.0 | 4880 | 1.4460 | 0.5062 |
| 1.3391 | 245.0 | 4900 | 1.4361 | 0.5062 |
| 1.3391 | 246.0 | 4920 | 1.4455 | 0.4688 |
| 1.3391 | 247.0 | 4940 | 1.4588 | 0.525 |
| 1.3391 | 248.0 | 4960 | 1.4207 | 0.5312 |
| 1.3391 | 249.0 | 4980 | 1.4436 | 0.5312 |
| 1.2821 | 250.0 | 5000 | 1.4417 | 0.5062 |
| 1.2821 | 251.0 | 5020 | 1.4264 | 0.5125 |
| 1.2821 | 252.0 | 5040 | 1.4495 | 0.4875 |
| 1.2821 | 253.0 | 5060 | 1.4312 | 0.5375 |
| 1.2821 | 254.0 | 5080 | 1.4334 | 0.5062 |
| 1.2821 | 255.0 | 5100 | 1.4420 | 0.4875 |
| 1.2821 | 256.0 | 5120 | 1.4313 | 0.5188 |
| 1.2821 | 257.0 | 5140 | 1.4239 | 0.5437 |
| 1.2821 | 258.0 | 5160 | 1.4457 | 0.4562 |
| 1.2821 | 259.0 | 5180 | 1.4051 | 0.5188 |
| 1.2821 | 260.0 | 5200 | 1.4374 | 0.5062 |
| 1.2821 | 261.0 | 5220 | 1.4089 | 0.5188 |
| 1.2821 | 262.0 | 5240 | 1.4430 | 0.4875 |
| 1.2821 | 263.0 | 5260 | 1.4066 | 0.5188 |
| 1.2821 | 264.0 | 5280 | 1.4144 | 0.5375 |
| 1.2821 | 265.0 | 5300 | 1.4440 | 0.5 |
| 1.2821 | 266.0 | 5320 | 1.4262 | 0.4813 |
| 1.2821 | 267.0 | 5340 | 1.4134 | 0.4938 |
| 1.2821 | 268.0 | 5360 | 1.4234 | 0.4938 |
| 1.2821 | 269.0 | 5380 | 1.4086 | 0.525 |
| 1.2821 | 270.0 | 5400 | 1.4297 | 0.4938 |
| 1.2821 | 271.0 | 5420 | 1.3996 | 0.525 |
| 1.2821 | 272.0 | 5440 | 1.4114 | 0.5125 |
| 1.2821 | 273.0 | 5460 | 1.4374 | 0.4688 |
| 1.2821 | 274.0 | 5480 | 1.4199 | 0.5062 |
| 1.2306 | 275.0 | 5500 | 1.3729 | 0.5062 |
| 1.2306 | 276.0 | 5520 | 1.3774 | 0.5375 |
| 1.2306 | 277.0 | 5540 | 1.4205 | 0.5312 |
| 1.2306 | 278.0 | 5560 | 1.3882 | 0.5 |
| 1.2306 | 279.0 | 5580 | 1.3965 | 0.5062 |
| 1.2306 | 280.0 | 5600 | 1.4020 | 0.5188 |
| 1.2306 | 281.0 | 5620 | 1.4054 | 0.5312 |
| 1.2306 | 282.0 | 5640 | 1.4037 | 0.5188 |
| 1.2306 | 283.0 | 5660 | 1.4333 | 0.4813 |
| 1.2306 | 284.0 | 5680 | 1.3888 | 0.5188 |
| 1.2306 | 285.0 | 5700 | 1.3991 | 0.4813 |
| 1.2306 | 286.0 | 5720 | 1.3864 | 0.5125 |
| 1.2306 | 287.0 | 5740 | 1.3277 | 0.575 |
| 1.2306 | 288.0 | 5760 | 1.4280 | 0.5 |
| 1.2306 | 289.0 | 5780 | 1.3820 | 0.5062 |
| 1.2306 | 290.0 | 5800 | 1.4017 | 0.5188 |
| 1.2306 | 291.0 | 5820 | 1.4204 | 0.4813 |
| 1.2306 | 292.0 | 5840 | 1.4218 | 0.5125 |
| 1.2306 | 293.0 | 5860 | 1.3961 | 0.4688 |
| 1.2306 | 294.0 | 5880 | 1.3957 | 0.5062 |
| 1.2306 | 295.0 | 5900 | 1.4075 | 0.5188 |
| 1.2306 | 296.0 | 5920 | 1.4089 | 0.475 |
| 1.2306 | 297.0 | 5940 | 1.3727 | 0.5375 |
| 1.2306 | 298.0 | 5960 | 1.3624 | 0.525 |
| 1.2306 | 299.0 | 5980 | 1.4073 | 0.5062 |
| 1.1758 | 300.0 | 6000 | 1.4273 | 0.5 |
| 1.1758 | 301.0 | 6020 | 1.3907 | 0.5 |
| 1.1758 | 302.0 | 6040 | 1.3874 | 0.5188 |
| 1.1758 | 303.0 | 6060 | 1.4323 | 0.4938 |
| 1.1758 | 304.0 | 6080 | 1.3582 | 0.5437 |
| 1.1758 | 305.0 | 6100 | 1.3618 | 0.5312 |
| 1.1758 | 306.0 | 6120 | 1.3544 | 0.5188 |
| 1.1758 | 307.0 | 6140 | 1.3816 | 0.5375 |
| 1.1758 | 308.0 | 6160 | 1.3749 | 0.4938 |
| 1.1758 | 309.0 | 6180 | 1.3924 | 0.4938 |
| 1.1758 | 310.0 | 6200 | 1.3781 | 0.525 |
| 1.1758 | 311.0 | 6220 | 1.3879 | 0.5125 |
| 1.1758 | 312.0 | 6240 | 1.3739 | 0.55 |
| 1.1758 | 313.0 | 6260 | 1.3662 | 0.5188 |
| 1.1758 | 314.0 | 6280 | 1.3764 | 0.575 |
| 1.1758 | 315.0 | 6300 | 1.3922 | 0.4938 |
| 1.1758 | 316.0 | 6320 | 1.3977 | 0.4625 |
| 1.1758 | 317.0 | 6340 | 1.4194 | 0.5 |
| 1.1758 | 318.0 | 6360 | 1.3465 | 0.5125 |
| 1.1758 | 319.0 | 6380 | 1.3564 | 0.4875 |
| 1.1758 | 320.0 | 6400 | 1.3582 | 0.5437 |
| 1.1758 | 321.0 | 6420 | 1.3829 | 0.5188 |
| 1.1758 | 322.0 | 6440 | 1.3504 | 0.5125 |
| 1.1758 | 323.0 | 6460 | 1.3658 | 0.4875 |
| 1.1758 | 324.0 | 6480 | 1.4254 | 0.45 |
| 1.1256 | 325.0 | 6500 | 1.4025 | 0.4875 |
| 1.1256 | 326.0 | 6520 | 1.3431 | 0.5125 |
| 1.1256 | 327.0 | 6540 | 1.3613 | 0.5 |
| 1.1256 | 328.0 | 6560 | 1.3376 | 0.55 |
| 1.1256 | 329.0 | 6580 | 1.3730 | 0.5 |
| 1.1256 | 330.0 | 6600 | 1.3936 | 0.5062 |
| 1.1256 | 331.0 | 6620 | 1.3716 | 0.5125 |
| 1.1256 | 332.0 | 6640 | 1.4044 | 0.475 |
| 1.1256 | 333.0 | 6660 | 1.3880 | 0.4625 |
| 1.1256 | 334.0 | 6680 | 1.3580 | 0.4688 |
| 1.1256 | 335.0 | 6700 | 1.3459 | 0.5437 |
| 1.1256 | 336.0 | 6720 | 1.3868 | 0.5312 |
| 1.1256 | 337.0 | 6740 | 1.3766 | 0.475 |
| 1.1256 | 338.0 | 6760 | 1.3843 | 0.4625 |
| 1.1256 | 339.0 | 6780 | 1.3802 | 0.4562 |
| 1.1256 | 340.0 | 6800 | 1.3698 | 0.4938 |
| 1.1256 | 341.0 | 6820 | 1.3657 | 0.4813 |
| 1.1256 | 342.0 | 6840 | 1.3622 | 0.5437 |
| 1.1256 | 343.0 | 6860 | 1.4055 | 0.4562 |
| 1.1256 | 344.0 | 6880 | 1.3406 | 0.5062 |
| 1.1256 | 345.0 | 6900 | 1.3517 | 0.5312 |
| 1.1256 | 346.0 | 6920 | 1.4012 | 0.4688 |
| 1.1256 | 347.0 | 6940 | 1.3131 | 0.5563 |
| 1.1256 | 348.0 | 6960 | 1.3677 | 0.4813 |
| 1.1256 | 349.0 | 6980 | 1.3656 | 0.5062 |
| 1.0749 | 350.0 | 7000 | 1.3582 | 0.4813 |
| 1.0749 | 351.0 | 7020 | 1.3342 | 0.5125 |
| 1.0749 | 352.0 | 7040 | 1.3413 | 0.5188 |
| 1.0749 | 353.0 | 7060 | 1.3541 | 0.5062 |
| 1.0749 | 354.0 | 7080 | 1.3551 | 0.5062 |
| 1.0749 | 355.0 | 7100 | 1.3364 | 0.5625 |
| 1.0749 | 356.0 | 7120 | 1.3885 | 0.4625 |
| 1.0749 | 357.0 | 7140 | 1.3327 | 0.5062 |
| 1.0749 | 358.0 | 7160 | 1.3570 | 0.5 |
| 1.0749 | 359.0 | 7180 | 1.3642 | 0.4875 |
| 1.0749 | 360.0 | 7200 | 1.3467 | 0.4813 |
| 1.0749 | 361.0 | 7220 | 1.3882 | 0.5062 |
| 1.0749 | 362.0 | 7240 | 1.3534 | 0.4875 |
| 1.0749 | 363.0 | 7260 | 1.3570 | 0.4688 |
| 1.0749 | 364.0 | 7280 | 1.3150 | 0.5437 |
| 1.0749 | 365.0 | 7300 | 1.3483 | 0.4688 |
| 1.0749 | 366.0 | 7320 | 1.3294 | 0.525 |
| 1.0749 | 367.0 | 7340 | 1.3453 | 0.475 |
| 1.0749 | 368.0 | 7360 | 1.3518 | 0.4625 |
| 1.0749 | 369.0 | 7380 | 1.3524 | 0.4875 |
| 1.0749 | 370.0 | 7400 | 1.3627 | 0.4625 |
| 1.0749 | 371.0 | 7420 | 1.2998 | 0.5188 |
| 1.0749 | 372.0 | 7440 | 1.3362 | 0.5125 |
| 1.0749 | 373.0 | 7460 | 1.3290 | 0.4938 |
| 1.0749 | 374.0 | 7480 | 1.3158 | 0.5312 |
| 1.0301 | 375.0 | 7500 | 1.3241 | 0.5375 |
| 1.0301 | 376.0 | 7520 | 1.3518 | 0.5062 |
| 1.0301 | 377.0 | 7540 | 1.3748 | 0.4938 |
| 1.0301 | 378.0 | 7560 | 1.3627 | 0.4688 |
| 1.0301 | 379.0 | 7580 | 1.2971 | 0.525 |
| 1.0301 | 380.0 | 7600 | 1.3200 | 0.525 |
| 1.0301 | 381.0 | 7620 | 1.3200 | 0.5062 |
| 1.0301 | 382.0 | 7640 | 1.3304 | 0.475 |
| 1.0301 | 383.0 | 7660 | 1.3425 | 0.5188 |
| 1.0301 | 384.0 | 7680 | 1.3378 | 0.5188 |
| 1.0301 | 385.0 | 7700 | 1.3380 | 0.525 |
| 1.0301 | 386.0 | 7720 | 1.3159 | 0.5125 |
| 1.0301 | 387.0 | 7740 | 1.3315 | 0.5312 |
| 1.0301 | 388.0 | 7760 | 1.3318 | 0.5 |
| 1.0301 | 389.0 | 7780 | 1.3150 | 0.5188 |
| 1.0301 | 390.0 | 7800 | 1.3109 | 0.4938 |
| 1.0301 | 391.0 | 7820 | 1.3457 | 0.4938 |
| 1.0301 | 392.0 | 7840 | 1.3507 | 0.55 |
| 1.0301 | 393.0 | 7860 | 1.3043 | 0.5125 |
| 1.0301 | 394.0 | 7880 | 1.2937 | 0.525 |
| 1.0301 | 395.0 | 7900 | 1.3179 | 0.5125 |
| 1.0301 | 396.0 | 7920 | 1.3434 | 0.4375 |
| 1.0301 | 397.0 | 7940 | 1.3440 | 0.5125 |
| 1.0301 | 398.0 | 7960 | 1.3630 | 0.5125 |
| 1.0301 | 399.0 | 7980 | 1.2913 | 0.5687 |
| 0.9833 | 400.0 | 8000 | 1.3338 | 0.5188 |
| 0.9833 | 401.0 | 8020 | 1.3367 | 0.5188 |
| 0.9833 | 402.0 | 8040 | 1.3656 | 0.5 |
| 0.9833 | 403.0 | 8060 | 1.3377 | 0.5312 |
| 0.9833 | 404.0 | 8080 | 1.3387 | 0.4688 |
| 0.9833 | 405.0 | 8100 | 1.3061 | 0.5125 |
| 0.9833 | 406.0 | 8120 | 1.3082 | 0.5 |
| 0.9833 | 407.0 | 8140 | 1.3284 | 0.4938 |
| 0.9833 | 408.0 | 8160 | 1.3212 | 0.5312 |
| 0.9833 | 409.0 | 8180 | 1.2809 | 0.5188 |
| 0.9833 | 410.0 | 8200 | 1.3295 | 0.525 |
| 0.9833 | 411.0 | 8220 | 1.3432 | 0.5 |
| 0.9833 | 412.0 | 8240 | 1.3173 | 0.5188 |
| 0.9833 | 413.0 | 8260 | 1.3131 | 0.525 |
| 0.9833 | 414.0 | 8280 | 1.2957 | 0.5312 |
| 0.9833 | 415.0 | 8300 | 1.3358 | 0.4813 |
| 0.9833 | 416.0 | 8320 | 1.3635 | 0.4562 |
| 0.9833 | 417.0 | 8340 | 1.3743 | 0.475 |
| 0.9833 | 418.0 | 8360 | 1.3320 | 0.5437 |
| 0.9833 | 419.0 | 8380 | 1.2942 | 0.5 |
| 0.9833 | 420.0 | 8400 | 1.2994 | 0.5 |
| 0.9833 | 421.0 | 8420 | 1.3073 | 0.4938 |
| 0.9833 | 422.0 | 8440 | 1.3108 | 0.5188 |
| 0.9833 | 423.0 | 8460 | 1.3514 | 0.5125 |
| 0.9833 | 424.0 | 8480 | 1.3062 | 0.4875 |
| 0.944 | 425.0 | 8500 | 1.3154 | 0.5062 |
| 0.944 | 426.0 | 8520 | 1.3105 | 0.5125 |
| 0.944 | 427.0 | 8540 | 1.3566 | 0.4938 |
| 0.944 | 428.0 | 8560 | 1.3100 | 0.5125 |
| 0.944 | 429.0 | 8580 | 1.3342 | 0.475 |
| 0.944 | 430.0 | 8600 | 1.3321 | 0.4875 |
| 0.944 | 431.0 | 8620 | 1.3492 | 0.5 |
| 0.944 | 432.0 | 8640 | 1.3199 | 0.4938 |
| 0.944 | 433.0 | 8660 | 1.2887 | 0.5437 |
| 0.944 | 434.0 | 8680 | 1.3167 | 0.4938 |
| 0.944 | 435.0 | 8700 | 1.3255 | 0.5062 |
| 0.944 | 436.0 | 8720 | 1.3197 | 0.5125 |
| 0.944 | 437.0 | 8740 | 1.2864 | 0.5312 |
| 0.944 | 438.0 | 8760 | 1.2985 | 0.5312 |
| 0.944 | 439.0 | 8780 | 1.3446 | 0.4625 |
| 0.944 | 440.0 | 8800 | 1.3052 | 0.5437 |
| 0.944 | 441.0 | 8820 | 1.3067 | 0.5125 |
| 0.944 | 442.0 | 8840 | 1.2849 | 0.5125 |
| 0.944 | 443.0 | 8860 | 1.3065 | 0.5188 |
| 0.944 | 444.0 | 8880 | 1.3487 | 0.4938 |
| 0.944 | 445.0 | 8900 | 1.3021 | 0.5312 |
| 0.944 | 446.0 | 8920 | 1.3258 | 0.4938 |
| 0.944 | 447.0 | 8940 | 1.3325 | 0.4562 |
| 0.944 | 448.0 | 8960 | 1.2986 | 0.4813 |
| 0.944 | 449.0 | 8980 | 1.2946 | 0.5563 |
| 0.8989 | 450.0 | 9000 | 1.3077 | 0.525 |
| 0.8989 | 451.0 | 9020 | 1.2984 | 0.5312 |
| 0.8989 | 452.0 | 9040 | 1.3039 | 0.5188 |
| 0.8989 | 453.0 | 9060 | 1.2911 | 0.5312 |
| 0.8989 | 454.0 | 9080 | 1.3159 | 0.5188 |
| 0.8989 | 455.0 | 9100 | 1.3500 | 0.4813 |
| 0.8989 | 456.0 | 9120 | 1.2778 | 0.5563 |
| 0.8989 | 457.0 | 9140 | 1.3115 | 0.5437 |
| 0.8989 | 458.0 | 9160 | 1.2777 | 0.525 |
| 0.8989 | 459.0 | 9180 | 1.2846 | 0.5312 |
| 0.8989 | 460.0 | 9200 | 1.3397 | 0.4875 |
| 0.8989 | 461.0 | 9220 | 1.3154 | 0.5188 |
| 0.8989 | 462.0 | 9240 | 1.2974 | 0.4813 |
| 0.8989 | 463.0 | 9260 | 1.3058 | 0.4938 |
| 0.8989 | 464.0 | 9280 | 1.3083 | 0.5125 |
| 0.8989 | 465.0 | 9300 | 1.3122 | 0.525 |
| 0.8989 | 466.0 | 9320 | 1.3139 | 0.5 |
| 0.8989 | 467.0 | 9340 | 1.3230 | 0.4938 |
| 0.8989 | 468.0 | 9360 | 1.3057 | 0.525 |
| 0.8989 | 469.0 | 9380 | 1.3508 | 0.5 |
| 0.8989 | 470.0 | 9400 | 1.2968 | 0.5437 |
| 0.8989 | 471.0 | 9420 | 1.2811 | 0.5563 |
| 0.8989 | 472.0 | 9440 | 1.3688 | 0.5062 |
| 0.8989 | 473.0 | 9460 | 1.3701 | 0.4875 |
| 0.8989 | 474.0 | 9480 | 1.3270 | 0.5 |
| 0.856 | 475.0 | 9500 | 1.3228 | 0.5062 |
| 0.856 | 476.0 | 9520 | 1.3234 | 0.4625 |
| 0.856 | 477.0 | 9540 | 1.3094 | 0.525 |
| 0.856 | 478.0 | 9560 | 1.3401 | 0.4938 |
| 0.856 | 479.0 | 9580 | 1.3226 | 0.4813 |
| 0.856 | 480.0 | 9600 | 1.3334 | 0.4813 |
| 0.856 | 481.0 | 9620 | 1.2958 | 0.525 |
| 0.856 | 482.0 | 9640 | 1.3299 | 0.4813 |
| 0.856 | 483.0 | 9660 | 1.2868 | 0.525 |
| 0.856 | 484.0 | 9680 | 1.3472 | 0.4813 |
| 0.856 | 485.0 | 9700 | 1.2546 | 0.5687 |
| 0.856 | 486.0 | 9720 | 1.3173 | 0.5 |
| 0.856 | 487.0 | 9740 | 1.2759 | 0.5188 |
| 0.856 | 488.0 | 9760 | 1.3288 | 0.4875 |
| 0.856 | 489.0 | 9780 | 1.3019 | 0.5312 |
| 0.856 | 490.0 | 9800 | 1.3412 | 0.4875 |
| 0.856 | 491.0 | 9820 | 1.2962 | 0.5188 |
| 0.856 | 492.0 | 9840 | 1.3327 | 0.4875 |
| 0.856 | 493.0 | 9860 | 1.3197 | 0.5125 |
| 0.856 | 494.0 | 9880 | 1.3252 | 0.4313 |
| 0.856 | 495.0 | 9900 | 1.2839 | 0.5062 |
| 0.856 | 496.0 | 9920 | 1.3358 | 0.5 |
| 0.856 | 497.0 | 9940 | 1.3704 | 0.4938 |
| 0.856 | 498.0 | 9960 | 1.2827 | 0.4938 |
| 0.856 | 499.0 | 9980 | 1.3523 | 0.4938 |
| 0.8229 | 500.0 | 10000 | 1.3195 | 0.4875 |
| 0.8229 | 501.0 | 10020 | 1.3336 | 0.5125 |
| 0.8229 | 502.0 | 10040 | 1.3251 | 0.4875 |
| 0.8229 | 503.0 | 10060 | 1.3660 | 0.4938 |
| 0.8229 | 504.0 | 10080 | 1.2874 | 0.55 |
| 0.8229 | 505.0 | 10100 | 1.2740 | 0.5312 |
| 0.8229 | 506.0 | 10120 | 1.3573 | 0.4813 |
| 0.8229 | 507.0 | 10140 | 1.3158 | 0.4813 |
| 0.8229 | 508.0 | 10160 | 1.3025 | 0.5312 |
| 0.8229 | 509.0 | 10180 | 1.3504 | 0.4875 |
| 0.8229 | 510.0 | 10200 | 1.3390 | 0.4875 |
| 0.8229 | 511.0 | 10220 | 1.3464 | 0.5125 |
| 0.8229 | 512.0 | 10240 | 1.3318 | 0.45 |
| 0.8229 | 513.0 | 10260 | 1.3449 | 0.4875 |
| 0.8229 | 514.0 | 10280 | 1.2774 | 0.5312 |
| 0.8229 | 515.0 | 10300 | 1.3072 | 0.4813 |
| 0.8229 | 516.0 | 10320 | 1.3315 | 0.4625 |
| 0.8229 | 517.0 | 10340 | 1.3485 | 0.4813 |
| 0.8229 | 518.0 | 10360 | 1.2751 | 0.5437 |
| 0.8229 | 519.0 | 10380 | 1.2877 | 0.5375 |
| 0.8229 | 520.0 | 10400 | 1.3057 | 0.5188 |
| 0.8229 | 521.0 | 10420 | 1.3414 | 0.4688 |
| 0.8229 | 522.0 | 10440 | 1.3252 | 0.475 |
| 0.8229 | 523.0 | 10460 | 1.3209 | 0.475 |
| 0.8229 | 524.0 | 10480 | 1.2846 | 0.5125 |
| 0.7853 | 525.0 | 10500 | 1.3378 | 0.4688 |
| 0.7853 | 526.0 | 10520 | 1.2980 | 0.5312 |
| 0.7853 | 527.0 | 10540 | 1.2855 | 0.4938 |
| 0.7853 | 528.0 | 10560 | 1.2948 | 0.5125 |
| 0.7853 | 529.0 | 10580 | 1.3149 | 0.5125 |
| 0.7853 | 530.0 | 10600 | 1.3239 | 0.55 |
| 0.7853 | 531.0 | 10620 | 1.3165 | 0.5188 |
| 0.7853 | 532.0 | 10640 | 1.3015 | 0.5125 |
| 0.7853 | 533.0 | 10660 | 1.3741 | 0.475 |
| 0.7853 | 534.0 | 10680 | 1.3430 | 0.525 |
| 0.7853 | 535.0 | 10700 | 1.2847 | 0.5062 |
| 0.7853 | 536.0 | 10720 | 1.3284 | 0.5062 |
| 0.7853 | 537.0 | 10740 | 1.2532 | 0.55 |
| 0.7853 | 538.0 | 10760 | 1.2771 | 0.5437 |
| 0.7853 | 539.0 | 10780 | 1.2977 | 0.5375 |
| 0.7853 | 540.0 | 10800 | 1.3248 | 0.4938 |
| 0.7853 | 541.0 | 10820 | 1.2509 | 0.5437 |
| 0.7853 | 542.0 | 10840 | 1.2954 | 0.5062 |
| 0.7853 | 543.0 | 10860 | 1.3454 | 0.5 |
| 0.7853 | 544.0 | 10880 | 1.3356 | 0.5188 |
| 0.7853 | 545.0 | 10900 | 1.3251 | 0.4938 |
| 0.7853 | 546.0 | 10920 | 1.3035 | 0.5062 |
| 0.7853 | 547.0 | 10940 | 1.2584 | 0.5625 |
| 0.7853 | 548.0 | 10960 | 1.2613 | 0.5188 |
| 0.7853 | 549.0 | 10980 | 1.2917 | 0.525 |
| 0.749 | 550.0 | 11000 | 1.3127 | 0.5125 |
| 0.749 | 551.0 | 11020 | 1.3570 | 0.4688 |
| 0.749 | 552.0 | 11040 | 1.3004 | 0.5437 |
| 0.749 | 553.0 | 11060 | 1.3239 | 0.475 |
| 0.749 | 554.0 | 11080 | 1.2988 | 0.5188 |
| 0.749 | 555.0 | 11100 | 1.3179 | 0.475 |
| 0.749 | 556.0 | 11120 | 1.2800 | 0.525 |
| 0.749 | 557.0 | 11140 | 1.3372 | 0.4625 |
| 0.749 | 558.0 | 11160 | 1.3328 | 0.475 |
| 0.749 | 559.0 | 11180 | 1.3129 | 0.4938 |
| 0.749 | 560.0 | 11200 | 1.3230 | 0.4688 |
| 0.749 | 561.0 | 11220 | 1.3589 | 0.4938 |
| 0.749 | 562.0 | 11240 | 1.3465 | 0.4938 |
| 0.749 | 563.0 | 11260 | 1.3101 | 0.5062 |
| 0.749 | 564.0 | 11280 | 1.3420 | 0.4688 |
| 0.749 | 565.0 | 11300 | 1.2908 | 0.5312 |
| 0.749 | 566.0 | 11320 | 1.2794 | 0.525 |
| 0.749 | 567.0 | 11340 | 1.2730 | 0.5188 |
| 0.749 | 568.0 | 11360 | 1.3563 | 0.4688 |
| 0.749 | 569.0 | 11380 | 1.2933 | 0.5 |
| 0.749 | 570.0 | 11400 | 1.2991 | 0.5125 |
| 0.749 | 571.0 | 11420 | 1.3225 | 0.4875 |
| 0.749 | 572.0 | 11440 | 1.3333 | 0.5 |
| 0.749 | 573.0 | 11460 | 1.3181 | 0.5 |
| 0.749 | 574.0 | 11480 | 1.2859 | 0.525 |
| 0.7191 | 575.0 | 11500 | 1.2650 | 0.5062 |
| 0.7191 | 576.0 | 11520 | 1.3280 | 0.5062 |
| 0.7191 | 577.0 | 11540 | 1.3104 | 0.475 |
| 0.7191 | 578.0 | 11560 | 1.2930 | 0.5125 |
| 0.7191 | 579.0 | 11580 | 1.2975 | 0.5125 |
| 0.7191 | 580.0 | 11600 | 1.3229 | 0.5125 |
| 0.7191 | 581.0 | 11620 | 1.2711 | 0.55 |
| 0.7191 | 582.0 | 11640 | 1.3605 | 0.4625 |
| 0.7191 | 583.0 | 11660 | 1.3356 | 0.4813 |
| 0.7191 | 584.0 | 11680 | 1.3470 | 0.5062 |
| 0.7191 | 585.0 | 11700 | 1.3186 | 0.4938 |
| 0.7191 | 586.0 | 11720 | 1.2996 | 0.5312 |
| 0.7191 | 587.0 | 11740 | 1.2796 | 0.5188 |
| 0.7191 | 588.0 | 11760 | 1.2969 | 0.5312 |
| 0.7191 | 589.0 | 11780 | 1.3024 | 0.4938 |
| 0.7191 | 590.0 | 11800 | 1.3289 | 0.4938 |
| 0.7191 | 591.0 | 11820 | 1.2850 | 0.5062 |
| 0.7191 | 592.0 | 11840 | 1.3677 | 0.4625 |
| 0.7191 | 593.0 | 11860 | 1.3232 | 0.4875 |
| 0.7191 | 594.0 | 11880 | 1.2918 | 0.475 |
| 0.7191 | 595.0 | 11900 | 1.3555 | 0.4437 |
| 0.7191 | 596.0 | 11920 | 1.3612 | 0.4813 |
| 0.7191 | 597.0 | 11940 | 1.3379 | 0.4813 |
| 0.7191 | 598.0 | 11960 | 1.3612 | 0.4938 |
| 0.7191 | 599.0 | 11980 | 1.3186 | 0.5 |
| 0.6835 | 600.0 | 12000 | 1.3640 | 0.4625 |
| 0.6835 | 601.0 | 12020 | 1.3228 | 0.5188 |
| 0.6835 | 602.0 | 12040 | 1.3222 | 0.45 |
| 0.6835 | 603.0 | 12060 | 1.3341 | 0.4875 |
| 0.6835 | 604.0 | 12080 | 1.2262 | 0.575 |
| 0.6835 | 605.0 | 12100 | 1.3317 | 0.4688 |
| 0.6835 | 606.0 | 12120 | 1.3410 | 0.475 |
| 0.6835 | 607.0 | 12140 | 1.3149 | 0.5188 |
| 0.6835 | 608.0 | 12160 | 1.3382 | 0.4938 |
| 0.6835 | 609.0 | 12180 | 1.3494 | 0.5125 |
| 0.6835 | 610.0 | 12200 | 1.2871 | 0.5062 |
| 0.6835 | 611.0 | 12220 | 1.3171 | 0.5 |
| 0.6835 | 612.0 | 12240 | 1.2573 | 0.5125 |
| 0.6835 | 613.0 | 12260 | 1.2936 | 0.5188 |
| 0.6835 | 614.0 | 12280 | 1.3191 | 0.475 |
| 0.6835 | 615.0 | 12300 | 1.3019 | 0.525 |
| 0.6835 | 616.0 | 12320 | 1.2226 | 0.5687 |
| 0.6835 | 617.0 | 12340 | 1.3227 | 0.5188 |
| 0.6835 | 618.0 | 12360 | 1.2899 | 0.5188 |
| 0.6835 | 619.0 | 12380 | 1.2729 | 0.525 |
| 0.6835 | 620.0 | 12400 | 1.2784 | 0.55 |
| 0.6835 | 621.0 | 12420 | 1.3419 | 0.4875 |
| 0.6835 | 622.0 | 12440 | 1.3038 | 0.475 |
| 0.6835 | 623.0 | 12460 | 1.2843 | 0.5188 |
| 0.6835 | 624.0 | 12480 | 1.3475 | 0.4562 |
| 0.6589 | 625.0 | 12500 | 1.3167 | 0.5 |
| 0.6589 | 626.0 | 12520 | 1.3463 | 0.4625 |
| 0.6589 | 627.0 | 12540 | 1.2807 | 0.55 |
| 0.6589 | 628.0 | 12560 | 1.3267 | 0.45 |
| 0.6589 | 629.0 | 12580 | 1.3196 | 0.525 |
| 0.6589 | 630.0 | 12600 | 1.2697 | 0.5062 |
| 0.6589 | 631.0 | 12620 | 1.3384 | 0.4875 |
| 0.6589 | 632.0 | 12640 | 1.3414 | 0.4813 |
| 0.6589 | 633.0 | 12660 | 1.3142 | 0.5312 |
| 0.6589 | 634.0 | 12680 | 1.3511 | 0.4562 |
| 0.6589 | 635.0 | 12700 | 1.3485 | 0.4625 |
| 0.6589 | 636.0 | 12720 | 1.3063 | 0.5 |
| 0.6589 | 637.0 | 12740 | 1.3262 | 0.475 |
| 0.6589 | 638.0 | 12760 | 1.3484 | 0.4938 |
| 0.6589 | 639.0 | 12780 | 1.3035 | 0.4875 |
| 0.6589 | 640.0 | 12800 | 1.3407 | 0.4813 |
| 0.6589 | 641.0 | 12820 | 1.3223 | 0.4938 |
| 0.6589 | 642.0 | 12840 | 1.3175 | 0.525 |
| 0.6589 | 643.0 | 12860 | 1.3643 | 0.4625 |
| 0.6589 | 644.0 | 12880 | 1.3467 | 0.4875 |
| 0.6589 | 645.0 | 12900 | 1.2972 | 0.5 |
| 0.6589 | 646.0 | 12920 | 1.3128 | 0.4938 |
| 0.6589 | 647.0 | 12940 | 1.2584 | 0.55 |
| 0.6589 | 648.0 | 12960 | 1.3532 | 0.5188 |
| 0.6589 | 649.0 | 12980 | 1.3304 | 0.5 |
| 0.6247 | 650.0 | 13000 | 1.2754 | 0.5188 |
| 0.6247 | 651.0 | 13020 | 1.3032 | 0.5437 |
| 0.6247 | 652.0 | 13040 | 1.3035 | 0.5312 |
| 0.6247 | 653.0 | 13060 | 1.2854 | 0.525 |
| 0.6247 | 654.0 | 13080 | 1.3248 | 0.4875 |
| 0.6247 | 655.0 | 13100 | 1.2884 | 0.4813 |
| 0.6247 | 656.0 | 13120 | 1.3079 | 0.4938 |
| 0.6247 | 657.0 | 13140 | 1.2974 | 0.5125 |
| 0.6247 | 658.0 | 13160 | 1.2979 | 0.5125 |
| 0.6247 | 659.0 | 13180 | 1.3101 | 0.5062 |
| 0.6247 | 660.0 | 13200 | 1.3081 | 0.5 |
| 0.6247 | 661.0 | 13220 | 1.3606 | 0.4875 |
| 0.6247 | 662.0 | 13240 | 1.2752 | 0.5563 |
| 0.6247 | 663.0 | 13260 | 1.2443 | 0.5625 |
| 0.6247 | 664.0 | 13280 | 1.3346 | 0.4875 |
| 0.6247 | 665.0 | 13300 | 1.2043 | 0.5625 |
| 0.6247 | 666.0 | 13320 | 1.2867 | 0.4875 |
| 0.6247 | 667.0 | 13340 | 1.3082 | 0.5 |
| 0.6247 | 668.0 | 13360 | 1.3648 | 0.5188 |
| 0.6247 | 669.0 | 13380 | 1.2744 | 0.5125 |
| 0.6247 | 670.0 | 13400 | 1.2452 | 0.5125 |
| 0.6247 | 671.0 | 13420 | 1.2823 | 0.5 |
| 0.6247 | 672.0 | 13440 | 1.3093 | 0.5125 |
| 0.6247 | 673.0 | 13460 | 1.3754 | 0.4938 |
| 0.6247 | 674.0 | 13480 | 1.3413 | 0.4688 |
| 0.5993 | 675.0 | 13500 | 1.3341 | 0.4625 |
| 0.5993 | 676.0 | 13520 | 1.2906 | 0.525 |
| 0.5993 | 677.0 | 13540 | 1.3119 | 0.5062 |
| 0.5993 | 678.0 | 13560 | 1.3415 | 0.4813 |
| 0.5993 | 679.0 | 13580 | 1.3082 | 0.5 |
| 0.5993 | 680.0 | 13600 | 1.2932 | 0.5312 |
| 0.5993 | 681.0 | 13620 | 1.2672 | 0.5188 |
| 0.5993 | 682.0 | 13640 | 1.2797 | 0.5125 |
| 0.5993 | 683.0 | 13660 | 1.2356 | 0.5312 |
| 0.5993 | 684.0 | 13680 | 1.3172 | 0.5 |
| 0.5993 | 685.0 | 13700 | 1.2969 | 0.5437 |
| 0.5993 | 686.0 | 13720 | 1.3316 | 0.475 |
| 0.5993 | 687.0 | 13740 | 1.3530 | 0.4625 |
| 0.5993 | 688.0 | 13760 | 1.2476 | 0.5437 |
| 0.5993 | 689.0 | 13780 | 1.3471 | 0.5062 |
| 0.5993 | 690.0 | 13800 | 1.3207 | 0.525 |
| 0.5993 | 691.0 | 13820 | 1.2905 | 0.5 |
| 0.5993 | 692.0 | 13840 | 1.2277 | 0.5375 |
| 0.5993 | 693.0 | 13860 | 1.2620 | 0.5312 |
| 0.5993 | 694.0 | 13880 | 1.2902 | 0.525 |
| 0.5993 | 695.0 | 13900 | 1.2927 | 0.5062 |
| 0.5993 | 696.0 | 13920 | 1.3005 | 0.5125 |
| 0.5993 | 697.0 | 13940 | 1.2722 | 0.5125 |
| 0.5993 | 698.0 | 13960 | 1.4361 | 0.4375 |
| 0.5993 | 699.0 | 13980 | 1.3325 | 0.4875 |
| 0.576 | 700.0 | 14000 | 1.3448 | 0.4688 |
| 0.576 | 701.0 | 14020 | 1.3347 | 0.5062 |
| 0.576 | 702.0 | 14040 | 1.3598 | 0.5 |
| 0.576 | 703.0 | 14060 | 1.3541 | 0.4813 |
| 0.576 | 704.0 | 14080 | 1.2408 | 0.5062 |
| 0.576 | 705.0 | 14100 | 1.3499 | 0.475 |
| 0.576 | 706.0 | 14120 | 1.3265 | 0.45 |
| 0.576 | 707.0 | 14140 | 1.2756 | 0.5062 |
| 0.576 | 708.0 | 14160 | 1.3324 | 0.4813 |
| 0.576 | 709.0 | 14180 | 1.3072 | 0.4875 |
| 0.576 | 710.0 | 14200 | 1.2447 | 0.5188 |
| 0.576 | 711.0 | 14220 | 1.3463 | 0.5062 |
| 0.576 | 712.0 | 14240 | 1.3573 | 0.475 |
| 0.576 | 713.0 | 14260 | 1.3616 | 0.4688 |
| 0.576 | 714.0 | 14280 | 1.2648 | 0.5437 |
| 0.576 | 715.0 | 14300 | 1.3533 | 0.5 |
| 0.576 | 716.0 | 14320 | 1.3224 | 0.4875 |
| 0.576 | 717.0 | 14340 | 1.2633 | 0.5062 |
| 0.576 | 718.0 | 14360 | 1.3340 | 0.4938 |
| 0.576 | 719.0 | 14380 | 1.3129 | 0.5312 |
| 0.576 | 720.0 | 14400 | 1.3004 | 0.5125 |
| 0.576 | 721.0 | 14420 | 1.3615 | 0.475 |
| 0.576 | 722.0 | 14440 | 1.3512 | 0.5062 |
| 0.576 | 723.0 | 14460 | 1.2956 | 0.525 |
| 0.576 | 724.0 | 14480 | 1.3522 | 0.5 |
| 0.5537 | 725.0 | 14500 | 1.3350 | 0.5062 |
| 0.5537 | 726.0 | 14520 | 1.3446 | 0.4938 |
| 0.5537 | 727.0 | 14540 | 1.3539 | 0.5062 |
| 0.5537 | 728.0 | 14560 | 1.3256 | 0.525 |
| 0.5537 | 729.0 | 14580 | 1.3233 | 0.475 |
| 0.5537 | 730.0 | 14600 | 1.3513 | 0.5 |
| 0.5537 | 731.0 | 14620 | 1.3180 | 0.5125 |
| 0.5537 | 732.0 | 14640 | 1.3848 | 0.4813 |
| 0.5537 | 733.0 | 14660 | 1.3244 | 0.4875 |
| 0.5537 | 734.0 | 14680 | 1.2924 | 0.5188 |
| 0.5537 | 735.0 | 14700 | 1.2813 | 0.4875 |
| 0.5537 | 736.0 | 14720 | 1.3493 | 0.5 |
| 0.5537 | 737.0 | 14740 | 1.2850 | 0.5312 |
| 0.5537 | 738.0 | 14760 | 1.3904 | 0.5125 |
| 0.5537 | 739.0 | 14780 | 1.2986 | 0.5188 |
| 0.5537 | 740.0 | 14800 | 1.3292 | 0.5062 |
| 0.5537 | 741.0 | 14820 | 1.3138 | 0.5125 |
| 0.5537 | 742.0 | 14840 | 1.3240 | 0.475 |
| 0.5537 | 743.0 | 14860 | 1.3288 | 0.4688 |
| 0.5537 | 744.0 | 14880 | 1.3290 | 0.5188 |
| 0.5537 | 745.0 | 14900 | 1.3662 | 0.5125 |
| 0.5537 | 746.0 | 14920 | 1.3504 | 0.4938 |
| 0.5537 | 747.0 | 14940 | 1.2914 | 0.5312 |
| 0.5537 | 748.0 | 14960 | 1.3464 | 0.475 |
| 0.5537 | 749.0 | 14980 | 1.3561 | 0.4813 |
| 0.5294 | 750.0 | 15000 | 1.3434 | 0.5125 |
| 0.5294 | 751.0 | 15020 | 1.4302 | 0.4562 |
| 0.5294 | 752.0 | 15040 | 1.3248 | 0.5125 |
| 0.5294 | 753.0 | 15060 | 1.3248 | 0.5125 |
| 0.5294 | 754.0 | 15080 | 1.3308 | 0.5375 |
| 0.5294 | 755.0 | 15100 | 1.3946 | 0.5 |
| 0.5294 | 756.0 | 15120 | 1.4397 | 0.4375 |
| 0.5294 | 757.0 | 15140 | 1.3823 | 0.4813 |
| 0.5294 | 758.0 | 15160 | 1.3018 | 0.5188 |
| 0.5294 | 759.0 | 15180 | 1.3330 | 0.5375 |
| 0.5294 | 760.0 | 15200 | 1.3385 | 0.4938 |
| 0.5294 | 761.0 | 15220 | 1.3248 | 0.4938 |
| 0.5294 | 762.0 | 15240 | 1.2794 | 0.525 |
| 0.5294 | 763.0 | 15260 | 1.2904 | 0.5125 |
| 0.5294 | 764.0 | 15280 | 1.3265 | 0.5188 |
| 0.5294 | 765.0 | 15300 | 1.3163 | 0.5188 |
| 0.5294 | 766.0 | 15320 | 1.3473 | 0.5 |
| 0.5294 | 767.0 | 15340 | 1.3061 | 0.5 |
| 0.5294 | 768.0 | 15360 | 1.3968 | 0.4813 |
| 0.5294 | 769.0 | 15380 | 1.3855 | 0.475 |
| 0.5294 | 770.0 | 15400 | 1.3763 | 0.4875 |
| 0.5294 | 771.0 | 15420 | 1.3754 | 0.4437 |
| 0.5294 | 772.0 | 15440 | 1.3707 | 0.4688 |
| 0.5294 | 773.0 | 15460 | 1.4068 | 0.4813 |
| 0.5294 | 774.0 | 15480 | 1.3008 | 0.5062 |
| 0.5186 | 775.0 | 15500 | 1.3429 | 0.4875 |
| 0.5186 | 776.0 | 15520 | 1.2058 | 0.5563 |
| 0.5186 | 777.0 | 15540 | 1.3625 | 0.4875 |
| 0.5186 | 778.0 | 15560 | 1.3213 | 0.4688 |
| 0.5186 | 779.0 | 15580 | 1.3350 | 0.5062 |
| 0.5186 | 780.0 | 15600 | 1.2739 | 0.5312 |
| 0.5186 | 781.0 | 15620 | 1.2801 | 0.5125 |
| 0.5186 | 782.0 | 15640 | 1.3048 | 0.4875 |
| 0.5186 | 783.0 | 15660 | 1.2901 | 0.5312 |
| 0.5186 | 784.0 | 15680 | 1.3093 | 0.5062 |
| 0.5186 | 785.0 | 15700 | 1.2931 | 0.5062 |
| 0.5186 | 786.0 | 15720 | 1.3164 | 0.4813 |
| 0.5186 | 787.0 | 15740 | 1.3310 | 0.5188 |
| 0.5186 | 788.0 | 15760 | 1.3160 | 0.4938 |
| 0.5186 | 789.0 | 15780 | 1.3739 | 0.475 |
| 0.5186 | 790.0 | 15800 | 1.3391 | 0.5 |
| 0.5186 | 791.0 | 15820 | 1.3376 | 0.4938 |
| 0.5186 | 792.0 | 15840 | 1.4375 | 0.4313 |
| 0.5186 | 793.0 | 15860 | 1.3268 | 0.5188 |
| 0.5186 | 794.0 | 15880 | 1.3147 | 0.5125 |
| 0.5186 | 795.0 | 15900 | 1.3460 | 0.5 |
| 0.5186 | 796.0 | 15920 | 1.2923 | 0.5062 |
| 0.5186 | 797.0 | 15940 | 1.2991 | 0.5125 |
| 0.5186 | 798.0 | 15960 | 1.3307 | 0.525 |
| 0.5186 | 799.0 | 15980 | 1.3394 | 0.5062 |
| 0.4947 | 800.0 | 16000 | 1.2811 | 0.5062 |
| 0.4947 | 801.0 | 16020 | 1.3949 | 0.4625 |
| 0.4947 | 802.0 | 16040 | 1.4106 | 0.4375 |
| 0.4947 | 803.0 | 16060 | 1.3262 | 0.4938 |
| 0.4947 | 804.0 | 16080 | 1.3248 | 0.5062 |
| 0.4947 | 805.0 | 16100 | 1.2709 | 0.5312 |
| 0.4947 | 806.0 | 16120 | 1.2974 | 0.5375 |
| 0.4947 | 807.0 | 16140 | 1.3104 | 0.5375 |
| 0.4947 | 808.0 | 16160 | 1.2601 | 0.5437 |
| 0.4947 | 809.0 | 16180 | 1.2655 | 0.4813 |
| 0.4947 | 810.0 | 16200 | 1.2922 | 0.5062 |
| 0.4947 | 811.0 | 16220 | 1.3207 | 0.5 |
| 0.4947 | 812.0 | 16240 | 1.2712 | 0.4875 |
| 0.4947 | 813.0 | 16260 | 1.3517 | 0.4813 |
| 0.4947 | 814.0 | 16280 | 1.3267 | 0.5 |
| 0.4947 | 815.0 | 16300 | 1.3889 | 0.4938 |
| 0.4947 | 816.0 | 16320 | 1.3924 | 0.5 |
| 0.4947 | 817.0 | 16340 | 1.3344 | 0.5125 |
| 0.4947 | 818.0 | 16360 | 1.3738 | 0.475 |
| 0.4947 | 819.0 | 16380 | 1.2914 | 0.55 |
| 0.4947 | 820.0 | 16400 | 1.3369 | 0.5312 |
| 0.4947 | 821.0 | 16420 | 1.3409 | 0.5375 |
| 0.4947 | 822.0 | 16440 | 1.3930 | 0.4688 |
| 0.4947 | 823.0 | 16460 | 1.3333 | 0.4813 |
| 0.4947 | 824.0 | 16480 | 1.3445 | 0.5062 |
| 0.4718 | 825.0 | 16500 | 1.3190 | 0.5312 |
| 0.4718 | 826.0 | 16520 | 1.3186 | 0.5375 |
| 0.4718 | 827.0 | 16540 | 1.3610 | 0.4625 |
| 0.4718 | 828.0 | 16560 | 1.3106 | 0.4938 |
| 0.4718 | 829.0 | 16580 | 1.4310 | 0.4437 |
| 0.4718 | 830.0 | 16600 | 1.3082 | 0.5188 |
| 0.4718 | 831.0 | 16620 | 1.3841 | 0.4875 |
| 0.4718 | 832.0 | 16640 | 1.4182 | 0.4813 |
| 0.4718 | 833.0 | 16660 | 1.3371 | 0.5125 |
| 0.4718 | 834.0 | 16680 | 1.3046 | 0.5062 |
| 0.4718 | 835.0 | 16700 | 1.3099 | 0.5125 |
| 0.4718 | 836.0 | 16720 | 1.3696 | 0.5062 |
| 0.4718 | 837.0 | 16740 | 1.3304 | 0.4875 |
| 0.4718 | 838.0 | 16760 | 1.2898 | 0.55 |
| 0.4718 | 839.0 | 16780 | 1.3310 | 0.4938 |
| 0.4718 | 840.0 | 16800 | 1.3072 | 0.5 |
| 0.4718 | 841.0 | 16820 | 1.2851 | 0.5375 |
| 0.4718 | 842.0 | 16840 | 1.3404 | 0.475 |
| 0.4718 | 843.0 | 16860 | 1.3778 | 0.5 |
| 0.4718 | 844.0 | 16880 | 1.3602 | 0.4813 |
| 0.4718 | 845.0 | 16900 | 1.3457 | 0.55 |
| 0.4718 | 846.0 | 16920 | 1.4227 | 0.4625 |
| 0.4718 | 847.0 | 16940 | 1.3969 | 0.475 |
| 0.4718 | 848.0 | 16960 | 1.3719 | 0.4813 |
| 0.4718 | 849.0 | 16980 | 1.3967 | 0.4938 |
| 0.4617 | 850.0 | 17000 | 1.3255 | 0.4938 |
| 0.4617 | 851.0 | 17020 | 1.3764 | 0.5312 |
| 0.4617 | 852.0 | 17040 | 1.3187 | 0.5062 |
| 0.4617 | 853.0 | 17060 | 1.3389 | 0.5125 |
| 0.4617 | 854.0 | 17080 | 1.2804 | 0.5375 |
| 0.4617 | 855.0 | 17100 | 1.3542 | 0.4875 |
| 0.4617 | 856.0 | 17120 | 1.3040 | 0.4875 |
| 0.4617 | 857.0 | 17140 | 1.3651 | 0.475 |
| 0.4617 | 858.0 | 17160 | 1.3485 | 0.475 |
| 0.4617 | 859.0 | 17180 | 1.3163 | 0.5188 |
| 0.4617 | 860.0 | 17200 | 1.3014 | 0.5437 |
| 0.4617 | 861.0 | 17220 | 1.2816 | 0.5437 |
| 0.4617 | 862.0 | 17240 | 1.2966 | 0.5188 |
| 0.4617 | 863.0 | 17260 | 1.3530 | 0.475 |
| 0.4617 | 864.0 | 17280 | 1.3758 | 0.4375 |
| 0.4617 | 865.0 | 17300 | 1.3664 | 0.5062 |
| 0.4617 | 866.0 | 17320 | 1.3368 | 0.5 |
| 0.4617 | 867.0 | 17340 | 1.4091 | 0.5 |
| 0.4617 | 868.0 | 17360 | 1.3439 | 0.5312 |
| 0.4617 | 869.0 | 17380 | 1.2499 | 0.525 |
| 0.4617 | 870.0 | 17400 | 1.3469 | 0.525 |
| 0.4617 | 871.0 | 17420 | 1.3525 | 0.4938 |
| 0.4617 | 872.0 | 17440 | 1.3385 | 0.4813 |
| 0.4617 | 873.0 | 17460 | 1.3979 | 0.5188 |
| 0.4617 | 874.0 | 17480 | 1.3768 | 0.5125 |
| 0.4434 | 875.0 | 17500 | 1.2682 | 0.5125 |
| 0.4434 | 876.0 | 17520 | 1.3168 | 0.5188 |
| 0.4434 | 877.0 | 17540 | 1.2967 | 0.5312 |
| 0.4434 | 878.0 | 17560 | 1.3301 | 0.4688 |
| 0.4434 | 879.0 | 17580 | 1.3409 | 0.525 |
| 0.4434 | 880.0 | 17600 | 1.3483 | 0.5188 |
| 0.4434 | 881.0 | 17620 | 1.4506 | 0.4437 |
| 0.4434 | 882.0 | 17640 | 1.3356 | 0.5375 |
| 0.4434 | 883.0 | 17660 | 1.3719 | 0.5188 |
| 0.4434 | 884.0 | 17680 | 1.3545 | 0.5125 |
| 0.4434 | 885.0 | 17700 | 1.3318 | 0.4938 |
| 0.4434 | 886.0 | 17720 | 1.3453 | 0.475 |
| 0.4434 | 887.0 | 17740 | 1.3097 | 0.5125 |
| 0.4434 | 888.0 | 17760 | 1.3564 | 0.5062 |
| 0.4434 | 889.0 | 17780 | 1.3002 | 0.525 |
| 0.4434 | 890.0 | 17800 | 1.3060 | 0.4938 |
| 0.4434 | 891.0 | 17820 | 1.3436 | 0.5 |
| 0.4434 | 892.0 | 17840 | 1.3259 | 0.5375 |
| 0.4434 | 893.0 | 17860 | 1.3651 | 0.5062 |
| 0.4434 | 894.0 | 17880 | 1.3443 | 0.5062 |
| 0.4434 | 895.0 | 17900 | 1.2944 | 0.5312 |
| 0.4434 | 896.0 | 17920 | 1.3400 | 0.5062 |
| 0.4434 | 897.0 | 17940 | 1.2620 | 0.5625 |
| 0.4434 | 898.0 | 17960 | 1.3480 | 0.4813 |
| 0.4434 | 899.0 | 17980 | 1.3579 | 0.4813 |
| 0.4291 | 900.0 | 18000 | 1.3011 | 0.5188 |
| 0.4291 | 901.0 | 18020 | 1.4301 | 0.45 |
| 0.4291 | 902.0 | 18040 | 1.3234 | 0.5062 |
| 0.4291 | 903.0 | 18060 | 1.3902 | 0.475 |
| 0.4291 | 904.0 | 18080 | 1.3479 | 0.5062 |
| 0.4291 | 905.0 | 18100 | 1.3450 | 0.5437 |
| 0.4291 | 906.0 | 18120 | 1.2925 | 0.5625 |
| 0.4291 | 907.0 | 18140 | 1.3148 | 0.4938 |
| 0.4291 | 908.0 | 18160 | 1.3354 | 0.5 |
| 0.4291 | 909.0 | 18180 | 1.2772 | 0.5625 |
| 0.4291 | 910.0 | 18200 | 1.3245 | 0.5 |
| 0.4291 | 911.0 | 18220 | 1.3660 | 0.525 |
| 0.4291 | 912.0 | 18240 | 1.3492 | 0.4875 |
| 0.4291 | 913.0 | 18260 | 1.3098 | 0.5062 |
| 0.4291 | 914.0 | 18280 | 1.3008 | 0.5125 |
| 0.4291 | 915.0 | 18300 | 1.2793 | 0.5437 |
| 0.4291 | 916.0 | 18320 | 1.3881 | 0.5062 |
| 0.4291 | 917.0 | 18340 | 1.3288 | 0.5062 |
| 0.4291 | 918.0 | 18360 | 1.3486 | 0.5062 |
| 0.4291 | 919.0 | 18380 | 1.4146 | 0.4938 |
| 0.4291 | 920.0 | 18400 | 1.3902 | 0.5062 |
| 0.4291 | 921.0 | 18420 | 1.4141 | 0.475 |
| 0.4291 | 922.0 | 18440 | 1.3744 | 0.4875 |
| 0.4291 | 923.0 | 18460 | 1.3035 | 0.5 |
| 0.4291 | 924.0 | 18480 | 1.3196 | 0.525 |
| 0.4131 | 925.0 | 18500 | 1.3486 | 0.5062 |
| 0.4131 | 926.0 | 18520 | 1.3279 | 0.5062 |
| 0.4131 | 927.0 | 18540 | 1.3890 | 0.4875 |
| 0.4131 | 928.0 | 18560 | 1.3690 | 0.475 |
| 0.4131 | 929.0 | 18580 | 1.3106 | 0.5437 |
| 0.4131 | 930.0 | 18600 | 1.4345 | 0.4625 |
| 0.4131 | 931.0 | 18620 | 1.3596 | 0.5312 |
| 0.4131 | 932.0 | 18640 | 1.3857 | 0.5062 |
| 0.4131 | 933.0 | 18660 | 1.3585 | 0.4875 |
| 0.4131 | 934.0 | 18680 | 1.3781 | 0.5125 |
| 0.4131 | 935.0 | 18700 | 1.3371 | 0.5375 |
| 0.4131 | 936.0 | 18720 | 1.2836 | 0.5625 |
| 0.4131 | 937.0 | 18740 | 1.3816 | 0.5062 |
| 0.4131 | 938.0 | 18760 | 1.3404 | 0.5062 |
| 0.4131 | 939.0 | 18780 | 1.3798 | 0.525 |
| 0.4131 | 940.0 | 18800 | 1.2975 | 0.5563 |
| 0.4131 | 941.0 | 18820 | 1.4068 | 0.5125 |
| 0.4131 | 942.0 | 18840 | 1.2454 | 0.5563 |
| 0.4131 | 943.0 | 18860 | 1.3920 | 0.4688 |
| 0.4131 | 944.0 | 18880 | 1.4043 | 0.4562 |
| 0.4131 | 945.0 | 18900 | 1.4798 | 0.4875 |
| 0.4131 | 946.0 | 18920 | 1.3545 | 0.5125 |
| 0.4131 | 947.0 | 18940 | 1.4454 | 0.4688 |
| 0.4131 | 948.0 | 18960 | 1.3438 | 0.5125 |
| 0.4131 | 949.0 | 18980 | 1.3395 | 0.5188 |
| 0.4118 | 950.0 | 19000 | 1.3962 | 0.4938 |
| 0.4118 | 951.0 | 19020 | 1.4163 | 0.4813 |
| 0.4118 | 952.0 | 19040 | 1.3619 | 0.4625 |
| 0.4118 | 953.0 | 19060 | 1.3562 | 0.4938 |
| 0.4118 | 954.0 | 19080 | 1.4076 | 0.4688 |
| 0.4118 | 955.0 | 19100 | 1.3647 | 0.5125 |
| 0.4118 | 956.0 | 19120 | 1.2197 | 0.6062 |
| 0.4118 | 957.0 | 19140 | 1.3545 | 0.525 |
| 0.4118 | 958.0 | 19160 | 1.3208 | 0.5375 |
| 0.4118 | 959.0 | 19180 | 1.3613 | 0.4938 |
| 0.4118 | 960.0 | 19200 | 1.4186 | 0.4625 |
| 0.4118 | 961.0 | 19220 | 1.3911 | 0.475 |
| 0.4118 | 962.0 | 19240 | 1.3046 | 0.5188 |
| 0.4118 | 963.0 | 19260 | 1.4946 | 0.4625 |
| 0.4118 | 964.0 | 19280 | 1.3376 | 0.525 |
| 0.4118 | 965.0 | 19300 | 1.2985 | 0.575 |
| 0.4118 | 966.0 | 19320 | 1.4687 | 0.4688 |
| 0.4118 | 967.0 | 19340 | 1.3920 | 0.5062 |
| 0.4118 | 968.0 | 19360 | 1.3530 | 0.5188 |
| 0.4118 | 969.0 | 19380 | 1.3651 | 0.5 |
| 0.4118 | 970.0 | 19400 | 1.3075 | 0.5312 |
| 0.4118 | 971.0 | 19420 | 1.3934 | 0.4875 |
| 0.4118 | 972.0 | 19440 | 1.4059 | 0.5 |
| 0.4118 | 973.0 | 19460 | 1.3216 | 0.475 |
| 0.4118 | 974.0 | 19480 | 1.4060 | 0.4875 |
| 0.389 | 975.0 | 19500 | 1.3715 | 0.5 |
| 0.389 | 976.0 | 19520 | 1.3340 | 0.5437 |
| 0.389 | 977.0 | 19540 | 1.3194 | 0.5188 |
| 0.389 | 978.0 | 19560 | 1.3989 | 0.4875 |
| 0.389 | 979.0 | 19580 | 1.3117 | 0.5312 |
| 0.389 | 980.0 | 19600 | 1.3618 | 0.5062 |
| 0.389 | 981.0 | 19620 | 1.4768 | 0.4688 |
| 0.389 | 982.0 | 19640 | 1.4180 | 0.4938 |
| 0.389 | 983.0 | 19660 | 1.4036 | 0.5062 |
| 0.389 | 984.0 | 19680 | 1.3403 | 0.5437 |
| 0.389 | 985.0 | 19700 | 1.4022 | 0.4813 |
| 0.389 | 986.0 | 19720 | 1.3421 | 0.4938 |
| 0.389 | 987.0 | 19740 | 1.3988 | 0.4938 |
| 0.389 | 988.0 | 19760 | 1.4211 | 0.45 |
| 0.389 | 989.0 | 19780 | 1.3171 | 0.5375 |
| 0.389 | 990.0 | 19800 | 1.3211 | 0.5125 |
| 0.389 | 991.0 | 19820 | 1.3275 | 0.5 |
| 0.389 | 992.0 | 19840 | 1.4225 | 0.4625 |
| 0.389 | 993.0 | 19860 | 1.3823 | 0.4875 |
| 0.389 | 994.0 | 19880 | 1.4464 | 0.4875 |
| 0.389 | 995.0 | 19900 | 1.3581 | 0.4938 |
| 0.389 | 996.0 | 19920 | 1.4792 | 0.4313 |
| 0.389 | 997.0 | 19940 | 1.4221 | 0.4688 |
| 0.389 | 998.0 | 19960 | 1.3317 | 0.5437 |
| 0.389 | 999.0 | 19980 | 1.3628 | 0.5 |
| 0.3773 | 1000.0 | 20000 | 1.4181 | 0.4938 |
| 0.3773 | 1001.0 | 20020 | 1.3882 | 0.5312 |
| 0.3773 | 1002.0 | 20040 | 1.4141 | 0.4562 |
| 0.3773 | 1003.0 | 20060 | 1.4038 | 0.4813 |
| 0.3773 | 1004.0 | 20080 | 1.3212 | 0.5687 |
| 0.3773 | 1005.0 | 20100 | 1.4048 | 0.5062 |
| 0.3773 | 1006.0 | 20120 | 1.4591 | 0.5062 |
| 0.3773 | 1007.0 | 20140 | 1.3579 | 0.5125 |
| 0.3773 | 1008.0 | 20160 | 1.3681 | 0.5 |
| 0.3773 | 1009.0 | 20180 | 1.3616 | 0.5437 |
| 0.3773 | 1010.0 | 20200 | 1.3624 | 0.4875 |
| 0.3773 | 1011.0 | 20220 | 1.3662 | 0.5062 |
| 0.3773 | 1012.0 | 20240 | 1.4818 | 0.475 |
| 0.3773 | 1013.0 | 20260 | 1.3481 | 0.5125 |
| 0.3773 | 1014.0 | 20280 | 1.3014 | 0.5437 |
| 0.3773 | 1015.0 | 20300 | 1.3190 | 0.525 |
| 0.3773 | 1016.0 | 20320 | 1.3832 | 0.4938 |
| 0.3773 | 1017.0 | 20340 | 1.4199 | 0.475 |
| 0.3773 | 1018.0 | 20360 | 1.3887 | 0.5 |
| 0.3773 | 1019.0 | 20380 | 1.3932 | 0.4875 |
| 0.3773 | 1020.0 | 20400 | 1.3924 | 0.5188 |
| 0.3773 | 1021.0 | 20420 | 1.3478 | 0.5312 |
| 0.3773 | 1022.0 | 20440 | 1.3848 | 0.5125 |
| 0.3773 | 1023.0 | 20460 | 1.3991 | 0.5375 |
| 0.3773 | 1024.0 | 20480 | 1.2987 | 0.5312 |
| 0.3712 | 1025.0 | 20500 | 1.3086 | 0.5312 |
| 0.3712 | 1026.0 | 20520 | 1.4272 | 0.5 |
| 0.3712 | 1027.0 | 20540 | 1.3730 | 0.475 |
| 0.3712 | 1028.0 | 20560 | 1.3770 | 0.5062 |
| 0.3712 | 1029.0 | 20580 | 1.4654 | 0.475 |
| 0.3712 | 1030.0 | 20600 | 1.4238 | 0.5062 |
| 0.3712 | 1031.0 | 20620 | 1.4792 | 0.4625 |
| 0.3712 | 1032.0 | 20640 | 1.3344 | 0.5188 |
| 0.3712 | 1033.0 | 20660 | 1.3322 | 0.5563 |
| 0.3712 | 1034.0 | 20680 | 1.3447 | 0.5375 |
| 0.3712 | 1035.0 | 20700 | 1.4399 | 0.4625 |
| 0.3712 | 1036.0 | 20720 | 1.4378 | 0.4562 |
| 0.3712 | 1037.0 | 20740 | 1.4472 | 0.5188 |
| 0.3712 | 1038.0 | 20760 | 1.2787 | 0.5625 |
| 0.3712 | 1039.0 | 20780 | 1.3667 | 0.4938 |
| 0.3712 | 1040.0 | 20800 | 1.3867 | 0.4625 |
| 0.3712 | 1041.0 | 20820 | 1.3250 | 0.5 |
| 0.3712 | 1042.0 | 20840 | 1.4041 | 0.4813 |
| 0.3712 | 1043.0 | 20860 | 1.3276 | 0.5437 |
| 0.3712 | 1044.0 | 20880 | 1.2899 | 0.5188 |
| 0.3712 | 1045.0 | 20900 | 1.4114 | 0.4625 |
| 0.3712 | 1046.0 | 20920 | 1.3681 | 0.5062 |
| 0.3712 | 1047.0 | 20940 | 1.3738 | 0.5563 |
| 0.3712 | 1048.0 | 20960 | 1.3565 | 0.5375 |
| 0.3712 | 1049.0 | 20980 | 1.4105 | 0.4625 |
| 0.3584 | 1050.0 | 21000 | 1.3205 | 0.55 |
| 0.3584 | 1051.0 | 21020 | 1.5080 | 0.4375 |
| 0.3584 | 1052.0 | 21040 | 1.3514 | 0.4813 |
| 0.3584 | 1053.0 | 21060 | 1.3686 | 0.475 |
| 0.3584 | 1054.0 | 21080 | 1.3817 | 0.4938 |
| 0.3584 | 1055.0 | 21100 | 1.3857 | 0.5125 |
| 0.3584 | 1056.0 | 21120 | 1.3084 | 0.5188 |
| 0.3584 | 1057.0 | 21140 | 1.3190 | 0.5563 |
| 0.3584 | 1058.0 | 21160 | 1.4350 | 0.4813 |
| 0.3584 | 1059.0 | 21180 | 1.3044 | 0.5625 |
| 0.3584 | 1060.0 | 21200 | 1.3815 | 0.4938 |
| 0.3584 | 1061.0 | 21220 | 1.3732 | 0.5188 |
| 0.3584 | 1062.0 | 21240 | 1.3175 | 0.5563 |
| 0.3584 | 1063.0 | 21260 | 1.2658 | 0.5437 |
| 0.3584 | 1064.0 | 21280 | 1.3472 | 0.4875 |
| 0.3584 | 1065.0 | 21300 | 1.4301 | 0.4625 |
| 0.3584 | 1066.0 | 21320 | 1.3619 | 0.475 |
| 0.3584 | 1067.0 | 21340 | 1.3619 | 0.5125 |
| 0.3584 | 1068.0 | 21360 | 1.3993 | 0.5125 |
| 0.3584 | 1069.0 | 21380 | 1.4224 | 0.5 |
| 0.3584 | 1070.0 | 21400 | 1.3944 | 0.5125 |
| 0.3584 | 1071.0 | 21420 | 1.4229 | 0.4938 |
| 0.3584 | 1072.0 | 21440 | 1.4713 | 0.4688 |
| 0.3584 | 1073.0 | 21460 | 1.4545 | 0.45 |
| 0.3584 | 1074.0 | 21480 | 1.3642 | 0.5312 |
| 0.3505 | 1075.0 | 21500 | 1.4090 | 0.5 |
| 0.3505 | 1076.0 | 21520 | 1.3690 | 0.5125 |
| 0.3505 | 1077.0 | 21540 | 1.2777 | 0.55 |
| 0.3505 | 1078.0 | 21560 | 1.3040 | 0.5563 |
| 0.3505 | 1079.0 | 21580 | 1.4517 | 0.4562 |
| 0.3505 | 1080.0 | 21600 | 1.4383 | 0.4875 |
| 0.3505 | 1081.0 | 21620 | 1.4173 | 0.4938 |
| 0.3505 | 1082.0 | 21640 | 1.4372 | 0.4813 |
| 0.3505 | 1083.0 | 21660 | 1.3728 | 0.475 |
| 0.3505 | 1084.0 | 21680 | 1.3340 | 0.5375 |
| 0.3505 | 1085.0 | 21700 | 1.4723 | 0.4562 |
| 0.3505 | 1086.0 | 21720 | 1.4362 | 0.4625 |
| 0.3505 | 1087.0 | 21740 | 1.4267 | 0.4375 |
| 0.3505 | 1088.0 | 21760 | 1.4604 | 0.4875 |
| 0.3505 | 1089.0 | 21780 | 1.2826 | 0.55 |
| 0.3505 | 1090.0 | 21800 | 1.4127 | 0.4938 |
| 0.3505 | 1091.0 | 21820 | 1.3664 | 0.525 |
| 0.3505 | 1092.0 | 21840 | 1.3221 | 0.4875 |
| 0.3505 | 1093.0 | 21860 | 1.4135 | 0.4813 |
| 0.3505 | 1094.0 | 21880 | 1.3625 | 0.4875 |
| 0.3505 | 1095.0 | 21900 | 1.3873 | 0.5062 |
| 0.3505 | 1096.0 | 21920 | 1.4004 | 0.5 |
| 0.3505 | 1097.0 | 21940 | 1.4268 | 0.475 |
| 0.3505 | 1098.0 | 21960 | 1.4805 | 0.4813 |
| 0.3505 | 1099.0 | 21980 | 1.4542 | 0.45 |
| 0.3476 | 1100.0 | 22000 | 1.4252 | 0.4688 |
| 0.3476 | 1101.0 | 22020 | 1.3678 | 0.525 |
| 0.3476 | 1102.0 | 22040 | 1.3359 | 0.525 |
| 0.3476 | 1103.0 | 22060 | 1.3804 | 0.5563 |
| 0.3476 | 1104.0 | 22080 | 1.3264 | 0.4938 |
| 0.3476 | 1105.0 | 22100 | 1.4641 | 0.4938 |
| 0.3476 | 1106.0 | 22120 | 1.3290 | 0.5312 |
| 0.3476 | 1107.0 | 22140 | 1.3950 | 0.5312 |
| 0.3476 | 1108.0 | 22160 | 1.4626 | 0.4813 |
| 0.3476 | 1109.0 | 22180 | 1.4356 | 0.5 |
| 0.3476 | 1110.0 | 22200 | 1.4076 | 0.525 |
| 0.3476 | 1111.0 | 22220 | 1.3678 | 0.5 |
| 0.3476 | 1112.0 | 22240 | 1.4537 | 0.475 |
| 0.3476 | 1113.0 | 22260 | 1.3678 | 0.4688 |
| 0.3476 | 1114.0 | 22280 | 1.4392 | 0.5 |
| 0.3476 | 1115.0 | 22300 | 1.4576 | 0.4813 |
| 0.3476 | 1116.0 | 22320 | 1.4588 | 0.4875 |
| 0.3476 | 1117.0 | 22340 | 1.3955 | 0.4938 |
| 0.3476 | 1118.0 | 22360 | 1.3751 | 0.5312 |
| 0.3476 | 1119.0 | 22380 | 1.4029 | 0.4875 |
| 0.3476 | 1120.0 | 22400 | 1.4018 | 0.4875 |
| 0.3476 | 1121.0 | 22420 | 1.4481 | 0.4625 |
| 0.3476 | 1122.0 | 22440 | 1.3728 | 0.475 |
| 0.3476 | 1123.0 | 22460 | 1.3973 | 0.5062 |
| 0.3476 | 1124.0 | 22480 | 1.4391 | 0.475 |
| 0.3385 | 1125.0 | 22500 | 1.4272 | 0.5125 |
| 0.3385 | 1126.0 | 22520 | 1.4857 | 0.45 |
| 0.3385 | 1127.0 | 22540 | 1.5194 | 0.475 |
| 0.3385 | 1128.0 | 22560 | 1.4461 | 0.4938 |
| 0.3385 | 1129.0 | 22580 | 1.4275 | 0.5 |
| 0.3385 | 1130.0 | 22600 | 1.3560 | 0.525 |
| 0.3385 | 1131.0 | 22620 | 1.4456 | 0.4875 |
| 0.3385 | 1132.0 | 22640 | 1.4478 | 0.4813 |
| 0.3385 | 1133.0 | 22660 | 1.4202 | 0.5125 |
| 0.3385 | 1134.0 | 22680 | 1.4198 | 0.4938 |
| 0.3385 | 1135.0 | 22700 | 1.3903 | 0.5188 |
| 0.3385 | 1136.0 | 22720 | 1.3970 | 0.4875 |
| 0.3385 | 1137.0 | 22740 | 1.4442 | 0.5062 |
| 0.3385 | 1138.0 | 22760 | 1.4369 | 0.5125 |
| 0.3385 | 1139.0 | 22780 | 1.4121 | 0.4813 |
| 0.3385 | 1140.0 | 22800 | 1.4603 | 0.475 |
| 0.3385 | 1141.0 | 22820 | 1.4605 | 0.5188 |
| 0.3385 | 1142.0 | 22840 | 1.4802 | 0.4813 |
| 0.3385 | 1143.0 | 22860 | 1.4595 | 0.4688 |
| 0.3385 | 1144.0 | 22880 | 1.3355 | 0.5437 |
| 0.3385 | 1145.0 | 22900 | 1.4476 | 0.5 |
| 0.3385 | 1146.0 | 22920 | 1.3267 | 0.525 |
| 0.3385 | 1147.0 | 22940 | 1.4033 | 0.5312 |
| 0.3385 | 1148.0 | 22960 | 1.3335 | 0.4938 |
| 0.3385 | 1149.0 | 22980 | 1.4066 | 0.5125 |
| 0.3268 | 1150.0 | 23000 | 1.3459 | 0.525 |
| 0.3268 | 1151.0 | 23020 | 1.4167 | 0.5125 |
| 0.3268 | 1152.0 | 23040 | 1.3946 | 0.5 |
| 0.3268 | 1153.0 | 23060 | 1.3246 | 0.5188 |
| 0.3268 | 1154.0 | 23080 | 1.4830 | 0.45 |
| 0.3268 | 1155.0 | 23100 | 1.4584 | 0.4813 |
| 0.3268 | 1156.0 | 23120 | 1.3686 | 0.4938 |
| 0.3268 | 1157.0 | 23140 | 1.4520 | 0.4688 |
| 0.3268 | 1158.0 | 23160 | 1.3760 | 0.4875 |
| 0.3268 | 1159.0 | 23180 | 1.4544 | 0.4875 |
| 0.3268 | 1160.0 | 23200 | 1.4163 | 0.475 |
| 0.3268 | 1161.0 | 23220 | 1.4008 | 0.5062 |
| 0.3268 | 1162.0 | 23240 | 1.4177 | 0.4938 |
| 0.3268 | 1163.0 | 23260 | 1.3883 | 0.45 |
| 0.3268 | 1164.0 | 23280 | 1.4449 | 0.4813 |
| 0.3268 | 1165.0 | 23300 | 1.4226 | 0.4938 |
| 0.3268 | 1166.0 | 23320 | 1.4086 | 0.4625 |
| 0.3268 | 1167.0 | 23340 | 1.3630 | 0.5125 |
| 0.3268 | 1168.0 | 23360 | 1.3989 | 0.5188 |
| 0.3268 | 1169.0 | 23380 | 1.4102 | 0.4813 |
| 0.3268 | 1170.0 | 23400 | 1.4742 | 0.4875 |
| 0.3268 | 1171.0 | 23420 | 1.4955 | 0.4813 |
| 0.3268 | 1172.0 | 23440 | 1.3759 | 0.5188 |
| 0.3268 | 1173.0 | 23460 | 1.5161 | 0.4375 |
| 0.3268 | 1174.0 | 23480 | 1.3576 | 0.5062 |
| 0.3287 | 1175.0 | 23500 | 1.4231 | 0.5 |
| 0.3287 | 1176.0 | 23520 | 1.4011 | 0.5125 |
| 0.3287 | 1177.0 | 23540 | 1.3886 | 0.5188 |
| 0.3287 | 1178.0 | 23560 | 1.3490 | 0.5188 |
| 0.3287 | 1179.0 | 23580 | 1.3957 | 0.525 |
| 0.3287 | 1180.0 | 23600 | 1.3655 | 0.525 |
| 0.3287 | 1181.0 | 23620 | 1.4315 | 0.5 |
| 0.3287 | 1182.0 | 23640 | 1.4145 | 0.4875 |
| 0.3287 | 1183.0 | 23660 | 1.4490 | 0.4875 |
| 0.3287 | 1184.0 | 23680 | 1.3853 | 0.5312 |
| 0.3287 | 1185.0 | 23700 | 1.3774 | 0.5125 |
| 0.3287 | 1186.0 | 23720 | 1.3916 | 0.5125 |
| 0.3287 | 1187.0 | 23740 | 1.3825 | 0.5062 |
| 0.3287 | 1188.0 | 23760 | 1.3258 | 0.4875 |
| 0.3287 | 1189.0 | 23780 | 1.4783 | 0.4562 |
| 0.3287 | 1190.0 | 23800 | 1.4348 | 0.4938 |
| 0.3287 | 1191.0 | 23820 | 1.3773 | 0.5125 |
| 0.3287 | 1192.0 | 23840 | 1.4620 | 0.475 |
| 0.3287 | 1193.0 | 23860 | 1.3810 | 0.5 |
| 0.3287 | 1194.0 | 23880 | 1.3112 | 0.525 |
| 0.3287 | 1195.0 | 23900 | 1.4155 | 0.5125 |
| 0.3287 | 1196.0 | 23920 | 1.4429 | 0.4813 |
| 0.3287 | 1197.0 | 23940 | 1.3498 | 0.5188 |
| 0.3287 | 1198.0 | 23960 | 1.4635 | 0.45 |
| 0.3287 | 1199.0 | 23980 | 1.4369 | 0.5 |
| 0.3201 | 1200.0 | 24000 | 1.4041 | 0.4938 |
| 0.3201 | 1201.0 | 24020 | 1.3298 | 0.5312 |
| 0.3201 | 1202.0 | 24040 | 1.4084 | 0.5 |
| 0.3201 | 1203.0 | 24060 | 1.3808 | 0.5 |
| 0.3201 | 1204.0 | 24080 | 1.4122 | 0.5312 |
| 0.3201 | 1205.0 | 24100 | 1.3964 | 0.5125 |
| 0.3201 | 1206.0 | 24120 | 1.3720 | 0.5312 |
| 0.3201 | 1207.0 | 24140 | 1.4992 | 0.4625 |
| 0.3201 | 1208.0 | 24160 | 1.6059 | 0.4062 |
| 0.3201 | 1209.0 | 24180 | 1.4385 | 0.4562 |
| 0.3201 | 1210.0 | 24200 | 1.4335 | 0.5125 |
| 0.3201 | 1211.0 | 24220 | 1.4089 | 0.4938 |
| 0.3201 | 1212.0 | 24240 | 1.3790 | 0.5 |
| 0.3201 | 1213.0 | 24260 | 1.4414 | 0.475 |
| 0.3201 | 1214.0 | 24280 | 1.4158 | 0.5 |
| 0.3201 | 1215.0 | 24300 | 1.3764 | 0.5062 |
| 0.3201 | 1216.0 | 24320 | 1.4014 | 0.5125 |
| 0.3201 | 1217.0 | 24340 | 1.3741 | 0.5375 |
| 0.3201 | 1218.0 | 24360 | 1.3750 | 0.4938 |
| 0.3201 | 1219.0 | 24380 | 1.3031 | 0.5687 |
| 0.3201 | 1220.0 | 24400 | 1.3940 | 0.5125 |
| 0.3201 | 1221.0 | 24420 | 1.3977 | 0.5 |
| 0.3201 | 1222.0 | 24440 | 1.4268 | 0.5312 |
| 0.3201 | 1223.0 | 24460 | 1.5672 | 0.4375 |
| 0.3201 | 1224.0 | 24480 | 1.3758 | 0.5 |
| 0.3164 | 1225.0 | 24500 | 1.4140 | 0.4875 |
| 0.3164 | 1226.0 | 24520 | 1.3729 | 0.5188 |
| 0.3164 | 1227.0 | 24540 | 1.3847 | 0.4938 |
| 0.3164 | 1228.0 | 24560 | 1.4774 | 0.4875 |
| 0.3164 | 1229.0 | 24580 | 1.5155 | 0.4688 |
| 0.3164 | 1230.0 | 24600 | 1.3858 | 0.5 |
| 0.3164 | 1231.0 | 24620 | 1.3414 | 0.55 |
| 0.3164 | 1232.0 | 24640 | 1.4706 | 0.475 |
| 0.3164 | 1233.0 | 24660 | 1.4265 | 0.5062 |
| 0.3164 | 1234.0 | 24680 | 1.4485 | 0.4875 |
| 0.3164 | 1235.0 | 24700 | 1.4005 | 0.5 |
| 0.3164 | 1236.0 | 24720 | 1.4090 | 0.5 |
| 0.3164 | 1237.0 | 24740 | 1.3930 | 0.4875 |
| 0.3164 | 1238.0 | 24760 | 1.3838 | 0.5188 |
| 0.3164 | 1239.0 | 24780 | 1.5145 | 0.4813 |
| 0.3164 | 1240.0 | 24800 | 1.4105 | 0.4813 |
| 0.3164 | 1241.0 | 24820 | 1.3440 | 0.4938 |
| 0.3164 | 1242.0 | 24840 | 1.3532 | 0.5437 |
| 0.3164 | 1243.0 | 24860 | 1.3607 | 0.5 |
| 0.3164 | 1244.0 | 24880 | 1.3491 | 0.4938 |
| 0.3164 | 1245.0 | 24900 | 1.4692 | 0.475 |
| 0.3164 | 1246.0 | 24920 | 1.3448 | 0.5 |
| 0.3164 | 1247.0 | 24940 | 1.3950 | 0.5062 |
| 0.3164 | 1248.0 | 24960 | 1.4520 | 0.4938 |
| 0.3164 | 1249.0 | 24980 | 1.5249 | 0.5 |
| 0.3076 | 1250.0 | 25000 | 1.3382 | 0.5375 |
| 0.3076 | 1251.0 | 25020 | 1.4296 | 0.4625 |
| 0.3076 | 1252.0 | 25040 | 1.3729 | 0.5188 |
| 0.3076 | 1253.0 | 25060 | 1.5224 | 0.4625 |
| 0.3076 | 1254.0 | 25080 | 1.2983 | 0.5437 |
| 0.3076 | 1255.0 | 25100 | 1.4082 | 0.5062 |
| 0.3076 | 1256.0 | 25120 | 1.4239 | 0.5312 |
| 0.3076 | 1257.0 | 25140 | 1.4802 | 0.4688 |
| 0.3076 | 1258.0 | 25160 | 1.4792 | 0.4875 |
| 0.3076 | 1259.0 | 25180 | 1.3430 | 0.5375 |
| 0.3076 | 1260.0 | 25200 | 1.3697 | 0.4938 |
| 0.3076 | 1261.0 | 25220 | 1.3201 | 0.5125 |
| 0.3076 | 1262.0 | 25240 | 1.3493 | 0.5188 |
| 0.3076 | 1263.0 | 25260 | 1.4528 | 0.4688 |
| 0.3076 | 1264.0 | 25280 | 1.3318 | 0.525 |
| 0.3076 | 1265.0 | 25300 | 1.3760 | 0.5188 |
| 0.3076 | 1266.0 | 25320 | 1.4779 | 0.4688 |
| 0.3076 | 1267.0 | 25340 | 1.4093 | 0.475 |
| 0.3076 | 1268.0 | 25360 | 1.4274 | 0.4813 |
| 0.3076 | 1269.0 | 25380 | 1.3919 | 0.475 |
| 0.3076 | 1270.0 | 25400 | 1.4197 | 0.5 |
| 0.3076 | 1271.0 | 25420 | 1.3656 | 0.5 |
| 0.3076 | 1272.0 | 25440 | 1.4491 | 0.4813 |
| 0.3076 | 1273.0 | 25460 | 1.3948 | 0.5 |
| 0.3076 | 1274.0 | 25480 | 1.3920 | 0.525 |
| 0.3082 | 1275.0 | 25500 | 1.3472 | 0.4875 |
| 0.3082 | 1276.0 | 25520 | 1.5085 | 0.4875 |
| 0.3082 | 1277.0 | 25540 | 1.3652 | 0.5312 |
| 0.3082 | 1278.0 | 25560 | 1.3947 | 0.5125 |
| 0.3082 | 1279.0 | 25580 | 1.3934 | 0.5188 |
| 0.3082 | 1280.0 | 25600 | 1.5537 | 0.4938 |
| 0.3082 | 1281.0 | 25620 | 1.3661 | 0.5062 |
| 0.3082 | 1282.0 | 25640 | 1.4507 | 0.4875 |
| 0.3082 | 1283.0 | 25660 | 1.4415 | 0.5125 |
| 0.3082 | 1284.0 | 25680 | 1.4418 | 0.5188 |
| 0.3082 | 1285.0 | 25700 | 1.5535 | 0.4688 |
| 0.3082 | 1286.0 | 25720 | 1.4261 | 0.5188 |
| 0.3082 | 1287.0 | 25740 | 1.3910 | 0.5 |
| 0.3082 | 1288.0 | 25760 | 1.4829 | 0.475 |
| 0.3082 | 1289.0 | 25780 | 1.4284 | 0.4938 |
| 0.3082 | 1290.0 | 25800 | 1.5268 | 0.4688 |
| 0.3082 | 1291.0 | 25820 | 1.4723 | 0.4813 |
| 0.3082 | 1292.0 | 25840 | 1.4140 | 0.5125 |
| 0.3082 | 1293.0 | 25860 | 1.4074 | 0.525 |
| 0.3082 | 1294.0 | 25880 | 1.3843 | 0.5188 |
| 0.3082 | 1295.0 | 25900 | 1.4371 | 0.5 |
| 0.3082 | 1296.0 | 25920 | 1.4466 | 0.5125 |
| 0.3082 | 1297.0 | 25940 | 1.4741 | 0.4875 |
| 0.3082 | 1298.0 | 25960 | 1.4166 | 0.4938 |
| 0.3082 | 1299.0 | 25980 | 1.4010 | 0.4688 |
| 0.3012 | 1300.0 | 26000 | 1.3630 | 0.525 |
| 0.3012 | 1301.0 | 26020 | 1.5302 | 0.4688 |
| 0.3012 | 1302.0 | 26040 | 1.4860 | 0.4688 |
| 0.3012 | 1303.0 | 26060 | 1.4784 | 0.475 |
| 0.3012 | 1304.0 | 26080 | 1.4003 | 0.5312 |
| 0.3012 | 1305.0 | 26100 | 1.3953 | 0.5312 |
| 0.3012 | 1306.0 | 26120 | 1.3881 | 0.525 |
| 0.3012 | 1307.0 | 26140 | 1.4845 | 0.4688 |
| 0.3012 | 1308.0 | 26160 | 1.4244 | 0.4938 |
| 0.3012 | 1309.0 | 26180 | 1.3334 | 0.5563 |
| 0.3012 | 1310.0 | 26200 | 1.4507 | 0.5125 |
| 0.3012 | 1311.0 | 26220 | 1.5152 | 0.4875 |
| 0.3012 | 1312.0 | 26240 | 1.4418 | 0.4938 |
| 0.3012 | 1313.0 | 26260 | 1.4376 | 0.4562 |
| 0.3012 | 1314.0 | 26280 | 1.4452 | 0.4875 |
| 0.3012 | 1315.0 | 26300 | 1.5565 | 0.4625 |
| 0.3012 | 1316.0 | 26320 | 1.4261 | 0.5 |
| 0.3012 | 1317.0 | 26340 | 1.4312 | 0.5125 |
| 0.3012 | 1318.0 | 26360 | 1.4920 | 0.4813 |
| 0.3012 | 1319.0 | 26380 | 1.3139 | 0.5312 |
| 0.3012 | 1320.0 | 26400 | 1.3646 | 0.525 |
| 0.3012 | 1321.0 | 26420 | 1.4379 | 0.4688 |
| 0.3012 | 1322.0 | 26440 | 1.4381 | 0.4938 |
| 0.3012 | 1323.0 | 26460 | 1.3807 | 0.5 |
| 0.3012 | 1324.0 | 26480 | 1.4444 | 0.5188 |
| 0.2976 | 1325.0 | 26500 | 1.3924 | 0.55 |
| 0.2976 | 1326.0 | 26520 | 1.4249 | 0.55 |
| 0.2976 | 1327.0 | 26540 | 1.4551 | 0.4875 |
| 0.2976 | 1328.0 | 26560 | 1.4917 | 0.4625 |
| 0.2976 | 1329.0 | 26580 | 1.3918 | 0.5062 |
| 0.2976 | 1330.0 | 26600 | 1.4366 | 0.4562 |
| 0.2976 | 1331.0 | 26620 | 1.4795 | 0.45 |
| 0.2976 | 1332.0 | 26640 | 1.4213 | 0.4875 |
| 0.2976 | 1333.0 | 26660 | 1.4253 | 0.525 |
| 0.2976 | 1334.0 | 26680 | 1.4548 | 0.4813 |
| 0.2976 | 1335.0 | 26700 | 1.3806 | 0.525 |
| 0.2976 | 1336.0 | 26720 | 1.3641 | 0.4875 |
| 0.2976 | 1337.0 | 26740 | 1.4808 | 0.4625 |
| 0.2976 | 1338.0 | 26760 | 1.4172 | 0.5125 |
| 0.2976 | 1339.0 | 26780 | 1.4479 | 0.4875 |
| 0.2976 | 1340.0 | 26800 | 1.4552 | 0.525 |
| 0.2976 | 1341.0 | 26820 | 1.5278 | 0.4688 |
| 0.2976 | 1342.0 | 26840 | 1.2895 | 0.5312 |
| 0.2976 | 1343.0 | 26860 | 1.3676 | 0.525 |
| 0.2976 | 1344.0 | 26880 | 1.3360 | 0.5312 |
| 0.2976 | 1345.0 | 26900 | 1.4753 | 0.4875 |
| 0.2976 | 1346.0 | 26920 | 1.4449 | 0.5188 |
| 0.2976 | 1347.0 | 26940 | 1.3456 | 0.55 |
| 0.2976 | 1348.0 | 26960 | 1.3685 | 0.5 |
| 0.2976 | 1349.0 | 26980 | 1.4757 | 0.4688 |
| 0.298 | 1350.0 | 27000 | 1.3915 | 0.5188 |
| 0.298 | 1351.0 | 27020 | 1.3821 | 0.5125 |
| 0.298 | 1352.0 | 27040 | 1.4777 | 0.4875 |
| 0.298 | 1353.0 | 27060 | 1.4521 | 0.4875 |
| 0.298 | 1354.0 | 27080 | 1.4116 | 0.5188 |
| 0.298 | 1355.0 | 27100 | 1.4196 | 0.5125 |
| 0.298 | 1356.0 | 27120 | 1.5017 | 0.4813 |
| 0.298 | 1357.0 | 27140 | 1.4737 | 0.525 |
| 0.298 | 1358.0 | 27160 | 1.4399 | 0.4813 |
| 0.298 | 1359.0 | 27180 | 1.5158 | 0.475 |
| 0.298 | 1360.0 | 27200 | 1.5428 | 0.4625 |
| 0.298 | 1361.0 | 27220 | 1.4197 | 0.5125 |
| 0.298 | 1362.0 | 27240 | 1.3748 | 0.5125 |
| 0.298 | 1363.0 | 27260 | 1.4444 | 0.5062 |
| 0.298 | 1364.0 | 27280 | 1.4904 | 0.4813 |
| 0.298 | 1365.0 | 27300 | 1.4187 | 0.5375 |
| 0.298 | 1366.0 | 27320 | 1.4184 | 0.5062 |
| 0.298 | 1367.0 | 27340 | 1.4463 | 0.4875 |
| 0.298 | 1368.0 | 27360 | 1.3811 | 0.5375 |
| 0.298 | 1369.0 | 27380 | 1.4925 | 0.4875 |
| 0.298 | 1370.0 | 27400 | 1.4370 | 0.5 |
| 0.298 | 1371.0 | 27420 | 1.4811 | 0.4625 |
| 0.298 | 1372.0 | 27440 | 1.3325 | 0.5437 |
| 0.298 | 1373.0 | 27460 | 1.4219 | 0.475 |
| 0.298 | 1374.0 | 27480 | 1.4964 | 0.5 |
| 0.2879 | 1375.0 | 27500 | 1.3878 | 0.5188 |
| 0.2879 | 1376.0 | 27520 | 1.4915 | 0.4875 |
| 0.2879 | 1377.0 | 27540 | 1.5125 | 0.5 |
| 0.2879 | 1378.0 | 27560 | 1.4026 | 0.4875 |
| 0.2879 | 1379.0 | 27580 | 1.3994 | 0.525 |
| 0.2879 | 1380.0 | 27600 | 1.4881 | 0.475 |
| 0.2879 | 1381.0 | 27620 | 1.4140 | 0.5062 |
| 0.2879 | 1382.0 | 27640 | 1.3100 | 0.525 |
| 0.2879 | 1383.0 | 27660 | 1.4590 | 0.4688 |
| 0.2879 | 1384.0 | 27680 | 1.5164 | 0.4437 |
| 0.2879 | 1385.0 | 27700 | 1.3868 | 0.5188 |
| 0.2879 | 1386.0 | 27720 | 1.4737 | 0.5 |
| 0.2879 | 1387.0 | 27740 | 1.3831 | 0.4938 |
| 0.2879 | 1388.0 | 27760 | 1.4388 | 0.4938 |
| 0.2879 | 1389.0 | 27780 | 1.4303 | 0.5312 |
| 0.2879 | 1390.0 | 27800 | 1.3663 | 0.5125 |
| 0.2879 | 1391.0 | 27820 | 1.5046 | 0.4875 |
| 0.2879 | 1392.0 | 27840 | 1.5045 | 0.475 |
| 0.2879 | 1393.0 | 27860 | 1.4811 | 0.5062 |
| 0.2879 | 1394.0 | 27880 | 1.3991 | 0.5312 |
| 0.2879 | 1395.0 | 27900 | 1.4144 | 0.5312 |
| 0.2879 | 1396.0 | 27920 | 1.3650 | 0.5312 |
| 0.2879 | 1397.0 | 27940 | 1.4004 | 0.5312 |
| 0.2879 | 1398.0 | 27960 | 1.3617 | 0.5312 |
| 0.2879 | 1399.0 | 27980 | 1.3955 | 0.5312 |
| 0.284 | 1400.0 | 28000 | 1.3588 | 0.5375 |
| 0.284 | 1401.0 | 28020 | 1.4408 | 0.4688 |
| 0.284 | 1402.0 | 28040 | 1.4572 | 0.4875 |
| 0.284 | 1403.0 | 28060 | 1.5793 | 0.4375 |
| 0.284 | 1404.0 | 28080 | 1.5194 | 0.4938 |
| 0.284 | 1405.0 | 28100 | 1.4610 | 0.4938 |
| 0.284 | 1406.0 | 28120 | 1.4934 | 0.5188 |
| 0.284 | 1407.0 | 28140 | 1.4602 | 0.4562 |
| 0.284 | 1408.0 | 28160 | 1.4116 | 0.5125 |
| 0.284 | 1409.0 | 28180 | 1.4497 | 0.475 |
| 0.284 | 1410.0 | 28200 | 1.4660 | 0.4875 |
| 0.284 | 1411.0 | 28220 | 1.3609 | 0.5375 |
| 0.284 | 1412.0 | 28240 | 1.4792 | 0.4875 |
| 0.284 | 1413.0 | 28260 | 1.4837 | 0.4688 |
| 0.284 | 1414.0 | 28280 | 1.4485 | 0.5188 |
| 0.284 | 1415.0 | 28300 | 1.3673 | 0.5312 |
| 0.284 | 1416.0 | 28320 | 1.4153 | 0.4813 |
| 0.284 | 1417.0 | 28340 | 1.4943 | 0.4813 |
| 0.284 | 1418.0 | 28360 | 1.5884 | 0.4625 |
| 0.284 | 1419.0 | 28380 | 1.4184 | 0.5188 |
| 0.284 | 1420.0 | 28400 | 1.4645 | 0.4875 |
| 0.284 | 1421.0 | 28420 | 1.5091 | 0.4437 |
| 0.284 | 1422.0 | 28440 | 1.3396 | 0.525 |
| 0.284 | 1423.0 | 28460 | 1.4418 | 0.5 |
| 0.284 | 1424.0 | 28480 | 1.4271 | 0.5312 |
| 0.2825 | 1425.0 | 28500 | 1.4775 | 0.4813 |
| 0.2825 | 1426.0 | 28520 | 1.4608 | 0.5062 |
| 0.2825 | 1427.0 | 28540 | 1.4584 | 0.4938 |
| 0.2825 | 1428.0 | 28560 | 1.4566 | 0.5188 |
| 0.2825 | 1429.0 | 28580 | 1.3907 | 0.5563 |
| 0.2825 | 1430.0 | 28600 | 1.4759 | 0.4813 |
| 0.2825 | 1431.0 | 28620 | 1.4544 | 0.4938 |
| 0.2825 | 1432.0 | 28640 | 1.4430 | 0.5 |
| 0.2825 | 1433.0 | 28660 | 1.5165 | 0.4688 |
| 0.2825 | 1434.0 | 28680 | 1.4704 | 0.4813 |
| 0.2825 | 1435.0 | 28700 | 1.3989 | 0.4875 |
| 0.2825 | 1436.0 | 28720 | 1.4311 | 0.5125 |
| 0.2825 | 1437.0 | 28740 | 1.3625 | 0.55 |
| 0.2825 | 1438.0 | 28760 | 1.4692 | 0.5125 |
| 0.2825 | 1439.0 | 28780 | 1.5038 | 0.475 |
| 0.2825 | 1440.0 | 28800 | 1.3972 | 0.5 |
| 0.2825 | 1441.0 | 28820 | 1.4258 | 0.5062 |
| 0.2825 | 1442.0 | 28840 | 1.4703 | 0.5062 |
| 0.2825 | 1443.0 | 28860 | 1.4151 | 0.5188 |
| 0.2825 | 1444.0 | 28880 | 1.5264 | 0.4688 |
| 0.2825 | 1445.0 | 28900 | 1.5765 | 0.4375 |
| 0.2825 | 1446.0 | 28920 | 1.4814 | 0.4688 |
| 0.2825 | 1447.0 | 28940 | 1.4997 | 0.4625 |
| 0.2825 | 1448.0 | 28960 | 1.4238 | 0.5188 |
| 0.2825 | 1449.0 | 28980 | 1.5373 | 0.4875 |
| 0.2743 | 1450.0 | 29000 | 1.4053 | 0.525 |
| 0.2743 | 1451.0 | 29020 | 1.4369 | 0.5125 |
| 0.2743 | 1452.0 | 29040 | 1.4204 | 0.5062 |
| 0.2743 | 1453.0 | 29060 | 1.4546 | 0.4813 |
| 0.2743 | 1454.0 | 29080 | 1.4420 | 0.5125 |
| 0.2743 | 1455.0 | 29100 | 1.4359 | 0.5062 |
| 0.2743 | 1456.0 | 29120 | 1.4291 | 0.5125 |
| 0.2743 | 1457.0 | 29140 | 1.5045 | 0.4875 |
| 0.2743 | 1458.0 | 29160 | 1.4566 | 0.4938 |
| 0.2743 | 1459.0 | 29180 | 1.4604 | 0.4938 |
| 0.2743 | 1460.0 | 29200 | 1.3180 | 0.5375 |
| 0.2743 | 1461.0 | 29220 | 1.4741 | 0.5 |
| 0.2743 | 1462.0 | 29240 | 1.4135 | 0.4875 |
| 0.2743 | 1463.0 | 29260 | 1.4657 | 0.4875 |
| 0.2743 | 1464.0 | 29280 | 1.4154 | 0.4688 |
| 0.2743 | 1465.0 | 29300 | 1.4170 | 0.525 |
| 0.2743 | 1466.0 | 29320 | 1.4450 | 0.5312 |
| 0.2743 | 1467.0 | 29340 | 1.5276 | 0.4625 |
| 0.2743 | 1468.0 | 29360 | 1.3992 | 0.5062 |
| 0.2743 | 1469.0 | 29380 | 1.5054 | 0.4813 |
| 0.2743 | 1470.0 | 29400 | 1.4894 | 0.4562 |
| 0.2743 | 1471.0 | 29420 | 1.5172 | 0.4625 |
| 0.2743 | 1472.0 | 29440 | 1.4878 | 0.4813 |
| 0.2743 | 1473.0 | 29460 | 1.4720 | 0.4938 |
| 0.2743 | 1474.0 | 29480 | 1.4639 | 0.5062 |
| 0.2785 | 1475.0 | 29500 | 1.4667 | 0.4938 |
| 0.2785 | 1476.0 | 29520 | 1.4408 | 0.5125 |
| 0.2785 | 1477.0 | 29540 | 1.4771 | 0.5062 |
| 0.2785 | 1478.0 | 29560 | 1.3936 | 0.5375 |
| 0.2785 | 1479.0 | 29580 | 1.4308 | 0.475 |
| 0.2785 | 1480.0 | 29600 | 1.5399 | 0.4437 |
| 0.2785 | 1481.0 | 29620 | 1.5077 | 0.5062 |
| 0.2785 | 1482.0 | 29640 | 1.5083 | 0.5062 |
| 0.2785 | 1483.0 | 29660 | 1.5410 | 0.4625 |
| 0.2785 | 1484.0 | 29680 | 1.4391 | 0.5125 |
| 0.2785 | 1485.0 | 29700 | 1.5085 | 0.4875 |
| 0.2785 | 1486.0 | 29720 | 1.3403 | 0.5312 |
| 0.2785 | 1487.0 | 29740 | 1.4827 | 0.5062 |
| 0.2785 | 1488.0 | 29760 | 1.3823 | 0.4938 |
| 0.2785 | 1489.0 | 29780 | 1.3906 | 0.5563 |
| 0.2785 | 1490.0 | 29800 | 1.4016 | 0.5188 |
| 0.2785 | 1491.0 | 29820 | 1.3938 | 0.5375 |
| 0.2785 | 1492.0 | 29840 | 1.4321 | 0.5125 |
| 0.2785 | 1493.0 | 29860 | 1.5029 | 0.475 |
| 0.2785 | 1494.0 | 29880 | 1.4466 | 0.5 |
| 0.2785 | 1495.0 | 29900 | 1.4355 | 0.4938 |
| 0.2785 | 1496.0 | 29920 | 1.4482 | 0.5125 |
| 0.2785 | 1497.0 | 29940 | 1.4275 | 0.5188 |
| 0.2785 | 1498.0 | 29960 | 1.5046 | 0.475 |
| 0.2785 | 1499.0 | 29980 | 1.4453 | 0.5125 |
| 0.2766 | 1500.0 | 30000 | 1.3290 | 0.5437 |
| 0.2766 | 1501.0 | 30020 | 1.4743 | 0.475 |
| 0.2766 | 1502.0 | 30040 | 1.5197 | 0.4625 |
| 0.2766 | 1503.0 | 30060 | 1.4305 | 0.5625 |
| 0.2766 | 1504.0 | 30080 | 1.4716 | 0.4813 |
| 0.2766 | 1505.0 | 30100 | 1.4003 | 0.5625 |
| 0.2766 | 1506.0 | 30120 | 1.4396 | 0.4875 |
| 0.2766 | 1507.0 | 30140 | 1.4425 | 0.4813 |
| 0.2766 | 1508.0 | 30160 | 1.4472 | 0.4938 |
| 0.2766 | 1509.0 | 30180 | 1.5083 | 0.475 |
| 0.2766 | 1510.0 | 30200 | 1.3383 | 0.5312 |
| 0.2766 | 1511.0 | 30220 | 1.3624 | 0.5312 |
| 0.2766 | 1512.0 | 30240 | 1.4646 | 0.4813 |
| 0.2766 | 1513.0 | 30260 | 1.3511 | 0.5563 |
| 0.2766 | 1514.0 | 30280 | 1.4240 | 0.5188 |
| 0.2766 | 1515.0 | 30300 | 1.4329 | 0.4688 |
| 0.2766 | 1516.0 | 30320 | 1.4613 | 0.5188 |
| 0.2766 | 1517.0 | 30340 | 1.4115 | 0.5188 |
| 0.2766 | 1518.0 | 30360 | 1.4163 | 0.4938 |
| 0.2766 | 1519.0 | 30380 | 1.4269 | 0.5 |
| 0.2766 | 1520.0 | 30400 | 1.3476 | 0.5125 |
| 0.2766 | 1521.0 | 30420 | 1.4298 | 0.5 |
| 0.2766 | 1522.0 | 30440 | 1.5222 | 0.4875 |
| 0.2766 | 1523.0 | 30460 | 1.4276 | 0.4938 |
| 0.2766 | 1524.0 | 30480 | 1.3477 | 0.5312 |
| 0.2758 | 1525.0 | 30500 | 1.3653 | 0.55 |
| 0.2758 | 1526.0 | 30520 | 1.4004 | 0.5312 |
| 0.2758 | 1527.0 | 30540 | 1.3683 | 0.5125 |
| 0.2758 | 1528.0 | 30560 | 1.4337 | 0.4938 |
| 0.2758 | 1529.0 | 30580 | 1.5037 | 0.5062 |
| 0.2758 | 1530.0 | 30600 | 1.5474 | 0.45 |
| 0.2758 | 1531.0 | 30620 | 1.4607 | 0.4938 |
| 0.2758 | 1532.0 | 30640 | 1.4329 | 0.525 |
| 0.2758 | 1533.0 | 30660 | 1.3821 | 0.5437 |
| 0.2758 | 1534.0 | 30680 | 1.5021 | 0.4875 |
| 0.2758 | 1535.0 | 30700 | 1.4299 | 0.525 |
| 0.2758 | 1536.0 | 30720 | 1.4694 | 0.4625 |
| 0.2758 | 1537.0 | 30740 | 1.4684 | 0.4938 |
| 0.2758 | 1538.0 | 30760 | 1.5205 | 0.4562 |
| 0.2758 | 1539.0 | 30780 | 1.3924 | 0.5188 |
| 0.2758 | 1540.0 | 30800 | 1.3302 | 0.5813 |
| 0.2758 | 1541.0 | 30820 | 1.3970 | 0.525 |
| 0.2758 | 1542.0 | 30840 | 1.5024 | 0.4375 |
| 0.2758 | 1543.0 | 30860 | 1.4066 | 0.5062 |
| 0.2758 | 1544.0 | 30880 | 1.4827 | 0.5 |
| 0.2758 | 1545.0 | 30900 | 1.4724 | 0.5062 |
| 0.2758 | 1546.0 | 30920 | 1.3244 | 0.5437 |
| 0.2758 | 1547.0 | 30940 | 1.3774 | 0.4875 |
| 0.2758 | 1548.0 | 30960 | 1.5300 | 0.4813 |
| 0.2758 | 1549.0 | 30980 | 1.4806 | 0.5062 |
| 0.2756 | 1550.0 | 31000 | 1.4364 | 0.5 |
| 0.2756 | 1551.0 | 31020 | 1.4185 | 0.5062 |
| 0.2756 | 1552.0 | 31040 | 1.3730 | 0.5188 |
| 0.2756 | 1553.0 | 31060 | 1.4082 | 0.5125 |
| 0.2756 | 1554.0 | 31080 | 1.3745 | 0.5312 |
| 0.2756 | 1555.0 | 31100 | 1.4724 | 0.4437 |
| 0.2756 | 1556.0 | 31120 | 1.4691 | 0.4875 |
| 0.2756 | 1557.0 | 31140 | 1.3968 | 0.5 |
| 0.2756 | 1558.0 | 31160 | 1.5222 | 0.45 |
| 0.2756 | 1559.0 | 31180 | 1.5239 | 0.5125 |
| 0.2756 | 1560.0 | 31200 | 1.5163 | 0.4375 |
| 0.2756 | 1561.0 | 31220 | 1.2894 | 0.5813 |
| 0.2756 | 1562.0 | 31240 | 1.4202 | 0.5188 |
| 0.2756 | 1563.0 | 31260 | 1.3790 | 0.5 |
| 0.2756 | 1564.0 | 31280 | 1.4284 | 0.5 |
| 0.2756 | 1565.0 | 31300 | 1.4031 | 0.5312 |
| 0.2756 | 1566.0 | 31320 | 1.4602 | 0.5 |
| 0.2756 | 1567.0 | 31340 | 1.4664 | 0.4813 |
| 0.2756 | 1568.0 | 31360 | 1.4801 | 0.4875 |
| 0.2756 | 1569.0 | 31380 | 1.5429 | 0.475 |
| 0.2756 | 1570.0 | 31400 | 1.4366 | 0.4938 |
| 0.2756 | 1571.0 | 31420 | 1.3814 | 0.525 |
| 0.2756 | 1572.0 | 31440 | 1.5086 | 0.4875 |
| 0.2756 | 1573.0 | 31460 | 1.4827 | 0.4938 |
| 0.2756 | 1574.0 | 31480 | 1.4607 | 0.4938 |
| 0.2718 | 1575.0 | 31500 | 1.4662 | 0.4813 |
| 0.2718 | 1576.0 | 31520 | 1.5976 | 0.4562 |
| 0.2718 | 1577.0 | 31540 | 1.3768 | 0.5563 |
| 0.2718 | 1578.0 | 31560 | 1.4911 | 0.4938 |
| 0.2718 | 1579.0 | 31580 | 1.4469 | 0.5188 |
| 0.2718 | 1580.0 | 31600 | 1.4668 | 0.5 |
| 0.2718 | 1581.0 | 31620 | 1.4473 | 0.5188 |
| 0.2718 | 1582.0 | 31640 | 1.4289 | 0.5437 |
| 0.2718 | 1583.0 | 31660 | 1.4512 | 0.5188 |
| 0.2718 | 1584.0 | 31680 | 1.5670 | 0.4375 |
| 0.2718 | 1585.0 | 31700 | 1.3894 | 0.525 |
| 0.2718 | 1586.0 | 31720 | 1.4387 | 0.5 |
| 0.2718 | 1587.0 | 31740 | 1.4963 | 0.4938 |
| 0.2718 | 1588.0 | 31760 | 1.3965 | 0.5312 |
| 0.2718 | 1589.0 | 31780 | 1.4742 | 0.5 |
| 0.2718 | 1590.0 | 31800 | 1.4714 | 0.4813 |
| 0.2718 | 1591.0 | 31820 | 1.4471 | 0.4813 |
| 0.2718 | 1592.0 | 31840 | 1.4259 | 0.5 |
| 0.2718 | 1593.0 | 31860 | 1.5475 | 0.5 |
| 0.2718 | 1594.0 | 31880 | 1.4578 | 0.5062 |
| 0.2718 | 1595.0 | 31900 | 1.4545 | 0.4938 |
| 0.2718 | 1596.0 | 31920 | 1.5368 | 0.4688 |
| 0.2718 | 1597.0 | 31940 | 1.4596 | 0.4813 |
| 0.2718 | 1598.0 | 31960 | 1.4468 | 0.5375 |
| 0.2718 | 1599.0 | 31980 | 1.4371 | 0.5125 |
| 0.2719 | 1600.0 | 32000 | 1.4436 | 0.525 |
| 0.2719 | 1601.0 | 32020 | 1.5506 | 0.4375 |
| 0.2719 | 1602.0 | 32040 | 1.5337 | 0.4562 |
| 0.2719 | 1603.0 | 32060 | 1.4638 | 0.4625 |
| 0.2719 | 1604.0 | 32080 | 1.3749 | 0.5188 |
| 0.2719 | 1605.0 | 32100 | 1.3935 | 0.5188 |
| 0.2719 | 1606.0 | 32120 | 1.4744 | 0.4875 |
| 0.2719 | 1607.0 | 32140 | 1.4828 | 0.475 |
| 0.2719 | 1608.0 | 32160 | 1.4926 | 0.4688 |
| 0.2719 | 1609.0 | 32180 | 1.5197 | 0.4875 |
| 0.2719 | 1610.0 | 32200 | 1.4079 | 0.525 |
| 0.2719 | 1611.0 | 32220 | 1.4185 | 0.5062 |
| 0.2719 | 1612.0 | 32240 | 1.4210 | 0.5062 |
| 0.2719 | 1613.0 | 32260 | 1.4585 | 0.4875 |
| 0.2719 | 1614.0 | 32280 | 1.4763 | 0.475 |
| 0.2719 | 1615.0 | 32300 | 1.5101 | 0.475 |
| 0.2719 | 1616.0 | 32320 | 1.4968 | 0.4938 |
| 0.2719 | 1617.0 | 32340 | 1.5228 | 0.4562 |
| 0.2719 | 1618.0 | 32360 | 1.3589 | 0.5188 |
| 0.2719 | 1619.0 | 32380 | 1.3532 | 0.5188 |
| 0.2719 | 1620.0 | 32400 | 1.4536 | 0.5062 |
| 0.2719 | 1621.0 | 32420 | 1.4180 | 0.475 |
| 0.2719 | 1622.0 | 32440 | 1.4507 | 0.5188 |
| 0.2719 | 1623.0 | 32460 | 1.3285 | 0.5563 |
| 0.2719 | 1624.0 | 32480 | 1.2809 | 0.5375 |
| 0.2699 | 1625.0 | 32500 | 1.4344 | 0.525 |
| 0.2699 | 1626.0 | 32520 | 1.4535 | 0.5 |
| 0.2699 | 1627.0 | 32540 | 1.5542 | 0.475 |
| 0.2699 | 1628.0 | 32560 | 1.5891 | 0.4375 |
| 0.2699 | 1629.0 | 32580 | 1.5430 | 0.4625 |
| 0.2699 | 1630.0 | 32600 | 1.4423 | 0.475 |
| 0.2699 | 1631.0 | 32620 | 1.4449 | 0.4875 |
| 0.2699 | 1632.0 | 32640 | 1.3946 | 0.5437 |
| 0.2699 | 1633.0 | 32660 | 1.3262 | 0.55 |
| 0.2699 | 1634.0 | 32680 | 1.3367 | 0.55 |
| 0.2699 | 1635.0 | 32700 | 1.4468 | 0.45 |
| 0.2699 | 1636.0 | 32720 | 1.4302 | 0.5125 |
| 0.2699 | 1637.0 | 32740 | 1.5481 | 0.4625 |
| 0.2699 | 1638.0 | 32760 | 1.3970 | 0.5062 |
| 0.2699 | 1639.0 | 32780 | 1.5247 | 0.4625 |
| 0.2699 | 1640.0 | 32800 | 1.3148 | 0.5375 |
| 0.2699 | 1641.0 | 32820 | 1.4591 | 0.4938 |
| 0.2699 | 1642.0 | 32840 | 1.4126 | 0.5563 |
| 0.2699 | 1643.0 | 32860 | 1.4827 | 0.5 |
| 0.2699 | 1644.0 | 32880 | 1.4025 | 0.5312 |
| 0.2699 | 1645.0 | 32900 | 1.4529 | 0.525 |
| 0.2699 | 1646.0 | 32920 | 1.4559 | 0.475 |
| 0.2699 | 1647.0 | 32940 | 1.5547 | 0.4375 |
| 0.2699 | 1648.0 | 32960 | 1.4492 | 0.4938 |
| 0.2699 | 1649.0 | 32980 | 1.5228 | 0.45 |
| 0.2612 | 1650.0 | 33000 | 1.4569 | 0.4938 |
| 0.2612 | 1651.0 | 33020 | 1.4739 | 0.5062 |
| 0.2612 | 1652.0 | 33040 | 1.3789 | 0.4875 |
| 0.2612 | 1653.0 | 33060 | 1.4642 | 0.4938 |
| 0.2612 | 1654.0 | 33080 | 1.4544 | 0.5062 |
| 0.2612 | 1655.0 | 33100 | 1.4793 | 0.475 |
| 0.2612 | 1656.0 | 33120 | 1.3544 | 0.5625 |
| 0.2612 | 1657.0 | 33140 | 1.4798 | 0.5125 |
| 0.2612 | 1658.0 | 33160 | 1.5018 | 0.4813 |
| 0.2612 | 1659.0 | 33180 | 1.5245 | 0.4313 |
| 0.2612 | 1660.0 | 33200 | 1.4227 | 0.5062 |
| 0.2612 | 1661.0 | 33220 | 1.3944 | 0.4938 |
| 0.2612 | 1662.0 | 33240 | 1.4664 | 0.5125 |
| 0.2612 | 1663.0 | 33260 | 1.5123 | 0.5062 |
| 0.2612 | 1664.0 | 33280 | 1.4851 | 0.475 |
| 0.2612 | 1665.0 | 33300 | 1.5370 | 0.4875 |
| 0.2612 | 1666.0 | 33320 | 1.4442 | 0.5062 |
| 0.2612 | 1667.0 | 33340 | 1.3614 | 0.55 |
| 0.2612 | 1668.0 | 33360 | 1.5381 | 0.4688 |
| 0.2612 | 1669.0 | 33380 | 1.4347 | 0.5188 |
| 0.2612 | 1670.0 | 33400 | 1.4191 | 0.5375 |
| 0.2612 | 1671.0 | 33420 | 1.4999 | 0.4625 |
| 0.2612 | 1672.0 | 33440 | 1.4596 | 0.5125 |
| 0.2612 | 1673.0 | 33460 | 1.3416 | 0.5375 |
| 0.2612 | 1674.0 | 33480 | 1.4813 | 0.525 |
| 0.2666 | 1675.0 | 33500 | 1.4499 | 0.5062 |
| 0.2666 | 1676.0 | 33520 | 1.4462 | 0.5188 |
| 0.2666 | 1677.0 | 33540 | 1.4109 | 0.5312 |
| 0.2666 | 1678.0 | 33560 | 1.4430 | 0.4688 |
| 0.2666 | 1679.0 | 33580 | 1.5119 | 0.475 |
| 0.2666 | 1680.0 | 33600 | 1.5080 | 0.5 |
| 0.2666 | 1681.0 | 33620 | 1.4745 | 0.5125 |
| 0.2666 | 1682.0 | 33640 | 1.4799 | 0.4938 |
| 0.2666 | 1683.0 | 33660 | 1.4436 | 0.5062 |
| 0.2666 | 1684.0 | 33680 | 1.3928 | 0.5125 |
| 0.2666 | 1685.0 | 33700 | 1.4427 | 0.5062 |
| 0.2666 | 1686.0 | 33720 | 1.3572 | 0.55 |
| 0.2666 | 1687.0 | 33740 | 1.4141 | 0.5062 |
| 0.2666 | 1688.0 | 33760 | 1.4079 | 0.5 |
| 0.2666 | 1689.0 | 33780 | 1.3888 | 0.4938 |
| 0.2666 | 1690.0 | 33800 | 1.4878 | 0.4625 |
| 0.2666 | 1691.0 | 33820 | 1.4618 | 0.4938 |
| 0.2666 | 1692.0 | 33840 | 1.5789 | 0.475 |
| 0.2666 | 1693.0 | 33860 | 1.4814 | 0.4562 |
| 0.2666 | 1694.0 | 33880 | 1.4810 | 0.5437 |
| 0.2666 | 1695.0 | 33900 | 1.5088 | 0.4875 |
| 0.2666 | 1696.0 | 33920 | 1.4263 | 0.5062 |
| 0.2666 | 1697.0 | 33940 | 1.5221 | 0.4437 |
| 0.2666 | 1698.0 | 33960 | 1.3571 | 0.5125 |
| 0.2666 | 1699.0 | 33980 | 1.3727 | 0.5437 |
| 0.2654 | 1700.0 | 34000 | 1.4273 | 0.5062 |
| 0.2654 | 1701.0 | 34020 | 1.5031 | 0.4688 |
| 0.2654 | 1702.0 | 34040 | 1.5287 | 0.4938 |
| 0.2654 | 1703.0 | 34060 | 1.4249 | 0.4875 |
| 0.2654 | 1704.0 | 34080 | 1.4168 | 0.5125 |
| 0.2654 | 1705.0 | 34100 | 1.4947 | 0.475 |
| 0.2654 | 1706.0 | 34120 | 1.5093 | 0.4813 |
| 0.2654 | 1707.0 | 34140 | 1.4478 | 0.525 |
| 0.2654 | 1708.0 | 34160 | 1.3681 | 0.5375 |
| 0.2654 | 1709.0 | 34180 | 1.4784 | 0.475 |
| 0.2654 | 1710.0 | 34200 | 1.4736 | 0.4813 |
| 0.2654 | 1711.0 | 34220 | 1.4512 | 0.5125 |
| 0.2654 | 1712.0 | 34240 | 1.4901 | 0.5125 |
| 0.2654 | 1713.0 | 34260 | 1.3999 | 0.5188 |
| 0.2654 | 1714.0 | 34280 | 1.5042 | 0.4875 |
| 0.2654 | 1715.0 | 34300 | 1.4943 | 0.475 |
| 0.2654 | 1716.0 | 34320 | 1.4266 | 0.5 |
| 0.2654 | 1717.0 | 34340 | 1.4092 | 0.4938 |
| 0.2654 | 1718.0 | 34360 | 1.4771 | 0.5188 |
| 0.2654 | 1719.0 | 34380 | 1.4573 | 0.5 |
| 0.2654 | 1720.0 | 34400 | 1.4646 | 0.5 |
| 0.2654 | 1721.0 | 34420 | 1.4718 | 0.5188 |
| 0.2654 | 1722.0 | 34440 | 1.3844 | 0.4938 |
| 0.2654 | 1723.0 | 34460 | 1.5301 | 0.4813 |
| 0.2654 | 1724.0 | 34480 | 1.6132 | 0.4375 |
| 0.2679 | 1725.0 | 34500 | 1.4244 | 0.4938 |
| 0.2679 | 1726.0 | 34520 | 1.4837 | 0.5188 |
| 0.2679 | 1727.0 | 34540 | 1.5501 | 0.4875 |
| 0.2679 | 1728.0 | 34560 | 1.4302 | 0.5125 |
| 0.2679 | 1729.0 | 34580 | 1.3363 | 0.5563 |
| 0.2679 | 1730.0 | 34600 | 1.5036 | 0.475 |
| 0.2679 | 1731.0 | 34620 | 1.4386 | 0.4875 |
| 0.2679 | 1732.0 | 34640 | 1.4130 | 0.4875 |
| 0.2679 | 1733.0 | 34660 | 1.4120 | 0.5062 |
| 0.2679 | 1734.0 | 34680 | 1.5886 | 0.4375 |
| 0.2679 | 1735.0 | 34700 | 1.3803 | 0.5188 |
| 0.2679 | 1736.0 | 34720 | 1.4227 | 0.4813 |
| 0.2679 | 1737.0 | 34740 | 1.4488 | 0.4938 |
| 0.2679 | 1738.0 | 34760 | 1.4735 | 0.5 |
| 0.2679 | 1739.0 | 34780 | 1.5467 | 0.4688 |
| 0.2679 | 1740.0 | 34800 | 1.4766 | 0.4938 |
| 0.2679 | 1741.0 | 34820 | 1.4444 | 0.5062 |
| 0.2679 | 1742.0 | 34840 | 1.3931 | 0.5437 |
| 0.2679 | 1743.0 | 34860 | 1.4914 | 0.4813 |
| 0.2679 | 1744.0 | 34880 | 1.4425 | 0.5125 |
| 0.2679 | 1745.0 | 34900 | 1.4441 | 0.4938 |
| 0.2679 | 1746.0 | 34920 | 1.4857 | 0.4813 |
| 0.2679 | 1747.0 | 34940 | 1.4785 | 0.5062 |
| 0.2679 | 1748.0 | 34960 | 1.4310 | 0.5375 |
| 0.2679 | 1749.0 | 34980 | 1.5680 | 0.4562 |
| 0.2638 | 1750.0 | 35000 | 1.3888 | 0.5062 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.0
- Datasets 2.15.0
- Tokenizers 0.15.1
|
Zoyd/OEvortex_HelpingAI-9B-3_5bpw_exl2
|
Zoyd
| 2024-05-31T01:37:15Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"HelpingAI",
"Emotionally Intelligent",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] |
text-generation
| 2024-05-31T00:16:30Z |
---
license: other
license_name: helpingai
license_link: LICENSE.md
pipeline_tag: text-generation
tags:
- HelpingAI
- Emotionally Intelligent
---
**Exllamav2** quant (**exl2** / **3.5 bpw**) made with ExLlamaV2 v0.1.1
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-2_2bpw_exl2)**</center> | <center>2900 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-2_5bpw_exl2)**</center> | <center>3171 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-3_0bpw_exl2)**</center> | <center>3668 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-3_5bpw_exl2)**</center> | <center>4163 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-3_75bpw_exl2)**</center> | <center>4411 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-4_0bpw_exl2)**</center> | <center>4659 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-4_25bpw_exl2)**</center> | <center>4906 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-5_0bpw_exl2)**</center> | <center>5647 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-6_0bpw_exl2)**</center> | <center>6691 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-6_5bpw_exl2)**</center> | <center>7176 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-8_0bpw_exl2)**</center> | <center>8149 MB</center> | <center>8</center> |
# HelpingAI-9B: The King of Human like Emotion

## Overview
HelpingAI-9B is a large language model designed for emotionally intelligent conversational interactions. It is trained to engage users with empathy, understanding, and supportive dialogue across a wide range of topics and contexts. The model aims to provide a supportive AI companion that can attune to users' emotional states and communicative needs.
## Objectives
- Engage in open-ended dialogue while displaying emotional intelligence
- Recognize and validate user emotions and emotional contexts
- Provide supportive, empathetic, and psychologically-grounded responses
- Avoid insensitive, harmful, or unethical speech
- Continuously improve emotional awareness and dialogue skills
## Methodology
HelpingAI-9B is based on the HelpingAI series and further trained using:
- Supervised learning on large dialogue datasets with emotional labeling
- Reinforcement learning with a reward model favoring emotionally supportive responses
- Constitution training to instill stable and beneficial objectives
- Knowledge augmentation from psychological resources on emotional intelligence
## Usage code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"OEvortex/HelpingAI-9B",
torch_dtype='auto',
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("OEvortex/HelpingAI-9B")
prompt = "Express joy and excitement about visiting a new place"
messages = [
# {"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=1024,
eos_token_id=tokenizer.eos_token_id,
temperature=0.25,
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids)[0]
print(response)
```
*Directly using this model from GGUF*
```python
%pip install -U 'webscout[loacl]'
from webscout.Local.utils import download_model
from webscout.Local.model import Model
from webscout.Local.thread import Thread
from webscout.Local import formats
from webscout.Local.samplers import SamplerSettings
from dotenv import load_dotenv; load_dotenv()
import os
# 1. Download the model
repo_id = "OEvortex/HelpingAI-9B"
filename = "helpingai-9b.Q4_0.gguf"
model_path = download_model(repo_id, filename, os.environ.get("hf_token"))
# 2. Load the model
model = Model(model_path, n_gpu_layers=0)ย
# 3. Define your system prompt
system_prompt = "You are HelpingAI a emotional AI always answer my question in HelpingAI style"
# 4. Create a custom chatml format with your system prompt
custom_chatml = formats.chatml.copy()
custom_chatml['system_content'] = system_prompt
# 5. Define your sampler settings (optional)
sampler = SamplerSettings(temp=0.7, top_p=0.9)ย # Adjust these values as needed
# 6. Create a Thread with the custom format and sampler
thread = Thread(model, custom_chatml, sampler=sampler)
# 7. Start interacting with the model
thread.interact(header="๐ HelpingAI-9B: The King of Human-like Emotion ๐", color=True)
```
## Example Dialogue
> Express joy and excitement about visiting a new place.
Yay! I'm super excited to visit this new place! ๐ It sounds amazing and I can't wait to explore and have fun there! This is going to be such an exciting adventure! ๐ Let's go, let's do this! โจ
> I'm feeling really down today. Nothing seems to be going right.
Oh no, I'm sorry to hear that you're feeling down. It happens sometimes, but remember that it's okay to feel sad. If you want to talk about what's bothering you, I'm here to listen and offer support. We can find a way through this together! ๐
> I'm excited because I just got accepted into my dream school! I wanted to share the good news with someone.
Wow, that's amazing! ๐ Congratulations on getting accepted into your dream school! That's such an incredible accomplishment and I'm really happy for you! Keep up the hard work and enjoy the journey ahead! ๐ If there's anything else I can do to help, just let me know! ๐
> My best friend just got into an argument with me and I don't know what to do. How can I fix this situation?
Oh no, it sounds like you both need some time to cool off. Take a step back and give each other some space. Then, when things have calmed down, try talking things through in a calm and respectful manner. Communication is key for resolving conflicts, so let's work towards finding a resolution that works for both of you! ๐ If you need any advice or support, I'm here to help! ๐
> How can you identify the emotions of others based on their body language and facial expressions?
Identifying the emotions of others is a great skill! ๐ค You can look for body language cues like frowns, smiles, or tense posture. Facial expressions also give away a lot about someone's feelings - watch out for changes in their eyes, mouth, and facial muscles. It's all about being observant and empathetic to understand others better! ๐
> What kind of safety devices do I need to install in my house
Installing security devices in your home can bring peace of mind and keep you safe. Some essential safety devices include a home alarm system, smoke detectors, carbon monoxide detectors, motion sensors, and surveillance cameras. It's always good to take precautions and ensure that your home is secure! ๐ If you need help choosing the right security devices for your family, I can definitely point you in the right direction! ๐
> How can your own feelings affect your ability to empathize with someone else?
Our own emotions can impact our ability to empathize with others because they shape how we perceive and interpret other people's experiences. If you're feeling happy or positive, it may be easier to see the good in others and offer support. On the other hand, if you're feeling down or stressed, it might be harder to put yourself in someone else's shoes. It's important to acknowledge your own feelings and work on being open and understanding towards others! ๐
|
Zoyd/OEvortex_HelpingAI-9B-8_0bpw_exl2
|
Zoyd
| 2024-05-31T01:35:58Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"HelpingAI",
"Emotionally Intelligent",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"exl2",
"region:us"
] |
text-generation
| 2024-05-31T01:21:09Z |
---
license: other
license_name: helpingai
license_link: LICENSE.md
pipeline_tag: text-generation
tags:
- HelpingAI
- Emotionally Intelligent
---
**Exllamav2** quant (**exl2** / **8.0 bpw**) made with ExLlamaV2 v0.1.1
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-2_2bpw_exl2)**</center> | <center>2900 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-2_5bpw_exl2)**</center> | <center>3171 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-3_0bpw_exl2)**</center> | <center>3668 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-3_5bpw_exl2)**</center> | <center>4163 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-3_75bpw_exl2)**</center> | <center>4411 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-4_0bpw_exl2)**</center> | <center>4659 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-4_25bpw_exl2)**</center> | <center>4906 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-5_0bpw_exl2)**</center> | <center>5647 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-6_0bpw_exl2)**</center> | <center>6691 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-6_5bpw_exl2)**</center> | <center>7176 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-8_0bpw_exl2)**</center> | <center>8149 MB</center> | <center>8</center> |
# HelpingAI-9B: The King of Human like Emotion

## Overview
HelpingAI-9B is a large language model designed for emotionally intelligent conversational interactions. It is trained to engage users with empathy, understanding, and supportive dialogue across a wide range of topics and contexts. The model aims to provide a supportive AI companion that can attune to users' emotional states and communicative needs.
## Objectives
- Engage in open-ended dialogue while displaying emotional intelligence
- Recognize and validate user emotions and emotional contexts
- Provide supportive, empathetic, and psychologically-grounded responses
- Avoid insensitive, harmful, or unethical speech
- Continuously improve emotional awareness and dialogue skills
## Methodology
HelpingAI-9B is based on the HelpingAI series and further trained using:
- Supervised learning on large dialogue datasets with emotional labeling
- Reinforcement learning with a reward model favoring emotionally supportive responses
- Constitution training to instill stable and beneficial objectives
- Knowledge augmentation from psychological resources on emotional intelligence
## Usage code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"OEvortex/HelpingAI-9B",
torch_dtype='auto',
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("OEvortex/HelpingAI-9B")
prompt = "Express joy and excitement about visiting a new place"
messages = [
# {"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=1024,
eos_token_id=tokenizer.eos_token_id,
temperature=0.25,
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids)[0]
print(response)
```
*Directly using this model from GGUF*
```python
%pip install -U 'webscout[loacl]'
from webscout.Local.utils import download_model
from webscout.Local.model import Model
from webscout.Local.thread import Thread
from webscout.Local import formats
from webscout.Local.samplers import SamplerSettings
from dotenv import load_dotenv; load_dotenv()
import os
# 1. Download the model
repo_id = "OEvortex/HelpingAI-9B"
filename = "helpingai-9b.Q4_0.gguf"
model_path = download_model(repo_id, filename, os.environ.get("hf_token"))
# 2. Load the model
model = Model(model_path, n_gpu_layers=0)ย
# 3. Define your system prompt
system_prompt = "You are HelpingAI a emotional AI always answer my question in HelpingAI style"
# 4. Create a custom chatml format with your system prompt
custom_chatml = formats.chatml.copy()
custom_chatml['system_content'] = system_prompt
# 5. Define your sampler settings (optional)
sampler = SamplerSettings(temp=0.7, top_p=0.9)ย # Adjust these values as needed
# 6. Create a Thread with the custom format and sampler
thread = Thread(model, custom_chatml, sampler=sampler)
# 7. Start interacting with the model
thread.interact(header="๐ HelpingAI-9B: The King of Human-like Emotion ๐", color=True)
```
## Example Dialogue
> Express joy and excitement about visiting a new place.
Yay! I'm super excited to visit this new place! ๐ It sounds amazing and I can't wait to explore and have fun there! This is going to be such an exciting adventure! ๐ Let's go, let's do this! โจ
> I'm feeling really down today. Nothing seems to be going right.
Oh no, I'm sorry to hear that you're feeling down. It happens sometimes, but remember that it's okay to feel sad. If you want to talk about what's bothering you, I'm here to listen and offer support. We can find a way through this together! ๐
> I'm excited because I just got accepted into my dream school! I wanted to share the good news with someone.
Wow, that's amazing! ๐ Congratulations on getting accepted into your dream school! That's such an incredible accomplishment and I'm really happy for you! Keep up the hard work and enjoy the journey ahead! ๐ If there's anything else I can do to help, just let me know! ๐
> My best friend just got into an argument with me and I don't know what to do. How can I fix this situation?
Oh no, it sounds like you both need some time to cool off. Take a step back and give each other some space. Then, when things have calmed down, try talking things through in a calm and respectful manner. Communication is key for resolving conflicts, so let's work towards finding a resolution that works for both of you! ๐ If you need any advice or support, I'm here to help! ๐
> How can you identify the emotions of others based on their body language and facial expressions?
Identifying the emotions of others is a great skill! ๐ค You can look for body language cues like frowns, smiles, or tense posture. Facial expressions also give away a lot about someone's feelings - watch out for changes in their eyes, mouth, and facial muscles. It's all about being observant and empathetic to understand others better! ๐
> What kind of safety devices do I need to install in my house
Installing security devices in your home can bring peace of mind and keep you safe. Some essential safety devices include a home alarm system, smoke detectors, carbon monoxide detectors, motion sensors, and surveillance cameras. It's always good to take precautions and ensure that your home is secure! ๐ If you need help choosing the right security devices for your family, I can definitely point you in the right direction! ๐
> How can your own feelings affect your ability to empathize with someone else?
Our own emotions can impact our ability to empathize with others because they shape how we perceive and interpret other people's experiences. If you're feeling happy or positive, it may be easier to see the good in others and offer support. On the other hand, if you're feeling down or stressed, it might be harder to put yourself in someone else's shoes. It's important to acknowledge your own feelings and work on being open and understanding towards others! ๐
|
Zoyd/OEvortex_HelpingAI-9B-3_0bpw_exl2
|
Zoyd
| 2024-05-31T01:35:21Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"HelpingAI",
"Emotionally Intelligent",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"exl2",
"region:us"
] |
text-generation
| 2024-05-31T00:12:15Z |
---
license: other
license_name: helpingai
license_link: LICENSE.md
pipeline_tag: text-generation
tags:
- HelpingAI
- Emotionally Intelligent
---
**Exllamav2** quant (**exl2** / **3.0 bpw**) made with ExLlamaV2 v0.1.1
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-2_2bpw_exl2)**</center> | <center>2900 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-2_5bpw_exl2)**</center> | <center>3171 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-3_0bpw_exl2)**</center> | <center>3668 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-3_5bpw_exl2)**</center> | <center>4163 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-3_75bpw_exl2)**</center> | <center>4411 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-4_0bpw_exl2)**</center> | <center>4659 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-4_25bpw_exl2)**</center> | <center>4906 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-5_0bpw_exl2)**</center> | <center>5647 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-6_0bpw_exl2)**</center> | <center>6691 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-6_5bpw_exl2)**</center> | <center>7176 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-8_0bpw_exl2)**</center> | <center>8149 MB</center> | <center>8</center> |
# HelpingAI-9B: The King of Human like Emotion

## Overview
HelpingAI-9B is a large language model designed for emotionally intelligent conversational interactions. It is trained to engage users with empathy, understanding, and supportive dialogue across a wide range of topics and contexts. The model aims to provide a supportive AI companion that can attune to users' emotional states and communicative needs.
## Objectives
- Engage in open-ended dialogue while displaying emotional intelligence
- Recognize and validate user emotions and emotional contexts
- Provide supportive, empathetic, and psychologically-grounded responses
- Avoid insensitive, harmful, or unethical speech
- Continuously improve emotional awareness and dialogue skills
## Methodology
HelpingAI-9B is based on the HelpingAI series and further trained using:
- Supervised learning on large dialogue datasets with emotional labeling
- Reinforcement learning with a reward model favoring emotionally supportive responses
- Constitution training to instill stable and beneficial objectives
- Knowledge augmentation from psychological resources on emotional intelligence
## Usage code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"OEvortex/HelpingAI-9B",
torch_dtype='auto',
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("OEvortex/HelpingAI-9B")
prompt = "Express joy and excitement about visiting a new place"
messages = [
# {"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=1024,
eos_token_id=tokenizer.eos_token_id,
temperature=0.25,
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids)[0]
print(response)
```
*Directly using this model from GGUF*
```python
%pip install -U 'webscout[loacl]'
from webscout.Local.utils import download_model
from webscout.Local.model import Model
from webscout.Local.thread import Thread
from webscout.Local import formats
from webscout.Local.samplers import SamplerSettings
from dotenv import load_dotenv; load_dotenv()
import os
# 1. Download the model
repo_id = "OEvortex/HelpingAI-9B"
filename = "helpingai-9b.Q4_0.gguf"
model_path = download_model(repo_id, filename, os.environ.get("hf_token"))
# 2. Load the model
model = Model(model_path, n_gpu_layers=0)ย
# 3. Define your system prompt
system_prompt = "You are HelpingAI a emotional AI always answer my question in HelpingAI style"
# 4. Create a custom chatml format with your system prompt
custom_chatml = formats.chatml.copy()
custom_chatml['system_content'] = system_prompt
# 5. Define your sampler settings (optional)
sampler = SamplerSettings(temp=0.7, top_p=0.9)ย # Adjust these values as needed
# 6. Create a Thread with the custom format and sampler
thread = Thread(model, custom_chatml, sampler=sampler)
# 7. Start interacting with the model
thread.interact(header="๐ HelpingAI-9B: The King of Human-like Emotion ๐", color=True)
```
## Example Dialogue
> Express joy and excitement about visiting a new place.
Yay! I'm super excited to visit this new place! ๐ It sounds amazing and I can't wait to explore and have fun there! This is going to be such an exciting adventure! ๐ Let's go, let's do this! โจ
> I'm feeling really down today. Nothing seems to be going right.
Oh no, I'm sorry to hear that you're feeling down. It happens sometimes, but remember that it's okay to feel sad. If you want to talk about what's bothering you, I'm here to listen and offer support. We can find a way through this together! ๐
> I'm excited because I just got accepted into my dream school! I wanted to share the good news with someone.
Wow, that's amazing! ๐ Congratulations on getting accepted into your dream school! That's such an incredible accomplishment and I'm really happy for you! Keep up the hard work and enjoy the journey ahead! ๐ If there's anything else I can do to help, just let me know! ๐
> My best friend just got into an argument with me and I don't know what to do. How can I fix this situation?
Oh no, it sounds like you both need some time to cool off. Take a step back and give each other some space. Then, when things have calmed down, try talking things through in a calm and respectful manner. Communication is key for resolving conflicts, so let's work towards finding a resolution that works for both of you! ๐ If you need any advice or support, I'm here to help! ๐
> How can you identify the emotions of others based on their body language and facial expressions?
Identifying the emotions of others is a great skill! ๐ค You can look for body language cues like frowns, smiles, or tense posture. Facial expressions also give away a lot about someone's feelings - watch out for changes in their eyes, mouth, and facial muscles. It's all about being observant and empathetic to understand others better! ๐
> What kind of safety devices do I need to install in my house
Installing security devices in your home can bring peace of mind and keep you safe. Some essential safety devices include a home alarm system, smoke detectors, carbon monoxide detectors, motion sensors, and surveillance cameras. It's always good to take precautions and ensure that your home is secure! ๐ If you need help choosing the right security devices for your family, I can definitely point you in the right direction! ๐
> How can your own feelings affect your ability to empathize with someone else?
Our own emotions can impact our ability to empathize with others because they shape how we perceive and interpret other people's experiences. If you're feeling happy or positive, it may be easier to see the good in others and offer support. On the other hand, if you're feeling down or stressed, it might be harder to put yourself in someone else's shoes. It's important to acknowledge your own feelings and work on being open and understanding towards others! ๐
|
Zoyd/OEvortex_HelpingAI-9B-2_2bpw_exl2
|
Zoyd
| 2024-05-31T01:35:13Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"HelpingAI",
"Emotionally Intelligent",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] |
text-generation
| 2024-05-30T23:53:22Z |
---
license: other
license_name: helpingai
license_link: LICENSE.md
pipeline_tag: text-generation
tags:
- HelpingAI
- Emotionally Intelligent
---
**Exllamav2** quant (**exl2** / **2.2 bpw**) made with ExLlamaV2 v0.1.1
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-2_2bpw_exl2)**</center> | <center>2900 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-2_5bpw_exl2)**</center> | <center>3171 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-3_0bpw_exl2)**</center> | <center>3668 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-3_5bpw_exl2)**</center> | <center>4163 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-3_75bpw_exl2)**</center> | <center>4411 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-4_0bpw_exl2)**</center> | <center>4659 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-4_25bpw_exl2)**</center> | <center>4906 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-5_0bpw_exl2)**</center> | <center>5647 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-6_0bpw_exl2)**</center> | <center>6691 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-6_5bpw_exl2)**</center> | <center>7176 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/OEvortex_HelpingAI-9B-8_0bpw_exl2)**</center> | <center>8149 MB</center> | <center>8</center> |
# HelpingAI-9B: The King of Human like Emotion

## Overview
HelpingAI-9B is a large language model designed for emotionally intelligent conversational interactions. It is trained to engage users with empathy, understanding, and supportive dialogue across a wide range of topics and contexts. The model aims to provide a supportive AI companion that can attune to users' emotional states and communicative needs.
## Objectives
- Engage in open-ended dialogue while displaying emotional intelligence
- Recognize and validate user emotions and emotional contexts
- Provide supportive, empathetic, and psychologically-grounded responses
- Avoid insensitive, harmful, or unethical speech
- Continuously improve emotional awareness and dialogue skills
## Methodology
HelpingAI-9B is based on the HelpingAI series and further trained using:
- Supervised learning on large dialogue datasets with emotional labeling
- Reinforcement learning with a reward model favoring emotionally supportive responses
- Constitution training to instill stable and beneficial objectives
- Knowledge augmentation from psychological resources on emotional intelligence
## Usage code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"OEvortex/HelpingAI-9B",
torch_dtype='auto',
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("OEvortex/HelpingAI-9B")
prompt = "Express joy and excitement about visiting a new place"
messages = [
# {"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=1024,
eos_token_id=tokenizer.eos_token_id,
temperature=0.25,
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids)[0]
print(response)
```
*Directly using this model from GGUF*
```python
%pip install -U 'webscout[loacl]'
from webscout.Local.utils import download_model
from webscout.Local.model import Model
from webscout.Local.thread import Thread
from webscout.Local import formats
from webscout.Local.samplers import SamplerSettings
from dotenv import load_dotenv; load_dotenv()
import os
# 1. Download the model
repo_id = "OEvortex/HelpingAI-9B"
filename = "helpingai-9b.Q4_0.gguf"
model_path = download_model(repo_id, filename, os.environ.get("hf_token"))
# 2. Load the model
model = Model(model_path, n_gpu_layers=0)ย
# 3. Define your system prompt
system_prompt = "You are HelpingAI a emotional AI always answer my question in HelpingAI style"
# 4. Create a custom chatml format with your system prompt
custom_chatml = formats.chatml.copy()
custom_chatml['system_content'] = system_prompt
# 5. Define your sampler settings (optional)
sampler = SamplerSettings(temp=0.7, top_p=0.9)ย # Adjust these values as needed
# 6. Create a Thread with the custom format and sampler
thread = Thread(model, custom_chatml, sampler=sampler)
# 7. Start interacting with the model
thread.interact(header="๐ HelpingAI-9B: The King of Human-like Emotion ๐", color=True)
```
## Example Dialogue
> Express joy and excitement about visiting a new place.
Yay! I'm super excited to visit this new place! ๐ It sounds amazing and I can't wait to explore and have fun there! This is going to be such an exciting adventure! ๐ Let's go, let's do this! โจ
> I'm feeling really down today. Nothing seems to be going right.
Oh no, I'm sorry to hear that you're feeling down. It happens sometimes, but remember that it's okay to feel sad. If you want to talk about what's bothering you, I'm here to listen and offer support. We can find a way through this together! ๐
> I'm excited because I just got accepted into my dream school! I wanted to share the good news with someone.
Wow, that's amazing! ๐ Congratulations on getting accepted into your dream school! That's such an incredible accomplishment and I'm really happy for you! Keep up the hard work and enjoy the journey ahead! ๐ If there's anything else I can do to help, just let me know! ๐
> My best friend just got into an argument with me and I don't know what to do. How can I fix this situation?
Oh no, it sounds like you both need some time to cool off. Take a step back and give each other some space. Then, when things have calmed down, try talking things through in a calm and respectful manner. Communication is key for resolving conflicts, so let's work towards finding a resolution that works for both of you! ๐ If you need any advice or support, I'm here to help! ๐
> How can you identify the emotions of others based on their body language and facial expressions?
Identifying the emotions of others is a great skill! ๐ค You can look for body language cues like frowns, smiles, or tense posture. Facial expressions also give away a lot about someone's feelings - watch out for changes in their eyes, mouth, and facial muscles. It's all about being observant and empathetic to understand others better! ๐
> What kind of safety devices do I need to install in my house
Installing security devices in your home can bring peace of mind and keep you safe. Some essential safety devices include a home alarm system, smoke detectors, carbon monoxide detectors, motion sensors, and surveillance cameras. It's always good to take precautions and ensure that your home is secure! ๐ If you need help choosing the right security devices for your family, I can definitely point you in the right direction! ๐
> How can your own feelings affect your ability to empathize with someone else?
Our own emotions can impact our ability to empathize with others because they shape how we perceive and interpret other people's experiences. If you're feeling happy or positive, it may be easier to see the good in others and offer support. On the other hand, if you're feeling down or stressed, it might be harder to put yourself in someone else's shoes. It's important to acknowledge your own feelings and work on being open and understanding towards others! ๐
|
MrezaPRZ/codellama_high_quality_sft_5k_gemini
|
MrezaPRZ
| 2024-05-31T01:31:25Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-31T01:26:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
liddlefish/privacyembeddingv2_bge_small
|
liddlefish
| 2024-05-31T01:19:31Z | 9 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"mteb",
"en",
"arxiv:2401.03462",
"arxiv:2312.15503",
"arxiv:2311.13534",
"arxiv:2310.07554",
"arxiv:2309.07597",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-05-31T01:17:56Z |
---
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- mteb
model-index:
- name: bge-small-en-v1.5
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 73.79104477611939
- type: ap
value: 37.21923821573361
- type: f1
value: 68.0914945617093
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 92.75377499999999
- type: ap
value: 89.46766124546022
- type: f1
value: 92.73884001331487
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 46.986
- type: f1
value: 46.55936786727896
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 35.846000000000004
- type: map_at_10
value: 51.388
- type: map_at_100
value: 52.132999999999996
- type: map_at_1000
value: 52.141000000000005
- type: map_at_3
value: 47.037
- type: map_at_5
value: 49.579
- type: mrr_at_1
value: 36.558
- type: mrr_at_10
value: 51.658
- type: mrr_at_100
value: 52.402
- type: mrr_at_1000
value: 52.410000000000004
- type: mrr_at_3
value: 47.345
- type: mrr_at_5
value: 49.797999999999995
- type: ndcg_at_1
value: 35.846000000000004
- type: ndcg_at_10
value: 59.550000000000004
- type: ndcg_at_100
value: 62.596
- type: ndcg_at_1000
value: 62.759
- type: ndcg_at_3
value: 50.666999999999994
- type: ndcg_at_5
value: 55.228
- type: precision_at_1
value: 35.846000000000004
- type: precision_at_10
value: 8.542
- type: precision_at_100
value: 0.984
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.389
- type: precision_at_5
value: 14.438
- type: recall_at_1
value: 35.846000000000004
- type: recall_at_10
value: 85.42
- type: recall_at_100
value: 98.43499999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 61.166
- type: recall_at_5
value: 72.191
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 47.402770198163594
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 40.01545436974177
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 62.586465273207196
- type: mrr
value: 74.42169019038825
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 85.1891186537969
- type: cos_sim_spearman
value: 83.75492046087288
- type: euclidean_pearson
value: 84.11766204805357
- type: euclidean_spearman
value: 84.01456493126516
- type: manhattan_pearson
value: 84.2132950502772
- type: manhattan_spearman
value: 83.89227298813377
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 85.74025974025975
- type: f1
value: 85.71493566466381
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 38.467181385006434
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 34.719496037339056
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.587000000000003
- type: map_at_10
value: 41.114
- type: map_at_100
value: 42.532
- type: map_at_1000
value: 42.661
- type: map_at_3
value: 37.483
- type: map_at_5
value: 39.652
- type: mrr_at_1
value: 36.338
- type: mrr_at_10
value: 46.763
- type: mrr_at_100
value: 47.393
- type: mrr_at_1000
value: 47.445
- type: mrr_at_3
value: 43.538
- type: mrr_at_5
value: 45.556000000000004
- type: ndcg_at_1
value: 36.338
- type: ndcg_at_10
value: 47.658
- type: ndcg_at_100
value: 52.824000000000005
- type: ndcg_at_1000
value: 54.913999999999994
- type: ndcg_at_3
value: 41.989
- type: ndcg_at_5
value: 44.944
- type: precision_at_1
value: 36.338
- type: precision_at_10
value: 9.156
- type: precision_at_100
value: 1.4789999999999999
- type: precision_at_1000
value: 0.196
- type: precision_at_3
value: 20.076
- type: precision_at_5
value: 14.85
- type: recall_at_1
value: 29.587000000000003
- type: recall_at_10
value: 60.746
- type: recall_at_100
value: 82.157
- type: recall_at_1000
value: 95.645
- type: recall_at_3
value: 44.821
- type: recall_at_5
value: 52.819
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.239
- type: map_at_10
value: 39.989000000000004
- type: map_at_100
value: 41.196
- type: map_at_1000
value: 41.325
- type: map_at_3
value: 37.261
- type: map_at_5
value: 38.833
- type: mrr_at_1
value: 37.516
- type: mrr_at_10
value: 46.177
- type: mrr_at_100
value: 46.806
- type: mrr_at_1000
value: 46.849000000000004
- type: mrr_at_3
value: 44.002
- type: mrr_at_5
value: 45.34
- type: ndcg_at_1
value: 37.516
- type: ndcg_at_10
value: 45.586
- type: ndcg_at_100
value: 49.897000000000006
- type: ndcg_at_1000
value: 51.955
- type: ndcg_at_3
value: 41.684
- type: ndcg_at_5
value: 43.617
- type: precision_at_1
value: 37.516
- type: precision_at_10
value: 8.522
- type: precision_at_100
value: 1.374
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 20.105999999999998
- type: precision_at_5
value: 14.152999999999999
- type: recall_at_1
value: 30.239
- type: recall_at_10
value: 55.03
- type: recall_at_100
value: 73.375
- type: recall_at_1000
value: 86.29599999999999
- type: recall_at_3
value: 43.269000000000005
- type: recall_at_5
value: 48.878
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.338
- type: map_at_10
value: 50.468999999999994
- type: map_at_100
value: 51.553000000000004
- type: map_at_1000
value: 51.608
- type: map_at_3
value: 47.107
- type: map_at_5
value: 49.101
- type: mrr_at_1
value: 44.201
- type: mrr_at_10
value: 54.057
- type: mrr_at_100
value: 54.764
- type: mrr_at_1000
value: 54.791000000000004
- type: mrr_at_3
value: 51.56699999999999
- type: mrr_at_5
value: 53.05
- type: ndcg_at_1
value: 44.201
- type: ndcg_at_10
value: 56.379000000000005
- type: ndcg_at_100
value: 60.645
- type: ndcg_at_1000
value: 61.73499999999999
- type: ndcg_at_3
value: 50.726000000000006
- type: ndcg_at_5
value: 53.58500000000001
- type: precision_at_1
value: 44.201
- type: precision_at_10
value: 9.141
- type: precision_at_100
value: 1.216
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 22.654
- type: precision_at_5
value: 15.723999999999998
- type: recall_at_1
value: 38.338
- type: recall_at_10
value: 70.30499999999999
- type: recall_at_100
value: 88.77199999999999
- type: recall_at_1000
value: 96.49799999999999
- type: recall_at_3
value: 55.218
- type: recall_at_5
value: 62.104000000000006
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.682
- type: map_at_10
value: 33.498
- type: map_at_100
value: 34.461000000000006
- type: map_at_1000
value: 34.544000000000004
- type: map_at_3
value: 30.503999999999998
- type: map_at_5
value: 32.216
- type: mrr_at_1
value: 27.683999999999997
- type: mrr_at_10
value: 35.467999999999996
- type: mrr_at_100
value: 36.32
- type: mrr_at_1000
value: 36.386
- type: mrr_at_3
value: 32.618
- type: mrr_at_5
value: 34.262
- type: ndcg_at_1
value: 27.683999999999997
- type: ndcg_at_10
value: 38.378
- type: ndcg_at_100
value: 43.288
- type: ndcg_at_1000
value: 45.413
- type: ndcg_at_3
value: 32.586
- type: ndcg_at_5
value: 35.499
- type: precision_at_1
value: 27.683999999999997
- type: precision_at_10
value: 5.864
- type: precision_at_100
value: 0.882
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 13.446
- type: precision_at_5
value: 9.718
- type: recall_at_1
value: 25.682
- type: recall_at_10
value: 51.712
- type: recall_at_100
value: 74.446
- type: recall_at_1000
value: 90.472
- type: recall_at_3
value: 36.236000000000004
- type: recall_at_5
value: 43.234
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 16.073999999999998
- type: map_at_10
value: 24.352999999999998
- type: map_at_100
value: 25.438
- type: map_at_1000
value: 25.545
- type: map_at_3
value: 21.614
- type: map_at_5
value: 23.104
- type: mrr_at_1
value: 19.776
- type: mrr_at_10
value: 28.837000000000003
- type: mrr_at_100
value: 29.755
- type: mrr_at_1000
value: 29.817
- type: mrr_at_3
value: 26.201999999999998
- type: mrr_at_5
value: 27.714
- type: ndcg_at_1
value: 19.776
- type: ndcg_at_10
value: 29.701
- type: ndcg_at_100
value: 35.307
- type: ndcg_at_1000
value: 37.942
- type: ndcg_at_3
value: 24.764
- type: ndcg_at_5
value: 27.025
- type: precision_at_1
value: 19.776
- type: precision_at_10
value: 5.659
- type: precision_at_100
value: 0.971
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 12.065
- type: precision_at_5
value: 8.905000000000001
- type: recall_at_1
value: 16.073999999999998
- type: recall_at_10
value: 41.647
- type: recall_at_100
value: 66.884
- type: recall_at_1000
value: 85.91499999999999
- type: recall_at_3
value: 27.916
- type: recall_at_5
value: 33.729
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.444999999999997
- type: map_at_10
value: 38.218999999999994
- type: map_at_100
value: 39.595
- type: map_at_1000
value: 39.709
- type: map_at_3
value: 35.586
- type: map_at_5
value: 36.895
- type: mrr_at_1
value: 34.841
- type: mrr_at_10
value: 44.106
- type: mrr_at_100
value: 44.98
- type: mrr_at_1000
value: 45.03
- type: mrr_at_3
value: 41.979
- type: mrr_at_5
value: 43.047999999999995
- type: ndcg_at_1
value: 34.841
- type: ndcg_at_10
value: 43.922
- type: ndcg_at_100
value: 49.504999999999995
- type: ndcg_at_1000
value: 51.675000000000004
- type: ndcg_at_3
value: 39.858
- type: ndcg_at_5
value: 41.408
- type: precision_at_1
value: 34.841
- type: precision_at_10
value: 7.872999999999999
- type: precision_at_100
value: 1.2449999999999999
- type: precision_at_1000
value: 0.161
- type: precision_at_3
value: 18.993
- type: precision_at_5
value: 13.032
- type: recall_at_1
value: 28.444999999999997
- type: recall_at_10
value: 54.984
- type: recall_at_100
value: 78.342
- type: recall_at_1000
value: 92.77
- type: recall_at_3
value: 42.842999999999996
- type: recall_at_5
value: 47.247
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.072
- type: map_at_10
value: 32.354
- type: map_at_100
value: 33.800000000000004
- type: map_at_1000
value: 33.908
- type: map_at_3
value: 29.232000000000003
- type: map_at_5
value: 31.049
- type: mrr_at_1
value: 29.110000000000003
- type: mrr_at_10
value: 38.03
- type: mrr_at_100
value: 39.032
- type: mrr_at_1000
value: 39.086999999999996
- type: mrr_at_3
value: 35.407
- type: mrr_at_5
value: 36.76
- type: ndcg_at_1
value: 29.110000000000003
- type: ndcg_at_10
value: 38.231
- type: ndcg_at_100
value: 44.425
- type: ndcg_at_1000
value: 46.771
- type: ndcg_at_3
value: 33.095
- type: ndcg_at_5
value: 35.459
- type: precision_at_1
value: 29.110000000000003
- type: precision_at_10
value: 7.215000000000001
- type: precision_at_100
value: 1.2109999999999999
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 16.058
- type: precision_at_5
value: 11.644
- type: recall_at_1
value: 23.072
- type: recall_at_10
value: 50.285999999999994
- type: recall_at_100
value: 76.596
- type: recall_at_1000
value: 92.861
- type: recall_at_3
value: 35.702
- type: recall_at_5
value: 42.152
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.937916666666666
- type: map_at_10
value: 33.755250000000004
- type: map_at_100
value: 34.955999999999996
- type: map_at_1000
value: 35.070499999999996
- type: map_at_3
value: 30.98708333333333
- type: map_at_5
value: 32.51491666666666
- type: mrr_at_1
value: 29.48708333333333
- type: mrr_at_10
value: 37.92183333333334
- type: mrr_at_100
value: 38.76583333333333
- type: mrr_at_1000
value: 38.82466666666667
- type: mrr_at_3
value: 35.45125
- type: mrr_at_5
value: 36.827000000000005
- type: ndcg_at_1
value: 29.48708333333333
- type: ndcg_at_10
value: 39.05225
- type: ndcg_at_100
value: 44.25983333333334
- type: ndcg_at_1000
value: 46.568333333333335
- type: ndcg_at_3
value: 34.271583333333325
- type: ndcg_at_5
value: 36.483916666666666
- type: precision_at_1
value: 29.48708333333333
- type: precision_at_10
value: 6.865749999999999
- type: precision_at_100
value: 1.1195833333333332
- type: precision_at_1000
value: 0.15058333333333335
- type: precision_at_3
value: 15.742083333333333
- type: precision_at_5
value: 11.221916666666667
- type: recall_at_1
value: 24.937916666666666
- type: recall_at_10
value: 50.650416666666665
- type: recall_at_100
value: 73.55383333333334
- type: recall_at_1000
value: 89.61691666666667
- type: recall_at_3
value: 37.27808333333334
- type: recall_at_5
value: 42.99475
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.947
- type: map_at_10
value: 30.575000000000003
- type: map_at_100
value: 31.465
- type: map_at_1000
value: 31.558000000000003
- type: map_at_3
value: 28.814
- type: map_at_5
value: 29.738999999999997
- type: mrr_at_1
value: 26.994
- type: mrr_at_10
value: 33.415
- type: mrr_at_100
value: 34.18
- type: mrr_at_1000
value: 34.245
- type: mrr_at_3
value: 31.621
- type: mrr_at_5
value: 32.549
- type: ndcg_at_1
value: 26.994
- type: ndcg_at_10
value: 34.482
- type: ndcg_at_100
value: 38.915
- type: ndcg_at_1000
value: 41.355
- type: ndcg_at_3
value: 31.139
- type: ndcg_at_5
value: 32.589
- type: precision_at_1
value: 26.994
- type: precision_at_10
value: 5.322
- type: precision_at_100
value: 0.8160000000000001
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 13.344000000000001
- type: precision_at_5
value: 8.988
- type: recall_at_1
value: 23.947
- type: recall_at_10
value: 43.647999999999996
- type: recall_at_100
value: 63.851
- type: recall_at_1000
value: 82.0
- type: recall_at_3
value: 34.288000000000004
- type: recall_at_5
value: 38.117000000000004
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 16.197
- type: map_at_10
value: 22.968
- type: map_at_100
value: 24.095
- type: map_at_1000
value: 24.217
- type: map_at_3
value: 20.771
- type: map_at_5
value: 21.995
- type: mrr_at_1
value: 19.511
- type: mrr_at_10
value: 26.55
- type: mrr_at_100
value: 27.500999999999998
- type: mrr_at_1000
value: 27.578999999999997
- type: mrr_at_3
value: 24.421
- type: mrr_at_5
value: 25.604
- type: ndcg_at_1
value: 19.511
- type: ndcg_at_10
value: 27.386
- type: ndcg_at_100
value: 32.828
- type: ndcg_at_1000
value: 35.739
- type: ndcg_at_3
value: 23.405
- type: ndcg_at_5
value: 25.255
- type: precision_at_1
value: 19.511
- type: precision_at_10
value: 5.017
- type: precision_at_100
value: 0.91
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 11.023
- type: precision_at_5
value: 8.025
- type: recall_at_1
value: 16.197
- type: recall_at_10
value: 37.09
- type: recall_at_100
value: 61.778
- type: recall_at_1000
value: 82.56599999999999
- type: recall_at_3
value: 26.034000000000002
- type: recall_at_5
value: 30.762
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.41
- type: map_at_10
value: 33.655
- type: map_at_100
value: 34.892
- type: map_at_1000
value: 34.995
- type: map_at_3
value: 30.94
- type: map_at_5
value: 32.303
- type: mrr_at_1
value: 29.477999999999998
- type: mrr_at_10
value: 37.443
- type: mrr_at_100
value: 38.383
- type: mrr_at_1000
value: 38.440000000000005
- type: mrr_at_3
value: 34.949999999999996
- type: mrr_at_5
value: 36.228
- type: ndcg_at_1
value: 29.477999999999998
- type: ndcg_at_10
value: 38.769
- type: ndcg_at_100
value: 44.245000000000005
- type: ndcg_at_1000
value: 46.593
- type: ndcg_at_3
value: 33.623
- type: ndcg_at_5
value: 35.766
- type: precision_at_1
value: 29.477999999999998
- type: precision_at_10
value: 6.455
- type: precision_at_100
value: 1.032
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 14.893999999999998
- type: precision_at_5
value: 10.485
- type: recall_at_1
value: 25.41
- type: recall_at_10
value: 50.669
- type: recall_at_100
value: 74.084
- type: recall_at_1000
value: 90.435
- type: recall_at_3
value: 36.679
- type: recall_at_5
value: 41.94
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.339
- type: map_at_10
value: 31.852000000000004
- type: map_at_100
value: 33.411
- type: map_at_1000
value: 33.62
- type: map_at_3
value: 28.929
- type: map_at_5
value: 30.542
- type: mrr_at_1
value: 28.063
- type: mrr_at_10
value: 36.301
- type: mrr_at_100
value: 37.288
- type: mrr_at_1000
value: 37.349
- type: mrr_at_3
value: 33.663
- type: mrr_at_5
value: 35.165
- type: ndcg_at_1
value: 28.063
- type: ndcg_at_10
value: 37.462
- type: ndcg_at_100
value: 43.620999999999995
- type: ndcg_at_1000
value: 46.211
- type: ndcg_at_3
value: 32.68
- type: ndcg_at_5
value: 34.981
- type: precision_at_1
value: 28.063
- type: precision_at_10
value: 7.1739999999999995
- type: precision_at_100
value: 1.486
- type: precision_at_1000
value: 0.23500000000000001
- type: precision_at_3
value: 15.217
- type: precision_at_5
value: 11.265
- type: recall_at_1
value: 23.339
- type: recall_at_10
value: 48.376999999999995
- type: recall_at_100
value: 76.053
- type: recall_at_1000
value: 92.455
- type: recall_at_3
value: 34.735
- type: recall_at_5
value: 40.71
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.925
- type: map_at_10
value: 26.017000000000003
- type: map_at_100
value: 27.034000000000002
- type: map_at_1000
value: 27.156000000000002
- type: map_at_3
value: 23.604
- type: map_at_5
value: 24.75
- type: mrr_at_1
value: 20.333000000000002
- type: mrr_at_10
value: 27.915
- type: mrr_at_100
value: 28.788000000000004
- type: mrr_at_1000
value: 28.877999999999997
- type: mrr_at_3
value: 25.446999999999996
- type: mrr_at_5
value: 26.648
- type: ndcg_at_1
value: 20.333000000000002
- type: ndcg_at_10
value: 30.673000000000002
- type: ndcg_at_100
value: 35.618
- type: ndcg_at_1000
value: 38.517
- type: ndcg_at_3
value: 25.71
- type: ndcg_at_5
value: 27.679
- type: precision_at_1
value: 20.333000000000002
- type: precision_at_10
value: 4.9910000000000005
- type: precision_at_100
value: 0.8130000000000001
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 11.029
- type: precision_at_5
value: 7.8740000000000006
- type: recall_at_1
value: 18.925
- type: recall_at_10
value: 43.311
- type: recall_at_100
value: 66.308
- type: recall_at_1000
value: 87.49
- type: recall_at_3
value: 29.596
- type: recall_at_5
value: 34.245
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.714
- type: map_at_10
value: 23.194
- type: map_at_100
value: 24.976000000000003
- type: map_at_1000
value: 25.166
- type: map_at_3
value: 19.709
- type: map_at_5
value: 21.523999999999997
- type: mrr_at_1
value: 30.619000000000003
- type: mrr_at_10
value: 42.563
- type: mrr_at_100
value: 43.386
- type: mrr_at_1000
value: 43.423
- type: mrr_at_3
value: 39.555
- type: mrr_at_5
value: 41.268
- type: ndcg_at_1
value: 30.619000000000003
- type: ndcg_at_10
value: 31.836
- type: ndcg_at_100
value: 38.652
- type: ndcg_at_1000
value: 42.088
- type: ndcg_at_3
value: 26.733
- type: ndcg_at_5
value: 28.435
- type: precision_at_1
value: 30.619000000000003
- type: precision_at_10
value: 9.751999999999999
- type: precision_at_100
value: 1.71
- type: precision_at_1000
value: 0.23500000000000001
- type: precision_at_3
value: 19.935
- type: precision_at_5
value: 14.984
- type: recall_at_1
value: 13.714
- type: recall_at_10
value: 37.26
- type: recall_at_100
value: 60.546
- type: recall_at_1000
value: 79.899
- type: recall_at_3
value: 24.325
- type: recall_at_5
value: 29.725
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.462
- type: map_at_10
value: 18.637
- type: map_at_100
value: 26.131999999999998
- type: map_at_1000
value: 27.607
- type: map_at_3
value: 13.333
- type: map_at_5
value: 15.654000000000002
- type: mrr_at_1
value: 66.25
- type: mrr_at_10
value: 74.32600000000001
- type: mrr_at_100
value: 74.60900000000001
- type: mrr_at_1000
value: 74.62
- type: mrr_at_3
value: 72.667
- type: mrr_at_5
value: 73.817
- type: ndcg_at_1
value: 53.87499999999999
- type: ndcg_at_10
value: 40.028999999999996
- type: ndcg_at_100
value: 44.199
- type: ndcg_at_1000
value: 51.629999999999995
- type: ndcg_at_3
value: 44.113
- type: ndcg_at_5
value: 41.731
- type: precision_at_1
value: 66.25
- type: precision_at_10
value: 31.900000000000002
- type: precision_at_100
value: 10.043000000000001
- type: precision_at_1000
value: 1.926
- type: precision_at_3
value: 47.417
- type: precision_at_5
value: 40.65
- type: recall_at_1
value: 8.462
- type: recall_at_10
value: 24.293
- type: recall_at_100
value: 50.146
- type: recall_at_1000
value: 74.034
- type: recall_at_3
value: 14.967
- type: recall_at_5
value: 18.682000000000002
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 47.84499999999999
- type: f1
value: 42.48106691979349
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 74.034
- type: map_at_10
value: 82.76
- type: map_at_100
value: 82.968
- type: map_at_1000
value: 82.98299999999999
- type: map_at_3
value: 81.768
- type: map_at_5
value: 82.418
- type: mrr_at_1
value: 80.048
- type: mrr_at_10
value: 87.64999999999999
- type: mrr_at_100
value: 87.712
- type: mrr_at_1000
value: 87.713
- type: mrr_at_3
value: 87.01100000000001
- type: mrr_at_5
value: 87.466
- type: ndcg_at_1
value: 80.048
- type: ndcg_at_10
value: 86.643
- type: ndcg_at_100
value: 87.361
- type: ndcg_at_1000
value: 87.606
- type: ndcg_at_3
value: 85.137
- type: ndcg_at_5
value: 86.016
- type: precision_at_1
value: 80.048
- type: precision_at_10
value: 10.372
- type: precision_at_100
value: 1.093
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 32.638
- type: precision_at_5
value: 20.177
- type: recall_at_1
value: 74.034
- type: recall_at_10
value: 93.769
- type: recall_at_100
value: 96.569
- type: recall_at_1000
value: 98.039
- type: recall_at_3
value: 89.581
- type: recall_at_5
value: 91.906
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.5
- type: map_at_10
value: 32.857
- type: map_at_100
value: 34.589
- type: map_at_1000
value: 34.778
- type: map_at_3
value: 29.160999999999998
- type: map_at_5
value: 31.033
- type: mrr_at_1
value: 40.123
- type: mrr_at_10
value: 48.776
- type: mrr_at_100
value: 49.495
- type: mrr_at_1000
value: 49.539
- type: mrr_at_3
value: 46.605000000000004
- type: mrr_at_5
value: 47.654
- type: ndcg_at_1
value: 40.123
- type: ndcg_at_10
value: 40.343
- type: ndcg_at_100
value: 46.56
- type: ndcg_at_1000
value: 49.777
- type: ndcg_at_3
value: 37.322
- type: ndcg_at_5
value: 37.791000000000004
- type: precision_at_1
value: 40.123
- type: precision_at_10
value: 11.08
- type: precision_at_100
value: 1.752
- type: precision_at_1000
value: 0.232
- type: precision_at_3
value: 24.897
- type: precision_at_5
value: 17.809
- type: recall_at_1
value: 20.5
- type: recall_at_10
value: 46.388
- type: recall_at_100
value: 69.552
- type: recall_at_1000
value: 89.011
- type: recall_at_3
value: 33.617999999999995
- type: recall_at_5
value: 38.211
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.135999999999996
- type: map_at_10
value: 61.673
- type: map_at_100
value: 62.562
- type: map_at_1000
value: 62.62
- type: map_at_3
value: 58.467999999999996
- type: map_at_5
value: 60.463
- type: mrr_at_1
value: 78.271
- type: mrr_at_10
value: 84.119
- type: mrr_at_100
value: 84.29299999999999
- type: mrr_at_1000
value: 84.299
- type: mrr_at_3
value: 83.18900000000001
- type: mrr_at_5
value: 83.786
- type: ndcg_at_1
value: 78.271
- type: ndcg_at_10
value: 69.935
- type: ndcg_at_100
value: 73.01299999999999
- type: ndcg_at_1000
value: 74.126
- type: ndcg_at_3
value: 65.388
- type: ndcg_at_5
value: 67.906
- type: precision_at_1
value: 78.271
- type: precision_at_10
value: 14.562
- type: precision_at_100
value: 1.6969999999999998
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 41.841
- type: precision_at_5
value: 27.087
- type: recall_at_1
value: 39.135999999999996
- type: recall_at_10
value: 72.809
- type: recall_at_100
value: 84.86200000000001
- type: recall_at_1000
value: 92.208
- type: recall_at_3
value: 62.76199999999999
- type: recall_at_5
value: 67.718
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 90.60600000000001
- type: ap
value: 86.6579587804335
- type: f1
value: 90.5938853929307
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.852
- type: map_at_10
value: 33.982
- type: map_at_100
value: 35.116
- type: map_at_1000
value: 35.167
- type: map_at_3
value: 30.134
- type: map_at_5
value: 32.340999999999994
- type: mrr_at_1
value: 22.479
- type: mrr_at_10
value: 34.594
- type: mrr_at_100
value: 35.672
- type: mrr_at_1000
value: 35.716
- type: mrr_at_3
value: 30.84
- type: mrr_at_5
value: 32.998
- type: ndcg_at_1
value: 22.493
- type: ndcg_at_10
value: 40.833000000000006
- type: ndcg_at_100
value: 46.357
- type: ndcg_at_1000
value: 47.637
- type: ndcg_at_3
value: 32.995999999999995
- type: ndcg_at_5
value: 36.919000000000004
- type: precision_at_1
value: 22.493
- type: precision_at_10
value: 6.465999999999999
- type: precision_at_100
value: 0.9249999999999999
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.030999999999999
- type: precision_at_5
value: 10.413
- type: recall_at_1
value: 21.852
- type: recall_at_10
value: 61.934999999999995
- type: recall_at_100
value: 87.611
- type: recall_at_1000
value: 97.441
- type: recall_at_3
value: 40.583999999999996
- type: recall_at_5
value: 49.992999999999995
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.36069311445507
- type: f1
value: 93.16456330371453
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 74.74692202462381
- type: f1
value: 58.17903579421599
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.80833893745796
- type: f1
value: 72.70786592684664
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.69872225958305
- type: f1
value: 78.61626934504731
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 33.058658628717694
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 30.85561739360599
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.290259910144385
- type: mrr
value: 32.44223046102856
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.288
- type: map_at_10
value: 12.267999999999999
- type: map_at_100
value: 15.557000000000002
- type: map_at_1000
value: 16.98
- type: map_at_3
value: 8.866
- type: map_at_5
value: 10.418
- type: mrr_at_1
value: 43.653
- type: mrr_at_10
value: 52.681
- type: mrr_at_100
value: 53.315999999999995
- type: mrr_at_1000
value: 53.357
- type: mrr_at_3
value: 51.393
- type: mrr_at_5
value: 51.903999999999996
- type: ndcg_at_1
value: 42.415000000000006
- type: ndcg_at_10
value: 34.305
- type: ndcg_at_100
value: 30.825999999999997
- type: ndcg_at_1000
value: 39.393
- type: ndcg_at_3
value: 39.931
- type: ndcg_at_5
value: 37.519999999999996
- type: precision_at_1
value: 43.653
- type: precision_at_10
value: 25.728
- type: precision_at_100
value: 7.932
- type: precision_at_1000
value: 2.07
- type: precision_at_3
value: 38.184000000000005
- type: precision_at_5
value: 32.879000000000005
- type: recall_at_1
value: 5.288
- type: recall_at_10
value: 16.195
- type: recall_at_100
value: 31.135
- type: recall_at_1000
value: 61.531000000000006
- type: recall_at_3
value: 10.313
- type: recall_at_5
value: 12.754999999999999
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.216
- type: map_at_10
value: 42.588
- type: map_at_100
value: 43.702999999999996
- type: map_at_1000
value: 43.739
- type: map_at_3
value: 38.177
- type: map_at_5
value: 40.754000000000005
- type: mrr_at_1
value: 31.866
- type: mrr_at_10
value: 45.189
- type: mrr_at_100
value: 46.056000000000004
- type: mrr_at_1000
value: 46.081
- type: mrr_at_3
value: 41.526999999999994
- type: mrr_at_5
value: 43.704
- type: ndcg_at_1
value: 31.837
- type: ndcg_at_10
value: 50.178
- type: ndcg_at_100
value: 54.98800000000001
- type: ndcg_at_1000
value: 55.812
- type: ndcg_at_3
value: 41.853
- type: ndcg_at_5
value: 46.153
- type: precision_at_1
value: 31.837
- type: precision_at_10
value: 8.43
- type: precision_at_100
value: 1.1119999999999999
- type: precision_at_1000
value: 0.11900000000000001
- type: precision_at_3
value: 19.023
- type: precision_at_5
value: 13.911000000000001
- type: recall_at_1
value: 28.216
- type: recall_at_10
value: 70.8
- type: recall_at_100
value: 91.857
- type: recall_at_1000
value: 97.941
- type: recall_at_3
value: 49.196
- type: recall_at_5
value: 59.072
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.22800000000001
- type: map_at_10
value: 85.115
- type: map_at_100
value: 85.72
- type: map_at_1000
value: 85.737
- type: map_at_3
value: 82.149
- type: map_at_5
value: 84.029
- type: mrr_at_1
value: 81.96
- type: mrr_at_10
value: 88.00200000000001
- type: mrr_at_100
value: 88.088
- type: mrr_at_1000
value: 88.089
- type: mrr_at_3
value: 87.055
- type: mrr_at_5
value: 87.715
- type: ndcg_at_1
value: 82.01
- type: ndcg_at_10
value: 88.78
- type: ndcg_at_100
value: 89.91
- type: ndcg_at_1000
value: 90.013
- type: ndcg_at_3
value: 85.957
- type: ndcg_at_5
value: 87.56
- type: precision_at_1
value: 82.01
- type: precision_at_10
value: 13.462
- type: precision_at_100
value: 1.528
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.553
- type: precision_at_5
value: 24.732000000000003
- type: recall_at_1
value: 71.22800000000001
- type: recall_at_10
value: 95.69
- type: recall_at_100
value: 99.531
- type: recall_at_1000
value: 99.98
- type: recall_at_3
value: 87.632
- type: recall_at_5
value: 92.117
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 52.31768034366916
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 60.640266772723606
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.7780000000000005
- type: map_at_10
value: 12.299
- type: map_at_100
value: 14.363000000000001
- type: map_at_1000
value: 14.71
- type: map_at_3
value: 8.738999999999999
- type: map_at_5
value: 10.397
- type: mrr_at_1
value: 23.599999999999998
- type: mrr_at_10
value: 34.845
- type: mrr_at_100
value: 35.916
- type: mrr_at_1000
value: 35.973
- type: mrr_at_3
value: 31.7
- type: mrr_at_5
value: 33.535
- type: ndcg_at_1
value: 23.599999999999998
- type: ndcg_at_10
value: 20.522000000000002
- type: ndcg_at_100
value: 28.737000000000002
- type: ndcg_at_1000
value: 34.596
- type: ndcg_at_3
value: 19.542
- type: ndcg_at_5
value: 16.958000000000002
- type: precision_at_1
value: 23.599999999999998
- type: precision_at_10
value: 10.67
- type: precision_at_100
value: 2.259
- type: precision_at_1000
value: 0.367
- type: precision_at_3
value: 18.333
- type: precision_at_5
value: 14.879999999999999
- type: recall_at_1
value: 4.7780000000000005
- type: recall_at_10
value: 21.617
- type: recall_at_100
value: 45.905
- type: recall_at_1000
value: 74.42
- type: recall_at_3
value: 11.148
- type: recall_at_5
value: 15.082999999999998
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.22372750297885
- type: cos_sim_spearman
value: 79.40972617119405
- type: euclidean_pearson
value: 80.6101072020434
- type: euclidean_spearman
value: 79.53844217225202
- type: manhattan_pearson
value: 80.57265975286111
- type: manhattan_spearman
value: 79.46335611792958
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 85.43713315520749
- type: cos_sim_spearman
value: 77.44128693329532
- type: euclidean_pearson
value: 81.63869928101123
- type: euclidean_spearman
value: 77.29512977961515
- type: manhattan_pearson
value: 81.63704185566183
- type: manhattan_spearman
value: 77.29909412738657
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 81.59451537860527
- type: cos_sim_spearman
value: 82.97994638856723
- type: euclidean_pearson
value: 82.89478688288412
- type: euclidean_spearman
value: 83.58740751053104
- type: manhattan_pearson
value: 82.69140840941608
- type: manhattan_spearman
value: 83.33665956040555
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 82.00756527711764
- type: cos_sim_spearman
value: 81.83560996841379
- type: euclidean_pearson
value: 82.07684151976518
- type: euclidean_spearman
value: 82.00913052060511
- type: manhattan_pearson
value: 82.05690778488794
- type: manhattan_spearman
value: 82.02260252019525
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.13710262895447
- type: cos_sim_spearman
value: 87.26412811156248
- type: euclidean_pearson
value: 86.94151453230228
- type: euclidean_spearman
value: 87.5363796699571
- type: manhattan_pearson
value: 86.86989424083748
- type: manhattan_spearman
value: 87.47315940781353
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.0230597603627
- type: cos_sim_spearman
value: 84.93344499318864
- type: euclidean_pearson
value: 84.23754743431141
- type: euclidean_spearman
value: 85.09707376597099
- type: manhattan_pearson
value: 84.04325160987763
- type: manhattan_spearman
value: 84.89353071339909
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 86.75620824563921
- type: cos_sim_spearman
value: 87.15065513706398
- type: euclidean_pearson
value: 88.26281533633521
- type: euclidean_spearman
value: 87.51963738643983
- type: manhattan_pearson
value: 88.25599267618065
- type: manhattan_spearman
value: 87.58048736047483
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 64.74645319195137
- type: cos_sim_spearman
value: 65.29996325037214
- type: euclidean_pearson
value: 67.04297794086443
- type: euclidean_spearman
value: 65.43841726694343
- type: manhattan_pearson
value: 67.39459955690904
- type: manhattan_spearman
value: 65.92864704413651
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.31291020270801
- type: cos_sim_spearman
value: 85.86473738688068
- type: euclidean_pearson
value: 85.65537275064152
- type: euclidean_spearman
value: 86.13087454209642
- type: manhattan_pearson
value: 85.43946955047609
- type: manhattan_spearman
value: 85.91568175344916
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 85.93798118350695
- type: mrr
value: 95.93536274908824
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 57.594
- type: map_at_10
value: 66.81899999999999
- type: map_at_100
value: 67.368
- type: map_at_1000
value: 67.4
- type: map_at_3
value: 64.061
- type: map_at_5
value: 65.47
- type: mrr_at_1
value: 60.667
- type: mrr_at_10
value: 68.219
- type: mrr_at_100
value: 68.655
- type: mrr_at_1000
value: 68.684
- type: mrr_at_3
value: 66.22200000000001
- type: mrr_at_5
value: 67.289
- type: ndcg_at_1
value: 60.667
- type: ndcg_at_10
value: 71.275
- type: ndcg_at_100
value: 73.642
- type: ndcg_at_1000
value: 74.373
- type: ndcg_at_3
value: 66.521
- type: ndcg_at_5
value: 68.581
- type: precision_at_1
value: 60.667
- type: precision_at_10
value: 9.433
- type: precision_at_100
value: 1.0699999999999998
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 25.556
- type: precision_at_5
value: 16.8
- type: recall_at_1
value: 57.594
- type: recall_at_10
value: 83.622
- type: recall_at_100
value: 94.167
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 70.64399999999999
- type: recall_at_5
value: 75.983
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.85841584158416
- type: cos_sim_ap
value: 96.66996142314342
- type: cos_sim_f1
value: 92.83208020050125
- type: cos_sim_precision
value: 93.06532663316584
- type: cos_sim_recall
value: 92.60000000000001
- type: dot_accuracy
value: 99.85841584158416
- type: dot_ap
value: 96.6775307676576
- type: dot_f1
value: 92.69289729177312
- type: dot_precision
value: 94.77533960292581
- type: dot_recall
value: 90.7
- type: euclidean_accuracy
value: 99.86138613861387
- type: euclidean_ap
value: 96.6338454403108
- type: euclidean_f1
value: 92.92214357937311
- type: euclidean_precision
value: 93.96728016359918
- type: euclidean_recall
value: 91.9
- type: manhattan_accuracy
value: 99.86237623762376
- type: manhattan_ap
value: 96.60370449645053
- type: manhattan_f1
value: 92.91177970423253
- type: manhattan_precision
value: 94.7970863683663
- type: manhattan_recall
value: 91.10000000000001
- type: max_accuracy
value: 99.86237623762376
- type: max_ap
value: 96.6775307676576
- type: max_f1
value: 92.92214357937311
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 60.77977058695198
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 35.2725272535638
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 53.64052466362125
- type: mrr
value: 54.533067014684654
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.677624219206578
- type: cos_sim_spearman
value: 30.121368518123447
- type: dot_pearson
value: 30.69870088041608
- type: dot_spearman
value: 29.61284927093751
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22
- type: map_at_10
value: 1.855
- type: map_at_100
value: 9.885
- type: map_at_1000
value: 23.416999999999998
- type: map_at_3
value: 0.637
- type: map_at_5
value: 1.024
- type: mrr_at_1
value: 88.0
- type: mrr_at_10
value: 93.067
- type: mrr_at_100
value: 93.067
- type: mrr_at_1000
value: 93.067
- type: mrr_at_3
value: 92.667
- type: mrr_at_5
value: 93.067
- type: ndcg_at_1
value: 82.0
- type: ndcg_at_10
value: 75.899
- type: ndcg_at_100
value: 55.115
- type: ndcg_at_1000
value: 48.368
- type: ndcg_at_3
value: 79.704
- type: ndcg_at_5
value: 78.39699999999999
- type: precision_at_1
value: 88.0
- type: precision_at_10
value: 79.60000000000001
- type: precision_at_100
value: 56.06
- type: precision_at_1000
value: 21.206
- type: precision_at_3
value: 84.667
- type: precision_at_5
value: 83.2
- type: recall_at_1
value: 0.22
- type: recall_at_10
value: 2.078
- type: recall_at_100
value: 13.297
- type: recall_at_1000
value: 44.979
- type: recall_at_3
value: 0.6689999999999999
- type: recall_at_5
value: 1.106
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.258
- type: map_at_10
value: 10.439
- type: map_at_100
value: 16.89
- type: map_at_1000
value: 18.407999999999998
- type: map_at_3
value: 5.668
- type: map_at_5
value: 7.718
- type: mrr_at_1
value: 32.653
- type: mrr_at_10
value: 51.159
- type: mrr_at_100
value: 51.714000000000006
- type: mrr_at_1000
value: 51.714000000000006
- type: mrr_at_3
value: 47.959
- type: mrr_at_5
value: 50.407999999999994
- type: ndcg_at_1
value: 29.592000000000002
- type: ndcg_at_10
value: 26.037
- type: ndcg_at_100
value: 37.924
- type: ndcg_at_1000
value: 49.126999999999995
- type: ndcg_at_3
value: 30.631999999999998
- type: ndcg_at_5
value: 28.571
- type: precision_at_1
value: 32.653
- type: precision_at_10
value: 22.857
- type: precision_at_100
value: 7.754999999999999
- type: precision_at_1000
value: 1.529
- type: precision_at_3
value: 34.014
- type: precision_at_5
value: 29.796
- type: recall_at_1
value: 2.258
- type: recall_at_10
value: 16.554
- type: recall_at_100
value: 48.439
- type: recall_at_1000
value: 82.80499999999999
- type: recall_at_3
value: 7.283
- type: recall_at_5
value: 10.732
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 69.8858
- type: ap
value: 13.835684144362109
- type: f1
value: 53.803351693244586
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 60.50650820599886
- type: f1
value: 60.84357825979259
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 48.52131044852134
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.59337187816654
- type: cos_sim_ap
value: 73.23925826533437
- type: cos_sim_f1
value: 67.34693877551021
- type: cos_sim_precision
value: 62.40432237730752
- type: cos_sim_recall
value: 73.13984168865434
- type: dot_accuracy
value: 85.31322644096085
- type: dot_ap
value: 72.30723963807422
- type: dot_f1
value: 66.47051612112296
- type: dot_precision
value: 62.0792305930845
- type: dot_recall
value: 71.53034300791556
- type: euclidean_accuracy
value: 85.61125350181797
- type: euclidean_ap
value: 73.32843720487845
- type: euclidean_f1
value: 67.36549633745895
- type: euclidean_precision
value: 64.60755813953489
- type: euclidean_recall
value: 70.36939313984169
- type: manhattan_accuracy
value: 85.63509566668654
- type: manhattan_ap
value: 73.16658488311325
- type: manhattan_f1
value: 67.20597386434349
- type: manhattan_precision
value: 63.60424028268551
- type: manhattan_recall
value: 71.2401055408971
- type: max_accuracy
value: 85.63509566668654
- type: max_ap
value: 73.32843720487845
- type: max_f1
value: 67.36549633745895
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.33779640625606
- type: cos_sim_ap
value: 84.83868375898157
- type: cos_sim_f1
value: 77.16506154017773
- type: cos_sim_precision
value: 74.62064005753327
- type: cos_sim_recall
value: 79.88912842623961
- type: dot_accuracy
value: 88.02732176815307
- type: dot_ap
value: 83.95089283763002
- type: dot_f1
value: 76.29635101196631
- type: dot_precision
value: 73.31771720613288
- type: dot_recall
value: 79.52725592854944
- type: euclidean_accuracy
value: 88.44452206310397
- type: euclidean_ap
value: 84.98384576824827
- type: euclidean_f1
value: 77.29311047696697
- type: euclidean_precision
value: 74.51232583065381
- type: euclidean_recall
value: 80.28949799815214
- type: manhattan_accuracy
value: 88.47362906042613
- type: manhattan_ap
value: 84.91421462218432
- type: manhattan_f1
value: 77.05107637204792
- type: manhattan_precision
value: 74.74484256243214
- type: manhattan_recall
value: 79.50415768401602
- type: max_accuracy
value: 88.47362906042613
- type: max_ap
value: 84.98384576824827
- type: max_f1
value: 77.29311047696697
license: mit
language:
- en
---
<h1 align="center">FlagEmbedding</h1>
<h4 align="center">
<p>
<a href=#model-list>Model List</a> |
<a href=#frequently-asked-questions>FAQ</a> |
<a href=#usage>Usage</a> |
<a href="#evaluation">Evaluation</a> |
<a href="#train">Train</a> |
<a href="#contact">Contact</a> |
<a href="#citation">Citation</a> |
<a href="#license">License</a>
<p>
</h4>
More details please refer to our Github: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding).
If you are looking for a model that supports more languages, longer texts, and other retrieval methods, you can try using [bge-m3](https://huggingface.co/BAAI/bge-m3).
[English](README.md) | [ไธญๆ](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md)
FlagEmbedding focuses on retrieval-augmented LLMs, consisting of the following projects currently:
- **Long-Context LLM**: [Activation Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon)
- **Fine-tuning of LM** : [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail)
- **Dense Retrieval**: [BGE-M3](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3), [LLM Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), [BGE Embedding](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/baai_general_embedding)
- **Reranker Model**: [BGE Reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker)
- **Benchmark**: [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB)
## News
- 1/30/2024: Release **BGE-M3**, a new member to BGE model series! M3 stands for **M**ulti-linguality (100+ languages), **M**ulti-granularities (input length up to 8192), **M**ulti-Functionality (unification of dense, lexical, multi-vec/colbert retrieval).
It is the first embedding model which supports all three retrieval methods, achieving new SOTA on multi-lingual (MIRACL) and cross-lingual (MKQA) benchmarks.
[Technical Report](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/BGE_M3/BGE_M3.pdf) and [Code](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3). :fire:
- 1/9/2024: Release [Activation-Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon), an effective, efficient, compatible, and low-cost (training) method to extend the context length of LLM. [Technical Report](https://arxiv.org/abs/2401.03462) :fire:
- 12/24/2023: Release **LLaRA**, a LLaMA-7B based dense retriever, leading to state-of-the-art performances on MS MARCO and BEIR. Model and code will be open-sourced. Please stay tuned. [Technical Report](https://arxiv.org/abs/2312.15503) :fire:
- 11/23/2023: Release [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail), a method to maintain general capabilities during fine-tuning by merging multiple language models. [Technical Report](https://arxiv.org/abs/2311.13534) :fire:
- 10/12/2023: Release [LLM-Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), a unified embedding model to support diverse retrieval augmentation needs for LLMs. [Technical Report](https://arxiv.org/pdf/2310.07554.pdf)
- 09/15/2023: The [technical report](https://arxiv.org/pdf/2309.07597.pdf) of BGE has been released
- 09/15/2023: The [massive training data](https://data.baai.ac.cn/details/BAAI-MTP) of BGE has been released
- 09/12/2023: New models:
- **New reranker model**: release cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than embedding model. We recommend to use/fine-tune them to re-rank top-k documents returned by embedding models.
- **update embedding model**: release `bge-*-v1.5` embedding model to alleviate the issue of the similarity distribution, and enhance its retrieval ability without instruction.
<details>
<summary>More</summary>
<!-- ### More -->
- 09/07/2023: Update [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): Add script to mine hard negatives and support adding instruction during fine-tuning.
- 08/09/2023: BGE Models are integrated into **Langchain**, you can use it like [this](#using-langchain); C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard).
- 08/05/2023: Release base-scale and small-scale models, **best performance among the models of the same size ๐ค**
- 08/02/2023: Release `bge-large-*`(short for BAAI General Embedding) Models, **rank 1st on MTEB and C-MTEB benchmark!** :tada: :tada:
- 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test dataset.
</details>
## Model List
`bge` is short for `BAAI general embedding`.
| Model | Language | | Description | query instruction for retrieval [1] |
|:-------------------------------|:--------:| :--------:| :--------:|:--------:|
| [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) | Multilingual | [Inference](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3#usage) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3) | Multi-Functionality(dense retrieval, sparse retrieval, multi-vector(colbert)), Multi-Linguality, and Multi-Granularity(8192 tokens) | |
| [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `ไธบ่ฟไธชๅฅๅญ็ๆ่กจ็คบไปฅ็จไบๆฃ็ดข็ธๅ
ณๆ็ซ ๏ผ` |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `ไธบ่ฟไธชๅฅๅญ็ๆ่กจ็คบไปฅ็จไบๆฃ็ดข็ธๅ
ณๆ็ซ ๏ผ` |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `ไธบ่ฟไธชๅฅๅญ็ๆ่กจ็คบไปฅ็จไบๆฃ็ดข็ธๅ
ณๆ็ซ ๏ผ` |
| [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) |a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `ไธบ่ฟไธชๅฅๅญ็ๆ่กจ็คบไปฅ็จไบๆฃ็ดข็ธๅ
ณๆ็ซ ๏ผ` |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `ไธบ่ฟไธชๅฅๅญ็ๆ่กจ็คบไปฅ็จไบๆฃ็ดข็ธๅ
ณๆ็ซ ๏ผ` |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `ไธบ่ฟไธชๅฅๅญ็ๆ่กจ็คบไปฅ็จไบๆฃ็ดข็ธๅ
ณๆ็ซ ๏ผ` |
[1\]: If you need to search the relevant passages to a query, we suggest to add the instruction to the query; in other cases, no instruction is needed, just use the original query directly. In all cases, **no instruction** needs to be added to passages.
[2\]: Different from embedding model, reranker uses question and document as input and directly output similarity instead of embedding. To balance the accuracy and time cost, cross-encoder is widely used to re-rank top-k documents retrieved by other simple models.
For examples, use bge embedding model to retrieve top 100 relevant documents, and then use bge reranker to re-rank the top 100 document to get the final top-3 results.
All models have been uploaded to Huggingface Hub, and you can see them at https://huggingface.co/BAAI.
If you cannot open the Huggingface Hub, you also can download the models at https://model.baai.ac.cn/models .
## Frequently asked questions
<details>
<summary>1. How to fine-tune bge embedding model?</summary>
<!-- ### How to fine-tune bge embedding model? -->
Following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model.
Some suggestions:
- Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve the retrieval performance.
- If you pre-train bge on your data, the pre-trained model cannot be directly used to calculate similarity, and it must be fine-tuned with contrastive learning before computing similarity.
- If the accuracy of the fine-tuned model is still not high, it is recommended to use/fine-tune the cross-encoder model (bge-reranker) to re-rank top-k results. Hard negatives also are needed to fine-tune reranker.
</details>
<details>
<summary>2. The similarity score between two dissimilar sentences is higher than 0.5</summary>
<!-- ### The similarity score between two dissimilar sentences is higher than 0.5 -->
**Suggest to use bge v1.5, which alleviates the issue of the similarity distribution.**
Since we finetune the models by contrastive learning with a temperature of 0.01,
the similarity distribution of the current BGE model is about in the interval \[0.6, 1\].
So a similarity score greater than 0.5 does not indicate that the two sentences are similar.
For downstream tasks, such as passage retrieval or semantic similarity,
**what matters is the relative order of the scores, not the absolute value.**
If you need to filter similar sentences based on a similarity threshold,
please select an appropriate similarity threshold based on the similarity distribution on your data (such as 0.8, 0.85, or even 0.9).
</details>
<details>
<summary>3. When does the query instruction need to be used</summary>
<!-- ### When does the query instruction need to be used -->
For the `bge-*-v1.5`, we improve its retrieval ability when not using instruction.
No instruction only has a slight degradation in retrieval performance compared with using instruction.
So you can generate embedding without instruction in all cases for convenience.
For a retrieval task that uses short queries to find long related documents,
it is recommended to add instructions for these short queries.
**The best method to decide whether to add instructions for queries is choosing the setting that achieves better performance on your task.**
In all cases, the documents/passages do not need to add the instruction.
</details>
## Usage
### Usage for Embedding Model
Here are some examples for using `bge` models with
[FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers).
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
If it doesn't work for you, you can see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for more methods to install FlagEmbedding.
```python
from FlagEmbedding import FlagModel
sentences_1 = ["ๆ ทไพๆฐๆฎ-1", "ๆ ทไพๆฐๆฎ-2"]
sentences_2 = ["ๆ ทไพๆฐๆฎ-3", "ๆ ทไพๆฐๆฎ-4"]
model = FlagModel('BAAI/bge-large-zh-v1.5',
query_instruction_for_retrieval="ไธบ่ฟไธชๅฅๅญ็ๆ่กจ็คบไปฅ็จไบๆฃ็ดข็ธๅ
ณๆ็ซ ๏ผ",
use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
embeddings_1 = model.encode(sentences_1)
embeddings_2 = model.encode(sentences_2)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# for s2p(short query to long passage) retrieval task, suggest to use encode_queries() which will automatically add the instruction to each query
# corpus in retrieval task can still use encode() or encode_corpus(), since they don't need instruction
queries = ['query_1', 'query_2']
passages = ["ๆ ทไพๆๆกฃ-1", "ๆ ทไพๆๆกฃ-2"]
q_embeddings = model.encode_queries(queries)
p_embeddings = model.encode(passages)
scores = q_embeddings @ p_embeddings.T
```
For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list).
By default, FlagModel will use all available GPUs when encoding. Please set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs.
You also can set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable.
#### Using Sentence-Transformers
You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net):
```
pip install -U sentence-transformers
```
```python
from sentence_transformers import SentenceTransformer
sentences_1 = ["ๆ ทไพๆฐๆฎ-1", "ๆ ทไพๆฐๆฎ-2"]
sentences_2 = ["ๆ ทไพๆฐๆฎ-3", "ๆ ทไพๆฐๆฎ-4"]
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
embeddings_1 = model.encode(sentences_1, normalize_embeddings=True)
embeddings_2 = model.encode(sentences_2, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
For s2p(short query to long passage) retrieval task,
each short query should start with an instruction (instructions see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list)).
But the instruction is not needed for passages.
```python
from sentence_transformers import SentenceTransformer
queries = ['query_1', 'query_2']
passages = ["ๆ ทไพๆๆกฃ-1", "ๆ ทไพๆๆกฃ-2"]
instruction = "ไธบ่ฟไธชๅฅๅญ็ๆ่กจ็คบไปฅ็จไบๆฃ็ดข็ธๅ
ณๆ็ซ ๏ผ"
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True)
p_embeddings = model.encode(passages, normalize_embeddings=True)
scores = q_embeddings @ p_embeddings.T
```
#### Using Langchain
You can use `bge` in langchain like this:
```python
from langchain.embeddings import HuggingFaceBgeEmbeddings
model_name = "BAAI/bge-large-en-v1.5"
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity
model = HuggingFaceBgeEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs,
query_instruction="ไธบ่ฟไธชๅฅๅญ็ๆ่กจ็คบไปฅ็จไบๆฃ็ดข็ธๅ
ณๆ็ซ ๏ผ"
)
model.query_instruction = "ไธบ่ฟไธชๅฅๅญ็ๆ่กจ็คบไปฅ็จไบๆฃ็ดข็ธๅ
ณๆ็ซ ๏ผ"
```
#### Using HuggingFace Transformers
With the transformers package, you can use the model like this: First, you pass your input through the transformer model, then you select the last hidden state of the first token (i.e., [CLS]) as the sentence embedding.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Sentences we want sentence embeddings for
sentences = ["ๆ ทไพๆฐๆฎ-1", "ๆ ทไพๆฐๆฎ-2"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5')
model.eval()
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# for s2p(short query to long passage) retrieval task, add an instruction to query (not add instruction for passages)
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = model_output[0][:, 0]
# normalize embeddings
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:", sentence_embeddings)
```
### Usage for Reranker
Different from embedding model, reranker uses question and document as input and directly output similarity instead of embedding.
You can get a relevance score by inputting query and passage to the reranker.
The reranker is optimized based cross-entropy loss, so the relevance score is not bounded to a specific range.
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
Get relevance scores (higher scores indicate more relevance):
```python
from FlagEmbedding import FlagReranker
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
score = reranker.compute_score(['query', 'passage'])
print(score)
scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']])
print(scores)
```
#### Using Huggingface transformers
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large')
model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large')
model.eval()
pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]
with torch.no_grad():
inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
scores = model(**inputs, return_dict=True).logits.view(-1, ).float()
print(scores)
```
#### Usage of the ONNX files
```python
from optimum.onnxruntime import ORTModelForFeatureExtraction # type: ignore
import torch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-small-en-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-small-en-v1.5')
model_ort = ORTModelForFeatureExtraction.from_pretrained('BAAI/bge-small-en-v1.5', file_name="onnx/model.onnx")
# Sentences we want sentence embeddings for
sentences = ["ๆ ทไพๆฐๆฎ-1", "ๆ ทไพๆฐๆฎ-2"]
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# for s2p(short query to long passage) retrieval task, add an instruction to query (not add instruction for passages)
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')
model_output_ort = model_ort(**encoded_input)
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# model_output and model_output_ort are identical
```
#### Usage via infinity
Its also possible to deploy the onnx files with the [infinity_emb](https://github.com/michaelfeil/infinity) pip package.
Recommended is `device="cuda", engine="torch"` with flash attention on gpu, and `device="cpu", engine="optimum"` for onnx inference.
```python
import asyncio
from infinity_emb import AsyncEmbeddingEngine, EngineArgs
sentences = ["Embed this is sentence via Infinity.", "Paris is in France."]
engine = AsyncEmbeddingEngine.from_args(
EngineArgs(model_name_or_path = "BAAI/bge-small-en-v1.5", device="cpu", engine="optimum" # or engine="torch"
))
async def main():
async with engine:
embeddings, usage = await engine.embed(sentences=sentences)
asyncio.run(main())
```
## Evaluation
`baai-general-embedding` models achieve **state-of-the-art performance on both MTEB and C-MTEB leaderboard!**
For more details and evaluation tools see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md).
- **MTEB**:
| Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 |51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 |
| [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 |
| [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 |
| [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 |
| [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 |
| [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 |
| [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 |
| [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 |
- **C-MTEB**:
We create the benchmark C-MTEB for Chinese text embedding which consists of 31 datasets from 6 tasks.
Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction.
| Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 |
| [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 |
| [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 |
| [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 |
| [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 |
| [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 |
| [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 |
| [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 |
| [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 |
| [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 |
- **Reranking**:
See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for evaluation script.
| Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 |
| multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 |
| multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 |
| multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 |
| m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 |
| m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 |
| bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 |
| bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 |
\* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks
## Train
### BAAI Embedding
We pre-train the models using [retromae](https://github.com/staoxiao/RetroMAE) and train them on large-scale pairs data using contrastive learning.
**You can fine-tune the embedding model on your data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).**
We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain).
Note that the goal of pre-training is to reconstruct the text, and the pre-trained model cannot be used for similarity calculation directly, it needs to be fine-tuned.
More training details for bge see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md).
### BGE Reranker
Cross-encoder will perform full-attention over the input pair,
which is more accurate than embedding model (i.e., bi-encoder) but more time-consuming than embedding model.
Therefore, it can be used to re-rank the top-k documents returned by embedding model.
We train the cross-encoder on a multilingual pair data,
The data format is the same as embedding model, so you can fine-tune it easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker).
More details please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker)
## Contact
If you have any question or suggestion related to this project, feel free to open an issue or pull request.
You also can email Shitao Xiao([email protected]) and Zheng Liu([email protected]).
## Citation
If you find this repository useful, please consider giving a star :star: and citation
```
@misc{bge_embedding,
title={C-Pack: Packaged Resources To Advance General Chinese Embedding},
author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff},
year={2023},
eprint={2309.07597},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
|
Anguuuuus/laryngitis-sentence
|
Anguuuuus
| 2024-05-31T01:19:19Z | 161 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-05-31T01:18:36Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: laryngitis-sentence
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# laryngitis-sentence
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4969
- Accuracy: 0.8182
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6954 | 1.0 | 6 | 0.7061 | 0.1818 |
| 0.6906 | 2.0 | 12 | 0.7867 | 0.1818 |
| 0.6466 | 3.0 | 18 | 0.7956 | 0.1818 |
| 0.63 | 4.0 | 24 | 0.7992 | 0.1818 |
| 0.5842 | 5.0 | 30 | 0.7482 | 0.4545 |
| 0.5338 | 6.0 | 36 | 0.6246 | 0.6364 |
| 0.4733 | 7.0 | 42 | 0.8296 | 0.5 |
| 0.4509 | 8.0 | 48 | 0.5392 | 0.7273 |
| 0.3703 | 9.0 | 54 | 0.5102 | 0.7727 |
| 0.3161 | 10.0 | 60 | 0.3694 | 0.9545 |
| 0.2741 | 11.0 | 66 | 0.5227 | 0.7727 |
| 0.2373 | 12.0 | 72 | 0.4092 | 0.8182 |
| 0.2064 | 13.0 | 78 | 0.5372 | 0.8182 |
| 0.1726 | 14.0 | 84 | 0.5193 | 0.8182 |
| 0.2062 | 15.0 | 90 | 0.4969 | 0.8182 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.12.0
- Tokenizers 0.15.1
|
Tele-AI/TeleSpeech-ASR1.0
|
Tele-AI
| 2024-05-31T01:17:57Z | 0 | 61 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-05-22T07:59:22Z |
---
license: apache-2.0
---
<div align="center">
<h1>
ๆ่พฐ่ฏญ้ณๅคงๆจกๅ-่ถ
ๅคๆน่จASR
</h1>
</div>
<p align="center">
๐ฆ <a href="https://github.com/Tele-AI/TeleSpeech-ASR" target="_blank">github</a>๏ธ โข ๐พ <a href="https://gitee.com/Tele-AI/TeleSpeech-ASR" target="_blank">gitee</a>๏ธ โข ๐ค <a href="https://www.modelscope.cn/models/TeleAI/TeleSpeech-ASR1.0/summary" target="_blank">ModelScope</a>
</p>
ๆ่พฐ่ถ
ๅคๆน่จ่ฏญ้ณ่ฏๅซๅคงๆจกๅv1.0๏ผ็ฑ30ไธๅฐๆถๆ ๆ ๆณจๅคๆน่จ่ฏญ้ณๆฐๆฎ่ฟ่ก้ข่ฎญ็ป๏ผๅนถๅฉ็จๅ
้จ30็งๆๆ ๆณจๆฐๆฎ่ฟ่กๅพฎ่ฐ๏ผๆ็ ดๅไธๆจกๅๅช่ฝ่ฏๅซ็นๅฎๅไธๆน่จ็ๅฐๅข๏ผๅฏๆฏๆ็่งฃ็ฒค่ฏญใไธๆตท่ฏใๅๅท่ฏใๆธฉๅท่ฏ็ญ30็งๆน่จ
ๆฌๆฌกๅผๆบไธไธชๆจกๅ๏ผไธคไธช30ไธๅฐๆถๆ ๆ ๆณจ่ฏญ้ณ้ข่ฎญ็ปๆจกๅๅไธไธชKeSpeechๆฐๆฎ้8็งๆน่จๅพฎ่ฐๆจกๅใๅๅธ็ๆฌๅไธ่ฝฝ้พๆฅ่งไธ่กจ
| ๆจกๅ็ๆฌ | ๅๆฐ้ | ไธ่ฝฝ้พๆฅ | ๅญๅ
ธ | ๅคๆณจ |
|---------------------|-------|---------------------|-------|-------|
| pretrain_base | 0.09 B | [TeleSpeech-ASR1.0-base](https://huggingface.co/Tele-AI/TeleSpeech-ASR1.0/blob/main/base.pt) | โ | 30ไธๅฐๆถๆ ๆ ๆณจ่ฏญ้ณ้ข่ฎญ็ปๆจกๅ |
| pretrain_large | 0.3 B | [TeleSpeech-ASR1.0-large](https://huggingface.co/Tele-AI/TeleSpeech-ASR1.0/blob/main/large.pt) | โ | 30ไธๅฐๆถๆ ๆ ๆณจ่ฏญ้ณ้ข่ฎญ็ปๆจกๅ |
| finetune_large_kespeech | 0.3 B | [TeleSpeech-ASR1.0-large-kespeech](https://huggingface.co/Tele-AI/TeleSpeech-ASR1.0/blob/main/finetune_large_kespeech.pt) | [dict.char7531.txt](https://huggingface.co/Tele-AI/TeleSpeech-ASR1.0/blob/main/dict.chr7531.txt) | ๅบไบpretrain_large๏ผ้็จKeSpeechๆฐๆฎ้[8็งๆน่จ](#KeSpeechๅๆน่จไธ็ปๆ)ๅพฎ่ฐ่ฎญ็ป|
* finetuneๆจกๅไธบๅทฒ็ปๅจ็นๅฎๆฐๆฎ้ๅพฎ่ฐ่ฟ็ๆจกๅ๏ผๅฏ็ดๆฅไฝฟ็จ
* pretrainๆจกๅไธบๆ ็็ฃ้ข่ฎญ็ปๆจกๅ๏ผ**ๆ ๆณ็ดๆฅ่ฟ่กASRไปปๅก**๏ผ้่ฆ็จๅฐ้ๆ ๆณจๆฐๆฎ่ฟ่กๆ็็ฃ่ฎญ็ปๅไฝฟ็จใ็ธๆฏไบ็ดๆฅ่ฎญ็ป็ๆน่จ่ฏๅซๆจกๅ๏ผๅบไบ้ข่ฎญ็ปๆจกๅๅฏไปฅๅฉ็จๆดๅฐ็ๆๆ ๆณจๆฐๆฎ่ทๅพๆดๅฅฝ็ๆน่จ่ฏๅซๆง่ฝใๆไปฌๆไพไบไธค็งๆ็็ฃ่ฎญ็ปๆกๆถ๏ผ็จไบไธๆธธASRไปปๅก๏ผ1) ๅบไบfairseq็้ข่ฎญ็ปๆจกๅๅพฎ่ฐ๏ผ 2) ๅบไบwenet็่กจๅพๆๅ๏ผ็นๅพๆๅๅจ๏ผ่ฎญ็ปไธๆธธASRๆจกๅ
# ็ฏๅข้
็ฝฎ
* PyTorch version >= 1.13.0
* Python version >= 3.8
* ๆฐๆฎๅๅคใ็จๅบ่ฎญ็ป้่ฆไฝฟ็จkaldi๏ผ่ฏท็กฎไฟๅทฒๆญฃ็กฎๅฎ่ฃ
๏ผhttps://github.com/kaldi-asr/kaldi
* ่ฅๅทฒๆๆๅฅฝ็็นๅพ๏ผ็จๅบ่ฟ่กๆถๅฏไปฅไฝฟ็จwenetๅผๆบๆกๆถไธญkaldi_io.pyๅฎ็ฐ็ๆนๆณๆฟๆขkaldiio.load_mat๏ผไป่ๆ ้ๅฎ่ฃ
kaldi
<a id="fairseqๅฎ่ฃ
"></a>
* ๅฎ่ฃ
fairseqๅๅ
ถไพ่ต
```shell script
$ git clone https://github.com/pytorch/fairseq
$ cd fairseq
$ pip install --editable ./
```
* ๅฎ่ฃ
fairseq้ขๅคไพ่ตไปฅๅwenet่กจๅพ่ฎญ็ปไปปๅก่ฟ่กๆ้ไพ่ต
```shell script
$ pip install -r requirements.txt
```
* ่ฅๅช้่ฆfairseq่ฟ่กๅพฎ่ฐใ่งฃ็ ๏ผๅฏไปฅไธๅฎ่ฃ
ๅฎๆด็requirements.txt๏ผๅช้ไฟ่ฏkaldiio, timm, editdistance, soundfileๅทฒๆญฃ็กฎๅฎ่ฃ
```shell script
$ pip install kaldiio timm editdistance soundfile
```
# ๆฐๆฎๅๅค
## ็นๅพๆๅ
* ๅฉ็จkaldiๆๅ40็ปดmfcc็นๅพ๏ผๅๆฐ่ฎพ็ฝฎๅ่`mfcc_hires.conf`
* ไธบๅๆฐๆฎ้ๅๅค่ฎญ็ป็จๆไปถ`data.list`๏ผไปฅ`\t`ๅ้๏ผ
```
$ cat train/data.list
utt:X0000000000_100638174_S00037 feat:/data/raw_nnaudio.test.1.ark:2983479385 feat_shape:363,40 text:ไธๆๅจ่ฟ็ง่ฏ้ชไธญๆฏ็ญๅ
ๆฌ่ชๅทฑๅจๅ
็ไธๅ token:ไธ ๆ ๅจ ่ฟ ็ง ่ฏ ้ช ไธญ ๆฏ ็ญ ๅ
ๆฌ ่ช ๅทฑ ๅจ ๅ
็ ไธ ๅ tokenid:[TOKENID] token_shape:19,5537
utt:X0000000001_100849618_S00006 feat:/data/raw_nnaudio.test.1.ark:2984296665 feat_shape:345,40 text:ๅจไปไปฌๆถๅฐ่ถณๅคๅปบ็ซๅคง็ปไธๆจกๅ็ๆฐๆฎๅ token:ๅจ ไป ไปฌ ๆถ ๅฐ ่ถณ ๅค ๅปบ ็ซ ๅคง ็ป ไธ ๆจก ๅ ็ ๆฐ ๆฎ ๅ tokenid:[TOKENID] token_shape:18,5537
...
```
## ๅญๅ
ธๅๅค
* ๅพฎ่ฐ้ถๆฎต๏ผ้่ฆๅๅคfairseqๆ ผๅผ็ `dict.${label}.txt`๏ผ`${label}`ไธบๅปบๆจกๅๅ
็ฑปๅ๏ผๅฆltr, bpe็ญใไปฅ`dict.ltr.txt`ไธบไพ๏ผ
```
ๆฏ 2
ๅฅฝ 3
...
```
* ้ข่ฎญ็ปๆจกๅ่กจๅพ่ฎญ็ปASRไปปๅก้ถๆฎต๏ผ้่ฆๅๅคwenetๆ ผๅผ็`lang_char.txt`๏ผ็ธๆฏไบ`dict.${label}.txt`้ขๅคๆทปๅ `<blank>`, `<unk>`, `<sos/eos>`3ไธชtoken๏ผไพๅฆ
```
<blank> 0
<unk> 1
ๆฏ 2
ๅฅฝ 3
...
<sos/eos> 5536
```
# ้ข่ฎญ็ปๆจกๅๅพฎ่ฐ
## ๅพฎ่ฐ
* ๅๅค`train.tsv`ๅ`dev.tsv`๏ผไฟๅญไบๅไธ่ฎญ็ป็ฎๅฝไธ
```
$ ln -s /path/to/train/data.list /path/to/train/train.tsv
$ ln -s /path/to/dev/data.list /path/to/train/dev.tsv
```
* ่ฟๅ
ฅdata2vec_dialect่ทฏๅพ๏ผไฟฎๆน`path.sh`ๆไปถไธญ`/path/to/fairseq`ไธบfairseqๅฎ่ฃ
่ทฏๅพ
* ๅฐ`run_scripts/run_d2v_finetune.sh`ไธญ`/path/to/fairseq`ๅ`/path/to/data2vec_dialect`่ทฏๅพๆฟๆข
* ไฟฎๆน`task.data`ไธบ`.tsv`ไฟๅญ่ทฏๅพ๏ผๅฆ`task.data=/data/wenetspeech/train`
* ๆง่ก
```shell script
$ bash run_scripts/run_d2v_finetune.sh
```
## ่งฃ็
* ๅๆ ทไฟฎๆน`run_scripts/decode.sh`ไธญ็ๆจกๅ่ทฏๅพใๆต่ฏๆฐๆฎ่ทฏๅพ็ญ
* `dataset.gen_subset`ไธบๆต่ฏๆฐๆฎ่ทฏๅพไธ`tsv`ๆไปถ็ๅ็งฐ๏ผๅฏ้
็ฝฎๅคไธช
* ๆง่ก
```shell script
$ bash run_scripts/decode.sh
```
# ่กจๅพ่ฎญ็ปไธๆธธไปปๅก
* ่ฟๅ
ฅwenet_representation่ทฏๅพ๏ผไฟฎๆน`path.sh`ๆไปถไธญ`fairseq`, `data2vec_dialect`, `wenet_representation`็ธๅ
ณ่ทฏๅพ
* ่ฟ็ปญ่กจๅพ่ฎญ็ปไธ่งฃ็ ๏ผ
* ้
็ฝฎ`run_d2v.sh`ไธญdataset็ธๅ
ณๅ
ๅฎน๏ผๆง่ก
```shell script
$ bash run_d2v.sh
```
* ็ฆปๆฃ่กจๅพ่ฎญ็ปไธ่งฃ็ ๏ผ
* ้ฆๅ
ๆ นๆฎ`data.list`๏ผๅๅค็ฆปๆฃ่กจๅพๅฏนๅบ่ฎญ็ปๆไปถ`data.list.discrete`๏ผไฟฎๆน`wenet/discrete_token/kmeans_d2v.yaml`ไธญ`model_dir`ๅ`user_dir`๏ผๆง่ก
```
$ bash wenet/discrete_token/dump_feat.sh
```
* ๅ้
็ฝฎ`run_discrete.sh`ไธญdataset็ธๅ
ณๅ
ๅฎน๏ผๆง่ก
```
$ bash run_discrete.sh
```
# ๅผๆบๆฐๆฎ้็ปๆ
* ๆไปฌ้ๆฉไบๅคไธชๅผๆบไธญๆๆฐๆฎ้่ฟ่ก้ช่ฏ๏ผไปฅๆต่ฏ้ไธ็ๅญ้่ฏฏ็ (Character Error Rate, CER) ็ปๆไฝไธบ่กก้ๆ ๅ
* ๅจAishell-1ไธๆไปฌ้ๆฉๅ
ถTrain้ไฝไธบๆ็็ฃๆฐๆฎ่ฟ่ก่ฎญ็ป๏ผๅจTest้ไธ็ป่ฎกCER
* ๅจWenetSpeechไธ๏ผๆไปฌๅๅซไฝฟ็จ100ๅฐๆถ่ฎญ็ป้Train_sๅ1000ๅฐๆถ่ฎญ็ป้Train_mๅๅซไฝไธบๆ็็ฃๆฐๆฎ่ฟ่ก่ฎญ็ป๏ผๅจTest_Meetingๆต่ฏ้ไธ็ป่ฎกCER
* BabelไธบNIST๏ผ็พๅฝๅฝๅฎถๆ ๅไธๆๆฏ็ ็ฉถ้ข๏ผไธพๅ็ไฝ่ตๆบ็ฒค่ฏญ็ต่ฏ่ฏๅซไปปๅกๆฐๆฎ้๏ผๆไปฌไฝฟ็จๅ
ถๆไพ็่ฎญ็ป้ไธๆต่ฏ้็ป่ฎกCER
* KeSpeechไธบไธญๆๅคๆน่จๆต่ฏ้๏ผๆไปฌไฝฟ็จ1396ๅฐๆถ่ฎญ็ป้ไฝไธบๆ็็ฃๆฐๆฎ่ฟ่ก่ฎญ็ป๏ผ้ๆฉๆไพ็Testๆต่ฏ้็ป่ฎกCER
| ๆจกๅ็ๆฌ | Aishell-1 (%)| WenetSpeech* (%)| Babel (%) | KeSpeech (%) |
| ----------| -------- | ------- | ---- | ---- |
| pretrain_base | 4.7 | 18.3 / 16.4 | 22.1 | 10.9 |
| pretrain_large | 4.0 | 14.3 / 13.0 | 19.1 | 8.1 |
*WenetSpeechไธญ็็ปๆไธบๅๅซไฝฟ็จ `train_s/train_m`่ฎญ็ปๅ๏ผๅจTest_Meetingไธ็CER
<a id="KeSpeechๅๆน่จไธ็ปๆ"></a>
KeSpeechๅๆน่จไธ็ปๆ๏ผCER%๏ผ
| ๆจกๅ็ๆฌ | ๆฎ้่ฏ | ๅไบฌ | ่ฅฟๅ | ไธญๅ | ไธๅ | ๅ
ฐ้ถ | ๆฑๆทฎ | ๅ้ฒ | ่ถ่พฝ |
| ---------| ------ | ---- | ---- | ---- |---- | ---- | ---- | ---- | ---- |
| pretrain_large | 4.61 | 8.23 | 8.74 | 7.62 | 7.89 | 9.72 | 12.89 | 8.91 | 9.30 |
# ๅฃฐๆไธๅ่ฎฎ
## ๅฃฐๆ
ๆไปฌๅจๆญคๅฃฐๆ๏ผไธ่ฆไฝฟ็จTeleSpeechๆจกๅๅๅ
ถ่ก็ๆจกๅ่ฟ่กไปปไฝๅฑๅฎณๅฝๅฎถ็คพไผๅฎๅ
จๆ่ฟๆณ็ๆดปๅจใๅๆถ๏ผๆไปฌไน่ฆๆฑไฝฟ็จ่
ไธ่ฆๅฐTeleSpeechๆจกๅ็จไบๆฒกๆๅฎๅ
จๅฎกๆฅๅๅคๆก็ไบ่็ฝๆๅกใๆไปฌๅธๆๆๆไฝฟ็จ่
้ตๅฎไธ่ฟฐๅๅ๏ผ็กฎไฟ็งๆๅๅฑๅจๅๆณๅ่ง็็ฏๅขไธ่ฟ่กใ
ๆไปฌๅทฒ็ปๅฐฝๆไปฌๆ่ฝ๏ผๆฅ็กฎไฟๆจกๅ่ฎญ็ป่ฟ็จไธญไฝฟ็จ็ๆฐๆฎ็ๅ่งๆงใ็ถ่๏ผๅฐฝ็ฎกๆไปฌๅทฒ็ปๅๅบไบๅทจๅคง็ๅชๅ๏ผไฝ็ฑไบๆจกๅๅๆฐๆฎ็ๅคๆๆง๏ผไปๆๅฏ่ฝๅญๅจไธไบๆ ๆณ้ข่ง็้ฎ้ขใๅ ๆญค๏ผๅฆๆ็ฑไบไฝฟ็จTeleSpeechๅผๆบๆจกๅ่ๅฏผ่ด็ไปปไฝ้ฎ้ข๏ผๅ
ๆฌไฝไธ้ไบๆฐๆฎๅฎๅ
จ้ฎ้ขใๅ
ฌๅ
ฑ่่ฎบ้ฃ้ฉ๏ผๆๆจกๅ่ขซ่ฏฏๅฏผใๆปฅ็จใไผ ๆญๆไธๅฝๅฉ็จๆๅธฆๆฅ็ไปปไฝ้ฃ้ฉๅ้ฎ้ข๏ผๆไปฌๅฐไธๆฟๆ
ไปปไฝ่ดฃไปปใ
## ๅ่ฎฎ
็คพๅบไฝฟ็จTeleSpeechๆจกๅ้่ฆ้ตๅพชใ[TeleSpeechๆจกๅ็คพๅบ่ฎธๅฏๅ่ฎฎ](./TeleSpeechๆจกๅ็คพๅบ่ฎธๅฏๅ่ฎฎ.pdf)ใใTeleSpeechๆจกๅๆฏๆๅไธ็จ้๏ผๅฆๆๆจ่ฎกๅๅฐTeleSpeechๆจกๅๆๅ
ถ่ก็ๅ็จไบๅไธ็ฎ็๏ผๆจ้่ฆ้่ฟไปฅไธ่็ณป้ฎ็ฎฑ [email protected]๏ผๆไบคใTeleSpeechๆจกๅ็คพๅบ่ฎธๅฏๅ่ฎฎใ่ฆๆฑ็็ณ่ฏทๆๆใๅฎกๆ ธ้่ฟๅ๏ผๅฐ็นๆญคๆไบๆจไธไธช้ๆไปๆงใๅ
จ็ๆงใไธๅฏ่ฝฌ่ฎฉใไธๅฏๅ่ฎธๅฏใๅฏๆค้็ๅ็จ็ๆ่ฎธๅฏใ
---
|
Sam0sam/sd-butterflies-32
|
Sam0sam
| 2024-05-31T01:17:02Z | 48 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2024-05-31T01:16:39Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class ๐งจ](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute ๐ฆ.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Sam0sam/sd-butterflies-32')
image = pipeline().images[0]
image
```
|
Anguuuuus/chordektomie-sentence
|
Anguuuuus
| 2024-05-31T01:15:00Z | 161 | 1 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-05-31T01:14:18Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: chordektomie-sentence
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chordektomie-sentence
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3963
- Accuracy: 0.8636
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6931 | 1.0 | 6 | 0.7023 | 0.2273 |
| 0.6659 | 2.0 | 12 | 0.7066 | 0.3182 |
| 0.5839 | 3.0 | 18 | 0.5774 | 0.8636 |
| 0.4567 | 4.0 | 24 | 0.5393 | 0.8182 |
| 0.3303 | 5.0 | 30 | 0.4371 | 0.8182 |
| 0.2379 | 6.0 | 36 | 0.3744 | 0.8182 |
| 0.1511 | 7.0 | 42 | 0.5223 | 0.8182 |
| 0.1061 | 8.0 | 48 | 0.5431 | 0.8182 |
| 0.0781 | 9.0 | 54 | 0.3211 | 0.9091 |
| 0.0623 | 10.0 | 60 | 0.3216 | 0.9091 |
| 0.0681 | 11.0 | 66 | 0.3336 | 0.9091 |
| 0.0475 | 12.0 | 72 | 0.3459 | 0.9091 |
| 0.0441 | 13.0 | 78 | 0.3811 | 0.8636 |
| 0.0419 | 14.0 | 84 | 0.3959 | 0.8636 |
| 0.0409 | 15.0 | 90 | 0.3963 | 0.8636 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.12.0
- Tokenizers 0.15.1
|
RishieRish/chat_tune_contr_1_16bit_version
|
RishieRish
| 2024-05-31T01:05:58Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-31T00:58:32Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** RishieRish
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ebowwa/human-biases-people-profiles-beta-0.6.2-gguf
|
ebowwa
| 2024-05-31T01:03:50Z | 8 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-31T01:01:10Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
---
# Uploaded model
- **Developed by:** ebowwa
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nbeerbower/llama-3-SNAMD-8B
|
nbeerbower
| 2024-05-31T01:03:21Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:mlabonne/NeuralDaredevil-8B-abliterated",
"base_model:merge:mlabonne/NeuralDaredevil-8B-abliterated",
"base_model:nbeerbower/llama-3-Daredevil-Mahou-8B",
"base_model:merge:nbeerbower/llama-3-Daredevil-Mahou-8B",
"base_model:nbeerbower/llama-3-Stheno-Mahou-8B",
"base_model:merge:nbeerbower/llama-3-Stheno-Mahou-8B",
"base_model:nbeerbower/llama-3-spicy-abliterated-stella-8B",
"base_model:merge:nbeerbower/llama-3-spicy-abliterated-stella-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-30T23:06:56Z |
---
base_model:
- nbeerbower/llama-3-spicy-abliterated-stella-8B
- mlabonne/NeuralDaredevil-8B-abliterated
- nbeerbower/llama-3-Stheno-Mahou-8B
- nbeerbower/llama-3-Daredevil-Mahou-8B
library_name: transformers
tags:
- mergekit
- merge
---
# llama-3-SNAMD-8B
Stella-Neural-Abliterated-Mahou-Devil
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [nbeerbower/llama-3-Stheno-Mahou-8B](https://huggingface.co/nbeerbower/llama-3-Stheno-Mahou-8B) as a base.
### Models Merged
The following models were included in the merge:
* [nbeerbower/llama-3-spicy-abliterated-stella-8B](https://huggingface.co/nbeerbower/llama-3-spicy-abliterated-stella-8B)
* [mlabonne/NeuralDaredevil-8B-abliterated](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated)
* [nbeerbower/llama-3-Daredevil-Mahou-8B](https://huggingface.co/nbeerbower/llama-3-Daredevil-Mahou-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mlabonne/NeuralDaredevil-8B-abliterated
- model: nbeerbower/llama-3-spicy-abliterated-stella-8B
- model: nbeerbower/llama-3-Daredevil-Mahou-8B
merge_method: model_stock
base_model: nbeerbower/llama-3-Stheno-Mahou-8B
dtype: bfloat16
```
|
coconana/Qwen-Qwen1.5-0.5B-1717117112
|
coconana
| 2024-05-31T01:02:45Z | 151 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-31T00:58:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
phdatdt/working
|
phdatdt
| 2024-05-31T01:00:59Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-05-31T00:43:40Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - phdatdt/working
These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were fine-tuned on the phdatdt/vandata dataset. You can find some example images in the following.




LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
fspecii/2
|
fspecii
| 2024-05-31T00:59:24Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-30T23:44:08Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** fspecii
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Anguuuuus/mydata-sentence
|
Anguuuuus
| 2024-05-31T00:58:04Z | 162 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-05-30T12:59:21Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mydata-sentence
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mydata-sentence
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7519
- Accuracy: 0.4615
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7048 | 1.0 | 4 | 0.7079 | 0.3846 |
| 0.6478 | 2.0 | 8 | 0.7239 | 0.3846 |
| 0.7242 | 3.0 | 12 | 0.7464 | 0.3846 |
| 0.5978 | 4.0 | 16 | 0.7460 | 0.4615 |
| 0.7061 | 5.0 | 20 | 0.7460 | 0.4615 |
| 0.7089 | 6.0 | 24 | 0.7377 | 0.4615 |
| 0.5751 | 7.0 | 28 | 0.7324 | 0.4615 |
| 0.5799 | 8.0 | 32 | 0.7365 | 0.4615 |
| 0.563 | 9.0 | 36 | 0.7477 | 0.4615 |
| 0.555 | 10.0 | 40 | 0.7562 | 0.4615 |
| 0.5227 | 11.0 | 44 | 0.7632 | 0.4615 |
| 0.6701 | 12.0 | 48 | 0.7590 | 0.4615 |
| 0.5219 | 13.0 | 52 | 0.7540 | 0.4615 |
| 0.5003 | 14.0 | 56 | 0.7526 | 0.4615 |
| 0.488 | 15.0 | 60 | 0.7519 | 0.4615 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.12.0
- Tokenizers 0.15.1
|
not-lain/Finetuned_TinyLlama
|
not-lain
| 2024-05-31T00:56:46Z | 151 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-30T22:37:31Z |
---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Finetuned_TinyLlama
results: []
---
# Finetuned_TinyLlama
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
## Model description
This model was made with [this tutorial by Noa](https://huggingface.co/blog/nroggendorff/finetune-tinyllama), you can find a more complete model and demo at [nroggendorff/mayo](https://huggingface.co/spaces/nroggendorff/mayo)
## Limitations
- The model is easily gaslit
- It is uncensored, and there are no safety features.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
- training_loss=2.0859998975481306
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
ebowwa/human-biases-people-profiles-beta-0.6.2
|
ebowwa
| 2024-05-31T00:54:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-31T00:54:09Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
---
# Uploaded model
- **Developed by:** ebowwa
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Reihaneh/wav2vec2_fy_nl_common_voice_26
|
Reihaneh
| 2024-05-31T00:41:35Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-31T00:41:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RishieRish/chat_tune_contr_1_lora_version
|
RishieRish
| 2024-05-31T00:40:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-31T00:39:55Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** RishieRish
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ishmanish/gpt2-autotrain-text-HrPolicy-aug5-30e-vfinal
|
ishmanish
| 2024-05-31T00:39:09Z | 150 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"autotrain",
"text-generation-inference",
"conversational",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-31T00:11:42Z |
---
tags:
- autotrain
- text-generation-inference
- text-generation
library_name: transformers
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
idioteca/koyhamodel
|
idioteca
| 2024-05-31T00:34:53Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-05-31T00:31:29Z |
---
license: creativeml-openrail-m
---
|
cat-searcher/rir-pl-ckpts
|
cat-searcher
| 2024-05-31T00:34:38Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2024-05-26T04:39:34Z |
---
license: mit
---
```bash
git lfs install
git clone https://huggingface.co/cat-searcher/rir-pl-ckpts.git
cd rir-pl-ckpts
git lfs fetch --all
```
|
eeeyounglee/EEVE-10.8B-mean-4096-5
|
eeeyounglee
| 2024-05-31T00:30:12Z | 10 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"llama",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-05-31T00:27:29Z |
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# eeeyounglee/EEVE-10.8B-mean-4096-5
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 4096 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('eeeyounglee/EEVE-10.8B-mean-4096-5')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=eeeyounglee/EEVE-10.8B-mean-4096-5)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 224 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`__main__.MultipleNegativesRankingLoss_with_logging`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 112,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 4096, 'do_lower_case': False}) with Transformer model: LlamaModel
(1): Pooling({'word_embedding_dimension': 4096, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 4096, 'out_features': 4096, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
lmstudio-community/AlchemistCoder-L-7B-GGUF
|
lmstudio-community
| 2024-05-31T00:28:15Z | 233 | 3 | null |
[
"gguf",
"code generation",
"text-generation",
"arxiv:2405.19265",
"base_model:internlm/AlchemistCoder-L-7B",
"base_model:quantized:internlm/AlchemistCoder-L-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] |
text-generation
| 2024-05-31T00:20:23Z |
---
license: apache-2.0
tags:
- code generation
quantized_by: bartowski
pipeline_tag: text-generation
lm_studio:
param_count: 7b
use_case: coding
release_date: 29-05-2024
model_creator: InternLM
prompt_template: Alpaca
system_prompt: none
base_model: Llama 2
original_repo: internlm/AlchemistCoder-L-7B
base_model: internlm/AlchemistCoder-L-7B
---
## ๐ซ Community Model> AlchemistCoder L 7B by InternLM
*๐พ [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [InternLM](https://huggingface.co/internlm)<br>
**Original model**: [AlchemistCoder-L-7B](https://huggingface.co/internlm/AlchemistCoder-L-7B)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b3024](https://github.com/ggerganov/llama.cpp/releases/tag/b3024)<br>
## Model Summary:
AlchemistCoder is a series of coding models by InternLM.<br>
This model is tuned from Llama 2, and should excel at all coding related tasks.
## Prompt template:
Choose the `MetaAI Llama 2 Chat` preset in your LM Studio.
Under the hood, the model will see a prompt that's formatted like so:
```
[INST]<<SYS>>
{System}
<</SYS>>[/INST]
[INST]
{User}
[/INST]
{Assistant}
```
## Technical Details
Training details:
- **AlchemistPrompts**: Designed as data-specific prompts for harmonizing inherent conflicts in multi-source data and mitigating the instruction/response misalignment at a fined-grained level.
- **Code Comprehenstion Tasks**: Sourced from the process of data construction, consisting of instruction evolution, data filtering, and code review.
- **Harmonized Multi-source Data**: Instruction tuned on 200M tokens, including 6 types of high-quality data.
- **Superior Model Performance**: Surpassing all the open-source models of the same size (6.7/7B), and rivaling or even beating larger models (15B/33B/70B/ChatGPT) on 6 code benchmarks.
- **Advanced generic capabilities**: Demonstrated by the significant improvements on MMLU, BBH, and GSM8K.
For more information, check out their paper here: https://arxiv.org/abs/2405.19265
## Special thanks
๐ Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/)
๐ Special thanks to [Kalomaze](https://github.com/kalomaze) and [Dampf](https://github.com/Dampfinchen) for their work on the dataset (linked [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)) that was used for calculating the imatrix for all sizes.
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
datek/google-gemma-7b-1717114999
|
datek
| 2024-05-31T00:28:00Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-31T00:23:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Hamzezi/galactica-125m-dpo
|
Hamzezi
| 2024-05-31T00:24:40Z | 151 | 0 |
transformers
|
[
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-31T00:23:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
weifar/FTAudit-CodeGemma-7b-v0.2
|
weifar
| 2024-05-31T00:15:04Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-31T00:09:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
EbadKhan/lora_model
|
EbadKhan
| 2024-05-31T00:12:00Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-30T04:42:42Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** EbadKhan
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
duyntnet/DuckDB-NSQL-7B-v0.1-imatrix-GGUF
|
duyntnet
| 2024-05-31T00:08:22Z | 31 | 0 |
transformers
|
[
"transformers",
"gguf",
"imatrix",
"DuckDB-NSQL-7B-v0.1",
"text-generation",
"en",
"license:other",
"region:us"
] |
text-generation
| 2024-05-30T20:34:39Z |
---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- DuckDB-NSQL-7B-v0.1
---
Quantizations of https://huggingface.co/motherduckdb/DuckDB-NSQL-7B-v0.1
# From original readme
## Model Description
NSQL is a family of autoregressive open-source large foundation models (FMs) designed specifically for SQL generation tasks.
## How to Use
Example 1:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("motherduckdb/DuckDB-NSQL-7B-v0.1")
model = AutoModelForCausalLM.from_pretrained("motherduckdb/DuckDB-NSQL-7B-v0.1", torch_dtype=torch.bfloat16)
text = """### Instruction:
Your task is to generate valid duckdb SQL to answer the following question.
### Input:
### Question:
create a new table called tmp from test.csv
### Response (use duckdb shorthand if possible):
"""
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=500)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
Example 2:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("motherduckdb/DuckDB-NSQL-7B-v0.1")
model = AutoModelForCausalLM.from_pretrained("motherduckdb/DuckDB-NSQL-7B-v0.1", torch_dtype=torch.bfloat16)
text = """### Instruction:
Your task is to generate valid duckdb SQL to answer the following question, given a duckdb database schema.
### Input:
Here is the database schema that the SQL query will run on:
CREATE TABLE taxi (
VendorID bigint,
tpep_pickup_datetime timestamp,
tpep_dropoff_datetime timestamp,
passenger_count double,
trip_distance double,
fare_amount double,
extra double,
tip_amount double,
tolls_amount double,
improvement_surcharge double,
total_amount double,
);
### Question:
get all columns ending with _amount from taxi table
### Response (use duckdb shorthand if possible):"""
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=500)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
Example 3:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("motherduckdb/DuckDB-NSQL-7B-v0.1")
model = AutoModelForCausalLM.from_pretrained("motherduckdb/DuckDB-NSQL-7B-v0.1", torch_dtype=torch.bfloat16)
text = """### Instruction:
Your task is to generate valid duckdb SQL to answer the following question, given a duckdb database schema.
### Input:
Here is the database schema that the SQL query will run on:
CREATE TABLE rideshare (
hvfhs_license_num varchar,
dispatching_base_num varchar,
originating_base_num varchar,
request_datetime timestamp,
on_scene_datetime timestamp,
pickup_datetime timestamp,
dropoff_datetime timestamp,
trip_miles double,
trip_time bigint,
);
### Question:
get longest trip in december 2022
### Response (use duckdb shorthand if possible):
"""
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=500)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
|
RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf
|
RichardErkhov
| 2024-05-31T00:07:22Z | 3 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-30T21:02:51Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Instameta-Mistral-v0.1-7b - GGUF
- Model creator: https://huggingface.co/cldersaienril/
- Original model: https://huggingface.co/cldersaienril/Instameta-Mistral-v0.1-7b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Instameta-Mistral-v0.1-7b.Q2_K.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q2_K.gguf) | Q2_K | 2.53GB |
| [Instameta-Mistral-v0.1-7b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Instameta-Mistral-v0.1-7b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Instameta-Mistral-v0.1-7b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Instameta-Mistral-v0.1-7b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Instameta-Mistral-v0.1-7b.Q3_K.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q3_K.gguf) | Q3_K | 3.28GB |
| [Instameta-Mistral-v0.1-7b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Instameta-Mistral-v0.1-7b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Instameta-Mistral-v0.1-7b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Instameta-Mistral-v0.1-7b.Q4_0.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Instameta-Mistral-v0.1-7b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Instameta-Mistral-v0.1-7b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Instameta-Mistral-v0.1-7b.Q4_K.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q4_K.gguf) | Q4_K | 4.07GB |
| [Instameta-Mistral-v0.1-7b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Instameta-Mistral-v0.1-7b.Q4_1.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Instameta-Mistral-v0.1-7b.Q5_0.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Instameta-Mistral-v0.1-7b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Instameta-Mistral-v0.1-7b.Q5_K.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q5_K.gguf) | Q5_K | 4.78GB |
| [Instameta-Mistral-v0.1-7b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Instameta-Mistral-v0.1-7b.Q5_1.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Instameta-Mistral-v0.1-7b.Q6_K.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q6_K.gguf) | Q6_K | 5.53GB |
| [Instameta-Mistral-v0.1-7b.Q8_0.gguf](https://huggingface.co/RichardErkhov/cldersaienril_-_Instameta-Mistral-v0.1-7b-gguf/blob/main/Instameta-Mistral-v0.1-7b.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
license: apache-2.0
datasets:
- ehartford/dolphin
language:
- en
- zh
---
## Dataset
Dataset: Dolphin, an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/)
Private Dataset of gpt4/gpt3.5 dialogues in Chinese to increase its multilingual abilities.
# Training
20 hours on 6 x A800s
|
mcbreg/gpt-2-mcbreg-v2-colab
|
mcbreg
| 2024-05-30T23:52:15Z | 151 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-30T21:14:09Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: gpt-2-mcbreg-v2-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-2-mcbreg-v2-colab
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7275
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.7196 | 1.0 | 5559 | 1.7971 |
| 1.6028 | 2.0 | 11118 | 1.7398 |
| 1.3016 | 3.0 | 16677 | 1.7172 |
| 1.4263 | 4.0 | 22236 | 1.7160 |
| 1.3382 | 5.0 | 27795 | 1.7275 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
jepz/isaac-jamie
|
jepz
| 2024-05-30T23:46:23Z | 2 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-2-base",
"base_model:adapter:stabilityai/stable-diffusion-2-base",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-05-30T23:46:17Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/289ae84f-2318-4a16-8b5e-03bdfd04221a.jpg
base_model: stabilityai/stable-diffusion-2-base
instance_prompt: isaac-jamie
license: creativeml-openrail-m
---
# isaac-jamie
<Gallery />
## Trigger words
You should use `isaac-jamie` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/jepz/isaac-jamie/tree/main) them in the Files & versions tab.
|
slodev/archer_LoRA
|
slodev
| 2024-05-30T23:40:35Z | 1 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-05-30T23:28:05Z |
---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: 'a photo in the style of TOK '
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - slodev/archer_LoRA
<Gallery />
## Model description
These are slodev/archer_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo in the style of TOK to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](slodev/archer_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
vananhle/swin-tiny-patch4-window7-224-finetuned-isic217
|
vananhle
| 2024-05-30T23:39:35Z | 16 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-05-30T06:32:01Z |
---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-isic217
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5909090909090909
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-isic217
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3724
- Accuracy: 0.5909
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 2.2679 | 0.9796 | 24 | 2.1550 | 0.0909 |
| 2.0504 | 2.0 | 49 | 2.0559 | 0.2727 |
| 1.8943 | 2.9796 | 73 | 2.0186 | 0.2273 |
| 1.5671 | 4.0 | 98 | 1.8154 | 0.2273 |
| 1.3425 | 4.9796 | 122 | 2.0475 | 0.2273 |
| 1.2758 | 6.0 | 147 | 2.1914 | 0.2273 |
| 0.9808 | 6.9796 | 171 | 2.0478 | 0.3636 |
| 0.7246 | 8.0 | 196 | 1.8840 | 0.4091 |
| 0.7323 | 8.9796 | 220 | 2.1831 | 0.4091 |
| 0.4881 | 10.0 | 245 | 2.2868 | 0.3636 |
| 0.4346 | 10.9796 | 269 | 2.2312 | 0.4545 |
| 0.5647 | 12.0 | 294 | 1.9897 | 0.4091 |
| 0.1464 | 12.9796 | 318 | 2.0579 | 0.4545 |
| 0.5575 | 14.0 | 343 | 2.1859 | 0.4545 |
| 0.3894 | 14.9796 | 367 | 2.7353 | 0.3636 |
| 0.4326 | 16.0 | 392 | 2.4455 | 0.3636 |
| 0.3715 | 16.9796 | 416 | 2.3104 | 0.5455 |
| 0.3966 | 18.0 | 441 | 2.4597 | 0.4545 |
| 0.1855 | 18.9796 | 465 | 2.3335 | 0.3636 |
| 0.1528 | 20.0 | 490 | 2.3630 | 0.4091 |
| 0.2036 | 20.9796 | 514 | 2.3520 | 0.4545 |
| 0.2026 | 22.0 | 539 | 2.7012 | 0.4091 |
| 0.2127 | 22.9796 | 563 | 2.3724 | 0.5909 |
| 0.2719 | 24.0 | 588 | 3.0376 | 0.3182 |
| 0.1292 | 24.9796 | 612 | 2.5684 | 0.5 |
| 0.2533 | 26.0 | 637 | 2.6974 | 0.4091 |
| 0.1947 | 26.9796 | 661 | 2.6957 | 0.4091 |
| 0.1805 | 28.0 | 686 | 2.8953 | 0.4091 |
| 0.1123 | 28.9796 | 710 | 2.8240 | 0.4091 |
| 0.2143 | 30.0 | 735 | 2.3880 | 0.4545 |
| 0.1845 | 30.9796 | 759 | 2.6072 | 0.3636 |
| 0.0921 | 32.0 | 784 | 2.7256 | 0.4545 |
| 0.0276 | 32.9796 | 808 | 2.4074 | 0.4091 |
| 0.0876 | 34.0 | 833 | 2.6043 | 0.4545 |
| 0.0253 | 34.9796 | 857 | 2.7620 | 0.4545 |
| 0.1904 | 36.0 | 882 | 2.6911 | 0.4091 |
| 0.072 | 36.9796 | 906 | 2.6528 | 0.4545 |
| 0.169 | 38.0 | 931 | 2.6454 | 0.4545 |
| 0.0978 | 38.9796 | 955 | 2.6269 | 0.5 |
| 0.069 | 40.0 | 980 | 2.4154 | 0.4545 |
| 0.0159 | 40.9796 | 1004 | 2.7026 | 0.4545 |
| 0.2046 | 42.0 | 1029 | 2.5213 | 0.4545 |
| 0.0329 | 42.9796 | 1053 | 2.6399 | 0.5 |
| 0.0166 | 44.0 | 1078 | 2.7787 | 0.4545 |
| 0.0812 | 44.9796 | 1102 | 2.8176 | 0.4545 |
| 0.0197 | 46.0 | 1127 | 2.8049 | 0.4545 |
| 0.0989 | 46.9796 | 1151 | 2.7479 | 0.4545 |
| 0.054 | 48.0 | 1176 | 2.7614 | 0.4545 |
| 0.1095 | 48.9796 | 1200 | 2.7604 | 0.5 |
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
xuliu15/FT-frisian-1h
|
xuliu15
| 2024-05-30T23:37:52Z | 19 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:mozilla-foundation/common_voice_6_1",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-05-18T16:37:24Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_6_1
metrics:
- wer
model-index:
- name: Whisper Small Frisian 1h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 6.1
type: mozilla-foundation/common_voice_6_1
args: 'config: frisian, split: test'
metrics:
- name: Wer
type: wer
value: 47.79183746212796
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Frisian 1h
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 6.1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9900
- Wer: 47.7918
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 2.4073 | 1.1236 | 100 | 2.2555 | 82.9549 |
| 1.5143 | 2.2472 | 200 | 1.6651 | 73.4557 |
| 1.1865 | 3.3708 | 300 | 1.4237 | 65.1256 |
| 0.9368 | 4.4944 | 400 | 1.2874 | 59.4832 |
| 0.8009 | 5.6180 | 500 | 1.1957 | 56.5461 |
| 0.6722 | 6.7416 | 600 | 1.1345 | 54.6890 |
| 0.5726 | 7.8652 | 700 | 1.0894 | 53.1919 |
| 0.5068 | 8.9888 | 800 | 1.0575 | 51.7769 |
| 0.4239 | 10.1124 | 900 | 1.0351 | 50.8002 |
| 0.3799 | 11.2360 | 1000 | 1.0197 | 49.9198 |
| 0.295 | 12.3596 | 1100 | 1.0110 | 49.3673 |
| 0.2852 | 13.4831 | 1200 | 1.0022 | 48.7507 |
| 0.2478 | 14.6067 | 1300 | 0.9965 | 48.3800 |
| 0.2267 | 15.7303 | 1400 | 0.9931 | 48.1911 |
| 0.1986 | 16.8539 | 1500 | 0.9916 | 48.1412 |
| 0.1922 | 17.9775 | 1600 | 0.9907 | 47.9558 |
| 0.1724 | 19.1011 | 1700 | 0.9905 | 47.8703 |
| 0.1709 | 20.2247 | 1800 | 0.9900 | 47.9059 |
| 0.1749 | 21.3483 | 1900 | 0.9900 | 47.7598 |
| 0.145 | 22.4719 | 2000 | 0.9900 | 47.7918 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
Subsets and Splits
Filtered Qwen2.5 Distill Models
Identifies specific configurations of models by filtering cards that contain 'distill', 'qwen2.5', '7b' while excluding certain base models and incorrect model ID patterns, uncovering unique model variants.
Filtered Model Cards Count
Finds the count of entries with specific card details that include 'distill', 'qwen2.5', '7b' but exclude certain base models, revealing valuable insights about the dataset's content distribution.
Filtered Distill Qwen 7B Models
Filters for specific card entries containing 'distill', 'qwen', and '7b', excluding certain strings and patterns, to identify relevant model configurations.
Filtered Qwen-7b Model Cards
The query performs a detailed filtering based on specific keywords and excludes certain entries, which could be useful for identifying a specific subset of cards but does not provide deeper insights or trends.
Filtered Qwen 7B Model Cards
The query filters for specific terms related to "distilled" or "distill", "qwen", and "7b" in the 'card' column but excludes certain base models, providing a limited set of entries for further inspection.
Qwen 7B Distilled Models
The query provides a basic filtering of records to find specific card names that include keywords related to distilled Qwen 7b models, excluding a particular base model, which gives limited insight but helps in focusing on relevant entries.
Qwen 7B Distilled Model Cards
The query filters data based on specific keywords in the modelId and card fields, providing limited insight primarily useful for locating specific entries rather than revealing broad patterns or trends.
Qwen 7B Distilled Models
Finds all entries containing the terms 'distilled', 'qwen', and '7b' in a case-insensitive manner, providing a filtered set of records but without deeper analysis.
Distilled Qwen 7B Models
The query filters for specific model IDs containing 'distilled', 'qwen', and '7b', providing a basic retrieval of relevant entries but without deeper analysis or insight.
Filtered Model Cards with Distill Qwen2.
Filters and retrieves records containing specific keywords in the card description while excluding certain phrases, providing a basic count of relevant entries.
Filtered Model Cards with Distill Qwen 7
The query filters specific variations of card descriptions containing 'distill', 'qwen', and '7b' while excluding a particular base model, providing limited but specific data retrieval.
Distill Qwen 7B Model Cards
The query filters and retrieves rows where the 'card' column contains specific keywords ('distill', 'qwen', and '7b'), providing a basic filter result that can help in identifying specific entries.