| Column | Type | Range / Values |
|:---|:---|:---|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-08-03 18:27:40 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 550 distinct values |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-08-03 18:26:42 |
| card | string | length 11 – 1.01M |
**dmavkgo/vilt_finetuned_200** · author: dmavkgo · last_modified: 2024-06-04T05:02:13Z · downloads: 63 · likes: 0 · library_name: transformers · pipeline_tag: visual-question-answering · createdAt: 2024-06-04T03:32:11Z
tags: transformers, safetensors, vilt, visual-question-answering, generated_from_trainer, dataset:vqa, base_model:dandelin/vilt-b32-mlm, base_model:finetune:dandelin/vilt-b32-mlm, license:apache-2.0, endpoints_compatible, region:us
---
license: apache-2.0
base_model: dandelin/vilt-b32-mlm
tags:
- generated_from_trainer
datasets:
- vqa
model-index:
- name: vilt_finetuned_200
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vilt_finetuned_200
This model is a fine-tuned version of [dandelin/vilt-b32-mlm](https://huggingface.co/dandelin/vilt-b32-mlm) on the vqa dataset.
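ViLT VQA fine-tunes are typically used through `ViltProcessor` and `ViltForQuestionAnswering`; a minimal inference sketch under that assumption (the image and question are placeholders):

```python
from PIL import Image
from transformers import ViltForQuestionAnswering, ViltProcessor

repo = "dmavkgo/vilt_finetuned_200"

# If the processor files were not pushed with the fine-tune, load them
# from the base checkpoint "dandelin/vilt-b32-mlm" instead.
processor = ViltProcessor.from_pretrained(repo)
model = ViltForQuestionAnswering.from_pretrained(repo)

image = Image.open("example.jpg")       # placeholder image
question = "What is the animal doing?"  # placeholder question

# Encode the image-question pair and take the highest-scoring answer label.
inputs = processor(image, question, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```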
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (approximated as a `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
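As a rough equivalence, those settings map onto `transformers.TrainingArguments` along these lines (a sketch; the output directory is a placeholder and any setting not listed above keeps its default):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vilt_finetuned_200",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20,
)
```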
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
**richardkelly/Qwen-Qwen1.5-1.8B-1717476207** · author: richardkelly · last_modified: 2024-06-04T05:01:47Z · downloads: 142 · likes: 0 · library_name: transformers · pipeline_tag: text-generation · createdAt: 2024-06-04T04:43:27Z
tags: transformers, safetensors, qwen2, text-generation, conversational, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
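A generic starting point for a `transformers` causal-LM checkpoint like this one (a hedged sketch; the chat-template usage and generation settings are assumptions, not documented choices):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "richardkelly/Qwen-Qwen1.5-1.8B-1717476207"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# The "conversational" tag suggests the tokenizer carries a chat template;
# fall back to plain tokenizer(prompt, return_tensors="pt") if it does not.
messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```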
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**FuturisticVibes/Meta-Llama-3-70B-Instruct-abliterated-v3.5-6.0bpw-h8-exl2** · author: FuturisticVibes · last_modified: 2024-06-04T04:58:52Z · downloads: 5 · likes: 0 · library_name: transformers · pipeline_tag: text-generation · createdAt: 2024-06-04T04:51:48Z
tags: transformers, safetensors, llama, text-generation, conversational, license:llama3, autotrain_compatible, text-generation-inference, endpoints_compatible, 6-bit, exl2, region:us
---
library_name: transformers
license: llama3
---
I have no idea what I’m doing… if this causes the apocalypse someone please let me know.
Meta-Llama-3-70B-Instruct-abliterated-v3.5 6.0bpw h8 EXL2
Includes [measurement.json](https://huggingface.co/FuturisticVibes/Meta-Llama-3-70B-Instruct-abliterated-v3.5-6.0bpw-h8-exl2/tree/measurement) file for further quantization
Up next is a new, old, long dead, but never forgotten friend… Assuming I can put enough money into RunPod to rent an H100 for a bit…
Original Model: https://huggingface.co/failspy/Meta-Llama-3-70B-Instruct-abliterated-v3.5
# Original Model Card
# Llama-3-70B-Instruct-abliterated-v3.5 Model Card
[My original Jupyter "cookbook" to replicate the methodology can be found here](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb)
[My personal library o' code used](https://github.com/FailSpy/abliterator) (WIP, looking to improve and generalize)
This is [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) with orthogonalized bfloat16 safetensor weights, generated with a refined methodology based on that which was described in the preview paper/blog post: '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)' which I encourage you to read to understand more.
## V3.5?
Second try. I felt that the V3 methodology of 70B wasn't well applied, and u/Nexesenex on reddit kinda confirmed my suspicions. So go blame them. :P
This one has only a single layer modified(!) and that seems to have completely eliminated moralizing disclaimers.
I hope you'll find this model better than 70B-V3! As well, this also fixes the tokenizer.
## Hang on, "abliteration"? Orthogonalization? Ablation? What is this?
TL;DR: This model has had certain weights manipulated to "inhibit" the model's ability to express refusal. It is not in any way _guaranteed_ that it won't refuse you or that it will understand your request, and it may still lecture you about ethics/safety, etc. It is tuned in all other respects the same as the original 70B instruct model was, just with the strongest refusal directions orthogonalized out.
**TL;TL;DR;DR: It's uncensored in the purest form I can manage -- no new or changed behaviour in any other respect from the original model.**
As far as "abliteration": it's just a fun play-on-words using the original "ablation" term used in the original paper to refer to removing features, which I made up particularly to differentiate the model from "uncensored" fine-tunes.
Ablate + obliterated = Abliterated
Anyways, orthogonalization and ablation both refer to the same thing here: the refusal feature was "ablated" from the model, and the way it was ablated was via orthogonalization.
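As a toy illustration of what that means (a numpy sketch of the general idea, not the author's abliterator code): given a unit "refusal direction" r, each weight matrix that writes into the residual stream is projected so its outputs have no component along r.

```python
import numpy as np

def orthogonalize_rows(W: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Remove the component along direction r from every output of W.

    Toy weight orthogonalization: after this, W @ x has zero component
    along r for any input x, so this matrix can never "write" feature r.
    """
    r_hat = r / np.linalg.norm(r)
    # W' = (I - r_hat r_hat^T) W : project the output space away from r_hat.
    return W - np.outer(r_hat, r_hat) @ W

# Tiny demo with random weights and a random (hypothetical) refusal direction.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))   # maps a 16-dim input into an 8-dim residual stream
r = rng.normal(size=8)         # hypothetical refusal direction in output space
W_abl = orthogonalize_rows(W, r)

x = rng.normal(size=16)
print(np.dot(W_abl @ x, r / np.linalg.norm(r)))  # ~0: no output along r
```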
## A little more on the methodology, and why this is interesting
To me, ablation (or applying the methodology for the inverse, "augmentation") seems to be good for inducing/removing very specific features that you'd have to spend way too many tokens on encouraging or discouraging in your system prompt.
Instead, you just apply your system prompt in the ablation script against a blank system prompt on the same dataset and orthogonalize for the desired behaviour in the final model weights.
> Why this over fine-tuning?
Ablation is much more surgical in nature whilst also being effectively executed with a _lot_ less data than fine-tuning, which I think is its main advantage.
As well, its most valuable aspect is that it keeps as much of the original model's knowledge and training intact, whilst removing its tendency to behave in one very specific undesirable manner. (In this case, refusing user requests.)
Fine-tuning is still exceptionally useful and the go-to for broad behaviour changes; however, you may be able to get close to your desired behaviour with very few samples using the ablation/augmentation techniques.
It may also be a useful step to add to your model refinement: orthogonalize -> fine-tune or vice-versa.
I haven't really gotten around to exploring this model stacked with fine-tuning; I encourage others to give it a shot if they've got the capacity.
> Okay, fine, but why V3? There's no V2 70B?
Well, I released a V2 a while back for 8B under Cognitive Computations.
It ended up being not worth it to try V2 with 70B, I wanted to refine the model before wasting compute cycles on what might not even be a better model.
I am however quite pleased about this latest methodology, it seems to have induced fewer hallucinations.
So to show that it's a new fancy methodology beyond even that of the 8B V2, I decided to do a Microsoft and double up on my version jump because it's *such* an advancement (or so the excuse went; in actuality it was because too many legacy but actively used Microsoft libraries checked for 'Windows 9' in the OS name to detect Windows 95/98 as one).
## Quirkiness awareness notice
This model may come with interesting quirks, with the methodology being so new. I encourage you to play with the model, and post any quirks you notice in the community tab, as that'll help us further understand what this orthogonalization has in the way of side effects.
If you manage to develop further improvements, please share! This is really the most basic way to use ablation, but there are other possibilities that I believe are as-yet unexplored.
Additionally, feel free to reach out in any way about this. I'm on the Cognitive Computations Discord, I'm watching the Community tab, reach out! I'd love to see this methodology used in other ways, and so would gladly support whoever whenever I can.
**mradermacher/Llama3-13B-lingyang-v1-GGUF** · author: mradermacher · last_modified: 2024-06-04T04:56:59Z · downloads: 49 · likes: 0 · library_name: transformers · pipeline_tag: null · createdAt: 2024-06-04T04:10:40Z
tags: transformers, gguf, mergekit, merge, Llama3, en, base_model:wwe180/Llama3-13B-lingyang-v1, base_model:quantized:wwe180/Llama3-13B-lingyang-v1, license:other, endpoints_compatible, region:us, conversational
---
base_model: wwe180/Llama3-13B-lingyang-v1
language:
- en
library_name: transformers
license:
- other
quantized_by: mradermacher
tags:
- mergekit
- merge
- Llama3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/wwe180/Llama3-13B-lingyang-v1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
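For example, a quant from this repo can be fetched and run with `llama-cpp-python` (the file choice follows the "fast, recommended" note in the table below; the context size is an assumption):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quants listed below.
gguf_path = hf_hub_download(
    repo_id="mradermacher/Llama3-13B-lingyang-v1-GGUF",
    filename="Llama3-13B-lingyang-v1.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # context size is a placeholder
out = llm("Write one sentence about llamas.", max_tokens=64)
print(out["choices"][0]["text"])
```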
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3-13B-lingyang-v1-GGUF/resolve/main/Llama3-13B-lingyang-v1.Q2_K.gguf) | Q2_K | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-13B-lingyang-v1-GGUF/resolve/main/Llama3-13B-lingyang-v1.IQ3_XS.gguf) | IQ3_XS | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-13B-lingyang-v1-GGUF/resolve/main/Llama3-13B-lingyang-v1.Q3_K_S.gguf) | Q3_K_S | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-13B-lingyang-v1-GGUF/resolve/main/Llama3-13B-lingyang-v1.IQ3_S.gguf) | IQ3_S | 6.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama3-13B-lingyang-v1-GGUF/resolve/main/Llama3-13B-lingyang-v1.IQ3_M.gguf) | IQ3_M | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-13B-lingyang-v1-GGUF/resolve/main/Llama3-13B-lingyang-v1.Q3_K_M.gguf) | Q3_K_M | 6.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-13B-lingyang-v1-GGUF/resolve/main/Llama3-13B-lingyang-v1.Q3_K_L.gguf) | Q3_K_L | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-13B-lingyang-v1-GGUF/resolve/main/Llama3-13B-lingyang-v1.IQ4_XS.gguf) | IQ4_XS | 7.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-13B-lingyang-v1-GGUF/resolve/main/Llama3-13B-lingyang-v1.Q4_K_S.gguf) | Q4_K_S | 7.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-13B-lingyang-v1-GGUF/resolve/main/Llama3-13B-lingyang-v1.Q4_K_M.gguf) | Q4_K_M | 8.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-13B-lingyang-v1-GGUF/resolve/main/Llama3-13B-lingyang-v1.Q5_K_S.gguf) | Q5_K_S | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-13B-lingyang-v1-GGUF/resolve/main/Llama3-13B-lingyang-v1.Q5_K_M.gguf) | Q5_K_M | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-13B-lingyang-v1-GGUF/resolve/main/Llama3-13B-lingyang-v1.Q6_K.gguf) | Q6_K | 11.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-13B-lingyang-v1-GGUF/resolve/main/Llama3-13B-lingyang-v1.Q8_0.gguf) | Q8_0 | 14.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
**mssma/ko-solar-10.7b-v0.8** · author: mssma · last_modified: 2024-06-04T04:50:40Z · downloads: 62 · likes: 0 · library_name: transformers · pipeline_tag: text-generation · createdAt: 2024-06-04T04:41:44Z
tags: transformers, safetensors, llama, text-generation, ko, license:apache-2.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
---
library_name: transformers
license: apache-2.0
language:
- ko
---
# usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

path = "mssma/ko-solar-10.7b-v0.8"

# Load in fp16 and let accelerate spread layers across available devices.
model = AutoModelForCausalLM.from_pretrained(
    path,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(path)
```
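A short generation follow-up to the snippet above (the prompt and sampling settings are placeholders, not the author's recommendations):

```python
prompt = "안녕하세요, 자기소개를 해주세요."  # placeholder Korean prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```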
**vaibhavchavan/flan-t5-small-finetuned-xsum** · author: vaibhavchavan · last_modified: 2024-06-04T04:45:04Z · downloads: 110 · likes: 0 · library_name: transformers · pipeline_tag: text2text-generation · createdAt: 2024-05-30T03:20:29Z
tags: transformers, tensorboard, safetensors, t5, text2text-generation, generated_from_trainer, base_model:google/flan-t5-small, base_model:finetune:google/flan-t5-small, license:apache-2.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
---
license: apache-2.0
base_model: google/flan-t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan-t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-small-finetuned-xsum
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 3.5714
- Rouge2: 1.2195
- Rougel: 3.5714
- Rougelsum: 3.5714
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2000
- mixed_precision_training: Native AMP
### Training results
Validation loss was `nan` and every evaluation metric was unchanged across all logged epochs (training loss was logged as `No log` through epoch 499 and as `0.0` from epoch 500 onward):

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----------:|:-------:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log        | 1.0–499.0   | 1–499   | nan             | 3.5714 | 1.2195 | 3.5714 | 3.5714    | 19.0    |
| 0.0           | 500.0–553.0 | 500–553 | nan             | 3.5714 | 1.2195 | 3.5714 | 3.5714    | 19.0    |
| 0.0 | 554.0 | 554 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 555.0 | 555 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 556.0 | 556 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 557.0 | 557 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 558.0 | 558 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 559.0 | 559 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 560.0 | 560 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 561.0 | 561 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 562.0 | 562 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 563.0 | 563 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 564.0 | 564 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 565.0 | 565 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 566.0 | 566 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 567.0 | 567 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 568.0 | 568 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 569.0 | 569 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 570.0 | 570 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 571.0 | 571 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 572.0 | 572 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 573.0 | 573 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 574.0 | 574 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 575.0 | 575 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 576.0 | 576 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 577.0 | 577 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 578.0 | 578 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 579.0 | 579 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 580.0 | 580 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 581.0 | 581 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 582.0 | 582 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 583.0 | 583 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 584.0 | 584 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 585.0 | 585 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 586.0 | 586 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 587.0 | 587 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 588.0 | 588 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 589.0 | 589 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 590.0 | 590 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 591.0 | 591 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 592.0 | 592 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 593.0 | 593 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 594.0 | 594 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 595.0 | 595 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 596.0 | 596 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 597.0 | 597 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 598.0 | 598 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 599.0 | 599 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 600.0 | 600 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 601.0 | 601 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 602.0 | 602 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 603.0 | 603 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 604.0 | 604 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 605.0 | 605 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 606.0 | 606 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 607.0 | 607 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 608.0 | 608 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 609.0 | 609 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 610.0 | 610 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 611.0 | 611 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 612.0 | 612 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 613.0 | 613 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 614.0 | 614 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 615.0 | 615 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 616.0 | 616 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 617.0 | 617 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 618.0 | 618 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 619.0 | 619 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 620.0 | 620 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 621.0 | 621 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 622.0 | 622 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 623.0 | 623 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 624.0 | 624 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 625.0 | 625 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 626.0 | 626 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 627.0 | 627 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 628.0 | 628 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 629.0 | 629 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 630.0 | 630 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 631.0 | 631 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 632.0 | 632 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 633.0 | 633 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 634.0 | 634 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 635.0 | 635 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 636.0 | 636 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 637.0 | 637 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 638.0 | 638 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 639.0 | 639 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 640.0 | 640 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 641.0 | 641 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 642.0 | 642 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 643.0 | 643 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 644.0 | 644 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 645.0 | 645 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 646.0 | 646 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 647.0 | 647 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 648.0 | 648 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 649.0 | 649 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 650.0 | 650 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 651.0 | 651 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 652.0 | 652 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 653.0 | 653 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 654.0 | 654 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 655.0 | 655 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 656.0 | 656 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 657.0 | 657 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 658.0 | 658 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 659.0 | 659 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 660.0 | 660 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 661.0 | 661 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 662.0 | 662 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 663.0 | 663 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 664.0 | 664 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 665.0 | 665 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 666.0 | 666 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 667.0 | 667 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 668.0 | 668 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 669.0 | 669 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 670.0 | 670 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 671.0 | 671 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 672.0 | 672 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 673.0 | 673 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 674.0 | 674 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 675.0 | 675 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 676.0 | 676 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 677.0 | 677 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 678.0 | 678 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 679.0 | 679 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 680.0 | 680 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 681.0 | 681 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 682.0 | 682 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 683.0 | 683 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 684.0 | 684 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 685.0 | 685 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 686.0 | 686 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 687.0 | 687 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 688.0 | 688 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 689.0 | 689 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 690.0 | 690 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 691.0 | 691 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 692.0 | 692 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 693.0 | 693 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 694.0 | 694 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 695.0 | 695 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 696.0 | 696 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 697.0 | 697 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 698.0 | 698 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 699.0 | 699 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 700.0 | 700 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 701.0 | 701 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 702.0 | 702 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 703.0 | 703 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 704.0 | 704 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 705.0 | 705 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 706.0 | 706 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 707.0 | 707 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 708.0 | 708 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 709.0 | 709 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 710.0 | 710 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 711.0 | 711 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 712.0 | 712 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 713.0 | 713 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 714.0 | 714 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 715.0 | 715 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 716.0 | 716 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 717.0 | 717 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 718.0 | 718 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 719.0 | 719 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 720.0 | 720 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 721.0 | 721 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 722.0 | 722 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 723.0 | 723 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 724.0 | 724 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 725.0 | 725 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 726.0 | 726 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 727.0 | 727 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 728.0 | 728 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 729.0 | 729 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 730.0 | 730 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 731.0 | 731 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 732.0 | 732 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 733.0 | 733 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 734.0 | 734 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 735.0 | 735 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 736.0 | 736 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 737.0 | 737 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 738.0 | 738 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 739.0 | 739 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 740.0 | 740 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 741.0 | 741 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 742.0 | 742 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 743.0 | 743 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 744.0 | 744 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 745.0 | 745 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 746.0 | 746 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 747.0 | 747 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 748.0 | 748 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 749.0 | 749 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 750.0 | 750 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 751.0 | 751 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 752.0 | 752 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 753.0 | 753 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 754.0 | 754 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 755.0 | 755 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 756.0 | 756 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 757.0 | 757 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 758.0 | 758 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 759.0 | 759 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 760.0 | 760 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 761.0 | 761 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 762.0 | 762 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 763.0 | 763 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 764.0 | 764 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 765.0 | 765 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 766.0 | 766 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 767.0 | 767 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 768.0 | 768 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 769.0 | 769 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 770.0 | 770 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 771.0 | 771 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 772.0 | 772 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 773.0 | 773 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 774.0 | 774 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 775.0 | 775 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 776.0 | 776 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 777.0 | 777 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 778.0 | 778 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 779.0 | 779 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 780.0 | 780 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 781.0 | 781 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 782.0 | 782 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 783.0 | 783 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 784.0 | 784 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 785.0 | 785 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 786.0 | 786 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 787.0 | 787 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 788.0 | 788 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 789.0 | 789 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 790.0 | 790 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 791.0 | 791 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 792.0 | 792 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 793.0 | 793 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 794.0 | 794 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 795.0 | 795 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 796.0 | 796 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 797.0 | 797 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 798.0 | 798 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 799.0 | 799 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 800.0 | 800 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 801.0 | 801 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 802.0 | 802 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 803.0 | 803 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 804.0 | 804 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 805.0 | 805 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 806.0 | 806 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 807.0 | 807 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 808.0 | 808 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 809.0 | 809 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 810.0 | 810 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 811.0 | 811 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 812.0 | 812 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 813.0 | 813 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 814.0 | 814 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 815.0 | 815 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 816.0 | 816 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 817.0 | 817 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 818.0 | 818 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 819.0 | 819 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 820.0 | 820 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 821.0 | 821 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 822.0 | 822 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 823.0 | 823 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 824.0 | 824 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 825.0 | 825 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 826.0 | 826 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 827.0 | 827 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 828.0 | 828 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 829.0 | 829 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 830.0 | 830 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 831.0 | 831 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 832.0 | 832 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 833.0 | 833 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 834.0 | 834 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 835.0 | 835 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 836.0 | 836 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 837.0 | 837 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 838.0 | 838 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 839.0 | 839 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 840.0 | 840 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 841.0 | 841 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 842.0 | 842 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 843.0 | 843 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 844.0 | 844 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 845.0 | 845 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 846.0 | 846 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 847.0 | 847 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 848.0 | 848 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 849.0 | 849 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 850.0 | 850 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 851.0 | 851 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 852.0 | 852 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 853.0 | 853 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 854.0 | 854 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 855.0 | 855 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 856.0 | 856 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 857.0 | 857 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 858.0 | 858 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 859.0 | 859 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 860.0 | 860 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 861.0 | 861 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 862.0 | 862 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 863.0 | 863 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 864.0 | 864 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 865.0 | 865 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 866.0 | 866 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 867.0 | 867 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 868.0 | 868 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 869.0 | 869 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 870.0 | 870 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 871.0 | 871 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 872.0 | 872 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 873.0 | 873 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 874.0 | 874 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 875.0 | 875 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 876.0 | 876 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 877.0 | 877 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 878.0 | 878 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 879.0 | 879 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 880.0 | 880 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 881.0 | 881 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 882.0 | 882 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 883.0 | 883 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 884.0 | 884 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 885.0 | 885 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 886.0 | 886 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 887.0 | 887 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 888.0 | 888 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 889.0 | 889 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 890.0 | 890 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 891.0 | 891 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 892.0 | 892 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 893.0 | 893 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 894.0 | 894 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 895.0 | 895 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 896.0 | 896 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 897.0 | 897 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 898.0 | 898 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 899.0 | 899 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 900.0 | 900 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 901.0 | 901 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 902.0 | 902 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 903.0 | 903 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 904.0 | 904 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 905.0 | 905 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 906.0 | 906 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 907.0 | 907 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 908.0 | 908 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 909.0 | 909 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 910.0 | 910 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 911.0 | 911 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 912.0 | 912 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 913.0 | 913 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 914.0 | 914 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 915.0 | 915 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 916.0 | 916 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 917.0 | 917 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 918.0 | 918 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 919.0 | 919 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 920.0 | 920 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 921.0 | 921 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 922.0 | 922 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 923.0 | 923 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 924.0 | 924 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 925.0 | 925 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 926.0 | 926 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 927.0 | 927 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 928.0 | 928 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 929.0 | 929 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 930.0 | 930 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 931.0 | 931 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 932.0 | 932 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 933.0 | 933 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 934.0 | 934 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 935.0 | 935 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 936.0 | 936 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 937.0 | 937 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 938.0 | 938 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 939.0 | 939 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 940.0 | 940 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 941.0 | 941 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 942.0 | 942 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 943.0 | 943 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 944.0 | 944 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 945.0 | 945 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 946.0 | 946 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 947.0 | 947 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 948.0 | 948 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 949.0 | 949 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 950.0 | 950 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 951.0 | 951 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 952.0 | 952 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 953.0 | 953 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 954.0 | 954 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 955.0 | 955 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 956.0 | 956 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 957.0 | 957 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 958.0 | 958 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 959.0 | 959 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 960.0 | 960 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 961.0 | 961 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 962.0 | 962 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 963.0 | 963 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 964.0 | 964 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 965.0 | 965 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 966.0 | 966 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 967.0 | 967 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 968.0 | 968 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 969.0 | 969 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 970.0 | 970 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 971.0 | 971 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 972.0 | 972 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 973.0 | 973 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 974.0 | 974 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 975.0 | 975 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 976.0 | 976 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 977.0 | 977 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 978.0 | 978 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 979.0 | 979 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 980.0 | 980 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 981.0 | 981 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 982.0 | 982 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 983.0 | 983 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 984.0 | 984 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 985.0 | 985 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 986.0 | 986 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 987.0 | 987 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 988.0 | 988 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 989.0 | 989 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 990.0 | 990 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 991.0 | 991 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 992.0 | 992 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 993.0 | 993 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 994.0 | 994 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 995.0 | 995 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 996.0 | 996 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 997.0 | 997 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 998.0 | 998 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 999.0 | 999 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1000.0 | 1000 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1001.0 | 1001 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1002.0 | 1002 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1003.0 | 1003 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1004.0 | 1004 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1005.0 | 1005 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1006.0 | 1006 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1007.0 | 1007 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1008.0 | 1008 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1009.0 | 1009 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1010.0 | 1010 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1011.0 | 1011 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1012.0 | 1012 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1013.0 | 1013 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1014.0 | 1014 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1015.0 | 1015 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1016.0 | 1016 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1017.0 | 1017 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1018.0 | 1018 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1019.0 | 1019 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1020.0 | 1020 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1021.0 | 1021 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1022.0 | 1022 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1023.0 | 1023 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1024.0 | 1024 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1025.0 | 1025 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1026.0 | 1026 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1027.0 | 1027 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1028.0 | 1028 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1029.0 | 1029 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1030.0 | 1030 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1031.0 | 1031 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1032.0 | 1032 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1033.0 | 1033 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1034.0 | 1034 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1035.0 | 1035 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1036.0 | 1036 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1037.0 | 1037 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1038.0 | 1038 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1039.0 | 1039 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1040.0 | 1040 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1041.0 | 1041 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1042.0 | 1042 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1043.0 | 1043 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1044.0 | 1044 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1045.0 | 1045 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1046.0 | 1046 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1047.0 | 1047 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1048.0 | 1048 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1049.0 | 1049 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1050.0 | 1050 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1051.0 | 1051 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1052.0 | 1052 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1053.0 | 1053 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1054.0 | 1054 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1055.0 | 1055 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1056.0 | 1056 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1057.0 | 1057 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1058.0 | 1058 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1059.0 | 1059 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1060.0 | 1060 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1061.0 | 1061 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1062.0 | 1062 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1063.0 | 1063 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1064.0 | 1064 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1065.0 | 1065 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1066.0 | 1066 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1067.0 | 1067 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1068.0 | 1068 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1069.0 | 1069 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1070.0 | 1070 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1071.0 | 1071 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1072.0 | 1072 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1073.0 | 1073 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1074.0 | 1074 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1075.0 | 1075 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1076.0 | 1076 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1077.0 | 1077 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1078.0 | 1078 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1079.0 | 1079 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1080.0 | 1080 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1081.0 | 1081 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1082.0 | 1082 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1083.0 | 1083 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1084.0 | 1084 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1085.0 | 1085 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1086.0 | 1086 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1087.0 | 1087 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1088.0 | 1088 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1089.0 | 1089 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1090.0 | 1090 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1091.0 | 1091 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1092.0 | 1092 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1093.0 | 1093 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1094.0 | 1094 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1095.0 | 1095 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1096.0 | 1096 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1097.0 | 1097 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1098.0 | 1098 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1099.0 | 1099 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1100.0 | 1100 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1101.0 | 1101 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1102.0 | 1102 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1103.0 | 1103 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1104.0 | 1104 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1105.0 | 1105 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1106.0 | 1106 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1107.0 | 1107 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1108.0 | 1108 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1109.0 | 1109 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1110.0 | 1110 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1111.0 | 1111 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1112.0 | 1112 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1113.0 | 1113 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1114.0 | 1114 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1115.0 | 1115 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1116.0 | 1116 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1117.0 | 1117 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1118.0 | 1118 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1119.0 | 1119 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1120.0 | 1120 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1121.0 | 1121 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1122.0 | 1122 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1123.0 | 1123 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1124.0 | 1124 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1125.0 | 1125 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1126.0 | 1126 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1127.0 | 1127 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1128.0 | 1128 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1129.0 | 1129 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1130.0 | 1130 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1131.0 | 1131 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1132.0 | 1132 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1133.0 | 1133 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1134.0 | 1134 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1135.0 | 1135 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1136.0 | 1136 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1137.0 | 1137 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1138.0 | 1138 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1139.0 | 1139 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1140.0 | 1140 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1141.0 | 1141 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1142.0 | 1142 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1143.0 | 1143 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1144.0 | 1144 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1145.0 | 1145 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1146.0 | 1146 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1147.0 | 1147 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1148.0 | 1148 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1149.0 | 1149 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1150.0 | 1150 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1151.0 | 1151 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1152.0 | 1152 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1153.0 | 1153 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1154.0 | 1154 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1155.0 | 1155 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1156.0 | 1156 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1157.0 | 1157 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1158.0 | 1158 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1159.0 | 1159 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1160.0 | 1160 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1161.0 | 1161 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1162.0 | 1162 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1163.0 | 1163 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1164.0 | 1164 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1165.0 | 1165 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1166.0 | 1166 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1167.0 | 1167 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1168.0 | 1168 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1169.0 | 1169 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1170.0 | 1170 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1171.0 | 1171 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1172.0 | 1172 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1173.0 | 1173 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1174.0 | 1174 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1175.0 | 1175 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1176.0 | 1176 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1177.0 | 1177 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1178.0 | 1178 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1179.0 | 1179 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1180.0 | 1180 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1181.0 | 1181 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1182.0 | 1182 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1183.0 | 1183 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1184.0 | 1184 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1185.0 | 1185 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1186.0 | 1186 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1187.0 | 1187 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1188.0 | 1188 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1189.0 | 1189 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1190.0 | 1190 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1191.0 | 1191 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1192.0 | 1192 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1193.0 | 1193 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1194.0 | 1194 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1195.0 | 1195 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1196.0 | 1196 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1197.0 | 1197 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1198.0 | 1198 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1199.0 | 1199 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1200.0 | 1200 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1201.0 | 1201 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1202.0 | 1202 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1203.0 | 1203 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1204.0 | 1204 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1205.0 | 1205 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1206.0 | 1206 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1207.0 | 1207 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1208.0 | 1208 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1209.0 | 1209 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1210.0 | 1210 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1211.0 | 1211 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1212.0 | 1212 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1213.0 | 1213 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1214.0 | 1214 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1215.0 | 1215 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1216.0 | 1216 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1217.0 | 1217 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1218.0 | 1218 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1219.0 | 1219 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1220.0 | 1220 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1221.0 | 1221 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1222.0 | 1222 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1223.0 | 1223 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1224.0 | 1224 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1225.0 | 1225 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1226.0 | 1226 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1227.0 | 1227 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1228.0 | 1228 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1229.0 | 1229 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1230.0 | 1230 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1231.0 | 1231 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1232.0 | 1232 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1233.0 | 1233 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1234.0 | 1234 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1235.0 | 1235 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1236.0 | 1236 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1237.0 | 1237 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1238.0 | 1238 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1239.0 | 1239 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1240.0 | 1240 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1241.0 | 1241 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1242.0 | 1242 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1243.0 | 1243 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1244.0 | 1244 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1245.0 | 1245 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1246.0 | 1246 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1247.0 | 1247 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1248.0 | 1248 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1249.0 | 1249 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1250.0 | 1250 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1251.0 | 1251 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1252.0 | 1252 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1253.0 | 1253 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1254.0 | 1254 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1255.0 | 1255 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1256.0 | 1256 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1257.0 | 1257 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1258.0 | 1258 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1259.0 | 1259 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1260.0 | 1260 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1261.0 | 1261 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1262.0 | 1262 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1263.0 | 1263 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1264.0 | 1264 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1265.0 | 1265 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1266.0 | 1266 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1267.0 | 1267 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1268.0 | 1268 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1269.0 | 1269 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1270.0 | 1270 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1271.0 | 1271 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1272.0 | 1272 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1273.0 | 1273 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1274.0 | 1274 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1275.0 | 1275 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1276.0 | 1276 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1277.0 | 1277 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1278.0 | 1278 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1279.0 | 1279 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1280.0 | 1280 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1281.0 | 1281 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1282.0 | 1282 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1283.0 | 1283 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1284.0 | 1284 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1285.0 | 1285 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1286.0 | 1286 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1287.0 | 1287 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1288.0 | 1288 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1289.0 | 1289 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1290.0 | 1290 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1291.0 | 1291 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1292.0 | 1292 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1293.0 | 1293 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1294.0 | 1294 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1295.0 | 1295 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1296.0 | 1296 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1297.0 | 1297 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1298.0 | 1298 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1299.0 | 1299 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1300.0 | 1300 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1301.0 | 1301 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1302.0 | 1302 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1303.0 | 1303 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1304.0 | 1304 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1305.0 | 1305 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1306.0 | 1306 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1307.0 | 1307 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1308.0 | 1308 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1309.0 | 1309 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1310.0 | 1310 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1311.0 | 1311 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1312.0 | 1312 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1313.0 | 1313 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1314.0 | 1314 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1315.0 | 1315 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1316.0 | 1316 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1317.0 | 1317 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1318.0 | 1318 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1319.0 | 1319 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1320.0 | 1320 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1321.0 | 1321 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1322.0 | 1322 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1323.0 | 1323 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1324.0 | 1324 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1325.0 | 1325 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1326.0 | 1326 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1327.0 | 1327 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1328.0 | 1328 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1329.0 | 1329 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1330.0 | 1330 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1331.0 | 1331 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1332.0 | 1332 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1333.0 | 1333 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1334.0 | 1334 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1335.0 | 1335 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1336.0 | 1336 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1337.0 | 1337 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1338.0 | 1338 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1339.0 | 1339 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1340.0 | 1340 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1341.0 | 1341 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1342.0 | 1342 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1343.0 | 1343 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1344.0 | 1344 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1345.0 | 1345 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1346.0 | 1346 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1347.0 | 1347 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1348.0 | 1348 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1349.0 | 1349 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1350.0 | 1350 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1351.0 | 1351 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1352.0 | 1352 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1353.0 | 1353 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1354.0 | 1354 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1355.0 | 1355 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1356.0 | 1356 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1357.0 | 1357 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1358.0 | 1358 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1359.0 | 1359 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1360.0 | 1360 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1361.0 | 1361 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1362.0 | 1362 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1363.0 | 1363 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1364.0 | 1364 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1365.0 | 1365 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1366.0 | 1366 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1367.0 | 1367 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1368.0 | 1368 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1369.0 | 1369 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1370.0 | 1370 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1371.0 | 1371 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1372.0 | 1372 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1373.0 | 1373 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1374.0 | 1374 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1375.0 | 1375 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1376.0 | 1376 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1377.0 | 1377 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1378.0 | 1378 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1379.0 | 1379 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1380.0 | 1380 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1381.0 | 1381 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1382.0 | 1382 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1383.0 | 1383 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1384.0 | 1384 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1385.0 | 1385 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1386.0 | 1386 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1387.0 | 1387 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1388.0 | 1388 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1389.0 | 1389 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1390.0 | 1390 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1391.0 | 1391 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1392.0 | 1392 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1393.0 | 1393 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1394.0 | 1394 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1395.0 | 1395 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1396.0 | 1396 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1397.0 | 1397 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1398.0 | 1398 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1399.0 | 1399 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1400.0 | 1400 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1401.0 | 1401 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1402.0 | 1402 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1403.0 | 1403 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1404.0 | 1404 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1405.0 | 1405 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1406.0 | 1406 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1407.0 | 1407 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1408.0 | 1408 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1409.0 | 1409 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1410.0 | 1410 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1411.0 | 1411 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1412.0 | 1412 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1413.0 | 1413 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1414.0 | 1414 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1415.0 | 1415 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1416.0 | 1416 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1417.0 | 1417 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1418.0 | 1418 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1419.0 | 1419 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1420.0 | 1420 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1421.0 | 1421 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1422.0 | 1422 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1423.0 | 1423 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1424.0 | 1424 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1425.0 | 1425 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1426.0 | 1426 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1427.0 | 1427 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1428.0 | 1428 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1429.0 | 1429 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1430.0 | 1430 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1431.0 | 1431 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1432.0 | 1432 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1433.0 | 1433 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1434.0 | 1434 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1435.0 | 1435 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1436.0 | 1436 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1437.0 | 1437 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1438.0 | 1438 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1439.0 | 1439 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1440.0 | 1440 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1441.0 | 1441 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1442.0 | 1442 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1443.0 | 1443 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1444.0 | 1444 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1445.0 | 1445 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1446.0 | 1446 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1447.0 | 1447 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1448.0 | 1448 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1449.0 | 1449 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1450.0 | 1450 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1451.0 | 1451 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1452.0 | 1452 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1453.0 | 1453 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1454.0 | 1454 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1455.0 | 1455 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1456.0 | 1456 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1457.0 | 1457 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1458.0 | 1458 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1459.0 | 1459 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1460.0 | 1460 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1461.0 | 1461 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1462.0 | 1462 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1463.0 | 1463 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1464.0 | 1464 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1465.0 | 1465 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1466.0 | 1466 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1467.0 | 1467 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1468.0 | 1468 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1469.0 | 1469 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1470.0 | 1470 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1471.0 | 1471 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1472.0 | 1472 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1473.0 | 1473 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1474.0 | 1474 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1475.0 | 1475 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1476.0 | 1476 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1477.0 | 1477 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1478.0 | 1478 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1479.0 | 1479 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1480.0 | 1480 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1481.0 | 1481 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1482.0 | 1482 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1483.0 | 1483 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1484.0 | 1484 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1485.0 | 1485 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1486.0 | 1486 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1487.0 | 1487 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1488.0 | 1488 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1489.0 | 1489 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1490.0 | 1490 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1491.0 | 1491 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1492.0 | 1492 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1493.0 | 1493 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1494.0 | 1494 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1495.0 | 1495 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1496.0 | 1496 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1497.0 | 1497 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1498.0 | 1498 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1499.0 | 1499 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1500.0 | 1500 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1501.0 | 1501 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1502.0 | 1502 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1503.0 | 1503 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1504.0 | 1504 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1505.0 | 1505 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1506.0 | 1506 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1507.0 | 1507 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1508.0 | 1508 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1509.0 | 1509 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1510.0 | 1510 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1511.0 | 1511 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1512.0 | 1512 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1513.0 | 1513 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1514.0 | 1514 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1515.0 | 1515 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1516.0 | 1516 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1517.0 | 1517 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1518.0 | 1518 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1519.0 | 1519 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1520.0 | 1520 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1521.0 | 1521 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1522.0 | 1522 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1523.0 | 1523 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1524.0 | 1524 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1525.0 | 1525 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1526.0 | 1526 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1527.0 | 1527 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1528.0 | 1528 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1529.0 | 1529 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1530.0 | 1530 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1531.0 | 1531 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1532.0 | 1532 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1533.0 | 1533 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1534.0 | 1534 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1535.0 | 1535 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1536.0 | 1536 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1537.0 | 1537 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1538.0 | 1538 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1539.0 | 1539 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1540.0 | 1540 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1541.0 | 1541 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1542.0 | 1542 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1543.0 | 1543 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1544.0 | 1544 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1545.0 | 1545 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1546.0 | 1546 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1547.0 | 1547 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1548.0 | 1548 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1549.0 | 1549 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1550.0 | 1550 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1551.0 | 1551 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1552.0 | 1552 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1553.0 | 1553 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1554.0 | 1554 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1555.0 | 1555 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1556.0 | 1556 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1557.0 | 1557 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1558.0 | 1558 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1559.0 | 1559 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1560.0 | 1560 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1561.0 | 1561 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1562.0 | 1562 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1563.0 | 1563 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1564.0 | 1564 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1565.0 | 1565 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1566.0 | 1566 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1567.0 | 1567 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1568.0 | 1568 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1569.0 | 1569 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1570.0 | 1570 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1571.0 | 1571 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1572.0 | 1572 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1573.0 | 1573 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1574.0 | 1574 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1575.0 | 1575 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1576.0 | 1576 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1577.0 | 1577 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1578.0 | 1578 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1579.0 | 1579 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1580.0 | 1580 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1581.0 | 1581 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1582.0 | 1582 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1583.0 | 1583 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1584.0 | 1584 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1585.0 | 1585 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1586.0 | 1586 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1587.0 | 1587 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1588.0 | 1588 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1589.0 | 1589 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1590.0 | 1590 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1591.0 | 1591 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1592.0 | 1592 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1593.0 | 1593 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1594.0 | 1594 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1595.0 | 1595 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1596.0 | 1596 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1597.0 | 1597 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1598.0 | 1598 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1599.0 | 1599 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1600.0 | 1600 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1601.0 | 1601 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1602.0 | 1602 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1603.0 | 1603 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1604.0 | 1604 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1605.0 | 1605 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1606.0 | 1606 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1607.0 | 1607 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1608.0 | 1608 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1609.0 | 1609 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1610.0 | 1610 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1611.0 | 1611 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1612.0 | 1612 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1613.0 | 1613 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1614.0 | 1614 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1615.0 | 1615 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1616.0 | 1616 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1617.0 | 1617 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1618.0 | 1618 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1619.0 | 1619 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1620.0 | 1620 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1621.0 | 1621 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1622.0 | 1622 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1623.0 | 1623 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1624.0 | 1624 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1625.0 | 1625 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1626.0 | 1626 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1627.0 | 1627 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1628.0 | 1628 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1629.0 | 1629 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1630.0 | 1630 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1631.0 | 1631 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1632.0 | 1632 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1633.0 | 1633 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1634.0 | 1634 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1635.0 | 1635 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1636.0 | 1636 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1637.0 | 1637 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1638.0 | 1638 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1639.0 | 1639 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1640.0 | 1640 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1641.0 | 1641 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1642.0 | 1642 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1643.0 | 1643 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1644.0 | 1644 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1645.0 | 1645 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1646.0 | 1646 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1647.0 | 1647 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1648.0 | 1648 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1649.0 | 1649 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1650.0 | 1650 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1651.0 | 1651 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1652.0 | 1652 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1653.0 | 1653 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1654.0 | 1654 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1655.0 | 1655 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1656.0 | 1656 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1657.0 | 1657 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1658.0 | 1658 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1659.0 | 1659 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1660.0 | 1660 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1661.0 | 1661 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1662.0 | 1662 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1663.0 | 1663 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1664.0 | 1664 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1665.0 | 1665 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1666.0 | 1666 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1667.0 | 1667 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1668.0 | 1668 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1669.0 | 1669 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1670.0 | 1670 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1671.0 | 1671 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1672.0 | 1672 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1673.0 | 1673 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1674.0 | 1674 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1675.0 | 1675 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1676.0 | 1676 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1677.0 | 1677 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1678.0 | 1678 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1679.0 | 1679 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1680.0 | 1680 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1681.0 | 1681 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1682.0 | 1682 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1683.0 | 1683 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1684.0 | 1684 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1685.0 | 1685 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1686.0 | 1686 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1687.0 | 1687 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1688.0 | 1688 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1689.0 | 1689 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1690.0 | 1690 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1691.0 | 1691 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1692.0 | 1692 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1693.0 | 1693 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1694.0 | 1694 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1695.0 | 1695 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1696.0 | 1696 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1697.0 | 1697 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1698.0 | 1698 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1699.0 | 1699 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1700.0 | 1700 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1701.0 | 1701 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1702.0 | 1702 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1703.0 | 1703 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1704.0 | 1704 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1705.0 | 1705 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1706.0 | 1706 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1707.0 | 1707 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1708.0 | 1708 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1709.0 | 1709 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1710.0 | 1710 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1711.0 | 1711 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1712.0 | 1712 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1713.0 | 1713 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1714.0 | 1714 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1715.0 | 1715 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1716.0 | 1716 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1717.0 | 1717 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1718.0 | 1718 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1719.0 | 1719 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1720.0 | 1720 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1721.0 | 1721 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1722.0 | 1722 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1723.0 | 1723 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1724.0 | 1724 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1725.0 | 1725 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1726.0 | 1726 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1727.0 | 1727 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1728.0 | 1728 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1729.0 | 1729 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1730.0 | 1730 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1731.0 | 1731 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1732.0 | 1732 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1733.0 | 1733 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1734.0 | 1734 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1735.0 | 1735 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1736.0 | 1736 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1737.0 | 1737 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1738.0 | 1738 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1739.0 | 1739 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1740.0 | 1740 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1741.0 | 1741 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1742.0 | 1742 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1743.0 | 1743 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1744.0 | 1744 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1745.0 | 1745 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1746.0 | 1746 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1747.0 | 1747 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1748.0 | 1748 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1749.0 | 1749 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1750.0 | 1750 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1751.0 | 1751 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1752.0 | 1752 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1753.0 | 1753 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1754.0 | 1754 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1755.0 | 1755 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1756.0 | 1756 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1757.0 | 1757 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1758.0 | 1758 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1759.0 | 1759 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1760.0 | 1760 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1761.0 | 1761 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1762.0 | 1762 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1763.0 | 1763 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1764.0 | 1764 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1765.0 | 1765 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1766.0 | 1766 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1767.0 | 1767 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1768.0 | 1768 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1769.0 | 1769 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1770.0 | 1770 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1771.0 | 1771 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1772.0 | 1772 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1773.0 | 1773 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1774.0 | 1774 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1775.0 | 1775 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1776.0 | 1776 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1777.0 | 1777 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1778.0 | 1778 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1779.0 | 1779 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1780.0 | 1780 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1781.0 | 1781 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1782.0 | 1782 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1783.0 | 1783 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1784.0 | 1784 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1785.0 | 1785 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1786.0 | 1786 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1787.0 | 1787 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1788.0 | 1788 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1789.0 | 1789 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1790.0 | 1790 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1791.0 | 1791 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1792.0 | 1792 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1793.0 | 1793 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1794.0 | 1794 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1795.0 | 1795 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1796.0 | 1796 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1797.0 | 1797 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1798.0 | 1798 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1799.0 | 1799 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1800.0 | 1800 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1801.0 | 1801 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1802.0 | 1802 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1803.0 | 1803 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1804.0 | 1804 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1805.0 | 1805 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1806.0 | 1806 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1807.0 | 1807 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1808.0 | 1808 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1809.0 | 1809 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1810.0 | 1810 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1811.0 | 1811 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1812.0 | 1812 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1813.0 | 1813 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1814.0 | 1814 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1815.0 | 1815 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1816.0 | 1816 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1817.0 | 1817 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1818.0 | 1818 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1819.0 | 1819 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1820.0 | 1820 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1821.0 | 1821 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1822.0 | 1822 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1823.0 | 1823 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1824.0 | 1824 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1825.0 | 1825 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1826.0 | 1826 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1827.0 | 1827 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1828.0 | 1828 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1829.0 | 1829 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1830.0 | 1830 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1831.0 | 1831 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1832.0 | 1832 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1833.0 | 1833 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1834.0 | 1834 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1835.0 | 1835 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1836.0 | 1836 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1837.0 | 1837 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1838.0 | 1838 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1839.0 | 1839 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1840.0 | 1840 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1841.0 | 1841 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1842.0 | 1842 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1843.0 | 1843 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1844.0 | 1844 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1845.0 | 1845 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1846.0 | 1846 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1847.0 | 1847 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1848.0 | 1848 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1849.0 | 1849 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1850.0 | 1850 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1851.0 | 1851 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1852.0 | 1852 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1853.0 | 1853 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1854.0 | 1854 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1855.0 | 1855 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1856.0 | 1856 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1857.0 | 1857 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1858.0 | 1858 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1859.0 | 1859 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1860.0 | 1860 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1861.0 | 1861 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1862.0 | 1862 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1863.0 | 1863 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1864.0 | 1864 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1865.0 | 1865 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1866.0 | 1866 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1867.0 | 1867 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1868.0 | 1868 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1869.0 | 1869 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1870.0 | 1870 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1871.0 | 1871 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1872.0 | 1872 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1873.0 | 1873 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1874.0 | 1874 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1875.0 | 1875 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1876.0 | 1876 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1877.0 | 1877 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1878.0 | 1878 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1879.0 | 1879 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1880.0 | 1880 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1881.0 | 1881 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1882.0 | 1882 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1883.0 | 1883 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1884.0 | 1884 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1885.0 | 1885 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1886.0 | 1886 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1887.0 | 1887 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1888.0 | 1888 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1889.0 | 1889 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1890.0 | 1890 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1891.0 | 1891 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1892.0 | 1892 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1893.0 | 1893 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1894.0 | 1894 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1895.0 | 1895 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1896.0 | 1896 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1897.0 | 1897 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1898.0 | 1898 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1899.0 | 1899 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1900.0 | 1900 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1901.0 | 1901 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1902.0 | 1902 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1903.0 | 1903 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1904.0 | 1904 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1905.0 | 1905 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1906.0 | 1906 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1907.0 | 1907 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1908.0 | 1908 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1909.0 | 1909 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1910.0 | 1910 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1911.0 | 1911 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1912.0 | 1912 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1913.0 | 1913 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1914.0 | 1914 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1915.0 | 1915 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1916.0 | 1916 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1917.0 | 1917 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1918.0 | 1918 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1919.0 | 1919 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1920.0 | 1920 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1921.0 | 1921 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1922.0 | 1922 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1923.0 | 1923 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1924.0 | 1924 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1925.0 | 1925 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1926.0 | 1926 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1927.0 | 1927 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1928.0 | 1928 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1929.0 | 1929 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1930.0 | 1930 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1931.0 | 1931 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1932.0 | 1932 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1933.0 | 1933 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1934.0 | 1934 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1935.0 | 1935 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1936.0 | 1936 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1937.0 | 1937 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1938.0 | 1938 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1939.0 | 1939 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1940.0 | 1940 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1941.0 | 1941 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1942.0 | 1942 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1943.0 | 1943 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1944.0 | 1944 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1945.0 | 1945 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1946.0 | 1946 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1947.0 | 1947 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1948.0 | 1948 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1949.0 | 1949 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1950.0 | 1950 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1951.0 | 1951 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1952.0 | 1952 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1953.0 | 1953 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1954.0 | 1954 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1955.0 | 1955 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1956.0 | 1956 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1957.0 | 1957 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1958.0 | 1958 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1959.0 | 1959 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1960.0 | 1960 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1961.0 | 1961 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1962.0 | 1962 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1963.0 | 1963 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1964.0 | 1964 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1965.0 | 1965 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1966.0 | 1966 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1967.0 | 1967 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1968.0 | 1968 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1969.0 | 1969 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1970.0 | 1970 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1971.0 | 1971 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1972.0 | 1972 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1973.0 | 1973 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1974.0 | 1974 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1975.0 | 1975 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1976.0 | 1976 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1977.0 | 1977 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1978.0 | 1978 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1979.0 | 1979 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1980.0 | 1980 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1981.0 | 1981 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1982.0 | 1982 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1983.0 | 1983 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1984.0 | 1984 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1985.0 | 1985 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1986.0 | 1986 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1987.0 | 1987 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1988.0 | 1988 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1989.0 | 1989 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1990.0 | 1990 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1991.0 | 1991 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1992.0 | 1992 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1993.0 | 1993 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1994.0 | 1994 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1995.0 | 1995 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1996.0 | 1996 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1997.0 | 1997 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1998.0 | 1998 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 1999.0 | 1999 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
| 0.0 | 2000.0 | 2000 | nan | 3.5714 | 1.2195 | 3.5714 | 3.5714 | 19.0 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
mradermacher/Machroom-3B-model_stock-GGUF
|
mradermacher
| 2024-06-04T04:36:07Z | 6 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-04T04:25:44Z |
---
base_model: DreadPoor/Machroom-3B-model_stock
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DreadPoor/Machroom-3B-model_stock
<!-- provided-files -->
Weighted/imatrix quants do not appear to be available (from me) at this time. If they have not shown up a week or so after the static ones, I have probably not planned them; feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
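For a quick start, here is a minimal, untested sketch that downloads one of the quants listed below and runs it with `llama-cpp-python`. The quant choice (Q4_K_M), context size, and prompt are illustrative assumptions, not recommendations from the quantizer.

```python
# Sketch: fetch a quant from this repo and generate with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Machroom-3B-model_stock-GGUF",
    filename="Machroom-3B-model_stock.Q4_K_M.gguf",  # see the table below
)
llm = Llama(model_path=path, n_ctx=2048)  # context size is an assumption
out = llm("Q: What is a GGUF file?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```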
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Machroom-3B-model_stock-GGUF/resolve/main/Machroom-3B-model_stock.Q2_K.gguf) | Q2_K | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Machroom-3B-model_stock-GGUF/resolve/main/Machroom-3B-model_stock.IQ3_XS.gguf) | IQ3_XS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Machroom-3B-model_stock-GGUF/resolve/main/Machroom-3B-model_stock.IQ3_S.gguf) | IQ3_S | 1.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Machroom-3B-model_stock-GGUF/resolve/main/Machroom-3B-model_stock.Q3_K_S.gguf) | Q3_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Machroom-3B-model_stock-GGUF/resolve/main/Machroom-3B-model_stock.IQ3_M.gguf) | IQ3_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Machroom-3B-model_stock-GGUF/resolve/main/Machroom-3B-model_stock.Q3_K_M.gguf) | Q3_K_M | 1.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Machroom-3B-model_stock-GGUF/resolve/main/Machroom-3B-model_stock.Q3_K_L.gguf) | Q3_K_L | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Machroom-3B-model_stock-GGUF/resolve/main/Machroom-3B-model_stock.IQ4_XS.gguf) | IQ4_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Machroom-3B-model_stock-GGUF/resolve/main/Machroom-3B-model_stock.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Machroom-3B-model_stock-GGUF/resolve/main/Machroom-3B-model_stock.Q4_K_M.gguf) | Q4_K_M | 1.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Machroom-3B-model_stock-GGUF/resolve/main/Machroom-3B-model_stock.Q5_K_S.gguf) | Q5_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Machroom-3B-model_stock-GGUF/resolve/main/Machroom-3B-model_stock.Q5_K_M.gguf) | Q5_K_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Machroom-3B-model_stock-GGUF/resolve/main/Machroom-3B-model_stock.Q6_K.gguf) | Q6_K | 2.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Machroom-3B-model_stock-GGUF/resolve/main/Machroom-3B-model_stock.Q8_0.gguf) | Q8_0 | 3.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Machroom-3B-model_stock-GGUF/resolve/main/Machroom-3B-model_stock.f16.gguf) | f16 | 5.7 | 16 bpw, overkill |
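The provided files can also be enumerated programmatically rather than read off the table; a small sketch using `huggingface_hub` (repo id taken from this card):

```python
# List the GGUF files available in this repository.
from huggingface_hub import list_repo_files

files = sorted(
    f for f in list_repo_files("mradermacher/Machroom-3B-model_stock-GGUF")
    if f.endswith(".gguf")
)
for f in files:
    print(f)
```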
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to
questions you might have and/or to request quantization of another model.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
MubarakB/zxCm3h8ADcB3R0ve2rgC
|
MubarakB
| 2024-06-04T04:35:51Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:adapter:NousResearch/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-06-04T04:35:47Z |
---
library_name: peft
base_model: NousResearch/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|
Zoyd/nyunai_nyun-llama3-62B-5_0bpw_exl2
|
Zoyd
| 2024-06-04T04:34:37Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"5-bit",
"exl2",
"region:us"
] |
text-generation
| 2024-06-03T22:38:53Z |
---
license: llama3
---
**Exllamav2** quant (**exl2** / **5.0 bpw**) made with ExLlamaV2 v0.1.3
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-2_2bpw_exl2)**</center> | <center>18625 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-2_5bpw_exl2)**</center> | <center>20645 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-3_0bpw_exl2)**</center> | <center>24211 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-3_5bpw_exl2)**</center> | <center>27784 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-3_75bpw_exl2)**</center> | <center>29572 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-4_0bpw_exl2)**</center> | <center>31359 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-4_25bpw_exl2)**</center> | <center>33139 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-5_0bpw_exl2)**</center> | <center>38500 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-6_0bpw_exl2)**</center> | <center>45805 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-6_5bpw_exl2)**</center> | <center>49410 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-8_0bpw_exl2)**</center> | <center>54655 MB</center> | <center>8</center> |
|
hdve/google-gemma-2b-1717475491
|
hdve
| 2024-06-04T04:33:55Z | 141 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-04T04:31:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bamaxi/ruBert-base-sakha
|
bamaxi
| 2024-06-04T04:28:42Z | 127 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-06-03T23:19:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MubarakB/T7KGvt4x8LnHYdJN9MQ0
|
MubarakB
| 2024-06-04T04:21:09Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:adapter:NousResearch/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-06-04T04:21:05Z |
---
library_name: peft
base_model: NousResearch/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|
Zoyd/nyunai_nyun-llama3-62B-3_5bpw_exl2
|
Zoyd
| 2024-06-04T04:20:09Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] |
text-generation
| 2024-06-03T17:18:31Z |
---
license: llama3
---
**Exllamav2** quant (**exl2** / **3.5 bpw**) made with ExLlamaV2 v0.1.3
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-2_2bpw_exl2)**</center> | <center>18625 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-2_5bpw_exl2)**</center> | <center>20645 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-3_0bpw_exl2)**</center> | <center>24211 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-3_5bpw_exl2)**</center> | <center>27784 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-3_75bpw_exl2)**</center> | <center>29572 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-4_0bpw_exl2)**</center> | <center>31359 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-4_25bpw_exl2)**</center> | <center>33139 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-5_0bpw_exl2)**</center> | <center>38500 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-6_0bpw_exl2)**</center> | <center>45805 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-6_5bpw_exl2)**</center> | <center>49410 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-8_0bpw_exl2)**</center> | <center>54655 MB</center> | <center>8</center> |
|
SEHYONG/Llama-3-Open-Ko-8B-Instruct-kookmin7
|
SEHYONG
| 2024-06-04T04:11:42Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:SEHYONG/Llama-3-Open-Ko-8B-Instruct-kookmin6",
"base_model:finetune:SEHYONG/Llama-3-Open-Ko-8B-Instruct-kookmin6",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-04T04:05:37Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: SEHYONG/Llama-3-Open-Ko-8B-Instruct-kookmin6
---
# Uploaded model
- **Developed by:** SEHYONG
- **License:** apache-2.0
- **Finetuned from model :** SEHYONG/Llama-3-Open-Ko-8B-Instruct-kookmin6
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Rudra360/Emoji_Suggester
|
Rudra360
| 2024-06-04T04:09:27Z | 0 | 0 |
spacy
|
[
"spacy",
"en",
"region:us"
] | null | 2024-06-03T14:17:44Z |
---
language:
- en
library_name: spacy
---
# Emoji Suggester
Emoji Suggester is a tool designed to recommend relevant emojis based on incoming messages from social media apps, enhancing expressiveness and engagement in your conversations. The suggestions are powered by a model trained on a dataset of Twitter messages.
## Table of Contents
- [Installation](#installation)
- [Usage](#usage)
- [Contributing](#contributing)
- [License](#license)
- [Contact](#contact)
## Installation
To install Emoji Suggester, follow these steps:
1. Clone the repository:
```bash
git clone https://huggingface.co/Rudra360/Emoji_Suggester
```
or
```bash
git clone [email protected]:Rudra360/Emoji_Suggester.git
```
## Usage
1. Change into the project directory:
```bash
cd Emoji_Suggester
```
2. Then run the following script:
```python
from util import predict

message = "I'm so happy today!"
suggested_emojis = predict(message)
print(suggested_emojis)
```
|
hdve/Qwen-Qwen1.5-7B-1717473930
|
hdve
| 2024-06-04T04:08:43Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-04T04:06:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
amazon/MegaBeam-Mistral-7B-300k
|
amazon
| 2024-06-04T04:06:40Z | 6,628 | 16 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-05-13T02:30:08Z |
---
license: apache-2.0
inference: false
---
# MegaBeam-Mistral-7B-300k Model
MegaBeam-Mistral-7B-300k is a fine-tuned [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) language model that supports input contexts up to 320k tokens. MegaBeam-Mistral-7B-300k can be deployed on a single AWS `g5.48xlarge` instance using serving frameworks such as [vLLM](https://github.com/vllm-project/vllm), Sagemaker [DJL](https://docs.aws.amazon.com/sagemaker/latest/dg/deploy-models-frameworks-djl-serving.html) endpoint, and others. Similarities and differences between MegaBeam-Mistral-7B-300k and [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) are summarized below:
|Model|Max context length| rope_theta| prompt template|
|----------|-------------:|------------:|------------:|
| [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 32K | 1e6 | [instruction format](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2#instruction-format)|
| MegaBeam-Mistral-7B-300k | 320K | 25e6 | AS ABOVE|
## Evaluations
**[InfiniteBench: Extending Long Context Evaluation Beyond 100K Tokens](https://github.com/OpenBMB/InfiniteBench)**
_InfiniteBench is a cutting-edge benchmark tailored for evaluating the capabilities of language models to process, understand, and reason over super long contexts (100k+ tokens)_. We therefore evaluated MegaBeam-Mistral-7B-300k, [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2), [Llama-3-8B-Instruct-262k](https://huggingface.co/gradientai/Llama-3-8B-Instruct-262k), and [Llama3-70B-1M](https://huggingface.co/gradientai/Llama-3-70B-Instruct-Gradient-1048k) on InfiniteBench. The InfiniteBench authors also evaluated SOTA proprietary and open-source LLMs on InfiniteBench. We thus combined both results in the table below.
| Task Name | MegaBeam-Mistral-7B-300k | Mistral-7B-Instruct-v0.2 | Llama-3-8B-Instruct-262k | Llama3-70B-1M | GPT-4-1106-preview | YaRN-Mistral-7B | Kimi-Chat | Claude 2 | Yi-6B-200K | Yi-34B-200K | Chatglm3-6B-128K |
| ---------------- | ---------------- | ---------------- | ---------------- | ---------------- | ------ | --------------- | --------- | -------- | -----------| -----------| -----------|
| Retrieve.PassKey | 100% | 75.76% | 98.30% | 81.35% | 100% | 92.71% | 98.14% | 97.80% | 100.00% | 100.00% | 92.20% |
| Retrieve.Number | 96.10% | 25.25% | 97.79% | 97.62% | 100% | 56.61% | 95.42% | 98.14% | 94.92% | 100.00% | 80.68% |
| Retrieve.KV | 0% | 0% | 3.40% | 3% | 89.00% | < 5% | 53.60% | 65.40% | < 5% | < 5% | < 5% |
| En.Sum | 29.39% | 22.13% | 16.40% | 20.72% | 14.73% | 9.09% | 17.93% | 14.45% | < 5% | < 5% |< 5% |
| En.QA | 14.93% | 4.93% | 13.20% | 16.52% | 22.22% | 9.55% | 16.52% | 11.97% | 9.20% | 12.17% |< 5% |
| En.MC | 51.52% | 7.80% | 50.65% | 62% | 67.25% | 27.95% | 72.49% | 62.88% | 36.68% |38.43% |10.48% |
| En.Dia | 9.50% | 3.50% | 1% | 12.50% | 8.50% | 7.50% | 11.50% | 46.50% | < 5% |< 5% |< 5% |
| Zh.QA | 10.71% | 3.43% | 19.02% | 26% | 25.96% | 14.43% | 17.93% | 9.64% | 15.07% |13.61% |< 5% |
| Code.Debug | 27.41% | 11.60% | 22.08% | 23.85% | 39.59% | < 5% | 18.02% | < 5% | < 5% |< 5% |< 5% |
| Code.Run | 1.75% | 0.25% | 0% | 0% | 23.25% | < 5% | < 5% | < 5% | < 5% |< 5% |< 5% |
| Math.Calc | 0% | 0% | 0% | 0% | < 5% | < 5% | < 5% | < 5% | < 5% |< 5% |< 5% |
| Math.Find | 24.28% | 26.28% | 15.40% | 30% | 60.00% | 17.14% | 12.57% | 32.29% | < 5% |25.71% |7.71% |
| **Average** | 30.70% | 15.08% | 28.10% | 31.13% | 46.08% | 20.41% | 34.93% | 37.21% | 22.78% |25.41% |17.59% |
The 12 evaluation tasks are summarized below (as per [InfiniteBench](https://github.com/OpenBMB/InfiniteBench))
| Task Name | Context | # Examples | Avg Input Tokens | Avg Output Tokens | Description |
| -------------------- | ------------- | ---------- | ---------------- | ----------------- | ------------------------------------------------------------------------------------------- |
| En.Sum | Fake Book | 103 | 171.5k | 1.1k | Summarization of a fake book created with core entity substitution. |
| En.QA | Fake Book | 351 | 192.6k | 4.8 | Free-form question answering based on the fake book. |
| En.MC | Fake Book | 229 | 184.4k | 5.3 | Multiple choice questions derived from the fake book. |
| En.Dia | Script | 200 | 103.6k | 3.4 | Identification of talkers in partially anonymized scripts. |
| Zh.QA | New Book | 175 | 2068.6k | 6.3 | Question answering on a set of newly collected books. |
| Code.Debug           | Code Document | 394        | 114.7k           | 4.8               | Finding which function in a code repo contains a crashing error (in multiple-choice form).    |
| Code.Run | Synthetic | 400 | 75.2k | 1.3 | Simulating execution of multiple simple, synthetic functions. |
| Math.Calc | Synthetic | 50 | 43.9k | 43.9k | Calculations involving super-long arithmetic equations. |
| Math.Find | Synthetic | 350 | 87.9k | 1.3 | Finding special integers in a lengthy list. |
| Retrieve.PassKey | Synthetic | 590 | 122.4k | 2.0 | Retrieving hidden keys in a noisy long context. |
| Retrieve.Number | Synthetic | 590 | 122.4k | 4.0 | Locating repeated hidden numbers in a noisy long context. |
| Retrieve.KV | Synthetic | 500 | 89.9k | 22.7 | Finding the corresponding value from a dictionary and a key. |
## Serve MegaBeam-Mistral-7B-300k on EC2 instances ##
On an AWS `g5.48xlarge` instance, upgrade vLLM to the latest version as per [documentation on vLLM](https://vllm.readthedocs.io/en/latest/).
### Start the server
```shell
python3 -m vllm.entrypoints.openai.api_server --model amazon/MegaBeam-Mistral-7B-300k --tensor-parallel-size 8
```
**Important Note** - We have set the `max_position_embeddings` in the [`config.json`](config.json) to 288,800 in order to fit the model's KV-cache on a single `g5.48xlarge` instance, which has 8 x A10 GPUs (24GB RAM per GPU).
On an instance with larger GPU RAM (e.g. `p4d.24xlarge`), feel free to increase the value of `max_position_embeddings` (e.g. to 350K), which the model should be able to process.
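For instance, the same limit can be set through the vLLM Python API via `max_model_len` — a minimal sketch, assuming the offline `LLM` entry point rather than the OpenAI-compatible server shown above:
```python
from vllm import LLM, SamplingParams

# Assumption: 8 GPUs with enough combined VRAM for the chosen context length.
llm = LLM(
    model="amazon/MegaBeam-Mistral-7B-300k",
    tensor_parallel_size=8,
    max_model_len=288800,  # matches the shipped config.json value
)
outputs = llm.generate(
    ["<s>[INST] Summarize the document above in three sentences. [/INST]"],
    SamplingParams(max_tokens=128),
)
print(outputs[0].outputs[0].text)
```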
### Run the client
```python
from openai import OpenAI
# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"
client = OpenAI(
# defaults to os.environ.get("OPENAI_API_KEY")
api_key=openai_api_key,
base_url=openai_api_base,
)
models = client.models.list()
model = models.data[0].id
chat_completion = client.chat.completions.create(
messages = [
{"role": "user", "content": "What is your favourite condiment?"}, # insert your long context here
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"} # insert your long context here
],
model=model,
)
print("Chat completion results:")
print(chat_completion)
```
### Deploy the model on a SageMaker Endpoint ###
To deploy MegaBeam-Mistral-7B-300k on a SageMaker endpoint, please follow this [SageMaker DJL deployment guide](https://docs.djl.ai/docs/demos/aws/sagemaker/large-model-inference/sample-llm/vllm_deploy_mistral_7b.html).
Run the following Python code in a SageMaker notebook (with each block running in a separate cell)
```python
import sagemaker
from sagemaker import Model, image_uris, serializers, deserializers
sagemaker_session = sagemaker.Session()
region = sagemaker_session.boto_region_name
role = sagemaker.get_execution_role()
%%writefile serving.properties
engine=Python
option.model_id=amazon/MegaBeam-Mistral-7B-300k
option.dtype=bf16
option.task=text-generation
option.rolling_batch=vllm
option.tensor_parallel_degree=8
option.device_map=auto
%%sh
mkdir mymodel
mv serving.properties mymodel/
tar czvf mymodel.tar.gz mymodel/
rm -rf mymodel
image_uri = image_uris.retrieve(
framework="djl-deepspeed",
region=region,
version="0.27.0"
)
s3_code_prefix = "megaBeam-mistral-7b-300k/code"
bucket = sagemaker_session.default_bucket() # bucket to house artifacts
code_artifact = sagemaker_session.upload_data("mymodel.tar.gz", bucket, s3_code_prefix)
print(f"S3 Code or Model tar ball uploaded to --- > {code_artifact}")
model = Model(image_uri=image_uri, model_data=code_artifact, role=role)
instance_type = "ml.g5.48xlarge"
endpoint_name = sagemaker.utils.name_from_base("megaBeam-mistral-7b-300k")
model.deploy(initial_instance_count=1,
instance_type=instance_type,
endpoint_name=endpoint_name
)
# our requests and responses will be in json format so we specify the serializer and the deserializer
predictor = sagemaker.Predictor(
endpoint_name=endpoint_name,
sagemaker_session=sagemaker_session,
serializer=serializers.JSONSerializer(),
)
# test the endpoint
input_str = """<s>[INST] What is your favourite condiment? [/INST]
Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
[INST] Do you have mayonnaise recipes? [/INST]"""
predictor.predict(
{"inputs": input_str, "parameters": {"max_new_tokens": 75}}
)
```
### Invoke the model on a SageMaker Endpoint ###
To use MegaBeam-Mistral-7B-300k on a SageMaker endpoint, please follow this example:
```python
import boto3
import json
def call_endpoint(text:str, endpoint_name:str):
client = boto3.client("sagemaker-runtime")
parameters = {
"max_new_tokens": 450,
"do_sample": True,
"temperature": 0.7,
}
payload = {"inputs": text, "parameters": parameters}
response = client.invoke_endpoint(
EndpointName=endpoint_name, Body=json.dumps(payload), ContentType="application/json"
)
output = json.loads(response["Body"].read().decode())
result = output["generated_text"]
return result
# please insert your long prompt/document content here
prompt = """<s>[INST] What are the main challenges to support long contexts for a Large Language Model? [/INST]"""
#print(prompt)
endpoint_name = "megaBeam-mistral-7b-300k-2024-05-13-14-23-41-219" # please use a valid endpoint name
result = call_endpoint(prompt, endpoint_name)
print(result)
```
## Limitations ##
Before using the MegaBeam-Mistral-7B-300k model, it is important to perform your own independent assessment, and take measures to ensure that your use would comply with your own specific quality control practices and standards, and that your use would comply with the local rules, laws, regulations, licenses and terms that apply to you, and your content.
## The AWS Contributors ##
Chen Wu, Yin Song, Verdi March, Eden Duthie
|
richardkelly/Qwen-Qwen1.5-0.5B-1717473352
|
richardkelly
| 2024-06-04T04:00:37Z | 141 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-04T03:55:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cgus/AlchemistCoder-DS-6.7B-exl2
|
cgus
| 2024-06-04T03:59:59Z | 5 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"code generation",
"conversational",
"arxiv:2405.19265",
"base_model:internlm/AlchemistCoder-DS-6.7B",
"base_model:quantized:internlm/AlchemistCoder-DS-6.7B",
"license:apache-2.0",
"autotrain_compatible",
"4-bit",
"exl2",
"region:us"
] |
text-generation
| 2024-06-03T23:59:24Z |
---
license: apache-2.0
base_model: internlm/AlchemistCoder-DS-6.7B
inference: false
tags:
- code generation
---
# AlchemistCoder-DS-6.7B-exl2
Original model: [AlchemistCoder-DS-6.7B](https://huggingface.co/internlm/AlchemistCoder-DS-6.7B)
Model creator: [InternLM](https://huggingface.co/internlm)
## Quants
[4bpw h6 (main)](https://huggingface.co/cgus/AlchemistCoder-DS-6.7B-exl2/tree/main)
[4.25bpw h6](https://huggingface.co/cgus/AlchemistCoder-DS-6.7B-exl2/tree/4.25bpw-h6)
[4.65bpw h6](https://huggingface.co/cgus/AlchemistCoder-DS-6.7B-exl2/tree/4.65bpw-h6)
[5bpw h6](https://huggingface.co/cgus/AlchemistCoder-DS-6.7B-exl2/tree/5bpw-h6)
[6bpw h6](https://huggingface.co/cgus/AlchemistCoder-DS-6.7B-exl2/tree/6bpw-h6)
[8bpw h8](https://huggingface.co/cgus/AlchemistCoder-DS-6.7B-exl2/tree/8bpw-h8)
## Quantization notes
Made with Exllamav2 0.1.3 with the default calibration dataset.
## How to run
This model is meant to be used with the Exllamav2 loader, which requires the model to be fully loaded into GPU VRAM.
It primarily requires an Nvidia RTX card on Windows/Linux or an AMD card on Linux.
If your system doesn't meet these requirements, look for GGUF versions of the model instead.
It can be used with apps like:
[Text Generation Webui](https://github.com/oobabooga/text-generation-webui)
[KoboldAI](https://github.com/henk717/KoboldAI)
[ExUI](https://github.com/turboderp/exui)
[lollms-webui](https://github.com/ParisNeo/lollms-webui)
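To fetch one of the quant branches listed under Quants for use with these apps, here is a minimal sketch with `huggingface_hub` (branch choice and local path are illustrative):
```python
from huggingface_hub import snapshot_download

# Download the 6bpw-h6 branch; pick any revision from the Quants list above.
local_dir = snapshot_download(
    repo_id="cgus/AlchemistCoder-DS-6.7B-exl2",
    revision="6bpw-h6",
)
print(local_dir)  # point your Exllamav2-based app at this folder
```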
# Original model card
# AlchemistCoder: Harmonizing and Eliciting Code Capability by Hindsight Tuning on Multi-source Data
[[🤗 HuggingFace](https://huggingface.co/internlm/AlchemistCoder-DS-6.7B)]
[[📃 Paper](https://arxiv.org/abs/2405.19265)]
[[🌐 Project Page](https://internlm.github.io/AlchemistCoder/)]
## ✨ Highlights
> **Abstract:** *Open-source Large Language Models (LLMs) and their specialized variants, particularly Code LLMs, have recently delivered impressive performance. However, previous Code LLMs are typically fine-tuned on single-source data with limited quality and diversity, which may insufficiently elicit the potential of pre-trained Code LLMs. In this paper, we present AlchemistCoder, a series of Code LLMs with enhanced code generation and generalization capabilities fine-tuned on multi-source data. To achieve this, we pioneer to unveil inherent conflicts among the various styles and qualities in multi-source code corpora and introduce data-specific prompts with hindsight relabeling, termed AlchemistPrompts, to harmonize different data sources and instruction-response pairs. Additionally, we propose incorporating the data construction process into the fine-tuning data as code comprehension tasks, including instruction evolution, data filtering, and code review. Extensive experiments demonstrate that AlchemistCoder holds a clear lead among all models of the same size (6.7B/7B) and rivals or even surpasses larger models (15B/33B/70B), showcasing the efficacy of our method in refining instruction-following capabilities and advancing the boundaries of code intelligence.*
- **AlchemistPrompts**: Designed as data-specific prompts for harmonizing inherent conflicts in multi-source data and mitigating the instruction/response misalignment at a fine-grained level.
- **Code Comprehension Tasks**: Sourced from the process of data construction, consisting of instruction evolution, data filtering, and code review.
- **Harmonized Multi-source Data**: Instruction tuned on 200M tokens, including 6 types of high-quality data.
- **Superior Model Performance**: Surpassing all the open-source models of the same size (6.7/7B), and rivaling or even beating larger models (15B/33B/70B/ChatGPT) on 6 code benchmarks.
- **Advanced generic capabilities**: Demonstrated by the significant improvements on MMLU, BBH, and GSM8K.
## 🚀 Quick Start
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("internlm/AlchemistCoder-DS-6.7B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("internlm/AlchemistCoder-DS-6.7B", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
model = model.eval()
input_text = "Implement the Dijkstra algorithm in Python"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
The above code will generate the following result:
```python
import heapq
def dijkstra(graph, start, end):
"""
Find the shortest path between two nodes in a graph using Dijkstra's algorithm.
Args:
graph (dict): A dictionary representing the graph, where the keys are nodes and the values are dictionaries
containing the neighboring nodes and their edge weights.
start: The starting node.
end: The target node.
Returns:
list: The shortest path from the start node to the target node as a list of nodes.
Raises:
ValueError: If either the start or end node is not present in the graph.
"""
# Check if start and end nodes are in the graph
if start not in graph:
raise ValueError("Start node is not present in the graph.")
if end not in graph:
raise ValueError("End node is not present in the graph.")
# Initialize the distance dictionary with infinite distances for all nodes
distances = {node: float('inf') for node in graph}
# Set the distance of the start node to 0
distances[start] = 0
# Initialize the heap with the start node
heap = [(0, start)]
# Initialize the previous dictionary to keep track of the path
previous = {}
while heap:
# Pop the node with the smallest distance from the heap
current_distance, current_node = heapq.heappop(heap)
# If the current node is the end node, we have found the shortest path
if current_node == end:
# Reconstruct the path by following the previous nodes
path = []
while current_node in previous:
path.append(current_node)
current_node = previous[current_node]
path.append(start)
# Reverse the path to get the correct order
return path[::-1]
# Iterate over the neighboring nodes and update their distances
for neighbor, weight in graph[current_node].items():
new_distance = current_distance + weight
# If a shorter path is found, update the distance and previous node
if new_distance < distances[neighbor]:
distances[neighbor] = new_distance
previous[neighbor] = current_node
heapq.heappush(heap, (new_distance, neighbor))
# If there is no path between the start and end nodes, return an empty list
return []
```
> The `dijkstra` function takes three arguments: `graph`, `start`, and `end`. The `graph` argument is a dictionary representing the graph, where the keys are nodes and the values are dictionaries containing the neighboring nodes and their edge weights. The `start` argument is the starting node, and the `end` argument is the target node.
> The function first checks if the start and end nodes are present in the graph. If either node is not present, a `ValueError` is raised.
> The function then initializes a `distances` dictionary with infinite distances for all nodes. It sets the distance of the start node to 0. It also initializes a heap with the start node and a `previous` dictionary to keep track of the path.
> The algorithm then iterates over the nodes in the heap. For each node, it checks if it is the end node. If it is, the function reconstructs the path by following the previous nodes and returns the shortest path as a list of nodes in the correct order.
> If the current node is not the end node, the algorithm iterates over its neighboring nodes and updates their distances if a shorter path is found. It also updates the `previous` dictionary to keep track of the path.
> If there is no path between the start and end nodes, the function returns an empty list.
> Note that this implementation assumes that the graph is a directed graph, and it uses a heap data structure to efficiently select the node with the smallest distance at each step.
## 🧪 Evaluation and Fine-tune
Please refer to [**AlchemistCoder**](https://github.com/InternLM/AlchemistCoder) and [**InternLM**](https://github.com/InternLM/InternLM/tree/main).
## 😃 Acknowledgments
*AlchemistCoder* is built with [**InternLM**](https://github.com/InternLM) and [**OpenCompass**](https://github.com/open-compass). Thanks for their awesome work!
## 📧 Contact
If you have any questions, please create an issue on this repository or contact us at:
- [email protected]
- [email protected]
## 🌟 Citation
If you find our work useful, please consider citing:
```bibtex
@misc{song2024alchemistcoder,
title={AlchemistCoder: Harmonizing and Eliciting Code Capability by Hindsight Tuning on Multi-source Data},
author={Zifan Song and Yudong Wang and Wenwei Zhang and Kuikun Liu and Chengqi Lyu and Demin Song and Qipeng Guo and Hang Yan and Dahua Lin and Kai Chen and Cairong Zhao},
year={2024},
eprint={2405.19265},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
sahilkumar4ai/misteral-finetuned-samsum
|
sahilkumar4ai
| 2024-06-04T03:53:07Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-06-04T03:18:00Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
model-index:
- name: misteral-finetuned-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/sahilthegnius/huggingface/runs/7be8cmf0)
# misteral-finetuned-samsum
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on an unspecified dataset.
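Since this repository holds a PEFT adapter on top of a GPTQ base, a minimal loading sketch might look like the following (loading the GPTQ base assumes the `optimum`/`auto-gptq` stack is installed; this is a sketch, not the author's code):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")  # GPTQ base model
model = PeftModel.from_pretrained(base, "sahilkumar4ai/misteral-finetuned-samsum")  # attach the adapter
tokenizer = AutoTokenizer.from_pretrained(base_id)
```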
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 259
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
|
stannisozbov/stann-speechtotext-medium-tr-03_06
|
stannisozbov
| 2024-06-04T03:51:08Z | 6 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"tr",
"dataset:common_voice_17_0",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-06-03T15:36:16Z |
---
language:
- tr
license: apache-2.0
base_model: openai/whisper-medium
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- common_voice_17_0
metrics:
- wer
model-index:
- name: stann-speechtotext-medium-tr-03_06
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_17_0
type: common_voice_17_0
config: tr
split: None
args: tr
metrics:
- name: Wer
type: wer
value: 32.78548922762497
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stann-speechtotext-medium-tr-03_06
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the common_voice_17_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2244
- Wer: 32.7855
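A minimal inference sketch with the standard `transformers` ASR pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="stannisozbov/stann-speechtotext-medium-tr-03_06",
)
print(asr("speech.wav")["text"])  # "speech.wav" is a placeholder for a Turkish audio file
```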
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.0307 | 1.3784 | 1000 | 0.2063 | 30.0843 |
| 0.0131 | 2.7567 | 2000 | 0.2025 | 35.2159 |
| 0.0016 | 4.1351 | 3000 | 0.2244 | 32.7855 |
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
Zoyd/nyunai_nyun-llama3-62B-2_5bpw_exl2
|
Zoyd
| 2024-06-04T03:41:15Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] |
text-generation
| 2024-06-03T14:47:50Z |
---
license: llama3
---
**Exllamav2** quant (**exl2** / **2.5 bpw**) made with ExLlamaV2 v0.1.3
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-2_2bpw_exl2)**</center> | <center>18625 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-2_5bpw_exl2)**</center> | <center>20645 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-3_0bpw_exl2)**</center> | <center>24211 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-3_5bpw_exl2)**</center> | <center>27784 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-3_75bpw_exl2)**</center> | <center>29572 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-4_0bpw_exl2)**</center> | <center>31359 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-4_25bpw_exl2)**</center> | <center>33139 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-5_0bpw_exl2)**</center> | <center>38500 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-6_0bpw_exl2)**</center> | <center>45805 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-6_5bpw_exl2)**</center> | <center>49410 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-8_0bpw_exl2)**</center> | <center>54655 MB</center> | <center>8</center> |
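To fetch one of these quants locally, a minimal `huggingface_hub` sketch (repo id from the table above; `local_dir` is a placeholder):
```python
from huggingface_hub import snapshot_download

snapshot_download(
    "Zoyd/nyunai_nyun-llama3-62B-2_5bpw_exl2",
    local_dir="nyun-llama3-62B-2_5bpw_exl2",  # placeholder destination folder
)
```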
|
srbdtwentyfour/mystery-llama-3-8b-v2
|
srbdtwentyfour
| 2024-06-04T03:39:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-03T08:18:31Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** srbdtwentyfour
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
bartowski/Llama-3-Instruct-8B-SimPO-ExPO-GGUF
|
bartowski
| 2024-06-04T03:26:56Z | 150 | 0 | null |
[
"gguf",
"text-generation",
"en",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-06-04T03:09:56Z |
---
language:
- en
license: llama3
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of Llama-3-Instruct-8B-SimPO-ExPO
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3070">b3070</a> for quantization.
Original model: https://huggingface.co/chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO
All quants were made using the imatrix option with the dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
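As a convenience, a small helper that assembles this template (the blank lines after each header follow the usual Llama-3 chat layout and are an assumption here):
```python
def build_prompt(system_prompt: str, prompt: str) -> str:
    # Concatenate the special tokens exactly as in the template above.
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )
```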
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-3-Instruct-8B-SimPO-ExPO-Q8_0.gguf](https://huggingface.co/bartowski/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-ExPO-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [Llama-3-Instruct-8B-SimPO-ExPO-Q6_K.gguf](https://huggingface.co/bartowski/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-ExPO-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [Llama-3-Instruct-8B-SimPO-ExPO-Q5_K_M.gguf](https://huggingface.co/bartowski/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-ExPO-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [Llama-3-Instruct-8B-SimPO-ExPO-Q5_K_S.gguf](https://huggingface.co/bartowski/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-ExPO-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [Llama-3-Instruct-8B-SimPO-ExPO-Q4_K_M.gguf](https://huggingface.co/bartowski/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-ExPO-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Llama-3-Instruct-8B-SimPO-ExPO-Q4_K_S.gguf](https://huggingface.co/bartowski/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-ExPO-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [Llama-3-Instruct-8B-SimPO-ExPO-IQ4_XS.gguf](https://huggingface.co/bartowski/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-ExPO-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Llama-3-Instruct-8B-SimPO-ExPO-Q3_K_L.gguf](https://huggingface.co/bartowski/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-ExPO-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [Llama-3-Instruct-8B-SimPO-ExPO-Q3_K_M.gguf](https://huggingface.co/bartowski/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-ExPO-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [Llama-3-Instruct-8B-SimPO-ExPO-IQ3_M.gguf](https://huggingface.co/bartowski/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-ExPO-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Llama-3-Instruct-8B-SimPO-ExPO-Q3_K_S.gguf](https://huggingface.co/bartowski/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-ExPO-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [Llama-3-Instruct-8B-SimPO-ExPO-IQ3_XS.gguf](https://huggingface.co/bartowski/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-ExPO-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Llama-3-Instruct-8B-SimPO-ExPO-IQ3_XXS.gguf](https://huggingface.co/bartowski/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-ExPO-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Llama-3-Instruct-8B-SimPO-ExPO-Q2_K.gguf](https://huggingface.co/bartowski/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-ExPO-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [Llama-3-Instruct-8B-SimPO-ExPO-IQ2_M.gguf](https://huggingface.co/bartowski/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-ExPO-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Llama-3-Instruct-8B-SimPO-ExPO-IQ2_S.gguf](https://huggingface.co/bartowski/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-ExPO-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [Llama-3-Instruct-8B-SimPO-ExPO-IQ2_XS.gguf](https://huggingface.co/bartowski/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-ExPO-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Llama-3-Instruct-8B-SimPO-ExPO-GGUF --include "Llama-3-Instruct-8B-SimPO-ExPO-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Llama-3-Instruct-8B-SimPO-ExPO-GGUF --include "Llama-3-Instruct-8B-SimPO-ExPO-Q8_0.gguf/*" --local-dir Llama-3-Instruct-8B-SimPO-ExPO-Q8_0
```
You can either specify a new local-dir (Llama-3-Instruct-8B-SimPO-ExPO-Q8_0) or download them all in place (./).
## Which file should I choose?
A great write-up with charts showing various performance comparisons is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
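As a rough illustration of that rule of thumb, here is a hypothetical helper (sizes are a subset of the table above; the 1.5GB headroom is an arbitrary midpoint of the 1-2GB range):
```python
# File sizes in GB, taken from the quant table above.
quants = {"Q8_0": 8.54, "Q6_K": 6.59, "Q5_K_M": 5.73, "Q4_K_M": 4.92,
          "IQ4_XS": 4.44, "Q3_K_M": 4.01, "IQ3_M": 3.78, "Q2_K": 3.17}

def pick_quant(vram_gb: float, headroom_gb: float = 1.5) -> str:
    # Pick the largest quant whose file size fits under VRAM minus headroom.
    budget = vram_gb - headroom_gb
    fitting = {k: v for k, v in quants.items() if v <= budget}
    return max(fitting, key=fitting.get) if fitting else "none fit"

print(pick_quant(8.0))  # -> 'Q5_K_M' on an 8GB card
```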
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD cards, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
Ariffiq99/CRAB_COPA_KUCI_xlm_roberta_base_finetuned
|
Ariffiq99
| 2024-06-04T03:25:02Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"multiple-choice",
"generated_from_trainer",
"base_model:Ariffiq99/COPA_KUCI_xlm_roberta_base_finetuned",
"base_model:finetune:Ariffiq99/COPA_KUCI_xlm_roberta_base_finetuned",
"license:mit",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2024-06-04T02:57:25Z |
---
license: mit
base_model: Ariffiq99/COPA_KUCI_xlm_roberta_base_finetuned
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: CRAB_COPA_KUCI_xlm_roberta_base_finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CRAB_COPA_KUCI_xlm_roberta_base_finetuned
This model is a fine-tuned version of [Ariffiq99/COPA_KUCI_xlm_roberta_base_finetuned](https://huggingface.co/Ariffiq99/COPA_KUCI_xlm_roberta_base_finetuned) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1600
- F1: 0.7417
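A minimal inference sketch, assuming the standard `transformers` multiple-choice head (the premise and choices are illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

name = "Ariffiq99/CRAB_COPA_KUCI_xlm_roberta_base_finetuned"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForMultipleChoice.from_pretrained(name)

premise = "The man broke his toe."
choices = ["He got a hole in his sock.", "He dropped a hammer on his foot."]
enc = tokenizer([premise] * len(choices), choices, return_tensors="pt", padding=True)
with torch.no_grad():
    # The model expects inputs of shape (batch, num_choices, seq_len).
    logits = model(**{k: v.unsqueeze(0) for k, v in enc.items()}).logits
print(choices[logits.argmax(-1).item()])
```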
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.2245 | 1.0 | 2880 | 0.9044 | 0.6875 |
| 1.1396 | 2.0 | 5760 | 1.0192 | 0.7042 |
| 1.039 | 3.0 | 8640 | 1.1395 | 0.7222 |
| 0.8411 | 4.0 | 11520 | 1.1650 | 0.7389 |
| 0.7471 | 5.0 | 14400 | 1.1235 | 0.7361 |
| 0.9344 | 6.0 | 17280 | 1.1646 | 0.7375 |
| 0.7564 | 7.0 | 20160 | 1.0863 | 0.7417 |
| 0.7116 | 8.0 | 23040 | 1.1600 | 0.7417 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
Zoyd/nyunai_nyun-llama3-62B-6_0bpw_exl2
|
Zoyd
| 2024-06-04T03:19:34Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"6-bit",
"exl2",
"region:us"
] |
text-generation
| 2024-06-03T23:58:04Z |
---
license: llama3
---
**Exllamav2** quant (**exl2** / **6.0 bpw**) made with ExLlamaV2 v0.1.3
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-2_2bpw_exl2)**</center> | <center>18625 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-2_5bpw_exl2)**</center> | <center>20645 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-3_0bpw_exl2)**</center> | <center>24211 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-3_5bpw_exl2)**</center> | <center>27784 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-3_75bpw_exl2)**</center> | <center>29572 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-4_0bpw_exl2)**</center> | <center>31359 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-4_25bpw_exl2)**</center> | <center>33139 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-5_0bpw_exl2)**</center> | <center>38500 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-6_0bpw_exl2)**</center> | <center>45805 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-6_5bpw_exl2)**</center> | <center>49410 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-8_0bpw_exl2)**</center> | <center>54655 MB</center> | <center>8</center> |
|
wgd9527/sparseocc-v299-occ-seg-flow
|
wgd9527
| 2024-06-04T03:14:33Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-05-31T09:35:38Z |
---
license: apache-2.0
---
|
ALI-B/phi3-mini
|
ALI-B
| 2024-06-04T03:10:28Z | 77 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-04T03:07:14Z |
---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
HuggingFaceFW/ablation-exp-dedup-global_minhash-350BT
|
HuggingFaceFW
| 2024-06-04T03:10:19Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-03T23:35:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Zoyd/nyunai_nyun-llama3-62B-4_25bpw_exl2
|
Zoyd
| 2024-06-04T03:08:17Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] |
text-generation
| 2024-06-03T21:15:12Z |
---
license: llama3
---
**Exllamav2** quant (**exl2** / **4.25 bpw**) made with ExLlamaV2 v0.1.3
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-2_2bpw_exl2)**</center> | <center>18625 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-2_5bpw_exl2)**</center> | <center>20645 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-3_0bpw_exl2)**</center> | <center>24211 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-3_5bpw_exl2)**</center> | <center>27784 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-3_75bpw_exl2)**</center> | <center>29572 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-4_0bpw_exl2)**</center> | <center>31359 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-4_25bpw_exl2)**</center> | <center>33139 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-5_0bpw_exl2)**</center> | <center>38500 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-6_0bpw_exl2)**</center> | <center>45805 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-6_5bpw_exl2)**</center> | <center>49410 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-8_0bpw_exl2)**</center> | <center>54655 MB</center> | <center>8</center> |
|
Abhinay45/outputs
|
Abhinay45
| 2024-06-04T03:08:04Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"unsloth",
"generated_from_trainer",
"dataset:yahma/alpaca-cleaned",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:adapter:unsloth/llama-3-8b-bnb-4bit",
"license:llama2",
"region:us"
] | null | 2024-06-04T03:05:36Z |
---
license: llama2
library_name: peft
tags:
- trl
- sft
- unsloth
- generated_from_trainer
base_model: unsloth/llama-3-8b-bnb-4bit
datasets:
- yahma/alpaca-cleaned
model-index:
- name: Alpaca + Llama-3 8b Unsloth 2x faster finetuning.
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Alpaca + Llama-3 8b Unsloth 2x faster finetuning.
This model is a fine-tuned version of [unsloth/llama-3-8b-bnb-4bit](https://huggingface.co/unsloth/llama-3-8b-bnb-4bit) on the alpaca dataset.
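Since this repo stores a PEFT adapter, a minimal loading sketch might look like this (loading the 4-bit base assumes `bitsandbytes` is installed; a sketch, not the author's code):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("unsloth/llama-3-8b-bnb-4bit", device_map="auto")
model = PeftModel.from_pretrained(base, "Abhinay45/outputs")  # attach the LoRA adapter
```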
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 3407
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 60
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
lucalolee/ppo-LunarLander-v2-1
|
lucalolee
| 2024-06-04T03:04:20Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-06-04T03:04:01Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 276.00 +/- 24.50
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repo's file list if loading fails):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The .zip filename below is assumed, not confirmed by the card.
model = PPO.load(load_from_hub("lucalolee/ppo-LunarLander-v2-1", "ppo-LunarLander-v2.zip"))
```
|
turnipseason/latext5
|
turnipseason
| 2024-06-04T02:58:53Z | 108 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"math",
"normalization",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-05-26T02:36:06Z |
---
license: mit
language:
- ru
library_name: transformers
pipeline_tag: text2text-generation
tags:
- math
- normalization
---
### Description:
This is a model for normalizing Russian-language texts containing mathematical entities into LaTeX format, based on the [cointegrated/rut5-small](https://huggingface.co/cointegrated/rut5-small) paraphraser.
The model was created by fine-tuning the paraphraser on a translated and augmented "[Mathematics Stack Exchange API Q&A Data](https://zenodo.org/records/1414384)" dataset.
Usage example:
---
``` python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from IPython.display import display, Math, Latex
model_dir = "turnipseason/latext5"
model = AutoModelForSeq2SeqLM.from_pretrained(model_dir)
tokenizer = AutoTokenizer.from_pretrained(model_dir)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)
def get_latex(text):
    # Tokenize the input and move it to the model's device.
    inputs = tokenizer(text, return_tensors='pt').to(device)
    with torch.no_grad():
        hypotheses = model.generate(
            **inputs,
            do_sample=True, num_return_sequences=1,
            repetition_penalty=1.2,
            max_length=len(text),
            num_beams=10,
            early_stopping=True
        )
    # Decode each hypothesis and render it as LaTeX in the notebook.
    for h in hypotheses:
        display(Latex(tokenizer.decode(h, skip_special_tokens=True)))
# Spoken-style Russian math input for the model to normalize into LaTeX.
text = '''лямбда прописная квадрат минус три равно десять игрек куб
При этом шинус икс равен интеграл от экспоненты до трёх игрек штрих'''
get_latex(text)
```
|
John6666/pony-pencil-sdxl
|
John6666
| 2024-06-04T02:57:30Z | 19 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-05-24T12:12:06Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
---
Original model is [here](https://huggingface.co/bluepen5805/pony_pencil-XL).
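A minimal text-to-image sketch with the standard `diffusers` SDXL pipeline (the prompt is illustrative):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/pony-pencil-sdxl", torch_dtype=torch.float16
).to("cuda")
image = pipe("1girl, pencil sketch style").images[0]  # illustrative prompt
image.save("out.png")
```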
|
ehottl/distilbert-base-uncased-distilled-clinc
|
ehottl
| 2024-06-04T02:56:15Z | 111 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-04T02:54:19Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2792
- Accuracy: 0.9439
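A minimal inference sketch with the standard `transformers` text-classification pipeline (the query is illustrative):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="ehottl/distilbert-base-uncased-distilled-clinc",
)
print(clf("please set a timer for ten minutes"))  # returns the predicted intent label
```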
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2695 | 1.0 | 318 | 1.6200 | 0.7197 |
| 1.264 | 2.0 | 636 | 0.8322 | 0.8616 |
| 0.6826 | 3.0 | 954 | 0.4907 | 0.9077 |
| 0.4228 | 4.0 | 1272 | 0.3628 | 0.9326 |
| 0.3128 | 5.0 | 1590 | 0.3137 | 0.9413 |
| 0.2644 | 6.0 | 1908 | 0.2946 | 0.9439 |
| 0.2424 | 7.0 | 2226 | 0.2846 | 0.9439 |
| 0.2299 | 8.0 | 2544 | 0.2806 | 0.9439 |
| 0.2253 | 9.0 | 2862 | 0.2792 | 0.9439 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2.post303
- Datasets 2.19.1
- Tokenizers 0.15.2
|
Zery/MV-LLaVA-7B
|
Zery
| 2024-06-04T02:55:57Z | 21 | 3 |
transformers
|
[
"transformers",
"pytorch",
"share4v",
"text-generation",
"image-text-to-text",
"en",
"dataset:Zery/BS-Objaverse",
"dataset:Lin-Chen/ShareGPT4V",
"arxiv:2406.00093",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
image-text-to-text
| 2024-05-13T07:18:35Z |
---
inference: false
pipeline_tag: image-text-to-text
license: apache-2.0
datasets:
- Zery/BS-Objaverse
- Lin-Chen/ShareGPT4V
language:
- en
---
# MV-LLaVA-7B Model Card
## Model details
**Model type:**
MV-LLaVA-7B is an open-source chatbot for 3D multi-view images, trained by fine-tuning the CLIP vision tower and LLaMA/Vicuna on GPT4-Vision-assisted [BS-Objaverse](https://huggingface.co/datasets/Zery/BS-Objaverse) data and [ShareGPT4V](https://huggingface.co/datasets/Lin-Chen/ShareGPT4V) data.
**Model date:**
MV-LLaVA-7B was trained in Apr, 2024.
**Paper or resources for more information:**
[[Project](https://sunzey.github.io/Bootstrap3D/)] [[Paper](https://huggingface.co/papers/2406.00093)] [[Code](https://github.com/SunzeY/Bootstrap3D)]
## Usage
You can use this model directly, following the instructions in our [[repository](https://github.com/SunzeY/Bootstrap3D/tree/main/MV_LLaVA)].
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
## Intended use
**Primary intended uses:**
The primary use of MV-LLaVA-7B is research on large multimodal models and chatbots for 3D content.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training dataset
- 1.2M ShareGPT4V-PT data
- 30K GPT4-Vision-generated multi-view image-text pairs
- LLaVA instruction-tuning data
|
wenlianghuang/sample_phi3_finetune_example
|
wenlianghuang
| 2024-06-04T02:52:43Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"trl",
"sft",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-03T07:03:36Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Zoyd/nyunai_nyun-llama3-62B-2_2bpw_exl2
|
Zoyd
| 2024-06-04T02:52:17Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] |
text-generation
| 2024-06-03T13:33:55Z |
---
license: llama3
---
**Exllamav2** quant (**exl2** / **2.2 bpw**) made with ExLlamaV2 v0.1.3
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-2_2bpw_exl2)**</center> | <center>18625 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-2_5bpw_exl2)**</center> | <center>20645 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-3_0bpw_exl2)**</center> | <center>24211 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-3_5bpw_exl2)**</center> | <center>27784 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-3_75bpw_exl2)**</center> | <center>29572 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-4_0bpw_exl2)**</center> | <center>31359 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-4_25bpw_exl2)**</center> | <center>33139 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-5_0bpw_exl2)**</center> | <center>38500 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-6_0bpw_exl2)**</center> | <center>45805 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-6_5bpw_exl2)**</center> | <center>49410 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/nyunai_nyun-llama3-62B-8_0bpw_exl2)**</center> | <center>54655 MB</center> | <center>8</center> |
|
martinsinnona/visdecode_vega_2
|
martinsinnona
| 2024-06-04T02:42:03Z | 51 | 0 |
transformers
|
[
"transformers",
"safetensors",
"pix2struct",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-06-04T02:01:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Chanakan5591/llama-3-typhoon-v1.5-8b-nf4
|
Chanakan5591
| 2024-06-04T02:40:51Z | 79 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-06-04T02:36:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
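That said, the repository tags mark this as a 4-bit NF4 (bitsandbytes) quantization of a Llama text-generation model, so a loading sketch along the following lines should work; the compute dtype and generation settings are illustrative assumptions rather than documented values.

```python
# Minimal loading sketch for a pre-quantized NF4 checkpoint (assumption:
# the repo tags "4-bit"/"bitsandbytes" and the "nf4" suffix are accurate).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Chanakan5591/llama-3-typhoon-v1.5-8b-nf4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    # Redundant if the checkpoint already embeds its quantization config,
    # but spelled out here to make the NF4 setup explicit.
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
)

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```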
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| ahmedesmail16/Paper_compared-beit-base | ahmedesmail16 | 2024-06-04T02:36:12Z | 211 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "beit", "image-classification", "generated_from_trainer", "base_model:microsoft/beit-base-patch16-224-pt22k-ft22k", "base_model:finetune:microsoft/beit-base-patch16-224-pt22k-ft22k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2024-06-04T00:17:01Z |
---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224-pt22k-ft22k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Paper_compared-beit-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Paper_compared-beit-base
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5363
- Accuracy: 0.8409
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.6803 | 0.9492 | 14 | 0.9171 | 0.7156 |
| 0.8219 | 1.9661 | 29 | 0.5230 | 0.8330 |
| 0.2323 | 2.9831 | 44 | 0.5110 | 0.8047 |
| 0.1112 | 4.0 | 59 | 0.4968 | 0.8138 |
| 0.0387 | 4.9492 | 73 | 0.5502 | 0.8093 |
| 0.0232 | 5.9661 | 88 | 0.5506 | 0.8296 |
| 0.0096 | 6.9831 | 103 | 0.5341 | 0.8431 |
| 0.0068 | 8.0 | 118 | 0.6003 | 0.8149 |
| 0.0046 | 8.9492 | 132 | 0.5298 | 0.8409 |
| 0.0051 | 9.4915 | 140 | 0.5363 | 0.8409 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
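Since the card ships no usage snippet, here is a minimal inference sketch, assuming the repository hosts the full fine-tuned checkpoint; `example.jpg` is a placeholder and the label set depends on the undocumented training data.

```python
# Minimal inference sketch; "example.jpg" is a placeholder path and the
# returned labels come from the unknown fine-tuning dataset.
from transformers import pipeline

classifier = pipeline("image-classification", model="ahmedesmail16/Paper_compared-beit-base")
print(classifier("example.jpg", top_k=3))
```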
| ehottl/distilbert-base-uncased-finetuned-clinc | ehottl | 2024-06-04T02:36:06Z | 113 | 0 | transformers | ["transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-06-04T02:24:15Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8020
- Accuracy: 0.9158
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.3069 | 1.0 | 318 | 3.3020 | 0.7177 |
| 2.6569 | 2.0 | 636 | 1.9007 | 0.8468 |
| 1.5836 | 3.0 | 954 | 1.1867 | 0.8881 |
| 1.0474 | 4.0 | 1272 | 0.8876 | 0.9116 |
| 0.8287 | 5.0 | 1590 | 0.8020 | 0.9158 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2.post303
- Datasets 2.19.1
- Tokenizers 0.15.2
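As a quick smoke test, a minimal inference sketch follows; the intent labels come from the unspecified fine-tuning data, which the model name suggests is CLINC150.

```python
# Hypothetical usage sketch; the label set depends on the unspecified
# fine-tuning data (the repo name hints at CLINC150 intent detection).
from transformers import pipeline

classifier = pipeline("text-classification", model="ehottl/distilbert-base-uncased-finetuned-clinc")
print(classifier("I would like to book a flight to Seoul next Friday."))
```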
| warwavn/vit-base-patch16-224-in21k-finetuned-lora-food101 | warwavn | 2024-06-04T02:35:22Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-06-04T02:29:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| flammenai/Mahou-1.3a-mistral-7B-GGUF | flammenai | 2024-06-04T02:26:13Z | 0 | 1 | transformers | ["transformers", "gguf", "dataset:flammenai/MahouMix-v1", "base_model:flammenai/Mahou-1.3a-mistral-7B", "base_model:quantized:flammenai/Mahou-1.3a-mistral-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2024-06-02T03:39:19Z |
---
library_name: transformers
license: apache-2.0
base_model:
- flammenai/Mahou-1.3a-mistral-7B
datasets:
- flammenai/MahouMix-v1
---

# Mahou-1.3a-mistral-7B
Mahou is designed to provide short messages in a conversational context. It is capable of casual conversation and character roleplay.
### Chat Format
This model has been trained to use ChatML format.
```
<|im_start|>system
{{system}}<|im_end|>
<|im_start|>{{char}}
{{message}}<|im_end|>
<|im_start|>{{user}}
{{message}}<|im_end|>
```
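Because this repository ships GGUF files, one way to apply this template locally is llama-cpp-python's built-in `chatml` chat format; note that it uses the standard system/user/assistant role names rather than `{{char}}`/`{{user}}`, and the file name below is a placeholder for whichever quantization you download.

```python
# Sketch with llama-cpp-python; the GGUF filename is a placeholder, and
# chat_format="chatml" approximates the template shown above.
from llama_cpp import Llama

llm = Llama(model_path="Mahou-1.3a-mistral-7B.Q4_K_M.gguf", n_ctx=4096, chat_format="chatml")
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Mahou, a friendly roleplay partner."},
        {"role": "user", "content": "hey, how was magician academy today?"},
    ],
    max_tokens=128,
    stop=["\n", "<|", "</"],  # the stopping strings recommended in this card
)
print(out["choices"][0]["message"]["content"])
```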
### Roleplay Format
- Speech without quotes.
- Actions in `*asterisks*`
```
*leans against wall cooly* so like, i just casted a super strong spell at magician academy today, not gonna lie, felt badass.
```
### SillyTavern Settings
1. Use ChatML for the Context Template.
2. Enable Instruct Mode.
3. Use the [Mahou preset](https://huggingface.co/datasets/flammenai/Mahou-ST-ChatML-Instruct/raw/main/Mahou.json).
4. *Recommended:* additional stopping strings: `["\n", "<|", "</"]`
### Method
DPO finetuned for 6 epochs using an A100 on Google Colab.
[Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne)
| Ariffiq99/CRAB_COPA_KUCI_xlm_roberta_large_finetuned | Ariffiq99 | 2024-06-04T02:25:44Z | 6 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "xlm-roberta", "multiple-choice", "generated_from_trainer", "base_model:Ariffiq99/COPA_KUCI_xlm_roberta_large_finetuned", "base_model:finetune:Ariffiq99/COPA_KUCI_xlm_roberta_large_finetuned", "license:mit", "endpoints_compatible", "region:us"] | multiple-choice | 2024-06-04T00:05:24Z |
---
license: mit
base_model: Ariffiq99/COPA_KUCI_xlm_roberta_large_finetuned
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: CRAB_COPA_KUCI_xlm_roberta_large_finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CRAB_COPA_KUCI_xlm_roberta_large_finetuned
This model is a fine-tuned version of [Ariffiq99/COPA_KUCI_xlm_roberta_large_finetuned](https://huggingface.co/Ariffiq99/COPA_KUCI_xlm_roberta_large_finetuned) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2852
- F1: 0.7250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.1412 | 1.0 | 2880 | 1.4904 | 0.675 |
| 1.0659 | 2.0 | 5760 | 1.7656 | 0.6986 |
| 0.9118 | 3.0 | 8640 | 1.4802 | 0.7083 |
| 0.8833 | 4.0 | 11520 | 0.9360 | 0.7208 |
| 0.9054 | 5.0 | 14400 | 1.3935 | 0.7111 |
| 0.8062 | 6.0 | 17280 | 1.1927 | 0.7194 |
| 0.8188 | 7.0 | 20160 | 1.1275 | 0.7278 |
| 0.7608 | 8.0 | 23040 | 1.2852 | 0.7250 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
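For reference, a minimal inference sketch using the standard transformers multiple-choice recipe; the premise and choices are made-up COPA-style inputs, not items from the (unknown) evaluation data.

```python
# Illustrative multiple-choice inference; inputs are invented examples.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "Ariffiq99/CRAB_COPA_KUCI_xlm_roberta_large_finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

premise = "The man turned on the faucet."
choices = ["The toilet filled with water.", "Water flowed from the spout."]

enc = tokenizer([premise] * len(choices), choices, return_tensors="pt", padding=True)
# Multiple-choice heads expect (batch, num_choices, seq_len) tensors.
with torch.no_grad():
    logits = model(**{k: v.unsqueeze(0) for k, v in enc.items()}).logits
print("predicted choice:", logits.argmax(-1).item())
```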
| TTTXXX01/LS-zephyr-7b-sft-full | TTTXXX01 | 2024-06-04T02:25:00Z | 8 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:alignment-handbook/zephyr-7b-sft-full", "base_model:finetune:alignment-handbook/zephyr-7b-sft-full", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-06-03T18:08:07Z |
---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: LS-zephyr-7b-sft-full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LS-zephyr-7b-sft-full
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 3
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 3
- total_train_batch_size: 9
- total_eval_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
| ShenaoZ/DPO-Zephyr-7B | ShenaoZ | 2024-06-04T02:24:29Z | 10 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:HuggingFaceH4/mistral-7b-sft-beta", "base_model:finetune:HuggingFaceH4/mistral-7b-sft-beta", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-05-31T17:13:41Z |
---
license: mit
base_model: HuggingFaceH4/mistral-7b-sft-beta
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: DPO-Zephyr-7B
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DPO-Zephyr-7B
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
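For reference, a schematic TRL reproduction of this DPO stage under the hyperparameters listed above; the beta value, sequence-length limits, and dataset flattening are assumptions rather than documented settings (the original run used the alignment-handbook recipe).

```python
# Schematic DPO sketch matching the listed hyperparameters; beta, the
# length limits, and the preprocessing are assumptions, not card values.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "HuggingFaceH4/mistral-7b-sft-beta"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# ultrafeedback_binarized stores chosen/rejected as message lists; DPOTrainer
# wants plain strings, so keep the final assistant turn (a simplification of
# the alignment-handbook preprocessing, which applies the chat template).
def flatten(ex):
    return {
        "prompt": ex["prompt"],
        "chosen": ex["chosen"][-1]["content"],
        "rejected": ex["rejected"][-1]["content"],
    }

dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs").map(flatten)

args = TrainingArguments(
    output_dir="DPO-Zephyr-7B",
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,
    learning_rate=5e-7,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    bf16=True,
)

trainer = DPOTrainer(
    model,
    ref_model=None,   # TRL builds a frozen reference copy when None
    args=args,
    beta=0.1,         # assumption: the usual Zephyr-style value
    train_dataset=dataset,
    tokenizer=tokenizer,
    max_length=1024,
    max_prompt_length=512,
)
trainer.train()
```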
| Carlosslocar/test6 | Carlosslocar | 2024-06-04T02:23:35Z | 5 | 0 | transformers | ["transformers", "safetensors", "gemma", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-classification | 2024-06-04T02:17:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| Larbz-7/swin-tiny-patch4-window7-224-finetuned-eurosat | Larbz-7 | 2024-06-04T02:22:07Z | 219 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "swin", "image-classification", "generated_from_trainer", "base_model:microsoft/swin-tiny-patch4-window7-224", "base_model:finetune:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2024-06-03T23:03:14Z |
---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1335
- Accuracy: 0.5414
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 2.3862 | 0.9994 | 788 | 2.2541 | 0.5365 |
| 2.1651 | 2.0 | 1577 | 2.1688 | 0.5395 |
| 2.1559 | 2.9981 | 2364 | 2.1335 | 0.5414 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
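A minimal inference sketch with an explicit forward pass follows; `example.jpg` is a placeholder, and since the card only says "an unknown dataset", the label set behind the `eurosat` suffix is unverified.

```python
# Manual forward pass (an alternative to pipeline()); the image path is a
# placeholder and the class names depend on the unknown training dataset.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "Larbz-7/swin-tiny-patch4-window7-224-finetuned-eurosat"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("example.jpg")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```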
| hdve/Qwen-Qwen1.5-1.8B-1717467486 | hdve | 2024-06-04T02:20:23Z | 140 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-06-04T02:18:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| rubenamtz0/llama-3-8b-lora-law2entity | rubenamtz0 | 2024-06-04T02:19:37Z | 15 | 1 | peft | ["peft", "safetensors", "gguf", "llama", "axolotl", "generated_from_trainer", "dataset:rubenamtz0/law_entity_recognition", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:adapter:meta-llama/Meta-Llama-3-8B", "license:llama3", "8-bit", "bitsandbytes", "region:us"] | null | 2024-06-02T01:21:16Z |
---
license: llama3
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B
model-index:
- name: llama-3-8b-lora-law2entity
results: []
datasets:
- rubenamtz0/law_entity_recognition
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: meta-llama/Meta-Llama-3-8B
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: true
load_in_4bit: false
strict: false
datasets:
- path: rubenamtz0/law_entity_recognition
type: alpaca
dataset_prepared_path:
val_set_size: 0.1
output_dir: ./outputs/lora-law
hub_model_id: rubenamtz0/llama-3-8b-lora-law2entity
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: entity-relationship-claim-ft
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
pad_token: <|end_of_text|>
```
</details><br>
# llama-3-8b-lora-law2entity
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the rubenamtz0/law_entity_recognition dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1490
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 3
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- total_eval_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.2735 | 0.05 | 1 | 0.2923 |
| 0.2852 | 0.25 | 5 | 0.2742 |
| 0.2007 | 0.5 | 10 | 0.2015 |
| 0.1742 | 0.75 | 15 | 0.1807 |
| 0.1854 | 1.0 | 20 | 0.1688 |
| 0.159 | 1.1125 | 25 | 0.1630 |
| 0.1444 | 1.3625 | 30 | 0.1592 |
| 0.1479 | 1.6125 | 35 | 0.1565 |
| 0.1505 | 1.8625 | 40 | 0.1538 |
| 0.1369 | 2.1125 | 45 | 0.1518 |
| 0.1348 | 2.2125 | 50 | 0.1512 |
| 0.1287 | 2.4625 | 55 | 0.1510 |
| 0.1359 | 2.7125 | 60 | 0.1498 |
| 0.1367 | 2.9625 | 65 | 0.1491 |
| 0.1218 | 3.075 | 70 | 0.1491 |
| 0.1285 | 3.325 | 75 | 0.1493 |
| 0.1307 | 3.575 | 80 | 0.1490 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.1.2+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1
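A loading sketch for the adapter, assuming (per the axolotl config above) that it applies on top of meta-llama/Meta-Llama-3-8B and expects alpaca-style prompts; the example instruction text is hypothetical.

```python
# PEFT adapter loading sketch; the instruction below is an invented example
# in the alpaca format that the axolotl config says was used for training.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B"
adapter_id = "rubenamtz0/llama-3-8b-lora-law2entity"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

prompt = (
    "Below is an instruction that describes a task.\n\n"
    "### Instruction:\nExtract the legal entities mentioned in the clause.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```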
| kiatkock/sentiment_pc_oversampler | kiatkock | 2024-06-04T02:15:32Z | 108 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:ahmedrachid/FinancialBERT-Sentiment-Analysis", "base_model:finetune:ahmedrachid/FinancialBERT-Sentiment-Analysis", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-05-30T07:03:44Z |
---
base_model: ahmedrachid/FinancialBERT-Sentiment-Analysis
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: sentiment_pc_oversampler
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment_pc_oversampler
This model is a fine-tuned version of [ahmedrachid/FinancialBERT-Sentiment-Analysis](https://huggingface.co/ahmedrachid/FinancialBERT-Sentiment-Analysis) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3909
- Accuracy: 0.9291
- F1: 0.9288
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|
| No log | 0.1134 | 50 | 0.5293 | 0.8154 | 0.8173 |
| No log | 0.2268 | 100 | 0.4512 | 0.8222 | 0.8224 |
| No log | 0.3401 | 150 | 0.4212 | 0.8356 | 0.8364 |
| No log | 0.4535 | 200 | 0.3978 | 0.8395 | 0.8400 |
| No log | 0.5669 | 250 | 0.3745 | 0.8631 | 0.8642 |
| No log | 0.6803 | 300 | 0.3593 | 0.8667 | 0.8675 |
| No log | 0.7937 | 350 | 0.3203 | 0.8821 | 0.8826 |
| No log | 0.9070 | 400 | 0.3130 | 0.8880 | 0.8889 |
| No log | 1.0204 | 450 | 0.3052 | 0.8903 | 0.8904 |
| 0.3514 | 1.1338 | 500 | 0.3216 | 0.8948 | 0.8954 |
| 0.3514 | 1.2472 | 550 | 0.3178 | 0.8979 | 0.8981 |
| 0.3514 | 1.3605 | 600 | 0.3366 | 0.8874 | 0.8877 |
| 0.3514 | 1.4739 | 650 | 0.3108 | 0.8951 | 0.8950 |
| 0.3514 | 1.5873 | 700 | 0.2551 | 0.9198 | 0.9200 |
| 0.3514 | 1.7007 | 750 | 0.3358 | 0.8911 | 0.8907 |
| 0.3514 | 1.8141 | 800 | 0.2812 | 0.9127 | 0.9125 |
| 0.3514 | 1.9274 | 850 | 0.2443 | 0.9240 | 0.9239 |
| 0.3514 | 2.0408 | 900 | 0.3059 | 0.9183 | 0.9182 |
| 0.3514 | 2.1542 | 950 | 0.3161 | 0.9155 | 0.9152 |
| 0.1587 | 2.2676 | 1000 | 0.2733 | 0.9237 | 0.9235 |
| 0.1587 | 2.3810 | 1050 | 0.3252 | 0.9141 | 0.9137 |
| 0.1587 | 2.4943 | 1100 | 0.3257 | 0.9141 | 0.9140 |
| 0.1587 | 2.6077 | 1150 | 0.2836 | 0.9254 | 0.9253 |
| 0.1587 | 2.7211 | 1200 | 0.3176 | 0.9166 | 0.9163 |
| 0.1587 | 2.8345 | 1250 | 0.3335 | 0.9232 | 0.9228 |
| 0.1587 | 2.9478 | 1300 | 0.3076 | 0.9257 | 0.9254 |
| 0.1587 | 3.0612 | 1350 | 0.3169 | 0.9269 | 0.9264 |
| 0.1587 | 3.1746 | 1400 | 0.3627 | 0.9240 | 0.9238 |
| 0.1587 | 3.2880 | 1450 | 0.4074 | 0.9127 | 0.9118 |
| 0.0731 | 3.4014 | 1500 | 0.3580 | 0.9251 | 0.9247 |
| 0.0731 | 3.5147 | 1550 | 0.3802 | 0.9240 | 0.9235 |
| 0.0731 | 3.6281 | 1600 | 0.3705 | 0.9257 | 0.9253 |
| 0.0731 | 3.7415 | 1650 | 0.3177 | 0.9362 | 0.9361 |
| 0.0731 | 3.8549 | 1700 | 0.3563 | 0.9314 | 0.9310 |
| 0.0731 | 3.9683 | 1750 | 0.4248 | 0.9158 | 0.9154 |
| 0.0731 | 4.0816 | 1800 | 0.3535 | 0.9314 | 0.9310 |
| 0.0731 | 4.1950 | 1850 | 0.3568 | 0.9308 | 0.9305 |
| 0.0731 | 4.3084 | 1900 | 0.4044 | 0.9266 | 0.9264 |
| 0.0731 | 4.4218 | 1950 | 0.3598 | 0.9331 | 0.9327 |
| 0.0358 | 4.5351 | 2000 | 0.3909 | 0.9291 | 0.9288 |
| 0.0358 | 4.6485 | 2050 | 0.3725 | 0.9325 | 0.9322 |
| 0.0358 | 4.7619 | 2100 | 0.3953 | 0.9305 | 0.9303 |
| 0.0358 | 4.8753 | 2150 | 0.3902 | 0.9305 | 0.9302 |
| 0.0358 | 4.9887 | 2200 | 0.3960 | 0.9286 | 0.9282 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
| lemon-mint/ko-tokenizer-experiment-003 | lemon-mint | 2024-06-04T02:14:47Z | 0 | 0 | transformers | ["transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-06-04T02:14:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| zaynu/llama2-finetune | zaynu | 2024-06-04T01:56:39Z | 5 | 0 | transformers | ["transformers", "pytorch", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-06-04T01:35:17Z |
---
license: apache-2.0
---
| apwic/nerui-lora-r16-1 | apwic | 2024-06-04T01:54:08Z | 0 | 0 | null | ["tensorboard", "generated_from_trainer", "id", "base_model:indolem/indobert-base-uncased", "base_model:finetune:indolem/indobert-base-uncased", "license:mit", "region:us"] | null | 2024-05-28T13:06:00Z |
---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: nerui-lora-r16-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nerui-lora-r16-1
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0342
- Location Precision: 0.9316
- Location Recall: 0.9397
- Location F1: 0.9356
- Location Number: 116
- Organization Precision: 0.9484
- Organization Recall: 0.9304
- Organization F1: 0.9393
- Organization Number: 158
- Person Precision: 0.984
- Person Recall: 0.9919
- Person F1: 0.9880
- Person Number: 124
- Overall Precision: 0.9547
- Overall Recall: 0.9523
- Overall F1: 0.9535
- Overall Accuracy: 0.9896
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Location Precision | Location Recall | Location F1 | Location Number | Organization Precision | Organization Recall | Organization F1 | Organization Number | Person Precision | Person Recall | Person F1 | Person Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------------:|:---------------:|:-----------:|:---------------:|:----------------------:|:-------------------:|:---------------:|:-------------------:|:----------------:|:-------------:|:---------:|:-------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.0545 | 1.0 | 96 | 0.6622 | 0.0 | 0.0 | 0.0 | 116 | 0.0 | 0.0 | 0.0 | 158 | 0.0 | 0.0 | 0.0 | 124 | 0.0 | 0.0 | 0.0 | 0.8394 |
| 0.64 | 2.0 | 192 | 0.5206 | 0.0 | 0.0 | 0.0 | 116 | 0.5 | 0.0127 | 0.0247 | 158 | 0.0 | 0.0 | 0.0 | 124 | 0.3333 | 0.0050 | 0.0099 | 0.8400 |
| 0.503 | 3.0 | 288 | 0.3728 | 0.0833 | 0.0086 | 0.0156 | 116 | 0.3625 | 0.1835 | 0.2437 | 158 | 0.36 | 0.2903 | 0.3214 | 124 | 0.3438 | 0.1658 | 0.2237 | 0.8718 |
| 0.3537 | 4.0 | 384 | 0.2518 | 0.3947 | 0.2586 | 0.3125 | 116 | 0.4885 | 0.5380 | 0.5120 | 158 | 0.5521 | 0.7258 | 0.6272 | 124 | 0.4964 | 0.5151 | 0.5055 | 0.9198 |
| 0.2513 | 5.0 | 480 | 0.1812 | 0.6111 | 0.5690 | 0.5893 | 116 | 0.5979 | 0.7342 | 0.6591 | 158 | 0.8028 | 0.9194 | 0.8571 | 124 | 0.6667 | 0.7437 | 0.7031 | 0.9498 |
| 0.1948 | 6.0 | 576 | 0.1359 | 0.7438 | 0.7759 | 0.7595 | 116 | 0.7368 | 0.7975 | 0.7660 | 158 | 0.8905 | 0.9839 | 0.9349 | 124 | 0.7879 | 0.8492 | 0.8174 | 0.9657 |
| 0.1623 | 7.0 | 672 | 0.1109 | 0.7917 | 0.8190 | 0.8051 | 116 | 0.7619 | 0.8101 | 0.7853 | 158 | 0.9104 | 0.9839 | 0.9457 | 124 | 0.8175 | 0.8668 | 0.8415 | 0.9701 |
| 0.1397 | 8.0 | 768 | 0.0954 | 0.8083 | 0.8362 | 0.8220 | 116 | 0.7976 | 0.8481 | 0.8221 | 158 | 0.9389 | 0.9919 | 0.9647 | 124 | 0.8449 | 0.8894 | 0.8666 | 0.9739 |
| 0.1266 | 9.0 | 864 | 0.0877 | 0.8189 | 0.8966 | 0.8560 | 116 | 0.8155 | 0.8671 | 0.8405 | 158 | 0.9318 | 0.9919 | 0.9609 | 124 | 0.8525 | 0.9146 | 0.8824 | 0.9761 |
| 0.1157 | 10.0 | 960 | 0.0731 | 0.8607 | 0.9052 | 0.8824 | 116 | 0.8519 | 0.8734 | 0.8625 | 158 | 0.9609 | 0.9919 | 0.9762 | 124 | 0.8883 | 0.9196 | 0.9037 | 0.9800 |
| 0.1111 | 11.0 | 1056 | 0.0673 | 0.8760 | 0.9138 | 0.8945 | 116 | 0.8606 | 0.8987 | 0.8793 | 158 | 0.9685 | 0.9919 | 0.9801 | 124 | 0.8983 | 0.9322 | 0.9149 | 0.9813 |
| 0.1044 | 12.0 | 1152 | 0.0635 | 0.8760 | 0.9138 | 0.8945 | 116 | 0.8554 | 0.8987 | 0.8765 | 158 | 0.9685 | 0.9919 | 0.9801 | 124 | 0.8961 | 0.9322 | 0.9138 | 0.9811 |
| 0.098 | 13.0 | 1248 | 0.0578 | 0.8898 | 0.9052 | 0.8974 | 116 | 0.8589 | 0.8861 | 0.8723 | 158 | 0.9762 | 0.9919 | 0.9840 | 124 | 0.9042 | 0.9246 | 0.9143 | 0.9816 |
| 0.0939 | 14.0 | 1344 | 0.0559 | 0.875 | 0.9052 | 0.8898 | 116 | 0.8642 | 0.8861 | 0.8750 | 158 | 0.9762 | 0.9919 | 0.9840 | 124 | 0.9020 | 0.9246 | 0.9132 | 0.9819 |
| 0.091 | 15.0 | 1440 | 0.0558 | 0.8824 | 0.9052 | 0.8936 | 116 | 0.8402 | 0.8987 | 0.8685 | 158 | 0.9685 | 0.9919 | 0.9801 | 124 | 0.8916 | 0.9296 | 0.9102 | 0.9816 |
| 0.088 | 16.0 | 1536 | 0.0555 | 0.875 | 0.9052 | 0.8898 | 116 | 0.8452 | 0.8987 | 0.8712 | 158 | 0.9535 | 0.9919 | 0.9723 | 124 | 0.8873 | 0.9296 | 0.9080 | 0.9811 |
| 0.0857 | 17.0 | 1632 | 0.0523 | 0.8824 | 0.9052 | 0.8936 | 116 | 0.8868 | 0.8924 | 0.8896 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9156 | 0.9271 | 0.9213 | 0.9846 |
| 0.0809 | 18.0 | 1728 | 0.0498 | 0.8678 | 0.9052 | 0.8861 | 116 | 0.8659 | 0.8987 | 0.8820 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9024 | 0.9296 | 0.9158 | 0.9833 |
| 0.0773 | 19.0 | 1824 | 0.0482 | 0.8898 | 0.9052 | 0.8974 | 116 | 0.8827 | 0.9051 | 0.8938 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9160 | 0.9322 | 0.9240 | 0.9844 |
| 0.0765 | 20.0 | 1920 | 0.0521 | 0.8833 | 0.9138 | 0.8983 | 116 | 0.8571 | 0.9114 | 0.8834 | 158 | 0.9685 | 0.9919 | 0.9801 | 124 | 0.8988 | 0.9372 | 0.9176 | 0.9822 |
| 0.0754 | 21.0 | 2016 | 0.0484 | 0.875 | 0.9052 | 0.8898 | 116 | 0.8735 | 0.9177 | 0.8951 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9075 | 0.9372 | 0.9221 | 0.9841 |
| 0.072 | 22.0 | 2112 | 0.0469 | 0.875 | 0.9052 | 0.8898 | 116 | 0.8606 | 0.8987 | 0.8793 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9024 | 0.9296 | 0.9158 | 0.9835 |
| 0.0689 | 23.0 | 2208 | 0.0440 | 0.8898 | 0.9052 | 0.8974 | 116 | 0.8944 | 0.9114 | 0.9028 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9208 | 0.9347 | 0.9277 | 0.9844 |
| 0.0697 | 24.0 | 2304 | 0.0456 | 0.8974 | 0.9052 | 0.9013 | 116 | 0.8968 | 0.8797 | 0.8882 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9244 | 0.9221 | 0.9233 | 0.9846 |
| 0.0656 | 25.0 | 2400 | 0.0436 | 0.8983 | 0.9138 | 0.9060 | 116 | 0.8812 | 0.8924 | 0.8868 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9181 | 0.9296 | 0.9238 | 0.9846 |
| 0.0658 | 26.0 | 2496 | 0.0427 | 0.8974 | 0.9052 | 0.9013 | 116 | 0.8704 | 0.8924 | 0.8812 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9134 | 0.9271 | 0.9202 | 0.9841 |
| 0.065 | 27.0 | 2592 | 0.0421 | 0.9052 | 0.9052 | 0.9052 | 116 | 0.8834 | 0.9114 | 0.8972 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9208 | 0.9347 | 0.9277 | 0.9855 |
| 0.0613 | 28.0 | 2688 | 0.0418 | 0.8833 | 0.9138 | 0.8983 | 116 | 0.8882 | 0.9051 | 0.8966 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9163 | 0.9347 | 0.9254 | 0.9855 |
| 0.0591 | 29.0 | 2784 | 0.0398 | 0.9060 | 0.9138 | 0.9099 | 116 | 0.8882 | 0.9051 | 0.8966 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9231 | 0.9347 | 0.9288 | 0.9874 |
| 0.06 | 30.0 | 2880 | 0.0395 | 0.9060 | 0.9138 | 0.9099 | 116 | 0.8994 | 0.9051 | 0.9022 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9277 | 0.9347 | 0.9312 | 0.9865 |
| 0.0566 | 31.0 | 2976 | 0.0386 | 0.8983 | 0.9138 | 0.9060 | 116 | 0.8827 | 0.9051 | 0.8938 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9185 | 0.9347 | 0.9265 | 0.9863 |
| 0.0566 | 32.0 | 3072 | 0.0392 | 0.8889 | 0.8966 | 0.8927 | 116 | 0.9045 | 0.8987 | 0.9016 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9248 | 0.9271 | 0.9260 | 0.9857 |
| 0.0566 | 33.0 | 3168 | 0.0398 | 0.8992 | 0.9224 | 0.9106 | 116 | 0.9045 | 0.8987 | 0.9016 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9277 | 0.9347 | 0.9312 | 0.9865 |
| 0.0568 | 34.0 | 3264 | 0.0396 | 0.9224 | 0.9224 | 0.9224 | 116 | 0.8951 | 0.9177 | 0.9062 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9305 | 0.9422 | 0.9363 | 0.9871 |
| 0.0532 | 35.0 | 3360 | 0.0379 | 0.8983 | 0.9138 | 0.9060 | 116 | 0.9051 | 0.9051 | 0.9051 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9277 | 0.9347 | 0.9312 | 0.9871 |
| 0.052 | 36.0 | 3456 | 0.0403 | 0.9231 | 0.9310 | 0.9270 | 116 | 0.9012 | 0.9241 | 0.9125 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9332 | 0.9472 | 0.9401 | 0.9879 |
| 0.0516 | 37.0 | 3552 | 0.0386 | 0.8983 | 0.9138 | 0.9060 | 116 | 0.9 | 0.9114 | 0.9057 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9256 | 0.9372 | 0.9313 | 0.9874 |
| 0.0497 | 38.0 | 3648 | 0.0378 | 0.8992 | 0.9224 | 0.9106 | 116 | 0.8994 | 0.9051 | 0.9022 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9256 | 0.9372 | 0.9313 | 0.9879 |
| 0.052 | 39.0 | 3744 | 0.0366 | 0.9138 | 0.9138 | 0.9138 | 116 | 0.9006 | 0.9177 | 0.9091 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9303 | 0.9397 | 0.9350 | 0.9885 |
| 0.0472 | 40.0 | 3840 | 0.0367 | 0.9138 | 0.9138 | 0.9138 | 116 | 0.8987 | 0.8987 | 0.8987 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9298 | 0.9322 | 0.9310 | 0.9868 |
| 0.0486 | 41.0 | 3936 | 0.0388 | 0.9076 | 0.9310 | 0.9191 | 116 | 0.9074 | 0.9304 | 0.9187 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9310 | 0.9497 | 0.9403 | 0.9882 |
| 0.047 | 42.0 | 4032 | 0.0375 | 0.9068 | 0.9224 | 0.9145 | 116 | 0.9161 | 0.8987 | 0.9073 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9347 | 0.9347 | 0.9347 | 0.9874 |
| 0.0481 | 43.0 | 4128 | 0.0380 | 0.8983 | 0.9138 | 0.9060 | 116 | 0.9051 | 0.9051 | 0.9051 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9277 | 0.9347 | 0.9312 | 0.9860 |
| 0.0468 | 44.0 | 4224 | 0.0391 | 0.9231 | 0.9310 | 0.9270 | 116 | 0.9062 | 0.9177 | 0.9119 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9353 | 0.9447 | 0.94 | 0.9876 |
| 0.0473 | 45.0 | 4320 | 0.0366 | 0.8992 | 0.9224 | 0.9106 | 116 | 0.9045 | 0.8987 | 0.9016 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9277 | 0.9347 | 0.9312 | 0.9868 |
| 0.0441 | 46.0 | 4416 | 0.0372 | 0.9 | 0.9310 | 0.9153 | 116 | 0.9006 | 0.9177 | 0.9091 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9261 | 0.9447 | 0.9353 | 0.9887 |
| 0.0441 | 47.0 | 4512 | 0.0375 | 0.9224 | 0.9224 | 0.9224 | 116 | 0.9068 | 0.9241 | 0.9154 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9353 | 0.9447 | 0.94 | 0.9887 |
| 0.0416 | 48.0 | 4608 | 0.0359 | 0.9237 | 0.9397 | 0.9316 | 116 | 0.9363 | 0.9304 | 0.9333 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9475 | 0.9523 | 0.9499 | 0.9898 |
| 0.0446 | 49.0 | 4704 | 0.0355 | 0.9153 | 0.9310 | 0.9231 | 116 | 0.8931 | 0.8987 | 0.8959 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9279 | 0.9372 | 0.9325 | 0.9876 |
| 0.0425 | 50.0 | 4800 | 0.0366 | 0.9160 | 0.9397 | 0.9277 | 116 | 0.9 | 0.9114 | 0.9057 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9307 | 0.9447 | 0.9377 | 0.9887 |
| 0.0422 | 51.0 | 4896 | 0.0364 | 0.9153 | 0.9310 | 0.9231 | 116 | 0.9167 | 0.9051 | 0.9108 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9373 | 0.9397 | 0.9385 | 0.9871 |
| 0.0409 | 52.0 | 4992 | 0.0357 | 0.9145 | 0.9224 | 0.9185 | 116 | 0.9074 | 0.9304 | 0.9187 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9332 | 0.9472 | 0.9401 | 0.9896 |
| 0.0414 | 53.0 | 5088 | 0.0359 | 0.9231 | 0.9310 | 0.9270 | 116 | 0.9136 | 0.9367 | 0.9250 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9381 | 0.9523 | 0.9451 | 0.9901 |
| 0.0403 | 54.0 | 5184 | 0.0353 | 0.9231 | 0.9310 | 0.9270 | 116 | 0.8963 | 0.9304 | 0.9130 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9310 | 0.9497 | 0.9403 | 0.9896 |
| 0.0393 | 55.0 | 5280 | 0.0352 | 0.9145 | 0.9224 | 0.9185 | 116 | 0.9136 | 0.9367 | 0.9250 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9356 | 0.9497 | 0.9426 | 0.9898 |
| 0.0405 | 56.0 | 5376 | 0.0359 | 0.9237 | 0.9397 | 0.9316 | 116 | 0.9430 | 0.9430 | 0.9430 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9501 | 0.9573 | 0.9537 | 0.9901 |
| 0.0404 | 57.0 | 5472 | 0.0370 | 0.9160 | 0.9397 | 0.9277 | 116 | 0.9371 | 0.9430 | 0.9401 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9454 | 0.9573 | 0.9513 | 0.9896 |
| 0.0398 | 58.0 | 5568 | 0.0355 | 0.9316 | 0.9397 | 0.9356 | 116 | 0.9308 | 0.9367 | 0.9338 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9476 | 0.9548 | 0.9512 | 0.9904 |
| 0.0382 | 59.0 | 5664 | 0.0355 | 0.9397 | 0.9397 | 0.9397 | 116 | 0.9551 | 0.9430 | 0.9490 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9597 | 0.9573 | 0.9585 | 0.9904 |
| 0.0396 | 60.0 | 5760 | 0.0344 | 0.9160 | 0.9397 | 0.9277 | 116 | 0.9125 | 0.9241 | 0.9182 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9356 | 0.9497 | 0.9426 | 0.9893 |
| 0.0362 | 61.0 | 5856 | 0.0356 | 0.9231 | 0.9310 | 0.9270 | 116 | 0.9226 | 0.9051 | 0.9137 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9421 | 0.9397 | 0.9409 | 0.9879 |
| 0.037 | 62.0 | 5952 | 0.0360 | 0.9237 | 0.9397 | 0.9316 | 116 | 0.9167 | 0.9051 | 0.9108 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9398 | 0.9422 | 0.9410 | 0.9882 |
| 0.0386 | 63.0 | 6048 | 0.0364 | 0.9310 | 0.9310 | 0.9310 | 116 | 0.9367 | 0.9367 | 0.9367 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9499 | 0.9523 | 0.9511 | 0.9896 |
| 0.0365 | 64.0 | 6144 | 0.0360 | 0.9153 | 0.9310 | 0.9231 | 116 | 0.9412 | 0.9114 | 0.9260 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9470 | 0.9422 | 0.9446 | 0.9887 |
| 0.0347 | 65.0 | 6240 | 0.0354 | 0.9237 | 0.9397 | 0.9316 | 116 | 0.9416 | 0.9177 | 0.9295 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9496 | 0.9472 | 0.9484 | 0.9887 |
| 0.0393 | 66.0 | 6336 | 0.0366 | 0.9397 | 0.9397 | 0.9397 | 116 | 0.9355 | 0.9177 | 0.9265 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9520 | 0.9472 | 0.9496 | 0.9887 |
| 0.0359 | 67.0 | 6432 | 0.0348 | 0.9316 | 0.9397 | 0.9356 | 116 | 0.9241 | 0.9241 | 0.9241 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.945 | 0.9497 | 0.9474 | 0.9893 |
| 0.0331 | 68.0 | 6528 | 0.0347 | 0.9316 | 0.9397 | 0.9356 | 116 | 0.9177 | 0.9177 | 0.9177 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9425 | 0.9472 | 0.9449 | 0.9890 |
| 0.0344 | 69.0 | 6624 | 0.0341 | 0.9391 | 0.9310 | 0.9351 | 116 | 0.9363 | 0.9304 | 0.9333 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9521 | 0.9497 | 0.9509 | 0.9898 |
| 0.0349 | 70.0 | 6720 | 0.0345 | 0.9397 | 0.9397 | 0.9397 | 116 | 0.9427 | 0.9367 | 0.9397 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9548 | 0.9548 | 0.9548 | 0.9901 |
| 0.0349 | 71.0 | 6816 | 0.0354 | 0.9310 | 0.9310 | 0.9310 | 116 | 0.9299 | 0.9241 | 0.9270 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9472 | 0.9472 | 0.9472 | 0.9885 |
| 0.0342 | 72.0 | 6912 | 0.0343 | 0.9237 | 0.9397 | 0.9316 | 116 | 0.9299 | 0.9241 | 0.9270 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.945 | 0.9497 | 0.9474 | 0.9887 |
| 0.0333 | 73.0 | 7008 | 0.0354 | 0.9391 | 0.9310 | 0.9351 | 116 | 0.9241 | 0.9241 | 0.9241 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9472 | 0.9472 | 0.9472 | 0.9890 |
| 0.0332 | 74.0 | 7104 | 0.0346 | 0.9231 | 0.9310 | 0.9270 | 116 | 0.9241 | 0.9241 | 0.9241 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9425 | 0.9472 | 0.9449 | 0.9893 |
| 0.0346 | 75.0 | 7200 | 0.0342 | 0.9310 | 0.9310 | 0.9310 | 116 | 0.9245 | 0.9304 | 0.9274 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.945 | 0.9497 | 0.9474 | 0.9896 |
| 0.0334 | 76.0 | 7296 | 0.0346 | 0.9224 | 0.9224 | 0.9224 | 116 | 0.925 | 0.9367 | 0.9308 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9426 | 0.9497 | 0.9462 | 0.9904 |
| 0.034 | 77.0 | 7392 | 0.0350 | 0.9397 | 0.9397 | 0.9397 | 116 | 0.9299 | 0.9241 | 0.9270 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9497 | 0.9497 | 0.9497 | 0.9896 |
| 0.0341 | 78.0 | 7488 | 0.0340 | 0.9316 | 0.9397 | 0.9356 | 116 | 0.9363 | 0.9304 | 0.9333 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9499 | 0.9523 | 0.9511 | 0.9904 |
| 0.033 | 79.0 | 7584 | 0.0348 | 0.9304 | 0.9224 | 0.9264 | 116 | 0.925 | 0.9367 | 0.9308 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.945 | 0.9497 | 0.9474 | 0.9896 |
| 0.0308 | 80.0 | 7680 | 0.0337 | 0.9138 | 0.9138 | 0.9138 | 116 | 0.9193 | 0.9367 | 0.9279 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9378 | 0.9472 | 0.9425 | 0.9898 |
| 0.031 | 81.0 | 7776 | 0.0341 | 0.9224 | 0.9224 | 0.9224 | 116 | 0.9193 | 0.9367 | 0.9279 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9403 | 0.9497 | 0.9450 | 0.9901 |
| 0.0315 | 82.0 | 7872 | 0.0340 | 0.9237 | 0.9397 | 0.9316 | 116 | 0.9363 | 0.9304 | 0.9333 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9475 | 0.9523 | 0.9499 | 0.9904 |
| 0.0321 | 83.0 | 7968 | 0.0343 | 0.9391 | 0.9310 | 0.9351 | 116 | 0.9367 | 0.9367 | 0.9367 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9523 | 0.9523 | 0.9523 | 0.9901 |
| 0.0317 | 84.0 | 8064 | 0.0340 | 0.9391 | 0.9310 | 0.9351 | 116 | 0.9367 | 0.9367 | 0.9367 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9523 | 0.9523 | 0.9523 | 0.9901 |
| 0.0324 | 85.0 | 8160 | 0.0340 | 0.9145 | 0.9224 | 0.9185 | 116 | 0.9187 | 0.9304 | 0.9245 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9378 | 0.9472 | 0.9425 | 0.9893 |
| 0.0317 | 86.0 | 8256 | 0.0339 | 0.9316 | 0.9397 | 0.9356 | 116 | 0.9423 | 0.9304 | 0.9363 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9523 | 0.9523 | 0.9523 | 0.9901 |
| 0.0308 | 87.0 | 8352 | 0.0347 | 0.9316 | 0.9397 | 0.9356 | 116 | 0.9423 | 0.9304 | 0.9363 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9523 | 0.9523 | 0.9523 | 0.9898 |
| 0.0311 | 88.0 | 8448 | 0.0344 | 0.9391 | 0.9310 | 0.9351 | 116 | 0.9367 | 0.9367 | 0.9367 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9523 | 0.9523 | 0.9523 | 0.9898 |
| 0.0295 | 89.0 | 8544 | 0.0346 | 0.9391 | 0.9310 | 0.9351 | 116 | 0.9427 | 0.9367 | 0.9397 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9547 | 0.9523 | 0.9535 | 0.9896 |
| 0.0304 | 90.0 | 8640 | 0.0343 | 0.9391 | 0.9310 | 0.9351 | 116 | 0.9427 | 0.9367 | 0.9397 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9547 | 0.9523 | 0.9535 | 0.9896 |
| 0.0315 | 91.0 | 8736 | 0.0343 | 0.9391 | 0.9310 | 0.9351 | 116 | 0.9427 | 0.9367 | 0.9397 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9547 | 0.9523 | 0.9535 | 0.9896 |
| 0.0314 | 92.0 | 8832 | 0.0342 | 0.9391 | 0.9310 | 0.9351 | 116 | 0.9427 | 0.9367 | 0.9397 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9547 | 0.9523 | 0.9535 | 0.9896 |
| 0.0322 | 93.0 | 8928 | 0.0340 | 0.9391 | 0.9310 | 0.9351 | 116 | 0.9427 | 0.9367 | 0.9397 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9547 | 0.9523 | 0.9535 | 0.9898 |
| 0.0303 | 94.0 | 9024 | 0.0343 | 0.9391 | 0.9310 | 0.9351 | 116 | 0.9367 | 0.9367 | 0.9367 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9523 | 0.9523 | 0.9523 | 0.9898 |
| 0.0316 | 95.0 | 9120 | 0.0343 | 0.9391 | 0.9310 | 0.9351 | 116 | 0.9367 | 0.9367 | 0.9367 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9523 | 0.9523 | 0.9523 | 0.9898 |
| 0.0317 | 96.0 | 9216 | 0.0342 | 0.9391 | 0.9310 | 0.9351 | 116 | 0.9427 | 0.9367 | 0.9397 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9547 | 0.9523 | 0.9535 | 0.9896 |
| 0.0321 | 97.0 | 9312 | 0.0341 | 0.9316 | 0.9397 | 0.9356 | 116 | 0.9484 | 0.9304 | 0.9393 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9547 | 0.9523 | 0.9535 | 0.9898 |
| 0.0295 | 98.0 | 9408 | 0.0342 | 0.9316 | 0.9397 | 0.9356 | 116 | 0.9484 | 0.9304 | 0.9393 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9547 | 0.9523 | 0.9535 | 0.9898 |
| 0.031 | 99.0 | 9504 | 0.0341 | 0.9316 | 0.9397 | 0.9356 | 116 | 0.9484 | 0.9304 | 0.9393 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9547 | 0.9523 | 0.9535 | 0.9898 |
| 0.0299 | 100.0 | 9600 | 0.0342 | 0.9316 | 0.9397 | 0.9356 | 116 | 0.9484 | 0.9304 | 0.9393 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9547 | 0.9523 | 0.9535 | 0.9896 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
vaiv/GeM2-Llamion-14B-Chat
|
vaiv
| 2024-06-04T01:49:33Z | 2,245 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-13T08:43:21Z |
---
license: apache-2.0
---
# **GeM2-Llamion-14B**
We have released **Llamion** as **GeM 2.0**, the second series of generative models developed by VAIV Company to address our principal business needs.
**Llamion** (Llamafied Orion) is derived from transforming the [Orion model](https://huggingface.co/OrionStarAI/Orion-14B-Chat)
into [the standard LLaMA architecture](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py)
through parameter mapping and offline knowledge transfer.
Further technical specifications and study results will be detailed in our upcoming paper, available on this page.
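Because Llamion follows the standard LLaMA layout, it should load with the usual `transformers` causal-LM classes. Below is a minimal, untested usage sketch; the chat message and generation settings are illustrative, and it assumes the repository ships a chat template.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vaiv/GeM2-Llamion-14B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a prompt with the tokenizer's chat template (assumed present) and generate a reply
chat = [{"role": "user", "content": "Introduce yourself briefly."}]
inputs = tokenizer.apply_chat_template(
    chat, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```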
<!-- Note that this model has NOT been contaminated to artificially inflate its scores for the Open LLM Leaderboards,
unlike some recent models which have been intentionally tainted. -->

### Contributors
- VAIV Company AI Lab ([vaiv.kr](https://www.vaiv.kr/))
|
hdve/Qwen-Qwen1.5-0.5B-1717465528
|
hdve
| 2024-06-04T01:46:32Z | 139 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-04T01:46:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ArashAhmadian/rloo_tldr_6.9b_defaultclip_512bs_05kl
|
ArashAhmadian
| 2024-06-04T01:37:51Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-04T01:34:26Z |
---
tags:
- generated_from_trainer
model-index:
- name: rloo_tldr_6.9b_defaultclip_512bs_05kl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rloo_tldr_6.9b_defaultclip_512bs_05kl
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- num_epochs: 3.0
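For reference, these settings map onto Hugging Face `TrainingArguments` roughly as in the sketch below. This is illustrative only: the actual RLOO training script, dataset, and RL-specific options (such as the KL coefficient hinted at by `05kl` in the model name) are not documented on this card.
```python
from transformers import TrainingArguments

# Illustrative mapping of the listed hyperparameters onto TrainingArguments;
# the RL-specific settings of the actual run are not shown on this card.
args = TrainingArguments(
    output_dir="rloo_tldr_6.9b_defaultclip_512bs_05kl",
    learning_rate=1e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,  # 8 GPUs x 8 per device x 8 steps = 512 total
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=50,
    num_train_epochs=3.0,
    seed=42,
)
```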
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Trofish/Korean_syllable_roberta_512
|
Trofish
| 2024-06-04T01:34:41Z | 126 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"fill-mask",
"ko",
"dataset:klue/klue",
"arxiv:2105.09680",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-05-13T11:23:10Z |
---
license: apache-2.0
datasets:
- klue/klue
language:
- ko
metrics:
- f1
- accuracy
- pearsonr
---
# RoBERTa-base Korean
## Model Description
This RoBERTa model was pretrained at the **syllable** level on a variety of Korean text datasets.
It uses a custom-built Korean syllable-level vocab.
## Architecture
- **Model type**: RoBERTa
- **Architecture**: RobertaForMaskedLM
- **Model size**: 512 hidden size, 8 hidden layers, 8 attention heads
- **max_position_embeddings**: 514
- **intermediate_size**: 2,048
- **vocab_size**: 1,428
## Training Data
The following datasets were used:
- **Modu Corpus (모두의말뭉치)**: chat, message boards, everyday conversation, news, broadcast scripts, books, etc.
- **AIHUB**: social media, YouTube comments, book sentences
- **Other**: Namuwiki, Korean Wikipedia
The combined data totals **about 11GB**. **(4B tokens)**
## Training Details
- **BATCH_SIZE**: 196 (per GPU)
- **ACCUMULATE**: 20
- **Total_BATCH_SIZE**: 8232
- **MAX_STEPS**: 12,500
- **TRAIN_STEPS * BATCH_SIZE**: **100M**
- **WARMUP_STEPS**: 2,400
- **Optimizer**: AdamW, LR 1e-3, betas (0.9, 0.98), eps 1e-6
- **LR decay**: linear
- **Hardware**: 2x A6000 Ada GPUs


## Evaluation
- **Performance was evaluated on the KLUE benchmark test sets.**
- Although it scores lower than klue-roberta-base due to its much smaller size, the hidden-size-512 model performs well for its size.


## Usage
### Because the tokenizer works at the syllable level rather than with WordPiece, you must use SyllableTokenizer instead of AutoTokenizer.
### (You need to take the syllabletokenizer.py provided in the repo and use it.)
```python
from transformers import AutoModelForMaskedLM
from syllabletokenizer import SyllableTokenizer

# Load the model and tokenizer
model = AutoModelForMaskedLM.from_pretrained("Trofish/korean_syllable_roberta")
tokenizer_kwargs = {}  # any extra tokenizer options go here
tokenizer = SyllableTokenizer(vocab_file='vocab.json', **tokenizer_kwargs)

# Tokenize the text and run a prediction
inputs = tokenizer("여기에 한국어 텍스트 입력", return_tensors="pt")  # placeholder: "enter Korean text here"
outputs = model(**inputs)
```
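To inspect masked-token predictions, you can take the argmax over the vocabulary at the `[MASK]` position. A minimal sketch, assuming the custom tokenizer follows the standard `PreTrainedTokenizer` interface and defines `mask_token_id`:
```python
import torch

text = "한국어 [MASK] 모델"  # illustrative input with one masked syllable
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Find the [MASK] position and decode the highest-scoring syllable
# (assumes mask_token_id is exposed by SyllableTokenizer)
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
pred_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(pred_id))
```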
## Citation
**klue**
```
@misc{park2021klue,
title={KLUE: Korean Language Understanding Evaluation},
author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jungwoo Ha and Kyunghyun Cho},
year={2021},
eprint={2105.09680},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
apwic/nerui-lora-r8-0
|
apwic
| 2024-06-04T01:26:47Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"region:us"
] | null | 2024-05-28T12:12:41Z |
---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: nerui-lora-r8-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nerui-lora-r8-0
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0463
- Location Precision: 0.8462
- Location Recall: 0.9362
- Location F1: 0.8889
- Location Number: 94
- Organization Precision: 0.8667
- Organization Recall: 0.8563
- Organization F1: 0.8614
- Organization Number: 167
- Person Precision: 1.0
- Person Recall: 0.9854
- Person F1: 0.9926
- Person Number: 137
- Overall Precision: 0.9059
- Overall Recall: 0.9196
- Overall F1: 0.9127
- Overall Accuracy: 0.9848
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
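The `-r8` in the model name indicates a LoRA rank of 8. A hedged sketch of how such an adapter is typically configured with the `peft` library follows; the target modules, `lora_alpha`, and label count are assumptions, since the card does not state them.
```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForTokenClassification

base = AutoModelForTokenClassification.from_pretrained(
    "indolem/indobert-base-uncased",
    num_labels=7,  # assumed: B-/I- tags for LOC, ORG, PER plus O
)
lora_config = LoraConfig(
    task_type=TaskType.TOKEN_CLS,
    r=8,                                # rank suggested by the "-r8" in the model name
    lora_alpha=16,                      # assumed value, not stated on the card
    target_modules=["query", "value"],  # assumed attention projections
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```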
### Training results
| Training Loss | Epoch | Step | Validation Loss | Location Precision | Location Recall | Location F1 | Location Number | Organization Precision | Organization Recall | Organization F1 | Organization Number | Person Precision | Person Recall | Person F1 | Person Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------------:|:---------------:|:-----------:|:---------------:|:----------------------:|:-------------------:|:---------------:|:-------------------:|:----------------:|:-------------:|:---------:|:-------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.1434 | 1.0 | 96 | 0.7069 | 0.0 | 0.0 | 0.0 | 94 | 0.0 | 0.0 | 0.0 | 167 | 0.0 | 0.0 | 0.0 | 137 | 0.0 | 0.0 | 0.0 | 0.8343 |
| 0.6699 | 2.0 | 192 | 0.5760 | 0.0 | 0.0 | 0.0 | 94 | 1.0 | 0.0060 | 0.0119 | 167 | 0.0 | 0.0 | 0.0 | 137 | 0.25 | 0.0025 | 0.0050 | 0.8348 |
| 0.5654 | 3.0 | 288 | 0.4641 | 0.0 | 0.0 | 0.0 | 94 | 0.4118 | 0.0419 | 0.0761 | 167 | 0.2414 | 0.0511 | 0.0843 | 137 | 0.3043 | 0.0352 | 0.0631 | 0.8420 |
| 0.4481 | 4.0 | 384 | 0.3466 | 0.2353 | 0.0426 | 0.0721 | 94 | 0.3578 | 0.2335 | 0.2826 | 167 | 0.3774 | 0.4380 | 0.4054 | 137 | 0.3614 | 0.2588 | 0.3016 | 0.8793 |
| 0.3376 | 5.0 | 480 | 0.2613 | 0.4058 | 0.2979 | 0.3436 | 94 | 0.5105 | 0.5808 | 0.5434 | 167 | 0.5081 | 0.6861 | 0.5839 | 137 | 0.4932 | 0.5503 | 0.5202 | 0.9202 |
| 0.2611 | 6.0 | 576 | 0.2025 | 0.5909 | 0.5532 | 0.5714 | 94 | 0.5588 | 0.6826 | 0.6146 | 167 | 0.6905 | 0.8467 | 0.7607 | 137 | 0.6130 | 0.7085 | 0.6573 | 0.9406 |
| 0.2071 | 7.0 | 672 | 0.1615 | 0.7021 | 0.7021 | 0.7021 | 94 | 0.6649 | 0.7605 | 0.7095 | 167 | 0.8224 | 0.9124 | 0.8651 | 137 | 0.7277 | 0.7990 | 0.7617 | 0.9555 |
| 0.1767 | 8.0 | 768 | 0.1337 | 0.7872 | 0.7872 | 0.7872 | 94 | 0.7120 | 0.7844 | 0.7464 | 167 | 0.9306 | 0.9781 | 0.9537 | 137 | 0.8033 | 0.8518 | 0.8268 | 0.9644 |
| 0.1601 | 9.0 | 864 | 0.1165 | 0.7980 | 0.8404 | 0.8187 | 94 | 0.7351 | 0.8144 | 0.7727 | 167 | 0.9306 | 0.9781 | 0.9537 | 137 | 0.8154 | 0.8769 | 0.8450 | 0.9671 |
| 0.1406 | 10.0 | 960 | 0.1041 | 0.7573 | 0.8298 | 0.7919 | 94 | 0.7816 | 0.8144 | 0.7977 | 167 | 0.9371 | 0.9781 | 0.9571 | 137 | 0.8286 | 0.8744 | 0.8509 | 0.9693 |
| 0.1283 | 11.0 | 1056 | 0.0951 | 0.8021 | 0.8191 | 0.8105 | 94 | 0.7865 | 0.8383 | 0.8116 | 167 | 0.9371 | 0.9781 | 0.9571 | 137 | 0.8417 | 0.8819 | 0.8613 | 0.9704 |
| 0.1229 | 12.0 | 1152 | 0.0895 | 0.8019 | 0.9043 | 0.8500 | 94 | 0.8 | 0.8383 | 0.8187 | 167 | 0.9375 | 0.9854 | 0.9609 | 137 | 0.8471 | 0.9045 | 0.8748 | 0.9715 |
| 0.1116 | 13.0 | 1248 | 0.0831 | 0.83 | 0.8830 | 0.8557 | 94 | 0.8314 | 0.8563 | 0.8437 | 167 | 0.9371 | 0.9781 | 0.9571 | 137 | 0.8675 | 0.9045 | 0.8856 | 0.9743 |
| 0.1077 | 14.0 | 1344 | 0.0769 | 0.8571 | 0.8936 | 0.875 | 94 | 0.8409 | 0.8862 | 0.8630 | 167 | 0.9504 | 0.9781 | 0.9640 | 137 | 0.8819 | 0.9196 | 0.9004 | 0.9760 |
| 0.1045 | 15.0 | 1440 | 0.0758 | 0.8333 | 0.9043 | 0.8673 | 94 | 0.8430 | 0.8683 | 0.8555 | 167 | 0.9371 | 0.9781 | 0.9571 | 137 | 0.8729 | 0.9146 | 0.8933 | 0.9760 |
| 0.1 | 16.0 | 1536 | 0.0753 | 0.8365 | 0.9255 | 0.8788 | 94 | 0.8111 | 0.8743 | 0.8415 | 167 | 0.9437 | 0.9781 | 0.9606 | 137 | 0.8615 | 0.9221 | 0.8908 | 0.9746 |
| 0.0961 | 17.0 | 1632 | 0.0690 | 0.8586 | 0.9043 | 0.8808 | 94 | 0.8563 | 0.8922 | 0.8739 | 167 | 0.9571 | 0.9781 | 0.9675 | 137 | 0.8910 | 0.9246 | 0.9075 | 0.9785 |
| 0.0981 | 18.0 | 1728 | 0.0676 | 0.86 | 0.9149 | 0.8866 | 94 | 0.8523 | 0.8982 | 0.8746 | 167 | 0.9504 | 0.9781 | 0.9640 | 137 | 0.8873 | 0.9296 | 0.9080 | 0.9782 |
| 0.0916 | 19.0 | 1824 | 0.0653 | 0.8333 | 0.9043 | 0.8673 | 94 | 0.8647 | 0.8802 | 0.8724 | 167 | 0.9640 | 0.9781 | 0.9710 | 137 | 0.8905 | 0.9196 | 0.9048 | 0.9790 |
| 0.0899 | 20.0 | 1920 | 0.0637 | 0.8586 | 0.9043 | 0.8808 | 94 | 0.8563 | 0.8922 | 0.8739 | 167 | 0.9640 | 0.9781 | 0.9710 | 137 | 0.8932 | 0.9246 | 0.9086 | 0.9790 |
| 0.0856 | 21.0 | 2016 | 0.0656 | 0.8113 | 0.9149 | 0.8600 | 94 | 0.8580 | 0.8683 | 0.8631 | 167 | 0.9571 | 0.9781 | 0.9675 | 137 | 0.8795 | 0.9171 | 0.8979 | 0.9773 |
| 0.0844 | 22.0 | 2112 | 0.0621 | 0.8416 | 0.9043 | 0.8718 | 94 | 0.8563 | 0.8922 | 0.8739 | 167 | 0.9571 | 0.9781 | 0.9675 | 137 | 0.8867 | 0.9246 | 0.9053 | 0.9782 |
| 0.0816 | 23.0 | 2208 | 0.0608 | 0.85 | 0.9043 | 0.8763 | 94 | 0.8647 | 0.8802 | 0.8724 | 167 | 0.9571 | 0.9781 | 0.9675 | 137 | 0.8927 | 0.9196 | 0.9059 | 0.9798 |
| 0.0803 | 24.0 | 2304 | 0.0591 | 0.8586 | 0.9043 | 0.8808 | 94 | 0.8671 | 0.8982 | 0.8824 | 167 | 0.9571 | 0.9781 | 0.9675 | 137 | 0.8956 | 0.9271 | 0.9111 | 0.9796 |
| 0.0793 | 25.0 | 2400 | 0.0577 | 0.85 | 0.9043 | 0.8763 | 94 | 0.8824 | 0.8982 | 0.8902 | 167 | 0.9710 | 0.9781 | 0.9745 | 137 | 0.9044 | 0.9271 | 0.9156 | 0.9818 |
| 0.0744 | 26.0 | 2496 | 0.0576 | 0.8529 | 0.9255 | 0.8878 | 94 | 0.8706 | 0.8862 | 0.8783 | 167 | 0.9710 | 0.9781 | 0.9745 | 137 | 0.9 | 0.9271 | 0.9134 | 0.9818 |
| 0.0761 | 27.0 | 2592 | 0.0571 | 0.8416 | 0.9043 | 0.8718 | 94 | 0.8757 | 0.8862 | 0.8810 | 167 | 0.9640 | 0.9781 | 0.9710 | 137 | 0.8973 | 0.9221 | 0.9095 | 0.9807 |
| 0.0724 | 28.0 | 2688 | 0.0559 | 0.8586 | 0.9043 | 0.8808 | 94 | 0.8655 | 0.8862 | 0.8757 | 167 | 0.9710 | 0.9781 | 0.9745 | 137 | 0.8995 | 0.9221 | 0.9107 | 0.9809 |
| 0.071 | 29.0 | 2784 | 0.0542 | 0.8687 | 0.9149 | 0.8912 | 94 | 0.8655 | 0.8862 | 0.8757 | 167 | 0.9783 | 0.9854 | 0.9818 | 137 | 0.9044 | 0.9271 | 0.9156 | 0.9818 |
| 0.0705 | 30.0 | 2880 | 0.0549 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8690 | 0.8743 | 0.8716 | 167 | 0.9854 | 0.9854 | 0.9854 | 137 | 0.9022 | 0.9271 | 0.9145 | 0.9818 |
| 0.0702 | 31.0 | 2976 | 0.0517 | 0.8687 | 0.9149 | 0.8912 | 94 | 0.8817 | 0.8922 | 0.8869 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9181 | 0.9296 | 0.9238 | 0.9834 |
| 0.065 | 32.0 | 3072 | 0.0532 | 0.8396 | 0.9468 | 0.89 | 94 | 0.8951 | 0.8683 | 0.8815 | 167 | 0.9926 | 0.9854 | 0.9890 | 137 | 0.9134 | 0.9271 | 0.9202 | 0.9826 |
| 0.0639 | 33.0 | 3168 | 0.0533 | 0.8286 | 0.9255 | 0.8744 | 94 | 0.8780 | 0.8623 | 0.8701 | 167 | 0.9926 | 0.9854 | 0.9890 | 137 | 0.9037 | 0.9196 | 0.9116 | 0.9815 |
| 0.0642 | 34.0 | 3264 | 0.0520 | 0.8529 | 0.9255 | 0.8878 | 94 | 0.875 | 0.8802 | 0.8776 | 167 | 0.9926 | 0.9854 | 0.9890 | 137 | 0.9089 | 0.9271 | 0.9179 | 0.9820 |
| 0.0652 | 35.0 | 3360 | 0.0518 | 0.8515 | 0.9149 | 0.8821 | 94 | 0.8690 | 0.8743 | 0.8716 | 167 | 0.9926 | 0.9854 | 0.9890 | 137 | 0.9062 | 0.9221 | 0.9141 | 0.9815 |
| 0.0627 | 36.0 | 3456 | 0.0533 | 0.87 | 0.9255 | 0.8969 | 94 | 0.8655 | 0.8862 | 0.8757 | 167 | 0.9854 | 0.9854 | 0.9854 | 137 | 0.9069 | 0.9296 | 0.9181 | 0.9818 |
| 0.0606 | 37.0 | 3552 | 0.0503 | 0.8878 | 0.9255 | 0.9062 | 94 | 0.8698 | 0.8802 | 0.8750 | 167 | 0.9926 | 0.9854 | 0.9890 | 137 | 0.9156 | 0.9271 | 0.9213 | 0.9826 |
| 0.0611 | 38.0 | 3648 | 0.0497 | 0.87 | 0.9255 | 0.8969 | 94 | 0.8848 | 0.8743 | 0.8795 | 167 | 0.9854 | 0.9854 | 0.9854 | 137 | 0.9154 | 0.9246 | 0.92 | 0.9829 |
| 0.0645 | 39.0 | 3744 | 0.0511 | 0.8431 | 0.9149 | 0.8776 | 94 | 0.8780 | 0.8623 | 0.8701 | 167 | 0.9926 | 0.9854 | 0.9890 | 137 | 0.9080 | 0.9171 | 0.9125 | 0.9823 |
| 0.061 | 40.0 | 3840 | 0.0487 | 0.8687 | 0.9149 | 0.8912 | 94 | 0.8765 | 0.8922 | 0.8843 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9158 | 0.9296 | 0.9227 | 0.9840 |
| 0.0591 | 41.0 | 3936 | 0.0491 | 0.8515 | 0.9149 | 0.8821 | 94 | 0.8802 | 0.8802 | 0.8802 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9132 | 0.9246 | 0.9189 | 0.9834 |
| 0.058 | 42.0 | 4032 | 0.0480 | 0.8687 | 0.9149 | 0.8912 | 94 | 0.8757 | 0.8862 | 0.8810 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9156 | 0.9271 | 0.9213 | 0.9840 |
| 0.0587 | 43.0 | 4128 | 0.0494 | 0.8350 | 0.9149 | 0.8731 | 94 | 0.8720 | 0.8563 | 0.8640 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9055 | 0.9146 | 0.91 | 0.9820 |
| 0.0562 | 44.0 | 4224 | 0.0482 | 0.8515 | 0.9149 | 0.8821 | 94 | 0.8788 | 0.8683 | 0.8735 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9127 | 0.9196 | 0.9161 | 0.9829 |
| 0.0565 | 45.0 | 4320 | 0.0471 | 0.8529 | 0.9255 | 0.8878 | 94 | 0.8795 | 0.8743 | 0.8769 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9132 | 0.9246 | 0.9189 | 0.9837 |
| 0.0541 | 46.0 | 4416 | 0.0482 | 0.8365 | 0.9255 | 0.8788 | 94 | 0.8795 | 0.8743 | 0.8769 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9086 | 0.9246 | 0.9166 | 0.9831 |
| 0.0547 | 47.0 | 4512 | 0.0487 | 0.8350 | 0.9149 | 0.8731 | 94 | 0.8720 | 0.8563 | 0.8640 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9055 | 0.9146 | 0.91 | 0.9823 |
| 0.0537 | 48.0 | 4608 | 0.0480 | 0.8269 | 0.9149 | 0.8687 | 94 | 0.8659 | 0.8503 | 0.8580 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9007 | 0.9121 | 0.9064 | 0.9829 |
| 0.0525 | 49.0 | 4704 | 0.0477 | 0.8416 | 0.9043 | 0.8718 | 94 | 0.8882 | 0.8563 | 0.8720 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9144 | 0.9121 | 0.9132 | 0.9826 |
| 0.0513 | 50.0 | 4800 | 0.0472 | 0.86 | 0.9149 | 0.8866 | 94 | 0.8596 | 0.8802 | 0.8698 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9064 | 0.9246 | 0.9154 | 0.9845 |
| 0.0507 | 51.0 | 4896 | 0.0481 | 0.8286 | 0.9255 | 0.8744 | 94 | 0.875 | 0.8383 | 0.8563 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.905 | 0.9095 | 0.9073 | 0.9820 |
| 0.0499 | 52.0 | 4992 | 0.0472 | 0.87 | 0.9255 | 0.8969 | 94 | 0.8757 | 0.8862 | 0.8810 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9158 | 0.9296 | 0.9227 | 0.9837 |
| 0.0519 | 53.0 | 5088 | 0.0471 | 0.8614 | 0.9255 | 0.8923 | 94 | 0.8743 | 0.8743 | 0.8743 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9132 | 0.9246 | 0.9189 | 0.9840 |
| 0.0523 | 54.0 | 5184 | 0.0483 | 0.8286 | 0.9255 | 0.8744 | 94 | 0.8545 | 0.8443 | 0.8494 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.8963 | 0.9121 | 0.9041 | 0.9826 |
| 0.0507 | 55.0 | 5280 | 0.0465 | 0.8447 | 0.9255 | 0.8832 | 94 | 0.8614 | 0.8563 | 0.8589 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9035 | 0.9171 | 0.9102 | 0.9831 |
| 0.0506 | 56.0 | 5376 | 0.0465 | 0.8447 | 0.9255 | 0.8832 | 94 | 0.8614 | 0.8563 | 0.8589 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9035 | 0.9171 | 0.9102 | 0.9831 |
| 0.0504 | 57.0 | 5472 | 0.0475 | 0.8208 | 0.9255 | 0.8700 | 94 | 0.8452 | 0.8503 | 0.8478 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.8900 | 0.9146 | 0.9021 | 0.9831 |
| 0.0484 | 58.0 | 5568 | 0.0462 | 0.8302 | 0.9362 | 0.88 | 94 | 0.8659 | 0.8503 | 0.8580 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9012 | 0.9171 | 0.9091 | 0.9837 |
| 0.0487 | 59.0 | 5664 | 0.0457 | 0.8447 | 0.9255 | 0.8832 | 94 | 0.8727 | 0.8623 | 0.8675 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9082 | 0.9196 | 0.9139 | 0.9837 |
| 0.0463 | 60.0 | 5760 | 0.0475 | 0.8365 | 0.9255 | 0.8788 | 94 | 0.8623 | 0.8623 | 0.8623 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9015 | 0.9196 | 0.9104 | 0.9848 |
| 0.0462 | 61.0 | 5856 | 0.0469 | 0.8529 | 0.9255 | 0.8878 | 94 | 0.8655 | 0.8862 | 0.8757 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9069 | 0.9296 | 0.9181 | 0.9848 |
| 0.0497 | 62.0 | 5952 | 0.0469 | 0.8544 | 0.9362 | 0.8934 | 94 | 0.8521 | 0.8623 | 0.8571 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9017 | 0.9221 | 0.9118 | 0.9845 |
| 0.0465 | 63.0 | 6048 | 0.0469 | 0.8515 | 0.9149 | 0.8821 | 94 | 0.8683 | 0.8683 | 0.8683 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9082 | 0.9196 | 0.9139 | 0.9848 |
| 0.0468 | 64.0 | 6144 | 0.0470 | 0.86 | 0.9149 | 0.8866 | 94 | 0.8841 | 0.8683 | 0.8761 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9173 | 0.9196 | 0.9184 | 0.9843 |
| 0.0455 | 65.0 | 6240 | 0.0467 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8675 | 0.8623 | 0.8649 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9062 | 0.9221 | 0.9141 | 0.9845 |
| 0.0456 | 66.0 | 6336 | 0.0463 | 0.8431 | 0.9149 | 0.8776 | 94 | 0.8712 | 0.8503 | 0.8606 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9075 | 0.9121 | 0.9098 | 0.9834 |
| 0.0436 | 67.0 | 6432 | 0.0457 | 0.8365 | 0.9255 | 0.8788 | 94 | 0.8773 | 0.8563 | 0.8667 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9080 | 0.9171 | 0.9125 | 0.9837 |
| 0.0442 | 68.0 | 6528 | 0.0464 | 0.8365 | 0.9255 | 0.8788 | 94 | 0.8720 | 0.8563 | 0.8640 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9057 | 0.9171 | 0.9114 | 0.9837 |
| 0.0463 | 69.0 | 6624 | 0.0463 | 0.8447 | 0.9255 | 0.8832 | 94 | 0.8720 | 0.8563 | 0.8640 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9080 | 0.9171 | 0.9125 | 0.9840 |
| 0.0445 | 70.0 | 6720 | 0.0457 | 0.8529 | 0.9255 | 0.8878 | 94 | 0.8720 | 0.8563 | 0.8640 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9102 | 0.9171 | 0.9136 | 0.9840 |
| 0.0456 | 71.0 | 6816 | 0.0474 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8788 | 0.8683 | 0.8735 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9109 | 0.9246 | 0.9177 | 0.9851 |
| 0.0473 | 72.0 | 6912 | 0.0479 | 0.8381 | 0.9362 | 0.8844 | 94 | 0.8659 | 0.8503 | 0.8580 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9035 | 0.9171 | 0.9102 | 0.9837 |
| 0.0434 | 73.0 | 7008 | 0.0475 | 0.8381 | 0.9362 | 0.8844 | 94 | 0.8712 | 0.8503 | 0.8606 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9057 | 0.9171 | 0.9114 | 0.9840 |
| 0.042 | 74.0 | 7104 | 0.0463 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8765 | 0.8503 | 0.8632 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9102 | 0.9171 | 0.9136 | 0.9837 |
| 0.0438 | 75.0 | 7200 | 0.0463 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8765 | 0.8503 | 0.8632 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9102 | 0.9171 | 0.9136 | 0.9837 |
| 0.0437 | 76.0 | 7296 | 0.0459 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8623 | 0.8623 | 0.8623 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9039 | 0.9221 | 0.9129 | 0.9843 |
| 0.0455 | 77.0 | 7392 | 0.0469 | 0.8381 | 0.9362 | 0.8844 | 94 | 0.8827 | 0.8563 | 0.8693 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9104 | 0.9196 | 0.9150 | 0.9840 |
| 0.0426 | 78.0 | 7488 | 0.0467 | 0.8381 | 0.9362 | 0.8844 | 94 | 0.8727 | 0.8623 | 0.8675 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9062 | 0.9221 | 0.9141 | 0.9848 |
| 0.043 | 79.0 | 7584 | 0.0457 | 0.8381 | 0.9362 | 0.8844 | 94 | 0.8735 | 0.8683 | 0.8709 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9064 | 0.9246 | 0.9154 | 0.9854 |
| 0.0435 | 80.0 | 7680 | 0.0462 | 0.8381 | 0.9362 | 0.8844 | 94 | 0.8727 | 0.8623 | 0.8675 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9062 | 0.9221 | 0.9141 | 0.9851 |
| 0.0411 | 81.0 | 7776 | 0.0461 | 0.8381 | 0.9362 | 0.8844 | 94 | 0.8606 | 0.8503 | 0.8554 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9012 | 0.9171 | 0.9091 | 0.9843 |
| 0.0421 | 82.0 | 7872 | 0.0458 | 0.8544 | 0.9362 | 0.8934 | 94 | 0.8720 | 0.8563 | 0.8640 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9104 | 0.9196 | 0.9150 | 0.9843 |
| 0.0416 | 83.0 | 7968 | 0.0462 | 0.8381 | 0.9362 | 0.8844 | 94 | 0.8773 | 0.8563 | 0.8667 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9082 | 0.9196 | 0.9139 | 0.9843 |
| 0.0412 | 84.0 | 8064 | 0.0461 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8788 | 0.8683 | 0.8735 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9109 | 0.9246 | 0.9177 | 0.9851 |
| 0.0428 | 85.0 | 8160 | 0.0465 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8773 | 0.8563 | 0.8667 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9104 | 0.9196 | 0.9150 | 0.9845 |
| 0.0434 | 86.0 | 8256 | 0.0467 | 0.8381 | 0.9362 | 0.8844 | 94 | 0.8720 | 0.8563 | 0.8640 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9059 | 0.9196 | 0.9127 | 0.9840 |
| 0.0411 | 87.0 | 8352 | 0.0466 | 0.8381 | 0.9362 | 0.8844 | 94 | 0.8720 | 0.8563 | 0.8640 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9059 | 0.9196 | 0.9127 | 0.9840 |
| 0.0436 | 88.0 | 8448 | 0.0467 | 0.8381 | 0.9362 | 0.8844 | 94 | 0.8780 | 0.8623 | 0.8701 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9084 | 0.9221 | 0.9152 | 0.9848 |
| 0.0413 | 89.0 | 8544 | 0.0460 | 0.8544 | 0.9362 | 0.8934 | 94 | 0.8795 | 0.8743 | 0.8769 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9134 | 0.9271 | 0.9202 | 0.9854 |
| 0.0401 | 90.0 | 8640 | 0.0467 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8675 | 0.8623 | 0.8649 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9062 | 0.9221 | 0.9141 | 0.9848 |
| 0.0421 | 91.0 | 8736 | 0.0467 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8780 | 0.8623 | 0.8701 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9107 | 0.9221 | 0.9164 | 0.9845 |
| 0.0407 | 92.0 | 8832 | 0.0462 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8773 | 0.8563 | 0.8667 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9104 | 0.9196 | 0.9150 | 0.9845 |
| 0.0449 | 93.0 | 8928 | 0.0463 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8773 | 0.8563 | 0.8667 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9104 | 0.9196 | 0.9150 | 0.9845 |
| 0.0397 | 94.0 | 9024 | 0.0462 | 0.8381 | 0.9362 | 0.8844 | 94 | 0.8667 | 0.8563 | 0.8614 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9037 | 0.9196 | 0.9116 | 0.9845 |
| 0.0417 | 95.0 | 9120 | 0.0463 | 0.8381 | 0.9362 | 0.8844 | 94 | 0.8667 | 0.8563 | 0.8614 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9037 | 0.9196 | 0.9116 | 0.9845 |
| 0.0402 | 96.0 | 9216 | 0.0465 | 0.8381 | 0.9362 | 0.8844 | 94 | 0.8780 | 0.8623 | 0.8701 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9084 | 0.9221 | 0.9152 | 0.9848 |
| 0.0422 | 97.0 | 9312 | 0.0464 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8720 | 0.8563 | 0.8640 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9082 | 0.9196 | 0.9139 | 0.9851 |
| 0.0417 | 98.0 | 9408 | 0.0463 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8720 | 0.8563 | 0.8640 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9082 | 0.9196 | 0.9139 | 0.9851 |
| 0.0409 | 99.0 | 9504 | 0.0463 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8667 | 0.8563 | 0.8614 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9059 | 0.9196 | 0.9127 | 0.9848 |
| 0.0404 | 100.0 | 9600 | 0.0463 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8667 | 0.8563 | 0.8614 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9059 | 0.9196 | 0.9127 | 0.9848 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
himanshubeniwal/mt5-base-finetuned-kk-to-en-cold-burger
|
himanshubeniwal
| 2024-06-04T01:18:43Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-04T01:13:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lcw99/llama-3-10b-ko-240604-e2f
|
lcw99
| 2024-06-04T01:17:10Z | 2,249 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-04T00:37:02Z |
---
language:
- ko
license: apache-2.0
library_name: transformers
---
# Model Card for Model ID
## Model Details
### Model Description
Instruction tuning of meta-llama/Meta-Llama-3-8B-Instruct with additional Korean layers added.
#### Chat template
tokenizer.apply_chat_template(chat, tokenize=False)
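Expanded into a runnable form, the chat-template call might look like the sketch below; the messages are illustrative.
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lcw99/llama-3-10b-ko-240604-e2f")

# Illustrative conversation; apply_chat_template renders it into the model's prompt format
chat = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "안녕하세요?"},
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False)
print(prompt)
```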
|
baf2b252097d46299a/medical_summarizer_6ec63f0624e84fea9af33517007b93a4
|
baf2b252097d46299a
| 2024-06-04T01:13:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-04T01:13:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
melancholic/neotraditional_tattoo_lora
|
melancholic
| 2024-06-04T01:09:32Z | 3 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-06-03T06:17:47Z |
---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a neotraditional tattoo style
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - melancholic/neotraditional_tattoo_lora
<Gallery />
## Model description
These are melancholic/neotraditional_tattoo_lora LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a neotraditional tattoo style to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](melancholic/neotraditional_tattoo_lora/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
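Until the official snippet is added, here is a minimal, untested sketch of loading these LoRA weights into the SDXL base pipeline with `diffusers`; the prompt is illustrative and includes the trigger phrase from above.
```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model and attach the LoRA adapter weights
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("melancholic/neotraditional_tattoo_lora")

# The trigger phrase from the card should appear in the prompt
image = pipe("a rose, a neotraditional tattoo style").images[0]
image.save("neotraditional_rose.png")
```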
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
apwic/nerui-base-3
|
apwic
| 2024-06-04T01:02:10Z | 24 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-05-28T05:46:39Z |
---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: nerui-base-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nerui-base-3
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1047
- Location Precision: 0.8925
- Location Recall: 0.9651
- Location F1: 0.9274
- Location Number: 86
- Organization Precision: 0.9538
- Organization Recall: 0.9270
- Organization F1: 0.9402
- Organization Number: 178
- Person Precision: 0.9685
- Person Recall: 0.9609
- Person F1: 0.9647
- Person Number: 128
- Overall Precision: 0.9440
- Overall Recall: 0.9464
- Overall F1: 0.9452
- Overall Accuracy: 0.9876
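A hedged inference sketch using the standard token-classification pipeline; the example sentence is illustrative and assumes the checkpoint's label map covers the Location/Organization/Person entities reported above.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="apwic/nerui-base-3",
    aggregation_strategy="simple",  # merge subword pieces into entity spans
)
print(ner("Joko Widodo mengunjungi kantor Pertamina di Jakarta."))  # illustrative
```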
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Location Precision | Location Recall | Location F1 | Location Number | Organization Precision | Organization Recall | Organization F1 | Organization Number | Person Precision | Person Recall | Person F1 | Person Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------------:|:---------------:|:-----------:|:---------------:|:----------------------:|:-------------------:|:---------------:|:-------------------:|:----------------:|:-------------:|:---------:|:-------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.2442 | 1.0 | 96 | 0.0581 | 0.8384 | 0.9651 | 0.8973 | 86 | 0.8535 | 0.9494 | 0.8989 | 178 | 0.9690 | 0.9766 | 0.9728 | 128 | 0.8850 | 0.9617 | 0.9218 | 0.9822 |
| 0.0581 | 2.0 | 192 | 0.0548 | 0.8283 | 0.9535 | 0.8865 | 86 | 0.9464 | 0.8933 | 0.9191 | 178 | 0.9690 | 0.9766 | 0.9728 | 128 | 0.9242 | 0.9337 | 0.9289 | 0.9852 |
| 0.0357 | 3.0 | 288 | 0.0514 | 0.8542 | 0.9535 | 0.9011 | 86 | 0.9310 | 0.9101 | 0.9205 | 178 | 0.9841 | 0.9688 | 0.9764 | 128 | 0.9293 | 0.9388 | 0.9340 | 0.9857 |
| 0.0251 | 4.0 | 384 | 0.0607 | 0.8989 | 0.9302 | 0.9143 | 86 | 0.8942 | 0.9494 | 0.9210 | 178 | 0.9837 | 0.9453 | 0.9641 | 128 | 0.9227 | 0.9439 | 0.9332 | 0.9852 |
| 0.0146 | 5.0 | 480 | 0.0617 | 0.8804 | 0.9419 | 0.9101 | 86 | 0.9231 | 0.9438 | 0.9333 | 178 | 0.976 | 0.9531 | 0.9644 | 128 | 0.9298 | 0.9464 | 0.9381 | 0.9865 |
| 0.0117 | 6.0 | 576 | 0.0706 | 0.8511 | 0.9302 | 0.8889 | 86 | 0.9066 | 0.9270 | 0.9167 | 178 | 0.9758 | 0.9453 | 0.9603 | 128 | 0.915 | 0.9337 | 0.9242 | 0.9857 |
| 0.0083 | 7.0 | 672 | 0.0926 | 0.7788 | 0.9419 | 0.8526 | 86 | 0.9162 | 0.9213 | 0.9188 | 178 | 0.9462 | 0.9609 | 0.9535 | 128 | 0.8910 | 0.9388 | 0.9143 | 0.9819 |
| 0.008 | 8.0 | 768 | 0.0781 | 0.8617 | 0.9419 | 0.9000 | 86 | 0.9535 | 0.9213 | 0.9371 | 178 | 0.984 | 0.9609 | 0.9723 | 128 | 0.9412 | 0.9388 | 0.9400 | 0.9857 |
| 0.0042 | 9.0 | 864 | 0.0659 | 0.8764 | 0.9070 | 0.8914 | 86 | 0.9663 | 0.9663 | 0.9663 | 178 | 0.9764 | 0.9688 | 0.9725 | 128 | 0.9492 | 0.9541 | 0.9517 | 0.9889 |
| 0.0044 | 10.0 | 960 | 0.0712 | 0.8681 | 0.9186 | 0.8927 | 86 | 0.9389 | 0.9494 | 0.9441 | 178 | 0.9457 | 0.9531 | 0.9494 | 128 | 0.925 | 0.9439 | 0.9343 | 0.9873 |
| 0.005 | 11.0 | 1056 | 0.0855 | 0.8384 | 0.9651 | 0.8973 | 86 | 0.9438 | 0.9438 | 0.9438 | 178 | 0.9762 | 0.9609 | 0.9685 | 128 | 0.9280 | 0.9541 | 0.9409 | 0.9870 |
| 0.0036 | 12.0 | 1152 | 0.0859 | 0.8710 | 0.9419 | 0.9050 | 86 | 0.9435 | 0.9382 | 0.9408 | 178 | 0.984 | 0.9609 | 0.9723 | 128 | 0.9392 | 0.9464 | 0.9428 | 0.9873 |
| 0.0042 | 13.0 | 1248 | 0.0761 | 0.8901 | 0.9419 | 0.9153 | 86 | 0.9448 | 0.9607 | 0.9526 | 178 | 0.984 | 0.9609 | 0.9723 | 128 | 0.9446 | 0.9566 | 0.9506 | 0.9889 |
| 0.0036 | 14.0 | 1344 | 0.0843 | 0.8876 | 0.9186 | 0.9029 | 86 | 0.9538 | 0.9270 | 0.9402 | 178 | 0.9841 | 0.9688 | 0.9764 | 128 | 0.9485 | 0.9388 | 0.9436 | 0.9862 |
| 0.0028 | 15.0 | 1440 | 0.0906 | 0.8723 | 0.9535 | 0.9111 | 86 | 0.9429 | 0.9270 | 0.9348 | 178 | 0.984 | 0.9609 | 0.9723 | 128 | 0.9391 | 0.9439 | 0.9415 | 0.9868 |
| 0.0017 | 16.0 | 1536 | 0.0914 | 0.8526 | 0.9419 | 0.8950 | 86 | 0.9645 | 0.9157 | 0.9395 | 178 | 0.9683 | 0.9531 | 0.9606 | 128 | 0.9385 | 0.9337 | 0.9361 | 0.9862 |
| 0.002 | 17.0 | 1632 | 0.0828 | 0.8587 | 0.9186 | 0.8876 | 86 | 0.9545 | 0.9438 | 0.9492 | 178 | 0.9762 | 0.9609 | 0.9685 | 128 | 0.9391 | 0.9439 | 0.9415 | 0.9884 |
| 0.0033 | 18.0 | 1728 | 0.0641 | 0.8646 | 0.9651 | 0.9121 | 86 | 0.9126 | 0.9382 | 0.9252 | 178 | 0.9841 | 0.9688 | 0.9764 | 128 | 0.9235 | 0.9541 | 0.9385 | 0.9887 |
| 0.0024 | 19.0 | 1824 | 0.0982 | 0.8667 | 0.9070 | 0.8864 | 86 | 0.9297 | 0.9663 | 0.9477 | 178 | 0.9683 | 0.9531 | 0.9606 | 128 | 0.9277 | 0.9490 | 0.9382 | 0.9868 |
| 0.0037 | 20.0 | 1920 | 0.0904 | 0.8283 | 0.9535 | 0.8865 | 86 | 0.9659 | 0.9551 | 0.9605 | 178 | 0.984 | 0.9609 | 0.9723 | 128 | 0.9375 | 0.9566 | 0.9470 | 0.9887 |
| 0.0038 | 21.0 | 2016 | 0.0787 | 0.8925 | 0.9651 | 0.9274 | 86 | 0.9385 | 0.9438 | 0.9412 | 178 | 0.9609 | 0.9609 | 0.9609 | 128 | 0.935 | 0.9541 | 0.9444 | 0.9879 |
| 0.0024 | 22.0 | 2112 | 0.0697 | 0.8526 | 0.9419 | 0.8950 | 86 | 0.9286 | 0.9494 | 0.9389 | 178 | 0.9677 | 0.9375 | 0.9524 | 128 | 0.9227 | 0.9439 | 0.9332 | 0.9889 |
| 0.0041 | 23.0 | 2208 | 0.0794 | 0.9011 | 0.9535 | 0.9266 | 86 | 0.9441 | 0.9494 | 0.9468 | 178 | 0.9685 | 0.9609 | 0.9647 | 128 | 0.9421 | 0.9541 | 0.9480 | 0.9876 |
| 0.0033 | 24.0 | 2304 | 0.0830 | 0.9 | 0.9419 | 0.9205 | 86 | 0.9231 | 0.9438 | 0.9333 | 178 | 0.9758 | 0.9453 | 0.9603 | 128 | 0.9343 | 0.9439 | 0.9391 | 0.9881 |
| 0.0034 | 25.0 | 2400 | 0.0804 | 0.8632 | 0.9535 | 0.9061 | 86 | 0.9448 | 0.9607 | 0.9526 | 178 | 0.9841 | 0.9688 | 0.9764 | 128 | 0.9378 | 0.9617 | 0.9496 | 0.9881 |
| 0.0012 | 26.0 | 2496 | 0.0728 | 0.9011 | 0.9535 | 0.9266 | 86 | 0.9341 | 0.9551 | 0.9444 | 178 | 0.9841 | 0.9688 | 0.9764 | 128 | 0.9424 | 0.9592 | 0.9507 | 0.9903 |
| 0.0015 | 27.0 | 2592 | 0.0957 | 0.9101 | 0.9419 | 0.9257 | 86 | 0.9301 | 0.9719 | 0.9505 | 178 | 0.9762 | 0.9609 | 0.9685 | 128 | 0.9401 | 0.9617 | 0.9508 | 0.9881 |
| 0.0029 | 28.0 | 2688 | 0.0766 | 0.8830 | 0.9651 | 0.9222 | 86 | 0.9545 | 0.9438 | 0.9492 | 178 | 0.9841 | 0.9688 | 0.9764 | 128 | 0.9470 | 0.9566 | 0.9518 | 0.9881 |
| 0.0031 | 29.0 | 2784 | 0.0802 | 0.8571 | 0.9767 | 0.9130 | 86 | 0.9649 | 0.9270 | 0.9456 | 178 | 0.9764 | 0.9688 | 0.9725 | 128 | 0.9419 | 0.9515 | 0.9467 | 0.9879 |
| 0.0018 | 30.0 | 2880 | 0.0837 | 0.8710 | 0.9419 | 0.9050 | 86 | 0.9605 | 0.9551 | 0.9577 | 178 | 0.9841 | 0.9688 | 0.9764 | 128 | 0.9470 | 0.9566 | 0.9518 | 0.9892 |
| 0.0017 | 31.0 | 2976 | 0.0792 | 0.9222 | 0.9651 | 0.9432 | 86 | 0.9505 | 0.9719 | 0.9611 | 178 | 0.9683 | 0.9531 | 0.9606 | 128 | 0.9497 | 0.9643 | 0.9570 | 0.9903 |
| 0.0017 | 32.0 | 3072 | 0.0675 | 0.8737 | 0.9651 | 0.9171 | 86 | 0.9661 | 0.9607 | 0.9634 | 178 | 0.976 | 0.9531 | 0.9644 | 128 | 0.9471 | 0.9592 | 0.9531 | 0.9906 |
| 0.0012 | 33.0 | 3168 | 0.0909 | 0.8925 | 0.9651 | 0.9274 | 86 | 0.9709 | 0.9382 | 0.9543 | 178 | 0.984 | 0.9609 | 0.9723 | 128 | 0.9564 | 0.9515 | 0.9540 | 0.9897 |
| 0.002 | 34.0 | 3264 | 0.1077 | 0.9101 | 0.9419 | 0.9257 | 86 | 0.9422 | 0.9157 | 0.9288 | 178 | 0.968 | 0.9453 | 0.9565 | 128 | 0.9432 | 0.9311 | 0.9371 | 0.9846 |
| 0.0023 | 35.0 | 3360 | 0.0912 | 0.8913 | 0.9535 | 0.9213 | 86 | 0.9396 | 0.9607 | 0.95 | 178 | 0.9762 | 0.9609 | 0.9685 | 128 | 0.94 | 0.9592 | 0.9495 | 0.9881 |
| 0.0016 | 36.0 | 3456 | 0.0839 | 0.8925 | 0.9651 | 0.9274 | 86 | 0.9655 | 0.9438 | 0.9545 | 178 | 0.984 | 0.9609 | 0.9723 | 128 | 0.9541 | 0.9541 | 0.9541 | 0.9892 |
| 0.0012 | 37.0 | 3552 | 0.1070 | 0.8817 | 0.9535 | 0.9162 | 86 | 0.9480 | 0.9213 | 0.9345 | 178 | 0.976 | 0.9531 | 0.9644 | 128 | 0.9412 | 0.9388 | 0.9400 | 0.9857 |
| 0.0009 | 38.0 | 3648 | 0.0856 | 0.8947 | 0.9884 | 0.9392 | 86 | 0.9540 | 0.9326 | 0.9432 | 178 | 0.984 | 0.9609 | 0.9723 | 128 | 0.9492 | 0.9541 | 0.9517 | 0.9884 |
| 0.0006 | 39.0 | 3744 | 0.0964 | 0.8936 | 0.9767 | 0.9333 | 86 | 0.9483 | 0.9270 | 0.9375 | 178 | 0.9685 | 0.9609 | 0.9647 | 128 | 0.9418 | 0.9490 | 0.9454 | 0.9862 |
| 0.0011 | 40.0 | 3840 | 0.0992 | 0.9011 | 0.9535 | 0.9266 | 86 | 0.9492 | 0.9438 | 0.9465 | 178 | 0.9762 | 0.9609 | 0.9685 | 128 | 0.9467 | 0.9515 | 0.9491 | 0.9870 |
| 0.0009 | 41.0 | 3936 | 0.1072 | 0.9032 | 0.9767 | 0.9385 | 86 | 0.9489 | 0.9382 | 0.9435 | 178 | 0.976 | 0.9531 | 0.9644 | 128 | 0.9467 | 0.9515 | 0.9491 | 0.9860 |
| 0.0007 | 42.0 | 4032 | 0.1193 | 0.8936 | 0.9767 | 0.9333 | 86 | 0.9595 | 0.9326 | 0.9459 | 178 | 0.9839 | 0.9531 | 0.9683 | 128 | 0.9514 | 0.9490 | 0.9502 | 0.9865 |
| 0.0014 | 43.0 | 4128 | 0.1129 | 0.9032 | 0.9767 | 0.9385 | 86 | 0.9489 | 0.9382 | 0.9435 | 178 | 0.9683 | 0.9531 | 0.9606 | 128 | 0.9443 | 0.9515 | 0.9479 | 0.9868 |
| 0.0007 | 44.0 | 4224 | 0.1289 | 0.9130 | 0.9767 | 0.9438 | 86 | 0.9492 | 0.9438 | 0.9465 | 178 | 0.9609 | 0.9609 | 0.9609 | 128 | 0.9446 | 0.9566 | 0.9506 | 0.9849 |
| 0.0006 | 45.0 | 4320 | 0.1167 | 0.8842 | 0.9767 | 0.9282 | 86 | 0.9392 | 0.9551 | 0.9471 | 178 | 0.9688 | 0.9688 | 0.9688 | 128 | 0.9356 | 0.9643 | 0.9497 | 0.9868 |
| 0.0014 | 46.0 | 4416 | 0.1168 | 0.8646 | 0.9651 | 0.9121 | 86 | 0.9543 | 0.9382 | 0.9462 | 178 | 0.9839 | 0.9531 | 0.9683 | 128 | 0.9418 | 0.9490 | 0.9454 | 0.9873 |
| 0.0022 | 47.0 | 4512 | 0.1090 | 0.8737 | 0.9651 | 0.9171 | 86 | 0.9702 | 0.9157 | 0.9422 | 178 | 0.9841 | 0.9688 | 0.9764 | 128 | 0.9512 | 0.9439 | 0.9475 | 0.9868 |
| 0.0033 | 48.0 | 4608 | 0.0899 | 0.9222 | 0.9651 | 0.9432 | 86 | 0.9333 | 0.9438 | 0.9385 | 178 | 0.9758 | 0.9453 | 0.9603 | 128 | 0.9442 | 0.9490 | 0.9466 | 0.9889 |
| 0.001 | 49.0 | 4704 | 0.1123 | 0.8830 | 0.9651 | 0.9222 | 86 | 0.9704 | 0.9213 | 0.9452 | 178 | 0.9839 | 0.9531 | 0.9683 | 128 | 0.9535 | 0.9413 | 0.9474 | 0.9870 |
| 0.0007 | 50.0 | 4800 | 0.0937 | 0.9011 | 0.9535 | 0.9266 | 86 | 0.9486 | 0.9326 | 0.9405 | 178 | 0.984 | 0.9609 | 0.9723 | 128 | 0.9488 | 0.9464 | 0.9476 | 0.9887 |
| 0.0011 | 51.0 | 4896 | 0.1082 | 0.9032 | 0.9767 | 0.9385 | 86 | 0.9278 | 0.9382 | 0.9330 | 178 | 0.9841 | 0.9688 | 0.9764 | 128 | 0.9398 | 0.9566 | 0.9482 | 0.9865 |
| 0.0015 | 52.0 | 4992 | 0.1112 | 0.9011 | 0.9535 | 0.9266 | 86 | 0.9645 | 0.9157 | 0.9395 | 178 | 0.9762 | 0.9609 | 0.9685 | 128 | 0.9534 | 0.9388 | 0.9460 | 0.9879 |
| 0.0009 | 53.0 | 5088 | 0.1032 | 0.8925 | 0.9651 | 0.9274 | 86 | 0.9341 | 0.9551 | 0.9444 | 178 | 0.984 | 0.9609 | 0.9723 | 128 | 0.94 | 0.9592 | 0.9495 | 0.9881 |
| 0.0033 | 54.0 | 5184 | 0.1181 | 0.8925 | 0.9651 | 0.9274 | 86 | 0.9593 | 0.9270 | 0.9429 | 178 | 0.984 | 0.9609 | 0.9723 | 128 | 0.9513 | 0.9464 | 0.9488 | 0.9870 |
| 0.0008 | 55.0 | 5280 | 0.1207 | 0.9022 | 0.9651 | 0.9326 | 86 | 0.9651 | 0.9326 | 0.9486 | 178 | 0.9688 | 0.9688 | 0.9688 | 128 | 0.9515 | 0.9515 | 0.9515 | 0.9865 |
| 0.0009 | 56.0 | 5376 | 0.1379 | 0.8632 | 0.9535 | 0.9061 | 86 | 0.9702 | 0.9157 | 0.9422 | 178 | 0.984 | 0.9609 | 0.9723 | 128 | 0.9485 | 0.9388 | 0.9436 | 0.9857 |
| 0.001 | 57.0 | 5472 | 0.1120 | 0.8925 | 0.9651 | 0.9274 | 86 | 0.9708 | 0.9326 | 0.9513 | 178 | 0.984 | 0.9609 | 0.9723 | 128 | 0.9563 | 0.9490 | 0.9526 | 0.9881 |
| 0.0013 | 58.0 | 5568 | 0.1086 | 0.8830 | 0.9651 | 0.9222 | 86 | 0.9483 | 0.9270 | 0.9375 | 178 | 0.9841 | 0.9688 | 0.9764 | 128 | 0.9442 | 0.9490 | 0.9466 | 0.9862 |
| 0.0005 | 59.0 | 5664 | 0.1218 | 0.8660 | 0.9767 | 0.9180 | 86 | 0.9641 | 0.9045 | 0.9333 | 178 | 0.9538 | 0.9688 | 0.9612 | 128 | 0.9365 | 0.9413 | 0.9389 | 0.9854 |
| 0.0007 | 60.0 | 5760 | 0.0958 | 0.8913 | 0.9535 | 0.9213 | 86 | 0.9239 | 0.9551 | 0.9392 | 178 | 0.9839 | 0.9531 | 0.9683 | 128 | 0.935 | 0.9541 | 0.9444 | 0.9881 |
| 0.0002 | 61.0 | 5856 | 0.1076 | 0.8817 | 0.9535 | 0.9162 | 86 | 0.9593 | 0.9270 | 0.9429 | 178 | 0.976 | 0.9531 | 0.9644 | 128 | 0.9462 | 0.9413 | 0.9437 | 0.9879 |
| 0.0023 | 62.0 | 5952 | 0.0877 | 0.9140 | 0.9884 | 0.9497 | 86 | 0.9494 | 0.9494 | 0.9494 | 178 | 0.9764 | 0.9688 | 0.9725 | 128 | 0.9497 | 0.9643 | 0.9570 | 0.9895 |
| 0.0013 | 63.0 | 6048 | 0.0885 | 0.9032 | 0.9767 | 0.9385 | 86 | 0.9448 | 0.9607 | 0.9526 | 178 | 0.9841 | 0.9688 | 0.9764 | 128 | 0.9475 | 0.9668 | 0.9571 | 0.9895 |
| 0.0009 | 64.0 | 6144 | 0.0825 | 0.9032 | 0.9767 | 0.9385 | 86 | 0.9605 | 0.9551 | 0.9577 | 178 | 0.9841 | 0.9688 | 0.9764 | 128 | 0.9545 | 0.9643 | 0.9594 | 0.9900 |
| 0.0003 | 65.0 | 6240 | 0.0838 | 0.9222 | 0.9651 | 0.9432 | 86 | 0.96 | 0.9438 | 0.9518 | 178 | 0.9841 | 0.9688 | 0.9764 | 128 | 0.9591 | 0.9566 | 0.9579 | 0.9884 |
| 0.0006 | 66.0 | 6336 | 0.0957 | 0.9032 | 0.9767 | 0.9385 | 86 | 0.96 | 0.9438 | 0.9518 | 178 | 0.9841 | 0.9688 | 0.9764 | 128 | 0.9543 | 0.9592 | 0.9567 | 0.9887 |
| 0.0004 | 67.0 | 6432 | 0.1129 | 0.8925 | 0.9651 | 0.9274 | 86 | 0.9649 | 0.9270 | 0.9456 | 178 | 0.9841 | 0.9688 | 0.9764 | 128 | 0.9538 | 0.9490 | 0.9514 | 0.9879 |
| 0.0003 | 68.0 | 6528 | 0.1161 | 0.8936 | 0.9767 | 0.9333 | 86 | 0.9538 | 0.9270 | 0.9402 | 178 | 0.9764 | 0.9688 | 0.9725 | 128 | 0.9467 | 0.9515 | 0.9491 | 0.9870 |
| 0.0002 | 69.0 | 6624 | 0.1234 | 0.8936 | 0.9767 | 0.9333 | 86 | 0.9645 | 0.9157 | 0.9395 | 178 | 0.9688 | 0.9688 | 0.9688 | 128 | 0.9488 | 0.9464 | 0.9476 | 0.9862 |
| 0.0006 | 70.0 | 6720 | 0.1162 | 0.9231 | 0.9767 | 0.9492 | 86 | 0.9651 | 0.9326 | 0.9486 | 178 | 0.9841 | 0.9688 | 0.9764 | 128 | 0.9614 | 0.9541 | 0.9577 | 0.9884 |
| 0.0002 | 71.0 | 6816 | 0.1107 | 0.9333 | 0.9767 | 0.9545 | 86 | 0.96 | 0.9438 | 0.9518 | 178 | 0.9841 | 0.9688 | 0.9764 | 128 | 0.9616 | 0.9592 | 0.9604 | 0.9879 |
| 0.0002 | 72.0 | 6912 | 0.1121 | 0.9231 | 0.9767 | 0.9492 | 86 | 0.9598 | 0.9382 | 0.9489 | 178 | 0.9841 | 0.9688 | 0.9764 | 128 | 0.9591 | 0.9566 | 0.9579 | 0.9879 |
| 0.0002 | 73.0 | 7008 | 0.1122 | 0.9231 | 0.9767 | 0.9492 | 86 | 0.9543 | 0.9382 | 0.9462 | 178 | 0.9841 | 0.9688 | 0.9764 | 128 | 0.9566 | 0.9566 | 0.9566 | 0.9881 |
| 0.0005 | 74.0 | 7104 | 0.1127 | 0.9231 | 0.9767 | 0.9492 | 86 | 0.9543 | 0.9382 | 0.9462 | 178 | 0.9841 | 0.9688 | 0.9764 | 128 | 0.9566 | 0.9566 | 0.9566 | 0.9873 |
| 0.0004 | 75.0 | 7200 | 0.1170 | 0.9130 | 0.9767 | 0.9438 | 86 | 0.9540 | 0.9326 | 0.9432 | 178 | 0.9688 | 0.9688 | 0.9688 | 128 | 0.9492 | 0.9541 | 0.9517 | 0.9862 |
| 0.0003 | 76.0 | 7296 | 0.1089 | 0.9333 | 0.9767 | 0.9545 | 86 | 0.9444 | 0.9551 | 0.9497 | 178 | 0.9762 | 0.9609 | 0.9685 | 128 | 0.9520 | 0.9617 | 0.9569 | 0.9892 |
| 0.001 | 77.0 | 7392 | 0.1082 | 0.9231 | 0.9767 | 0.9492 | 86 | 0.9503 | 0.9663 | 0.9582 | 178 | 0.9764 | 0.9688 | 0.9725 | 128 | 0.9524 | 0.9694 | 0.9608 | 0.9895 |
| 0.0012 | 78.0 | 7488 | 0.1009 | 0.9022 | 0.9651 | 0.9326 | 86 | 0.9330 | 0.9382 | 0.9356 | 178 | 0.9688 | 0.9688 | 0.9688 | 128 | 0.9373 | 0.9541 | 0.9456 | 0.9862 |
| 0.0002 | 79.0 | 7584 | 0.1051 | 0.8632 | 0.9535 | 0.9061 | 86 | 0.9489 | 0.9382 | 0.9435 | 178 | 0.976 | 0.9531 | 0.9644 | 128 | 0.9369 | 0.9464 | 0.9416 | 0.9865 |
| 0.0002 | 80.0 | 7680 | 0.1108 | 0.8723 | 0.9535 | 0.9111 | 86 | 0.9540 | 0.9326 | 0.9432 | 178 | 0.976 | 0.9531 | 0.9644 | 128 | 0.9415 | 0.9439 | 0.9427 | 0.9865 |
| 0.0005 | 81.0 | 7776 | 0.1037 | 0.8913 | 0.9535 | 0.9213 | 86 | 0.9543 | 0.9382 | 0.9462 | 178 | 0.9762 | 0.9609 | 0.9685 | 128 | 0.9466 | 0.9490 | 0.9478 | 0.9870 |
| 0.0003 | 82.0 | 7872 | 0.1031 | 0.8710 | 0.9419 | 0.9050 | 86 | 0.9540 | 0.9326 | 0.9432 | 178 | 0.976 | 0.9531 | 0.9644 | 128 | 0.9413 | 0.9413 | 0.9413 | 0.9868 |
| 0.0003 | 83.0 | 7968 | 0.0996 | 0.9121 | 0.9651 | 0.9379 | 86 | 0.9602 | 0.9494 | 0.9548 | 178 | 0.9685 | 0.9609 | 0.9647 | 128 | 0.9518 | 0.9566 | 0.9542 | 0.9887 |
| 0.0002 | 84.0 | 8064 | 0.0987 | 0.9222 | 0.9651 | 0.9432 | 86 | 0.9602 | 0.9494 | 0.9548 | 178 | 0.9685 | 0.9609 | 0.9647 | 128 | 0.9542 | 0.9566 | 0.9554 | 0.9887 |
| 0.0004 | 85.0 | 8160 | 0.1017 | 0.9222 | 0.9651 | 0.9432 | 86 | 0.9602 | 0.9494 | 0.9548 | 178 | 0.9685 | 0.9609 | 0.9647 | 128 | 0.9542 | 0.9566 | 0.9554 | 0.9887 |
| 0.0002 | 86.0 | 8256 | 0.1018 | 0.9222 | 0.9651 | 0.9432 | 86 | 0.9602 | 0.9494 | 0.9548 | 178 | 0.9685 | 0.9609 | 0.9647 | 128 | 0.9542 | 0.9566 | 0.9554 | 0.9887 |
| 0.0001 | 87.0 | 8352 | 0.1017 | 0.9222 | 0.9651 | 0.9432 | 86 | 0.9553 | 0.9607 | 0.9580 | 178 | 0.9685 | 0.9609 | 0.9647 | 128 | 0.9520 | 0.9617 | 0.9569 | 0.9889 |
| 0.0002 | 88.0 | 8448 | 0.1028 | 0.9222 | 0.9651 | 0.9432 | 86 | 0.9602 | 0.9494 | 0.9548 | 178 | 0.9685 | 0.9609 | 0.9647 | 128 | 0.9542 | 0.9566 | 0.9554 | 0.9887 |
| 0.0001 | 89.0 | 8544 | 0.1033 | 0.9222 | 0.9651 | 0.9432 | 86 | 0.9602 | 0.9494 | 0.9548 | 178 | 0.9685 | 0.9609 | 0.9647 | 128 | 0.9542 | 0.9566 | 0.9554 | 0.9887 |
| 0.0002 | 90.0 | 8640 | 0.1026 | 0.9213 | 0.9535 | 0.9371 | 86 | 0.9545 | 0.9438 | 0.9492 | 178 | 0.9762 | 0.9609 | 0.9685 | 128 | 0.9540 | 0.9515 | 0.9527 | 0.9879 |
| 0.0002 | 91.0 | 8736 | 0.1024 | 0.9213 | 0.9535 | 0.9371 | 86 | 0.9545 | 0.9438 | 0.9492 | 178 | 0.9762 | 0.9609 | 0.9685 | 128 | 0.9540 | 0.9515 | 0.9527 | 0.9879 |
| 0.0002 | 92.0 | 8832 | 0.1025 | 0.9213 | 0.9535 | 0.9371 | 86 | 0.9545 | 0.9438 | 0.9492 | 178 | 0.9762 | 0.9609 | 0.9685 | 128 | 0.9540 | 0.9515 | 0.9527 | 0.9879 |
| 0.0002 | 93.0 | 8928 | 0.1039 | 0.9213 | 0.9535 | 0.9371 | 86 | 0.9545 | 0.9438 | 0.9492 | 178 | 0.9762 | 0.9609 | 0.9685 | 128 | 0.9540 | 0.9515 | 0.9527 | 0.9879 |
| 0.0001 | 94.0 | 9024 | 0.1034 | 0.9213 | 0.9535 | 0.9371 | 86 | 0.9545 | 0.9438 | 0.9492 | 178 | 0.9762 | 0.9609 | 0.9685 | 128 | 0.9540 | 0.9515 | 0.9527 | 0.9879 |
| 0.0001 | 95.0 | 9120 | 0.1036 | 0.9213 | 0.9535 | 0.9371 | 86 | 0.9545 | 0.9438 | 0.9492 | 178 | 0.9762 | 0.9609 | 0.9685 | 128 | 0.9540 | 0.9515 | 0.9527 | 0.9879 |
| 0.0001 | 96.0 | 9216 | 0.1087 | 0.8925 | 0.9651 | 0.9274 | 86 | 0.9538 | 0.9270 | 0.9402 | 178 | 0.9762 | 0.9609 | 0.9685 | 128 | 0.9464 | 0.9464 | 0.9464 | 0.9873 |
| 0.0005 | 97.0 | 9312 | 0.1056 | 0.8925 | 0.9651 | 0.9274 | 86 | 0.9538 | 0.9270 | 0.9402 | 178 | 0.9685 | 0.9609 | 0.9647 | 128 | 0.9440 | 0.9464 | 0.9452 | 0.9876 |
| 0.0003 | 98.0 | 9408 | 0.1045 | 0.8925 | 0.9651 | 0.9274 | 86 | 0.9538 | 0.9270 | 0.9402 | 178 | 0.9685 | 0.9609 | 0.9647 | 128 | 0.9440 | 0.9464 | 0.9452 | 0.9876 |
| 0.0001 | 99.0 | 9504 | 0.1047 | 0.8925 | 0.9651 | 0.9274 | 86 | 0.9538 | 0.9270 | 0.9402 | 178 | 0.9685 | 0.9609 | 0.9647 | 128 | 0.9440 | 0.9464 | 0.9452 | 0.9876 |
| 0.0002 | 100.0 | 9600 | 0.1047 | 0.8925 | 0.9651 | 0.9274 | 86 | 0.9538 | 0.9270 | 0.9402 | 178 | 0.9685 | 0.9609 | 0.9647 | 128 | 0.9440 | 0.9464 | 0.9452 | 0.9876 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
hienbm/llama-3-8b-Instruct-bnb-4bit
|
hienbm
| 2024-06-04T00:55:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-04T00:55:10Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** hienbm
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
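Since the card ships no usage code, a hedged loading sketch: the base is a bitsandbytes 4-bit Llama-3 Instruct checkpoint, so standard 4-bit loading through `transformers` should apply (the chat prompt is illustrative).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "hienbm/llama-3-8b-Instruct-bnb-4bit"
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize what Unsloth does in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```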
|
harveybro/molt5-augmented-default-500-base-caption2smiles
|
harveybro
| 2024-06-04T00:55:05Z | 108 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-04T00:54:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
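Pending the author's own snippet, a hedged sketch inferred from the repo name: `caption2smiles` suggests a MolT5-style T5 that maps a molecule description to a SMILES string (the caption below is an assumed input style).

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_id = "harveybro/molt5-augmented-default-500-base-caption2smiles"
tokenizer = T5Tokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

caption = "The molecule is a monocarboxylic acid found in vinegar."  # assumed input style
inputs = tokenizer(caption, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```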
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
harveybro/molt5-augmented-default-400-base-caption2smiles
|
harveybro
| 2024-06-04T00:49:43Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-04T00:49:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
datek/Qwen-Qwen1.5-7B-1717461786
|
datek
| 2024-06-04T00:47:06Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-04T00:43:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
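Pending the author's own snippet, a hedged sketch: the tags indicate a Qwen1.5-style chat model, so the standard chat-template flow should apply (the prompt is illustrative).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "datek/Qwen-Qwen1.5-7B-1717461786"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Give a short introduction to large language models."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```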
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
luthfi507/emotion-classification
|
luthfi507
| 2024-06-04T00:41:26Z | 15 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-06-03T15:43:25Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: emotion-classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.6
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion-classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1597
- Accuracy: 0.6
## Model description
More information needed
## Intended uses & limitations
More information needed
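In the absence of documented usage, a minimal sketch with the image-classification pipeline (the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="luthfi507/emotion-classification")
for prediction in classifier("face.jpg"):   # path or URL to an image (placeholder)
    print(prediction["label"], round(prediction["score"], 3))
```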
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.5881 | 0.4813 |
| No log | 2.0 | 80 | 1.4495 | 0.4188 |
| No log | 3.0 | 120 | 1.3173 | 0.525 |
| No log | 4.0 | 160 | 1.2644 | 0.5375 |
| No log | 5.0 | 200 | 1.1238 | 0.6125 |
| No log | 6.0 | 240 | 1.3448 | 0.5563 |
| No log | 7.0 | 280 | 1.3241 | 0.5938 |
| No log | 8.0 | 320 | 1.4283 | 0.5625 |
| No log | 9.0 | 360 | 1.3231 | 0.6062 |
| No log | 10.0 | 400 | 1.4146 | 0.5938 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
apwic/nerui-base-1
|
apwic
| 2024-06-04T00:31:44Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-05-28T04:28:17Z |
---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: nerui-base-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nerui-base-1
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0822
- Location Precision: 0.9573
- Location Recall: 0.9655
- Location F1: 0.9614
- Location Number: 116
- Organization Precision: 0.9608
- Organization Recall: 0.9304
- Organization F1: 0.9453
- Organization Number: 158
- Person Precision: 0.9840
- Person Recall: 0.9919
- Person F1: 0.9880
- Person Number: 124
- Overall Precision: 0.9671
- Overall Recall: 0.9598
- Overall F1: 0.9634
- Overall Accuracy: 0.9920
## Model description
More information needed
## Intended uses & limitations
More information needed
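As usage is undocumented here too, a hedged sketch that skips the pipeline helper and reads the label map off the model config (the Indonesian example sentence is an assumption):

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_id = "apwic/nerui-base-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

inputs = tokenizer("Presiden Joko Widodo tiba di Surabaya.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

labels = [model.config.id2label[i] for i in logits.argmax(-1)[0].tolist()]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(list(zip(tokens, labels)))
```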
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Location Precision | Location Recall | Location F1 | Location Number | Organization Precision | Organization Recall | Organization F1 | Organization Number | Person Precision | Person Recall | Person F1 | Person Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------------:|:---------------:|:-----------:|:---------------:|:----------------------:|:-------------------:|:---------------:|:-------------------:|:----------------:|:-------------:|:---------:|:-------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.2668 | 1.0 | 96 | 0.0394 | 0.9145 | 0.9224 | 0.9185 | 116 | 0.9141 | 0.9430 | 0.9283 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9358 | 0.9523 | 0.9440 | 0.9879 |
| 0.0634 | 2.0 | 192 | 0.0460 | 0.9237 | 0.9397 | 0.9316 | 116 | 0.9477 | 0.9177 | 0.9325 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9520 | 0.9472 | 0.9496 | 0.9882 |
| 0.032 | 3.0 | 288 | 0.0441 | 0.9474 | 0.9310 | 0.9391 | 116 | 0.9427 | 0.9367 | 0.9397 | 158 | 0.9762 | 0.9919 | 0.9840 | 124 | 0.9547 | 0.9523 | 0.9535 | 0.9890 |
| 0.022 | 4.0 | 384 | 0.0442 | 0.9732 | 0.9397 | 0.9561 | 116 | 0.9255 | 0.9430 | 0.9342 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9573 | 0.9573 | 0.9573 | 0.9909 |
| 0.0143 | 5.0 | 480 | 0.0474 | 0.9339 | 0.9741 | 0.9536 | 116 | 0.9671 | 0.9304 | 0.9484 | 158 | 0.976 | 0.9839 | 0.9799 | 124 | 0.9598 | 0.9598 | 0.9598 | 0.9898 |
| 0.0122 | 6.0 | 576 | 0.0581 | 0.9328 | 0.9569 | 0.9447 | 116 | 0.9662 | 0.9051 | 0.9346 | 158 | 0.976 | 0.9839 | 0.9799 | 124 | 0.9592 | 0.9447 | 0.9519 | 0.9885 |
| 0.0062 | 7.0 | 672 | 0.0578 | 0.9569 | 0.9569 | 0.9569 | 116 | 0.9548 | 0.9367 | 0.9457 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9646 | 0.9598 | 0.9622 | 0.9909 |
| 0.007 | 8.0 | 768 | 0.0608 | 0.9655 | 0.9655 | 0.9655 | 116 | 0.9551 | 0.9430 | 0.9490 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9673 | 0.9648 | 0.9660 | 0.9901 |
| 0.0049 | 9.0 | 864 | 0.0656 | 0.9328 | 0.9569 | 0.9447 | 116 | 0.9530 | 0.8987 | 0.9251 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9567 | 0.9447 | 0.9507 | 0.9874 |
| 0.0056 | 10.0 | 960 | 0.0566 | 0.9569 | 0.9569 | 0.9569 | 116 | 0.9423 | 0.9304 | 0.9363 | 158 | 0.9762 | 0.9919 | 0.9840 | 124 | 0.9573 | 0.9573 | 0.9573 | 0.9896 |
| 0.0046 | 11.0 | 1056 | 0.0709 | 0.9492 | 0.9655 | 0.9573 | 116 | 0.9346 | 0.9051 | 0.9196 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9545 | 0.9497 | 0.9521 | 0.9879 |
| 0.0022 | 12.0 | 1152 | 0.0721 | 0.9412 | 0.9655 | 0.9532 | 116 | 0.9548 | 0.9367 | 0.9457 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9599 | 0.9623 | 0.9611 | 0.9901 |
| 0.0048 | 13.0 | 1248 | 0.0544 | 0.9487 | 0.9569 | 0.9528 | 116 | 0.9490 | 0.9430 | 0.9460 | 158 | 0.9762 | 0.9919 | 0.9840 | 124 | 0.9575 | 0.9623 | 0.9599 | 0.9920 |
| 0.0029 | 14.0 | 1344 | 0.0602 | 0.9649 | 0.9483 | 0.9565 | 116 | 0.9434 | 0.9494 | 0.9464 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9623 | 0.9623 | 0.9623 | 0.9918 |
| 0.0031 | 15.0 | 1440 | 0.0678 | 0.9478 | 0.9397 | 0.9437 | 116 | 0.9484 | 0.9304 | 0.9393 | 158 | 0.9762 | 0.9919 | 0.9840 | 124 | 0.9571 | 0.9523 | 0.9547 | 0.9904 |
| 0.0039 | 16.0 | 1536 | 0.0820 | 0.9244 | 0.9483 | 0.9362 | 116 | 0.96 | 0.9114 | 0.9351 | 158 | 0.9839 | 0.9839 | 0.9839 | 124 | 0.9567 | 0.9447 | 0.9507 | 0.9871 |
| 0.0021 | 17.0 | 1632 | 0.0793 | 0.9421 | 0.9828 | 0.9620 | 116 | 0.9735 | 0.9304 | 0.9515 | 158 | 0.9762 | 0.9919 | 0.9840 | 124 | 0.9648 | 0.9648 | 0.9648 | 0.9898 |
| 0.0035 | 18.0 | 1728 | 0.0844 | 0.9310 | 0.9310 | 0.9310 | 116 | 0.9545 | 0.9304 | 0.9423 | 158 | 0.9685 | 0.9919 | 0.9801 | 124 | 0.9521 | 0.9497 | 0.9509 | 0.9879 |
| 0.0039 | 19.0 | 1824 | 0.0907 | 0.9106 | 0.9655 | 0.9372 | 116 | 0.9726 | 0.8987 | 0.9342 | 158 | 0.9762 | 0.9919 | 0.9840 | 124 | 0.9544 | 0.9472 | 0.9508 | 0.9868 |
| 0.0014 | 20.0 | 1920 | 0.0629 | 0.9412 | 0.9655 | 0.9532 | 116 | 0.9608 | 0.9304 | 0.9453 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9622 | 0.9598 | 0.9610 | 0.9912 |
| 0.0019 | 21.0 | 2016 | 0.0655 | 0.9412 | 0.9655 | 0.9532 | 116 | 0.9737 | 0.9367 | 0.9548 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9672 | 0.9623 | 0.9647 | 0.9909 |
| 0.0021 | 22.0 | 2112 | 0.0593 | 0.9487 | 0.9569 | 0.9528 | 116 | 0.9371 | 0.9430 | 0.9401 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9551 | 0.9623 | 0.9587 | 0.9915 |
| 0.0038 | 23.0 | 2208 | 0.0698 | 0.9322 | 0.9483 | 0.9402 | 116 | 0.9608 | 0.9304 | 0.9453 | 158 | 0.9762 | 0.9919 | 0.9840 | 124 | 0.9572 | 0.9548 | 0.9560 | 0.9890 |
| 0.0024 | 24.0 | 2304 | 0.0686 | 0.9492 | 0.9655 | 0.9573 | 116 | 0.9669 | 0.9241 | 0.9450 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9670 | 0.9573 | 0.9621 | 0.9901 |
| 0.0032 | 25.0 | 2400 | 0.0782 | 0.9174 | 0.9569 | 0.9367 | 116 | 0.9412 | 0.9114 | 0.9260 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9474 | 0.9497 | 0.9486 | 0.9874 |
| 0.0028 | 26.0 | 2496 | 0.0841 | 0.9167 | 0.9483 | 0.9322 | 116 | 0.9865 | 0.9241 | 0.9542 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9644 | 0.9523 | 0.9583 | 0.9893 |
| 0.0024 | 27.0 | 2592 | 0.0762 | 0.9174 | 0.9569 | 0.9367 | 116 | 0.9554 | 0.9494 | 0.9524 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9529 | 0.9648 | 0.9588 | 0.9893 |
| 0.0065 | 28.0 | 2688 | 0.0943 | 0.9483 | 0.9483 | 0.9483 | 116 | 0.9662 | 0.9051 | 0.9346 | 158 | 0.9531 | 0.9839 | 0.9683 | 124 | 0.9566 | 0.9422 | 0.9494 | 0.9887 |
| 0.0026 | 29.0 | 2784 | 0.0959 | 0.9487 | 0.9569 | 0.9528 | 116 | 0.9664 | 0.9114 | 0.9381 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9668 | 0.9497 | 0.9582 | 0.9874 |
| 0.002 | 30.0 | 2880 | 0.0732 | 0.9402 | 0.9483 | 0.9442 | 116 | 0.9548 | 0.9367 | 0.9457 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9597 | 0.9573 | 0.9585 | 0.9912 |
| 0.0012 | 31.0 | 2976 | 0.0808 | 0.9487 | 0.9569 | 0.9528 | 116 | 0.9735 | 0.9304 | 0.9515 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9695 | 0.9573 | 0.9633 | 0.9901 |
| 0.001 | 32.0 | 3072 | 0.0846 | 0.9492 | 0.9655 | 0.9573 | 116 | 0.98 | 0.9304 | 0.9545 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9720 | 0.9598 | 0.9659 | 0.9898 |
| 0.0018 | 33.0 | 3168 | 0.0949 | 0.9487 | 0.9569 | 0.9528 | 116 | 0.9735 | 0.9304 | 0.9515 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9695 | 0.9573 | 0.9633 | 0.9893 |
| 0.0012 | 34.0 | 3264 | 0.0965 | 0.9322 | 0.9483 | 0.9402 | 116 | 0.9669 | 0.9241 | 0.9450 | 158 | 0.9685 | 0.9919 | 0.9801 | 124 | 0.9571 | 0.9523 | 0.9547 | 0.9879 |
| 0.0025 | 35.0 | 3360 | 0.1011 | 0.9554 | 0.9224 | 0.9386 | 116 | 0.9367 | 0.9367 | 0.9367 | 158 | 0.9762 | 0.9919 | 0.9840 | 124 | 0.9545 | 0.9497 | 0.9521 | 0.9879 |
| 0.0029 | 36.0 | 3456 | 0.0913 | 0.9487 | 0.9569 | 0.9528 | 116 | 0.9545 | 0.9304 | 0.9423 | 158 | 0.9762 | 0.9919 | 0.9840 | 124 | 0.9597 | 0.9573 | 0.9585 | 0.9882 |
| 0.0037 | 37.0 | 3552 | 0.0543 | 0.9492 | 0.9655 | 0.9573 | 116 | 0.9430 | 0.9430 | 0.9430 | 158 | 0.9762 | 0.9919 | 0.9840 | 124 | 0.9552 | 0.9648 | 0.96 | 0.9923 |
| 0.002 | 38.0 | 3648 | 0.0655 | 0.9487 | 0.9569 | 0.9528 | 116 | 0.9430 | 0.9430 | 0.9430 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9575 | 0.9623 | 0.9599 | 0.9909 |
| 0.0015 | 39.0 | 3744 | 0.0786 | 0.9565 | 0.9483 | 0.9524 | 116 | 0.9671 | 0.9304 | 0.9484 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9694 | 0.9548 | 0.9620 | 0.9893 |
| 0.001 | 40.0 | 3840 | 0.0722 | 0.9328 | 0.9569 | 0.9447 | 116 | 0.9608 | 0.9304 | 0.9453 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9597 | 0.9573 | 0.9585 | 0.9904 |
| 0.0021 | 41.0 | 3936 | 0.0722 | 0.9350 | 0.9914 | 0.9623 | 116 | 0.9737 | 0.9367 | 0.9548 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.965 | 0.9698 | 0.9674 | 0.9904 |
| 0.0018 | 42.0 | 4032 | 0.0764 | 0.9483 | 0.9483 | 0.9483 | 116 | 0.9608 | 0.9304 | 0.9453 | 158 | 0.9762 | 0.9919 | 0.9840 | 124 | 0.9620 | 0.9548 | 0.9584 | 0.9893 |
| 0.0009 | 43.0 | 4128 | 0.0854 | 0.9492 | 0.9655 | 0.9573 | 116 | 0.9735 | 0.9304 | 0.9515 | 158 | 0.9839 | 0.9839 | 0.9839 | 124 | 0.9695 | 0.9573 | 0.9633 | 0.9898 |
| 0.0007 | 44.0 | 4224 | 0.0778 | 0.9412 | 0.9655 | 0.9532 | 116 | 0.9735 | 0.9304 | 0.9515 | 158 | 0.9839 | 0.9839 | 0.9839 | 124 | 0.9670 | 0.9573 | 0.9621 | 0.9904 |
| 0.0018 | 45.0 | 4320 | 0.0880 | 0.9558 | 0.9310 | 0.9432 | 116 | 0.9481 | 0.9241 | 0.9359 | 158 | 0.976 | 0.9839 | 0.9799 | 124 | 0.9592 | 0.9447 | 0.9519 | 0.9887 |
| 0.0022 | 46.0 | 4416 | 0.0823 | 0.9412 | 0.9655 | 0.9532 | 116 | 0.9867 | 0.9367 | 0.9610 | 158 | 0.976 | 0.9839 | 0.9799 | 124 | 0.9695 | 0.9598 | 0.9646 | 0.9901 |
| 0.0013 | 47.0 | 4512 | 0.0913 | 0.9483 | 0.9483 | 0.9483 | 116 | 0.98 | 0.9304 | 0.9545 | 158 | 0.9762 | 0.9919 | 0.9840 | 124 | 0.9694 | 0.9548 | 0.9620 | 0.9896 |
| 0.0013 | 48.0 | 4608 | 0.0819 | 0.9417 | 0.9741 | 0.9576 | 116 | 0.9801 | 0.9367 | 0.9579 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9697 | 0.9648 | 0.9673 | 0.9901 |
| 0.0005 | 49.0 | 4704 | 0.0735 | 0.9412 | 0.9655 | 0.9532 | 116 | 0.9737 | 0.9367 | 0.9548 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9672 | 0.9623 | 0.9647 | 0.9909 |
| 0.0011 | 50.0 | 4800 | 0.0772 | 0.9483 | 0.9483 | 0.9483 | 116 | 0.9484 | 0.9304 | 0.9393 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9596 | 0.9548 | 0.9572 | 0.9907 |
| 0.0021 | 51.0 | 4896 | 0.0813 | 0.9492 | 0.9655 | 0.9573 | 116 | 0.9735 | 0.9304 | 0.9515 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9695 | 0.9598 | 0.9646 | 0.9904 |
| 0.0006 | 52.0 | 4992 | 0.0927 | 0.9576 | 0.9741 | 0.9658 | 116 | 0.98 | 0.9304 | 0.9545 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9746 | 0.9623 | 0.9684 | 0.9904 |
| 0.0007 | 53.0 | 5088 | 0.0791 | 0.9496 | 0.9741 | 0.9617 | 116 | 0.9740 | 0.9494 | 0.9615 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9698 | 0.9698 | 0.9698 | 0.9912 |
| 0.0011 | 54.0 | 5184 | 0.0722 | 0.9496 | 0.9741 | 0.9617 | 116 | 0.9679 | 0.9557 | 0.9618 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9675 | 0.9724 | 0.9699 | 0.9929 |
| 0.0005 | 55.0 | 5280 | 0.0721 | 0.9328 | 0.9569 | 0.9447 | 116 | 0.9557 | 0.9557 | 0.9557 | 158 | 0.9839 | 0.9839 | 0.9839 | 124 | 0.9576 | 0.9648 | 0.9612 | 0.9920 |
| 0.0005 | 56.0 | 5376 | 0.0705 | 0.9496 | 0.9741 | 0.9617 | 116 | 0.9806 | 0.9620 | 0.9712 | 158 | 0.9839 | 0.9839 | 0.9839 | 124 | 0.9724 | 0.9724 | 0.9724 | 0.9931 |
| 0.0003 | 57.0 | 5472 | 0.0651 | 0.9487 | 0.9569 | 0.9528 | 116 | 0.9677 | 0.9494 | 0.9585 | 158 | 0.9839 | 0.9839 | 0.9839 | 124 | 0.9672 | 0.9623 | 0.9647 | 0.9923 |
| 0.0011 | 58.0 | 5568 | 0.0754 | 0.9569 | 0.9569 | 0.9569 | 116 | 0.9679 | 0.9557 | 0.9618 | 158 | 0.9839 | 0.9839 | 0.9839 | 124 | 0.9697 | 0.9648 | 0.9673 | 0.9929 |
| 0.0006 | 59.0 | 5664 | 0.0718 | 0.9397 | 0.9397 | 0.9397 | 116 | 0.9618 | 0.9557 | 0.9587 | 158 | 0.9839 | 0.9839 | 0.9839 | 124 | 0.9622 | 0.9598 | 0.9610 | 0.9923 |
| 0.0005 | 60.0 | 5760 | 0.0870 | 0.9496 | 0.9741 | 0.9617 | 116 | 0.98 | 0.9304 | 0.9545 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9721 | 0.9623 | 0.9672 | 0.9898 |
| 0.0004 | 61.0 | 5856 | 0.0687 | 0.9474 | 0.9310 | 0.9391 | 116 | 0.9437 | 0.9557 | 0.9497 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9574 | 0.9598 | 0.9586 | 0.9909 |
| 0.0002 | 62.0 | 5952 | 0.0983 | 0.9402 | 0.9483 | 0.9442 | 116 | 0.9799 | 0.9241 | 0.9511 | 158 | 0.9839 | 0.9839 | 0.9839 | 124 | 0.9692 | 0.9497 | 0.9594 | 0.9893 |
| 0.0006 | 63.0 | 6048 | 0.0818 | 0.9573 | 0.9655 | 0.9614 | 116 | 0.9671 | 0.9304 | 0.9484 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9695 | 0.9598 | 0.9646 | 0.9912 |
| 0.0002 | 64.0 | 6144 | 0.0858 | 0.9573 | 0.9655 | 0.9614 | 116 | 0.9608 | 0.9304 | 0.9453 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9671 | 0.9598 | 0.9634 | 0.9915 |
| 0.0005 | 65.0 | 6240 | 0.0884 | 0.9569 | 0.9569 | 0.9569 | 116 | 0.9673 | 0.9367 | 0.9518 | 158 | 0.9683 | 0.9839 | 0.976 | 124 | 0.9646 | 0.9573 | 0.9609 | 0.9915 |
| 0.001 | 66.0 | 6336 | 0.0771 | 0.9487 | 0.9569 | 0.9528 | 116 | 0.9542 | 0.9241 | 0.9389 | 158 | 0.9683 | 0.9839 | 0.976 | 124 | 0.9571 | 0.9523 | 0.9547 | 0.9912 |
| 0.0006 | 67.0 | 6432 | 0.0808 | 0.9487 | 0.9569 | 0.9528 | 116 | 0.9735 | 0.9304 | 0.9515 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9695 | 0.9573 | 0.9633 | 0.9909 |
| 0.0002 | 68.0 | 6528 | 0.0749 | 0.9573 | 0.9655 | 0.9614 | 116 | 0.9610 | 0.9367 | 0.9487 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9672 | 0.9623 | 0.9647 | 0.9920 |
| 0.0011 | 69.0 | 6624 | 0.0784 | 0.9573 | 0.9655 | 0.9614 | 116 | 0.9608 | 0.9304 | 0.9453 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9671 | 0.9598 | 0.9634 | 0.9918 |
| 0.0005 | 70.0 | 6720 | 0.0750 | 0.9573 | 0.9655 | 0.9614 | 116 | 0.9671 | 0.9304 | 0.9484 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9695 | 0.9598 | 0.9646 | 0.9920 |
| 0.0001 | 71.0 | 6816 | 0.0758 | 0.9573 | 0.9655 | 0.9614 | 116 | 0.9671 | 0.9304 | 0.9484 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9695 | 0.9598 | 0.9646 | 0.9920 |
| 0.0005 | 72.0 | 6912 | 0.0771 | 0.9573 | 0.9655 | 0.9614 | 116 | 0.9671 | 0.9304 | 0.9484 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9695 | 0.9598 | 0.9646 | 0.9920 |
| 0.0004 | 73.0 | 7008 | 0.0733 | 0.9412 | 0.9655 | 0.9532 | 116 | 0.9542 | 0.9241 | 0.9389 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9597 | 0.9573 | 0.9585 | 0.9915 |
| 0.0001 | 74.0 | 7104 | 0.0740 | 0.9492 | 0.9655 | 0.9573 | 116 | 0.9542 | 0.9241 | 0.9389 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9621 | 0.9573 | 0.9597 | 0.9918 |
| 0.0001 | 75.0 | 7200 | 0.0795 | 0.9492 | 0.9655 | 0.9573 | 116 | 0.9669 | 0.9241 | 0.9450 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9670 | 0.9573 | 0.9621 | 0.9915 |
| 0.0002 | 76.0 | 7296 | 0.0800 | 0.9492 | 0.9655 | 0.9573 | 116 | 0.9669 | 0.9241 | 0.9450 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9670 | 0.9573 | 0.9621 | 0.9915 |
| 0.0002 | 77.0 | 7392 | 0.0781 | 0.9569 | 0.9569 | 0.9569 | 116 | 0.9608 | 0.9304 | 0.9453 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9670 | 0.9573 | 0.9621 | 0.9920 |
| 0.0002 | 78.0 | 7488 | 0.0798 | 0.9492 | 0.9655 | 0.9573 | 116 | 0.9735 | 0.9304 | 0.9515 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9695 | 0.9598 | 0.9646 | 0.9918 |
| 0.0002 | 79.0 | 7584 | 0.0785 | 0.9573 | 0.9655 | 0.9614 | 116 | 0.9737 | 0.9367 | 0.9548 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9721 | 0.9623 | 0.9672 | 0.9926 |
| 0.0001 | 80.0 | 7680 | 0.0794 | 0.9573 | 0.9655 | 0.9614 | 116 | 0.9737 | 0.9367 | 0.9548 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9721 | 0.9623 | 0.9672 | 0.9926 |
| 0.0004 | 81.0 | 7776 | 0.0812 | 0.9573 | 0.9655 | 0.9614 | 116 | 0.9737 | 0.9367 | 0.9548 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9721 | 0.9623 | 0.9672 | 0.9926 |
| 0.0001 | 82.0 | 7872 | 0.0880 | 0.9492 | 0.9655 | 0.9573 | 116 | 0.9669 | 0.9241 | 0.9450 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9670 | 0.9573 | 0.9621 | 0.9915 |
| 0.0001 | 83.0 | 7968 | 0.0832 | 0.9576 | 0.9741 | 0.9658 | 116 | 0.9671 | 0.9304 | 0.9484 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9696 | 0.9623 | 0.9660 | 0.9920 |
| 0.0007 | 84.0 | 8064 | 0.0854 | 0.9492 | 0.9655 | 0.9573 | 116 | 0.9669 | 0.9241 | 0.9450 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9670 | 0.9573 | 0.9621 | 0.9915 |
| 0.0001 | 85.0 | 8160 | 0.0863 | 0.9492 | 0.9655 | 0.9573 | 116 | 0.9669 | 0.9241 | 0.9450 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9670 | 0.9573 | 0.9621 | 0.9915 |
| 0.0001 | 86.0 | 8256 | 0.0854 | 0.9492 | 0.9655 | 0.9573 | 116 | 0.9669 | 0.9241 | 0.9450 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9670 | 0.9573 | 0.9621 | 0.9909 |
| 0.0001 | 87.0 | 8352 | 0.0789 | 0.9573 | 0.9655 | 0.9614 | 116 | 0.9673 | 0.9367 | 0.9518 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9696 | 0.9623 | 0.9660 | 0.9923 |
| 0.0001 | 88.0 | 8448 | 0.0776 | 0.9658 | 0.9741 | 0.9700 | 116 | 0.9608 | 0.9304 | 0.9453 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9696 | 0.9623 | 0.9660 | 0.9923 |
| 0.0002 | 89.0 | 8544 | 0.0786 | 0.9569 | 0.9569 | 0.9569 | 116 | 0.9610 | 0.9367 | 0.9487 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9671 | 0.9598 | 0.9634 | 0.9920 |
| 0.0001 | 90.0 | 8640 | 0.0798 | 0.9569 | 0.9569 | 0.9569 | 116 | 0.9610 | 0.9367 | 0.9487 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9671 | 0.9598 | 0.9634 | 0.9920 |
| 0.0001 | 91.0 | 8736 | 0.0816 | 0.9573 | 0.9655 | 0.9614 | 116 | 0.9608 | 0.9304 | 0.9453 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9671 | 0.9598 | 0.9634 | 0.9920 |
| 0.0005 | 92.0 | 8832 | 0.0819 | 0.9573 | 0.9655 | 0.9614 | 116 | 0.9608 | 0.9304 | 0.9453 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9671 | 0.9598 | 0.9634 | 0.9920 |
| 0.0003 | 93.0 | 8928 | 0.0819 | 0.9573 | 0.9655 | 0.9614 | 116 | 0.9608 | 0.9304 | 0.9453 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9671 | 0.9598 | 0.9634 | 0.9920 |
| 0.0003 | 94.0 | 9024 | 0.0814 | 0.9573 | 0.9655 | 0.9614 | 116 | 0.9608 | 0.9304 | 0.9453 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9671 | 0.9598 | 0.9634 | 0.9920 |
| 0.0001 | 95.0 | 9120 | 0.0814 | 0.9573 | 0.9655 | 0.9614 | 116 | 0.9608 | 0.9304 | 0.9453 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9671 | 0.9598 | 0.9634 | 0.9920 |
| 0.0001 | 96.0 | 9216 | 0.0816 | 0.9573 | 0.9655 | 0.9614 | 116 | 0.9608 | 0.9304 | 0.9453 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9671 | 0.9598 | 0.9634 | 0.9920 |
| 0.0001 | 97.0 | 9312 | 0.0817 | 0.9573 | 0.9655 | 0.9614 | 116 | 0.9608 | 0.9304 | 0.9453 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9671 | 0.9598 | 0.9634 | 0.9920 |
| 0.0001 | 98.0 | 9408 | 0.0821 | 0.9573 | 0.9655 | 0.9614 | 116 | 0.9608 | 0.9304 | 0.9453 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9671 | 0.9598 | 0.9634 | 0.9920 |
| 0.0001 | 99.0 | 9504 | 0.0822 | 0.9573 | 0.9655 | 0.9614 | 116 | 0.9608 | 0.9304 | 0.9453 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9671 | 0.9598 | 0.9634 | 0.9920 |
| 0.0001 | 100.0 | 9600 | 0.0822 | 0.9573 | 0.9655 | 0.9614 | 116 | 0.9608 | 0.9304 | 0.9453 | 158 | 0.984 | 0.9919 | 0.9880 | 124 | 0.9671 | 0.9598 | 0.9634 | 0.9920 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
ehottl/distilbert-base-uncased-finetuned-emotion
|
ehottl
| 2024-06-04T00:21:31Z | 121 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-04T00:10:46Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.929
- name: F1
type: f1
value: 0.9290384064576098
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2064
- Accuracy: 0.929
- F1: 0.9290
## Model description
More information needed
## Intended uses & limitations
More information needed
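Since the card includes no example, a minimal sketch with the text-classification pipeline:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ehottl/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't stop smiling today!", top_k=2))
```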
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8175 | 1.0 | 250 | 0.2950 | 0.911 | 0.9108 |
| 0.238 | 2.0 | 500 | 0.2064 | 0.929 | 0.9290 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
dalopeza98/distilbert-base-uncased-finetuned-emotion
|
dalopeza98
| 2024-06-04T00:21:10Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-03T21:14:37Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3872
- Accuracy: 0.6802
- F1: 0.6793
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
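A sketch of how these hyperparameters map onto `TrainingArguments` (illustrative only; `output_dir` and the surrounding `Trainer` wiring are assumptions, since the training script is not included here):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is an assumption.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",
    learning_rate=2e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
)
```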
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 48 | 1.7238 | 0.6762 | 0.6813 |
| 0.0305 | 2.0 | 96 | 1.8028 | 0.6775 | 0.6755 |
| 0.0305 | 3.0 | 144 | 1.9018 | 0.6689 | 0.6668 |
| 0.0257 | 4.0 | 192 | 1.9426 | 0.6735 | 0.6740 |
| 0.0257 | 5.0 | 240 | 1.9829 | 0.6662 | 0.6670 |
| 0.0207 | 6.0 | 288 | 1.9462 | 0.6722 | 0.6753 |
| 0.0207 | 7.0 | 336 | 1.9573 | 0.6861 | 0.6851 |
| 0.0185 | 8.0 | 384 | 2.0147 | 0.6808 | 0.6820 |
| 0.0185 | 9.0 | 432 | 2.0982 | 0.6669 | 0.6649 |
| 0.0172 | 10.0 | 480 | 2.0431 | 0.6815 | 0.6799 |
| 0.0172 | 11.0 | 528 | 2.0935 | 0.6768 | 0.6751 |
| 0.0182 | 12.0 | 576 | 2.0599 | 0.6868 | 0.6835 |
| 0.0182 | 13.0 | 624 | 2.0953 | 0.6808 | 0.6812 |
| 0.0148 | 14.0 | 672 | 2.1115 | 0.6788 | 0.6790 |
| 0.0148 | 15.0 | 720 | 2.1529 | 0.6735 | 0.6765 |
| 0.0171 | 16.0 | 768 | 2.1873 | 0.6702 | 0.6720 |
| 0.0171 | 17.0 | 816 | 2.1534 | 0.6782 | 0.6793 |
| 0.0142 | 18.0 | 864 | 2.1803 | 0.6782 | 0.6773 |
| 0.0142 | 19.0 | 912 | 2.2252 | 0.6802 | 0.6801 |
| 0.0168 | 20.0 | 960 | 2.2221 | 0.6749 | 0.6764 |
| 0.0168 | 21.0 | 1008 | 2.2365 | 0.6821 | 0.6817 |
| 0.015 | 22.0 | 1056 | 2.2812 | 0.6742 | 0.6728 |
| 0.015 | 23.0 | 1104 | 2.2447 | 0.6729 | 0.6707 |
| 0.0145 | 24.0 | 1152 | 2.3272 | 0.6709 | 0.6700 |
| 0.0145 | 25.0 | 1200 | 2.2630 | 0.6788 | 0.6809 |
| 0.0151 | 26.0 | 1248 | 2.2751 | 0.6808 | 0.6811 |
| 0.0151 | 27.0 | 1296 | 2.3018 | 0.6768 | 0.6776 |
| 0.0144 | 28.0 | 1344 | 2.3544 | 0.6676 | 0.6681 |
| 0.0144 | 29.0 | 1392 | 2.3109 | 0.6821 | 0.6828 |
| 0.0126 | 30.0 | 1440 | 2.3234 | 0.6795 | 0.6786 |
| 0.0126 | 31.0 | 1488 | 2.3294 | 0.6755 | 0.6750 |
| 0.0142 | 32.0 | 1536 | 2.3183 | 0.6875 | 0.6886 |
| 0.0142 | 33.0 | 1584 | 2.2949 | 0.6808 | 0.6823 |
| 0.0131 | 34.0 | 1632 | 2.3451 | 0.6788 | 0.6773 |
| 0.0131 | 35.0 | 1680 | 2.3160 | 0.6828 | 0.6841 |
| 0.0143 | 36.0 | 1728 | 2.3251 | 0.6828 | 0.6815 |
| 0.0143 | 37.0 | 1776 | 2.4003 | 0.6762 | 0.6753 |
| 0.0116 | 38.0 | 1824 | 2.3675 | 0.6775 | 0.6770 |
| 0.0116 | 39.0 | 1872 | 2.3700 | 0.6749 | 0.6735 |
| 0.0126 | 40.0 | 1920 | 2.3700 | 0.6841 | 0.6831 |
| 0.0126 | 41.0 | 1968 | 2.3818 | 0.6795 | 0.6793 |
| 0.0115 | 42.0 | 2016 | 2.3518 | 0.6815 | 0.6814 |
| 0.0115 | 43.0 | 2064 | 2.3829 | 0.6802 | 0.6790 |
| 0.0135 | 44.0 | 2112 | 2.3638 | 0.6782 | 0.6775 |
| 0.0135 | 45.0 | 2160 | 2.3568 | 0.6775 | 0.6768 |
| 0.0146 | 46.0 | 2208 | 2.3633 | 0.6788 | 0.6784 |
| 0.0118 | 47.0 | 2256 | 2.3725 | 0.6788 | 0.6782 |
| 0.0118 | 48.0 | 2304 | 2.3875 | 0.6815 | 0.6806 |
| 0.0116 | 49.0 | 2352 | 2.3862 | 0.6795 | 0.6787 |
| 0.0116 | 50.0 | 2400 | 2.3872 | 0.6802 | 0.6793 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.1.2
- Datasets 2.19.1
- Tokenizers 0.19.1
|
duyntnet/Kunoichi-DPO-v2-7B-imatrix-GGUF
|
duyntnet
| 2024-06-04T00:19:17Z | 60 | 3 |
transformers
|
[
"transformers",
"gguf",
"imatrix",
"Kunoichi-DPO-v2-7B",
"text-generation",
"en",
"license:other",
"region:us"
] |
text-generation
| 2024-06-03T20:18:06Z |
---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- Kunoichi-DPO-v2-7B
---
Quantizations of https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B
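A minimal loading sketch with `llama-cpp-python` (the quant filename below is an assumption; pick an actual GGUF file from this repo):
```python
from llama_cpp import Llama

# Load one of the imatrix GGUF quants; the filename is hypothetical.
llm = Llama(model_path="Kunoichi-DPO-v2-7B-Q4_K_M.gguf", n_ctx=4096)

out = llm("Write a haiku about a kunoichi.", max_tokens=64)
print(out["choices"][0]["text"])
```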
# From original readme
| Model | MT Bench | EQ Bench | MMLU | Logic Test |
|----------------------|----------|----------|---------|-------------|
| GPT-4-Turbo | 9.32 | - | - | - |
| GPT-4 | 8.99 | 62.52 | 86.4 | 0.86 |
| **Kunoichi-DPO-v2-7B** | **8.51** | **42.18** | **64.94**| **0.58** |
| Mixtral-8x7B-Instruct| 8.30 | 44.81 | 70.6 | 0.75 |
| **Kunoichi-DPO-7B** | **8.29** | **41.60** | **64.83** | **0.59** |
| **Kunoichi-7B** | **8.14** | **44.32** | **64.9** | **0.58** |
| Starling-7B | 8.09 | - | 63.9 | 0.51 |
| Claude-2 | 8.06 | 52.14 | 78.5 | - |
| Silicon-Maid-7B | 7.96 | 40.44 | 64.7 | 0.54 |
| Loyal-Macaroni-Maid-7B | 7.95 | 38.66 | 64.9 | 0.57 |
| GPT-3.5-Turbo | 7.94 | 50.28 | 70 | 0.57 |
| Claude-1 | 7.9 | - | 77 | - |
| Openchat-3.5 | 7.81 | 37.08 | 64.3 | 0.39 |
| Dolphin-2.6-DPO | 7.74 | 42.88 | 61.9 | 0.53 |
| Zephyr-7B-beta | 7.34 | 38.71 | 61.4 | 0.30 |
| Llama-2-70b-chat-hf | 6.86 | 51.56 | 63 | - |
| Neural-chat-7b-v3-1 | 6.84 | 43.61 | 62.4 | 0.30 |
|
azmoulai/vizwiz-blip-model
|
azmoulai
| 2024-06-04T00:16:24Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"blip",
"visual-question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
visual-question-answering
| 2024-05-29T04:12:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
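A minimal sketch, assuming the checkpoint follows the standard BLIP visual-question-answering interface (the image path is hypothetical):
```python
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

processor = BlipProcessor.from_pretrained("azmoulai/vizwiz-blip-model")
model = BlipForQuestionAnswering.from_pretrained("azmoulai/vizwiz-blip-model")

image = Image.open("photo.jpg")  # hypothetical input image
inputs = processor(image, "What is in the picture?", return_tensors="pt")
answer_ids = model.generate(**inputs)
print(processor.decode(answer_ids[0], skip_special_tokens=True))
```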
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
apwic/nerui-base-0
|
apwic
| 2024-06-04T00:16:00Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-05-28T03:49:20Z |
---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: nerui-base-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nerui-base-0
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1084
- Location Precision: 0.89
- Location Recall: 0.9468
- Location F1: 0.9175
- Location Number: 94
- Organization Precision: 0.9387
- Organization Recall: 0.9162
- Organization F1: 0.9273
- Organization Number: 167
- Person Precision: 1.0
- Person Recall: 0.9781
- Person F1: 0.9889
- Person Number: 137
- Overall Precision: 0.9471
- Overall Recall: 0.9447
- Overall F1: 0.9459
- Overall Accuracy: 0.9887
## Model description
More information needed
## Intended uses & limitations
More information needed
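A minimal inference sketch (assuming the weights and tokenizer are published under this repo id; entity groups follow the Location/Organization/Person scheme reported above, and the example sentence is hypothetical):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="apwic/nerui-base-0",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)

print(ner("Joko Widodo mengunjungi kantor Google di Jakarta."))
```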
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Location Precision | Location Recall | Location F1 | Location Number | Organization Precision | Organization Recall | Organization F1 | Organization Number | Person Precision | Person Recall | Person F1 | Person Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------------:|:---------------:|:-----------:|:---------------:|:----------------------:|:-------------------:|:---------------:|:-------------------:|:----------------:|:-------------:|:---------:|:-------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.2566 | 1.0 | 96 | 0.0455 | 0.9634 | 0.8404 | 0.8977 | 94 | 0.8333 | 0.9281 | 0.8782 | 167 | 0.9708 | 0.9708 | 0.9708 | 137 | 0.9062 | 0.9221 | 0.9141 | 0.9843 |
| 0.0617 | 2.0 | 192 | 0.0519 | 0.8381 | 0.9362 | 0.8844 | 94 | 0.8896 | 0.8683 | 0.8788 | 167 | 0.9926 | 0.9781 | 0.9853 | 137 | 0.9107 | 0.9221 | 0.9164 | 0.9834 |
| 0.0356 | 3.0 | 288 | 0.0534 | 0.9062 | 0.9255 | 0.9158 | 94 | 0.8211 | 0.9341 | 0.8739 | 167 | 1.0 | 0.9708 | 0.9852 | 137 | 0.8974 | 0.9447 | 0.9204 | 0.9840 |
| 0.0235 | 4.0 | 384 | 0.0525 | 0.8866 | 0.9149 | 0.9005 | 94 | 0.9006 | 0.9222 | 0.9112 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9303 | 0.9397 | 0.9350 | 0.9856 |
| 0.0156 | 5.0 | 480 | 0.0623 | 0.9032 | 0.8936 | 0.8984 | 94 | 0.9333 | 0.9222 | 0.9277 | 167 | 0.9926 | 0.9781 | 0.9853 | 137 | 0.9466 | 0.9347 | 0.9406 | 0.9873 |
| 0.0101 | 6.0 | 576 | 0.0590 | 0.9043 | 0.9043 | 0.9043 | 94 | 0.8929 | 0.8982 | 0.8955 | 167 | 0.9926 | 0.9781 | 0.9853 | 137 | 0.9295 | 0.9271 | 0.9283 | 0.9859 |
| 0.0091 | 7.0 | 672 | 0.0955 | 0.8036 | 0.9574 | 0.8738 | 94 | 0.9211 | 0.8383 | 0.8777 | 167 | 0.9643 | 0.9854 | 0.9747 | 137 | 0.9035 | 0.9171 | 0.9102 | 0.9809 |
| 0.0084 | 8.0 | 768 | 0.0871 | 0.8365 | 0.9255 | 0.8788 | 94 | 0.9062 | 0.8683 | 0.8869 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9196 | 0.9196 | 0.9196 | 0.9826 |
| 0.007 | 9.0 | 864 | 0.0629 | 0.9565 | 0.9362 | 0.9462 | 94 | 0.8895 | 0.9162 | 0.9027 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9424 | 0.9447 | 0.9435 | 0.9881 |
| 0.0047 | 10.0 | 960 | 0.0564 | 0.9167 | 0.9362 | 0.9263 | 94 | 0.9512 | 0.9341 | 0.9426 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9594 | 0.9497 | 0.9545 | 0.9901 |
| 0.0043 | 11.0 | 1056 | 0.0829 | 0.9158 | 0.9255 | 0.9206 | 94 | 0.8708 | 0.9281 | 0.8986 | 167 | 0.9926 | 0.9781 | 0.9853 | 137 | 0.9216 | 0.9447 | 0.9330 | 0.9856 |
| 0.0034 | 12.0 | 1152 | 0.0779 | 0.9247 | 0.9149 | 0.9198 | 94 | 0.8667 | 0.9341 | 0.8991 | 167 | 0.9926 | 0.9781 | 0.9853 | 137 | 0.9216 | 0.9447 | 0.9330 | 0.9865 |
| 0.0047 | 13.0 | 1248 | 0.0781 | 0.8922 | 0.9681 | 0.9286 | 94 | 0.95 | 0.9102 | 0.9297 | 167 | 0.9854 | 0.9854 | 0.9854 | 137 | 0.9474 | 0.9497 | 0.9486 | 0.9862 |
| 0.006 | 14.0 | 1344 | 0.0682 | 0.9271 | 0.9468 | 0.9368 | 94 | 0.9236 | 0.8683 | 0.8951 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9509 | 0.9246 | 0.9376 | 0.9859 |
| 0.0031 | 15.0 | 1440 | 0.0759 | 0.9149 | 0.9149 | 0.9149 | 94 | 0.8814 | 0.9341 | 0.9070 | 167 | 0.9926 | 0.9781 | 0.9853 | 137 | 0.9261 | 0.9447 | 0.9353 | 0.9878 |
| 0.0049 | 16.0 | 1536 | 0.0801 | 0.9082 | 0.9468 | 0.9271 | 94 | 0.9107 | 0.9162 | 0.9134 | 167 | 0.9574 | 0.9854 | 0.9712 | 137 | 0.9263 | 0.9472 | 0.9366 | 0.9865 |
| 0.0036 | 17.0 | 1632 | 0.0933 | 0.9278 | 0.9574 | 0.9424 | 94 | 0.9333 | 0.9222 | 0.9277 | 167 | 0.9853 | 0.9781 | 0.9817 | 137 | 0.9497 | 0.9497 | 0.9497 | 0.9887 |
| 0.0033 | 18.0 | 1728 | 0.0828 | 0.9167 | 0.9362 | 0.9263 | 94 | 0.9167 | 0.9222 | 0.9194 | 167 | 0.9926 | 0.9781 | 0.9853 | 137 | 0.9424 | 0.9447 | 0.9435 | 0.9870 |
| 0.0031 | 19.0 | 1824 | 0.0819 | 0.9149 | 0.9149 | 0.9149 | 94 | 0.9102 | 0.9102 | 0.9102 | 167 | 0.9708 | 0.9708 | 0.9708 | 137 | 0.9322 | 0.9322 | 0.9322 | 0.9873 |
| 0.0025 | 20.0 | 1920 | 0.0871 | 0.8969 | 0.9255 | 0.9110 | 94 | 0.9321 | 0.9042 | 0.9179 | 167 | 0.9708 | 0.9708 | 0.9708 | 137 | 0.9369 | 0.9322 | 0.9345 | 0.9878 |
| 0.0023 | 21.0 | 2016 | 0.0813 | 0.89 | 0.9468 | 0.9175 | 94 | 0.9162 | 0.9162 | 0.9162 | 167 | 0.9706 | 0.9635 | 0.9670 | 137 | 0.9280 | 0.9397 | 0.9338 | 0.9873 |
| 0.0023 | 22.0 | 2112 | 0.0885 | 0.9158 | 0.9255 | 0.9206 | 94 | 0.8814 | 0.9341 | 0.9070 | 167 | 1.0 | 0.9635 | 0.9814 | 137 | 0.9282 | 0.9422 | 0.9352 | 0.9867 |
| 0.0018 | 23.0 | 2208 | 0.1209 | 0.8788 | 0.9255 | 0.9016 | 94 | 0.8947 | 0.9162 | 0.9053 | 167 | 0.9779 | 0.9708 | 0.9744 | 137 | 0.9187 | 0.9372 | 0.9279 | 0.9837 |
| 0.0036 | 24.0 | 2304 | 0.0841 | 0.9175 | 0.9468 | 0.9319 | 94 | 0.9029 | 0.9461 | 0.9240 | 167 | 0.9853 | 0.9781 | 0.9817 | 137 | 0.9338 | 0.9573 | 0.9454 | 0.9878 |
| 0.0034 | 25.0 | 2400 | 0.0860 | 0.9368 | 0.9468 | 0.9418 | 94 | 0.9186 | 0.9461 | 0.9322 | 167 | 0.9926 | 0.9781 | 0.9853 | 137 | 0.9478 | 0.9573 | 0.9525 | 0.9884 |
| 0.0029 | 26.0 | 2496 | 0.0684 | 0.9381 | 0.9681 | 0.9529 | 94 | 0.9176 | 0.9341 | 0.9258 | 167 | 0.9926 | 0.9781 | 0.9853 | 137 | 0.9478 | 0.9573 | 0.9525 | 0.9898 |
| 0.0031 | 27.0 | 2592 | 0.1158 | 0.9278 | 0.9574 | 0.9424 | 94 | 0.8933 | 0.9521 | 0.9217 | 167 | 0.9926 | 0.9781 | 0.9853 | 137 | 0.9341 | 0.9623 | 0.9480 | 0.9865 |
| 0.0045 | 28.0 | 2688 | 0.0860 | 0.9263 | 0.9362 | 0.9312 | 94 | 0.8963 | 0.8802 | 0.8882 | 167 | 0.9926 | 0.9781 | 0.9853 | 137 | 0.9365 | 0.9271 | 0.9318 | 0.9854 |
| 0.0018 | 29.0 | 2784 | 0.0869 | 0.9271 | 0.9468 | 0.9368 | 94 | 0.9290 | 0.9401 | 0.9345 | 167 | 0.9926 | 0.9781 | 0.9853 | 137 | 0.95 | 0.9548 | 0.9524 | 0.9884 |
| 0.0023 | 30.0 | 2880 | 0.1042 | 0.9184 | 0.9574 | 0.9375 | 94 | 0.9394 | 0.9281 | 0.9337 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9547 | 0.9523 | 0.9535 | 0.9881 |
| 0.0028 | 31.0 | 2976 | 0.1003 | 0.9020 | 0.9787 | 0.9388 | 94 | 0.9118 | 0.9281 | 0.9199 | 167 | 0.9853 | 0.9781 | 0.9817 | 137 | 0.9338 | 0.9573 | 0.9454 | 0.9862 |
| 0.0015 | 32.0 | 3072 | 0.0802 | 0.91 | 0.9681 | 0.9381 | 94 | 0.9353 | 0.9521 | 0.9436 | 167 | 0.9853 | 0.9781 | 0.9817 | 137 | 0.9458 | 0.9648 | 0.9552 | 0.9890 |
| 0.0025 | 33.0 | 3168 | 0.0959 | 0.8667 | 0.9681 | 0.9146 | 94 | 0.9375 | 0.8982 | 0.9174 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9398 | 0.9422 | 0.9410 | 0.9862 |
| 0.0014 | 34.0 | 3264 | 0.0970 | 0.9184 | 0.9574 | 0.9375 | 94 | 0.9286 | 0.9341 | 0.9313 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.95 | 0.9548 | 0.9524 | 0.9881 |
| 0.0017 | 35.0 | 3360 | 0.0790 | 0.9570 | 0.9468 | 0.9519 | 94 | 0.9123 | 0.9341 | 0.9231 | 167 | 0.9926 | 0.9781 | 0.9853 | 137 | 0.9499 | 0.9523 | 0.9511 | 0.9890 |
| 0.002 | 36.0 | 3456 | 0.0912 | 0.9010 | 0.9681 | 0.9333 | 94 | 0.9317 | 0.8982 | 0.9146 | 167 | 0.9853 | 0.9781 | 0.9817 | 137 | 0.9422 | 0.9422 | 0.9422 | 0.9870 |
| 0.0025 | 37.0 | 3552 | 0.1061 | 0.9271 | 0.9468 | 0.9368 | 94 | 0.9030 | 0.8922 | 0.8976 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9418 | 0.9347 | 0.9382 | 0.9865 |
| 0.0028 | 38.0 | 3648 | 0.0982 | 0.9184 | 0.9574 | 0.9375 | 94 | 0.9085 | 0.8922 | 0.9003 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9419 | 0.9372 | 0.9395 | 0.9870 |
| 0.0022 | 39.0 | 3744 | 0.1061 | 0.8969 | 0.9255 | 0.9110 | 94 | 0.8953 | 0.9222 | 0.9086 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9305 | 0.9422 | 0.9363 | 0.9848 |
| 0.0018 | 40.0 | 3840 | 0.1077 | 0.8980 | 0.9362 | 0.9167 | 94 | 0.9202 | 0.8982 | 0.9091 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9418 | 0.9347 | 0.9382 | 0.9862 |
| 0.002 | 41.0 | 3936 | 0.0923 | 0.8980 | 0.9362 | 0.9167 | 94 | 0.9325 | 0.9102 | 0.9212 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9468 | 0.9397 | 0.9433 | 0.9870 |
| 0.003 | 42.0 | 4032 | 0.0899 | 0.9053 | 0.9149 | 0.9101 | 94 | 0.9112 | 0.9222 | 0.9167 | 167 | 0.9853 | 0.9781 | 0.9817 | 137 | 0.935 | 0.9397 | 0.9373 | 0.9862 |
| 0.0027 | 43.0 | 4128 | 0.0827 | 0.9355 | 0.9255 | 0.9305 | 94 | 0.9277 | 0.9222 | 0.9249 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9542 | 0.9422 | 0.9482 | 0.9878 |
| 0.0015 | 44.0 | 4224 | 0.0798 | 0.9149 | 0.9149 | 0.9149 | 94 | 0.9102 | 0.9102 | 0.9102 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9418 | 0.9347 | 0.9382 | 0.9878 |
| 0.0011 | 45.0 | 4320 | 0.0868 | 0.8958 | 0.9149 | 0.9053 | 94 | 0.9313 | 0.8922 | 0.9113 | 167 | 0.9853 | 0.9781 | 0.9817 | 137 | 0.9413 | 0.9271 | 0.9342 | 0.9881 |
| 0.0012 | 46.0 | 4416 | 0.0743 | 0.8922 | 0.9681 | 0.9286 | 94 | 0.9679 | 0.9042 | 0.9350 | 167 | 0.9852 | 0.9708 | 0.9779 | 137 | 0.9542 | 0.9422 | 0.9482 | 0.9903 |
| 0.0012 | 47.0 | 4512 | 0.0870 | 0.9072 | 0.9362 | 0.9215 | 94 | 0.9375 | 0.8982 | 0.9174 | 167 | 0.9853 | 0.9781 | 0.9817 | 137 | 0.9466 | 0.9347 | 0.9406 | 0.9884 |
| 0.0019 | 48.0 | 4608 | 0.0759 | 0.89 | 0.9468 | 0.9175 | 94 | 0.9308 | 0.8862 | 0.9080 | 167 | 0.9779 | 0.9708 | 0.9744 | 137 | 0.9367 | 0.9296 | 0.9332 | 0.9881 |
| 0.0015 | 49.0 | 4704 | 0.0810 | 0.9271 | 0.9468 | 0.9368 | 94 | 0.9176 | 0.9341 | 0.9258 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9475 | 0.9523 | 0.9499 | 0.9895 |
| 0.0011 | 50.0 | 4800 | 0.0890 | 0.9082 | 0.9468 | 0.9271 | 94 | 0.9506 | 0.9222 | 0.9362 | 167 | 0.9853 | 0.9781 | 0.9817 | 137 | 0.9520 | 0.9472 | 0.9496 | 0.9890 |
| 0.0007 | 51.0 | 4896 | 0.0827 | 0.9167 | 0.9362 | 0.9263 | 94 | 0.9341 | 0.9341 | 0.9341 | 167 | 0.9853 | 0.9781 | 0.9817 | 137 | 0.9474 | 0.9497 | 0.9486 | 0.9895 |
| 0.001 | 52.0 | 4992 | 0.0873 | 0.8980 | 0.9362 | 0.9167 | 94 | 0.9281 | 0.9281 | 0.9281 | 167 | 0.9926 | 0.9781 | 0.9853 | 137 | 0.9425 | 0.9472 | 0.9449 | 0.9887 |
| 0.001 | 53.0 | 5088 | 0.0820 | 0.8980 | 0.9362 | 0.9167 | 94 | 0.9394 | 0.9281 | 0.9337 | 167 | 0.9852 | 0.9708 | 0.9779 | 137 | 0.9447 | 0.9447 | 0.9447 | 0.9890 |
| 0.0004 | 54.0 | 5184 | 0.0917 | 0.8911 | 0.9574 | 0.9231 | 94 | 0.9434 | 0.8982 | 0.9202 | 167 | 0.9853 | 0.9781 | 0.9817 | 137 | 0.9444 | 0.9397 | 0.9421 | 0.9867 |
| 0.0006 | 55.0 | 5280 | 0.1053 | 0.8980 | 0.9362 | 0.9167 | 94 | 0.9333 | 0.9222 | 0.9277 | 167 | 0.9926 | 0.9781 | 0.9853 | 137 | 0.9447 | 0.9447 | 0.9447 | 0.9884 |
| 0.001 | 56.0 | 5376 | 0.1040 | 0.8990 | 0.9468 | 0.9223 | 94 | 0.9333 | 0.9222 | 0.9277 | 167 | 0.9853 | 0.9781 | 0.9817 | 137 | 0.9425 | 0.9472 | 0.9449 | 0.9881 |
| 0.0005 | 57.0 | 5472 | 0.1042 | 0.8990 | 0.9468 | 0.9223 | 94 | 0.9337 | 0.9281 | 0.9309 | 167 | 0.9926 | 0.9781 | 0.9853 | 137 | 0.945 | 0.9497 | 0.9474 | 0.9884 |
| 0.0009 | 58.0 | 5568 | 0.1057 | 0.9082 | 0.9468 | 0.9271 | 94 | 0.9202 | 0.8982 | 0.9091 | 167 | 0.9853 | 0.9781 | 0.9817 | 137 | 0.9395 | 0.9372 | 0.9384 | 0.9876 |
| 0.001 | 59.0 | 5664 | 0.1034 | 0.8911 | 0.9574 | 0.9231 | 94 | 0.9277 | 0.9222 | 0.9249 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9426 | 0.9497 | 0.9462 | 0.9873 |
| 0.0012 | 60.0 | 5760 | 0.0910 | 0.9072 | 0.9362 | 0.9215 | 94 | 0.9337 | 0.9281 | 0.9309 | 167 | 0.9779 | 0.9708 | 0.9744 | 137 | 0.9424 | 0.9447 | 0.9435 | 0.9887 |
| 0.0008 | 61.0 | 5856 | 0.0987 | 0.9247 | 0.9149 | 0.9198 | 94 | 0.9102 | 0.9102 | 0.9102 | 167 | 0.9779 | 0.9708 | 0.9744 | 137 | 0.9369 | 0.9322 | 0.9345 | 0.9862 |
| 0.0005 | 62.0 | 5952 | 0.1056 | 0.8889 | 0.9362 | 0.9119 | 94 | 0.9387 | 0.9162 | 0.9273 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9470 | 0.9422 | 0.9446 | 0.9876 |
| 0.0006 | 63.0 | 6048 | 0.1050 | 0.8980 | 0.9362 | 0.9167 | 94 | 0.9268 | 0.9102 | 0.9184 | 167 | 0.9926 | 0.9781 | 0.9853 | 137 | 0.9421 | 0.9397 | 0.9409 | 0.9873 |
| 0.0013 | 64.0 | 6144 | 0.0956 | 0.9072 | 0.9362 | 0.9215 | 94 | 0.9329 | 0.9162 | 0.9245 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9494 | 0.9422 | 0.9458 | 0.9884 |
| 0.0006 | 65.0 | 6240 | 0.1061 | 0.9082 | 0.9468 | 0.9271 | 94 | 0.9313 | 0.8922 | 0.9113 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9490 | 0.9347 | 0.9418 | 0.9854 |
| 0.0008 | 66.0 | 6336 | 0.1032 | 0.8980 | 0.9362 | 0.9167 | 94 | 0.9325 | 0.9102 | 0.9212 | 167 | 0.9926 | 0.9781 | 0.9853 | 137 | 0.9444 | 0.9397 | 0.9421 | 0.9881 |
| 0.0004 | 67.0 | 6432 | 0.0961 | 0.8980 | 0.9362 | 0.9167 | 94 | 0.9273 | 0.9162 | 0.9217 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9446 | 0.9422 | 0.9434 | 0.9890 |
| 0.0008 | 68.0 | 6528 | 0.0979 | 0.88 | 0.9362 | 0.9072 | 94 | 0.925 | 0.8862 | 0.9052 | 167 | 0.9926 | 0.9781 | 0.9853 | 137 | 0.9367 | 0.9296 | 0.9332 | 0.9870 |
| 0.0013 | 69.0 | 6624 | 0.1021 | 0.89 | 0.9468 | 0.9175 | 94 | 0.9162 | 0.9162 | 0.9162 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9377 | 0.9447 | 0.9412 | 0.9870 |
| 0.0004 | 70.0 | 6720 | 0.0933 | 0.88 | 0.9362 | 0.9072 | 94 | 0.9264 | 0.9042 | 0.9152 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9395 | 0.9372 | 0.9384 | 0.9881 |
| 0.001 | 71.0 | 6816 | 0.0892 | 0.8788 | 0.9255 | 0.9016 | 94 | 0.9264 | 0.9042 | 0.9152 | 167 | 0.9852 | 0.9708 | 0.9779 | 137 | 0.9345 | 0.9322 | 0.9333 | 0.9881 |
| 0.0006 | 72.0 | 6912 | 0.0966 | 0.9091 | 0.9574 | 0.9326 | 94 | 0.9509 | 0.9281 | 0.9394 | 167 | 0.9926 | 0.9781 | 0.9853 | 137 | 0.9547 | 0.9523 | 0.9535 | 0.9892 |
| 0.0006 | 73.0 | 7008 | 0.0997 | 0.8911 | 0.9574 | 0.9231 | 94 | 0.9441 | 0.9102 | 0.9268 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9495 | 0.9447 | 0.9471 | 0.9884 |
| 0.0004 | 74.0 | 7104 | 0.1035 | 0.8824 | 0.9574 | 0.9184 | 94 | 0.9497 | 0.9042 | 0.9264 | 167 | 0.9926 | 0.9781 | 0.9853 | 137 | 0.9470 | 0.9422 | 0.9446 | 0.9881 |
| 0.0005 | 75.0 | 7200 | 0.1036 | 0.8788 | 0.9255 | 0.9016 | 94 | 0.9371 | 0.8922 | 0.9141 | 167 | 0.9852 | 0.9708 | 0.9779 | 137 | 0.9389 | 0.9271 | 0.9330 | 0.9870 |
| 0.0004 | 76.0 | 7296 | 0.0978 | 0.8788 | 0.9255 | 0.9016 | 94 | 0.9317 | 0.8982 | 0.9146 | 167 | 0.9638 | 0.9708 | 0.9673 | 137 | 0.9296 | 0.9296 | 0.9296 | 0.9867 |
| 0.0004 | 77.0 | 7392 | 0.0896 | 0.88 | 0.9362 | 0.9072 | 94 | 0.9273 | 0.9162 | 0.9217 | 167 | 0.9926 | 0.9781 | 0.9853 | 137 | 0.9375 | 0.9422 | 0.9398 | 0.9887 |
| 0.0007 | 78.0 | 7488 | 0.1034 | 0.8889 | 0.9362 | 0.9119 | 94 | 0.9308 | 0.8862 | 0.9080 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9439 | 0.9296 | 0.9367 | 0.9878 |
| 0.0004 | 79.0 | 7584 | 0.1117 | 0.8812 | 0.9468 | 0.9128 | 94 | 0.9259 | 0.8982 | 0.9119 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9395 | 0.9372 | 0.9384 | 0.9873 |
| 0.0006 | 80.0 | 7680 | 0.1053 | 0.8980 | 0.9362 | 0.9167 | 94 | 0.9017 | 0.9341 | 0.9176 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9333 | 0.9497 | 0.9415 | 0.9873 |
| 0.0003 | 81.0 | 7776 | 0.1023 | 0.8980 | 0.9362 | 0.9167 | 94 | 0.9222 | 0.9222 | 0.9222 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9424 | 0.9447 | 0.9435 | 0.9884 |
| 0.0005 | 82.0 | 7872 | 0.0998 | 0.8990 | 0.9468 | 0.9223 | 94 | 0.9281 | 0.9281 | 0.9281 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.945 | 0.9497 | 0.9474 | 0.9887 |
| 0.0004 | 83.0 | 7968 | 0.1031 | 0.8980 | 0.9362 | 0.9167 | 94 | 0.9222 | 0.9222 | 0.9222 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9424 | 0.9447 | 0.9435 | 0.9884 |
| 0.0002 | 84.0 | 8064 | 0.1076 | 0.9072 | 0.9362 | 0.9215 | 94 | 0.9273 | 0.9162 | 0.9217 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9470 | 0.9422 | 0.9446 | 0.9890 |
| 0.0008 | 85.0 | 8160 | 0.1031 | 0.9062 | 0.9255 | 0.9158 | 94 | 0.9273 | 0.9162 | 0.9217 | 167 | 0.9925 | 0.9708 | 0.9815 | 137 | 0.9443 | 0.9372 | 0.9407 | 0.9887 |
| 0.0003 | 86.0 | 8256 | 0.0967 | 0.9062 | 0.9255 | 0.9158 | 94 | 0.9383 | 0.9102 | 0.9240 | 167 | 0.9925 | 0.9708 | 0.9815 | 137 | 0.9490 | 0.9347 | 0.9418 | 0.9892 |
| 0.0005 | 87.0 | 8352 | 0.0978 | 0.8889 | 0.9362 | 0.9119 | 94 | 0.9317 | 0.8982 | 0.9146 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9442 | 0.9347 | 0.9394 | 0.9884 |
| 0.0003 | 88.0 | 8448 | 0.1104 | 0.8889 | 0.9362 | 0.9119 | 94 | 0.9375 | 0.8982 | 0.9174 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9466 | 0.9347 | 0.9406 | 0.9881 |
| 0.0005 | 89.0 | 8544 | 0.1069 | 0.89 | 0.9468 | 0.9175 | 94 | 0.9441 | 0.9102 | 0.9268 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9494 | 0.9422 | 0.9458 | 0.9887 |
| 0.0003 | 90.0 | 8640 | 0.1071 | 0.89 | 0.9468 | 0.9175 | 94 | 0.9441 | 0.9102 | 0.9268 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9494 | 0.9422 | 0.9458 | 0.9887 |
| 0.0005 | 91.0 | 8736 | 0.1068 | 0.89 | 0.9468 | 0.9175 | 94 | 0.9441 | 0.9102 | 0.9268 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9494 | 0.9422 | 0.9458 | 0.9887 |
| 0.0004 | 92.0 | 8832 | 0.1078 | 0.89 | 0.9468 | 0.9175 | 94 | 0.9444 | 0.9162 | 0.9301 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9495 | 0.9447 | 0.9471 | 0.9890 |
| 0.0003 | 93.0 | 8928 | 0.1079 | 0.89 | 0.9468 | 0.9175 | 94 | 0.9444 | 0.9162 | 0.9301 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9495 | 0.9447 | 0.9471 | 0.9890 |
| 0.0004 | 94.0 | 9024 | 0.1082 | 0.89 | 0.9468 | 0.9175 | 94 | 0.9387 | 0.9162 | 0.9273 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9471 | 0.9447 | 0.9459 | 0.9887 |
| 0.0003 | 95.0 | 9120 | 0.1080 | 0.89 | 0.9468 | 0.9175 | 94 | 0.9387 | 0.9162 | 0.9273 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9471 | 0.9447 | 0.9459 | 0.9887 |
| 0.0003 | 96.0 | 9216 | 0.1082 | 0.89 | 0.9468 | 0.9175 | 94 | 0.9387 | 0.9162 | 0.9273 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9471 | 0.9447 | 0.9459 | 0.9887 |
| 0.0002 | 97.0 | 9312 | 0.1080 | 0.89 | 0.9468 | 0.9175 | 94 | 0.9387 | 0.9162 | 0.9273 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9471 | 0.9447 | 0.9459 | 0.9887 |
| 0.0003 | 98.0 | 9408 | 0.1080 | 0.89 | 0.9468 | 0.9175 | 94 | 0.9444 | 0.9162 | 0.9301 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9495 | 0.9447 | 0.9471 | 0.9890 |
| 0.0003 | 99.0 | 9504 | 0.1085 | 0.89 | 0.9468 | 0.9175 | 94 | 0.9387 | 0.9162 | 0.9273 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9471 | 0.9447 | 0.9459 | 0.9887 |
| 0.0002 | 100.0 | 9600 | 0.1084 | 0.89 | 0.9468 | 0.9175 | 94 | 0.9387 | 0.9162 | 0.9273 | 167 | 1.0 | 0.9781 | 0.9889 | 137 | 0.9471 | 0.9447 | 0.9459 | 0.9887 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
dbands/llama-3-8b-code_bagel_hermes-2-5-blender-16bit
|
dbands
| 2024-06-04T00:15:01Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-04T00:09:05Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** dbands
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
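A minimal inference sketch for the merged 16-bit weights (assuming the repo loads directly with `transformers`; the prompt is illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "dbands/llama-3-8b-code_bagel_hermes-2-5-blender-16bit"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("def quicksort(arr):", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```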
|
martinsinnona/visdecode_vega_1
|
martinsinnona
| 2024-06-04T00:09:09Z | 48 | 0 |
transformers
|
[
"transformers",
"safetensors",
"pix2struct",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-05-21T18:34:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
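A minimal sketch, assuming the checkpoint follows the standard Pix2Struct image-to-text interface (the chart image path is hypothetical):
```python
from PIL import Image
from transformers import Pix2StructProcessor, Pix2StructForConditionalGeneration

processor = Pix2StructProcessor.from_pretrained("martinsinnona/visdecode_vega_1")
model = Pix2StructForConditionalGeneration.from_pretrained(
    "martinsinnona/visdecode_vega_1"
)

image = Image.open("chart.png")  # hypothetical input chart
inputs = processor(images=image, return_tensors="pt")
generated = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(generated[0], skip_special_tokens=True))
```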
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
apwic/nerugm-unipelt-4
|
apwic
| 2024-06-04T00:00:06Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"region:us"
] | null | 2024-05-28T02:54:06Z |
---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: nerugm-unipelt-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nerugm-unipelt-4
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2206
- Location Precision: 0.7949
- Location Recall: 0.8493
- Location F1: 0.8212
- Location Number: 73
- Organization Precision: 0.7361
- Organization Recall: 0.8154
- Organization F1: 0.7737
- Organization Number: 65
- Person Precision: 0.8924
- Person Recall: 0.94
- Person F1: 0.9156
- Person Number: 150
- Quantity Precision: 0.8125
- Quantity Recall: 0.8966
- Quantity F1: 0.8525
- Quantity Number: 29
- Time Precision: 0.7838
- Time Recall: 0.8529
- Time F1: 0.8169
- Time Number: 34
- Overall Precision: 0.8249
- Overall Recall: 0.8860
- Overall F1: 0.8544
- Overall Accuracy: 0.9638
## Model description
More information needed
## Intended uses & limitations
More information needed
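Since this was trained as a UniPELT setup on `indolem/indobert-base-uncased`, here is a loading sketch under the assumption that the repo hosts a checkpoint compatible with the `adapters` library (if not, download the adapter files and pass a local path instead):
```python
from adapters import AutoAdapterModel
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("indolem/indobert-base-uncased")
model = AutoAdapterModel.from_pretrained("indolem/indobert-base-uncased")

# Assumption: the hub repo id is loadable as an adapter; activating it
# routes the forward pass through the UniPELT modules.
adapter_name = model.load_adapter("apwic/nerugm-unipelt-4")
model.set_active_adapters(adapter_name)
```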
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Location Precision | Location Recall | Location F1 | Location Number | Organization Precision | Organization Recall | Organization F1 | Organization Number | Person Precision | Person Recall | Person F1 | Person Number | Quantity Precision | Quantity Recall | Quantity F1 | Quantity Number | Time Precision | Time Recall | Time F1 | Time Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------------------:|:---------------:|:-----------:|:---------------:|:----------------------:|:-------------------:|:---------------:|:-------------------:|:----------------:|:-------------:|:---------:|:-------------:|:------------------:|:---------------:|:-----------:|:---------------:|:--------------:|:-----------:|:-------:|:-----------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.9477 | 1.0 | 106 | 0.6205 | 0.0 | 0.0 | 0.0 | 73 | 0.0 | 0.0 | 0.0 | 65 | 0.2 | 0.0067 | 0.0129 | 150 | 0.0 | 0.0 | 0.0 | 29 | 0.0 | 0.0 | 0.0 | 34 | 0.2 | 0.0028 | 0.0056 | 0.8373 |
| 0.5027 | 2.0 | 212 | 0.3516 | 0.4026 | 0.4247 | 0.4133 | 73 | 0.1091 | 0.0923 | 0.1 | 65 | 0.6020 | 0.8067 | 0.6895 | 150 | 0.1739 | 0.1379 | 0.1538 | 29 | 0.5 | 0.6471 | 0.5641 | 34 | 0.46 | 0.5242 | 0.4900 | 0.9052 |
| 0.2888 | 3.0 | 318 | 0.1855 | 0.5319 | 0.6849 | 0.5988 | 73 | 0.5167 | 0.4769 | 0.4960 | 65 | 0.7816 | 0.9067 | 0.8395 | 150 | 0.4545 | 0.5172 | 0.4839 | 29 | 0.8286 | 0.8529 | 0.8406 | 34 | 0.6591 | 0.7436 | 0.6988 | 0.9404 |
| 0.1926 | 4.0 | 424 | 0.1595 | 0.6122 | 0.8219 | 0.7018 | 73 | 0.5054 | 0.7231 | 0.5949 | 65 | 0.8155 | 0.9133 | 0.8616 | 150 | 0.6053 | 0.7931 | 0.6866 | 29 | 0.8611 | 0.9118 | 0.8857 | 34 | 0.6882 | 0.8490 | 0.7602 | 0.9486 |
| 0.163 | 5.0 | 530 | 0.1464 | 0.6354 | 0.8356 | 0.7219 | 73 | 0.5904 | 0.7538 | 0.6622 | 65 | 0.8354 | 0.9133 | 0.8726 | 150 | 0.6757 | 0.8621 | 0.7576 | 29 | 0.725 | 0.8529 | 0.7838 | 34 | 0.7167 | 0.8575 | 0.7808 | 0.9515 |
| 0.1447 | 6.0 | 636 | 0.1606 | 0.6703 | 0.8356 | 0.7439 | 73 | 0.6265 | 0.8 | 0.7027 | 65 | 0.8323 | 0.9267 | 0.8770 | 150 | 0.5333 | 0.8276 | 0.6486 | 29 | 0.6222 | 0.8235 | 0.7089 | 34 | 0.7053 | 0.8661 | 0.7775 | 0.9471 |
| 0.1316 | 7.0 | 742 | 0.1419 | 0.6739 | 0.8493 | 0.7515 | 73 | 0.6622 | 0.7538 | 0.7050 | 65 | 0.8253 | 0.9133 | 0.8671 | 150 | 0.6757 | 0.8621 | 0.7576 | 29 | 0.75 | 0.8824 | 0.8108 | 34 | 0.7408 | 0.8632 | 0.7974 | 0.9550 |
| 0.1217 | 8.0 | 848 | 0.1318 | 0.7294 | 0.8493 | 0.7848 | 73 | 0.6364 | 0.7538 | 0.6901 | 65 | 0.8313 | 0.92 | 0.8734 | 150 | 0.6757 | 0.8621 | 0.7576 | 29 | 0.7895 | 0.8824 | 0.8333 | 34 | 0.7543 | 0.8661 | 0.8064 | 0.9569 |
| 0.1125 | 9.0 | 954 | 0.1269 | 0.7439 | 0.8356 | 0.7871 | 73 | 0.6145 | 0.7846 | 0.6892 | 65 | 0.8608 | 0.9067 | 0.8831 | 150 | 0.7297 | 0.9310 | 0.8182 | 29 | 0.8108 | 0.8824 | 0.8451 | 34 | 0.7683 | 0.8689 | 0.8155 | 0.9591 |
| 0.1088 | 10.0 | 1060 | 0.1347 | 0.6988 | 0.7945 | 0.7436 | 73 | 0.6944 | 0.7692 | 0.7299 | 65 | 0.8344 | 0.9067 | 0.8690 | 150 | 0.6944 | 0.8621 | 0.7692 | 29 | 0.7692 | 0.8824 | 0.8219 | 34 | 0.7608 | 0.8519 | 0.8038 | 0.9582 |
| 0.1017 | 11.0 | 1166 | 0.1373 | 0.7024 | 0.8082 | 0.7516 | 73 | 0.6456 | 0.7846 | 0.7083 | 65 | 0.8438 | 0.9 | 0.8710 | 150 | 0.6757 | 0.8621 | 0.7576 | 29 | 0.6667 | 0.8235 | 0.7368 | 34 | 0.7413 | 0.8490 | 0.7915 | 0.9559 |
| 0.0943 | 12.0 | 1272 | 0.1453 | 0.6778 | 0.8356 | 0.7485 | 73 | 0.6310 | 0.8154 | 0.7114 | 65 | 0.8457 | 0.9133 | 0.8782 | 150 | 0.7027 | 0.8966 | 0.7879 | 29 | 0.6512 | 0.8235 | 0.7273 | 34 | 0.7332 | 0.8689 | 0.7953 | 0.9567 |
| 0.0886 | 13.0 | 1378 | 0.1357 | 0.7375 | 0.8082 | 0.7712 | 73 | 0.5978 | 0.8462 | 0.7006 | 65 | 0.8553 | 0.9067 | 0.8803 | 150 | 0.7143 | 0.8621 | 0.7813 | 29 | 0.7692 | 0.8824 | 0.8219 | 34 | 0.7531 | 0.8689 | 0.8069 | 0.9579 |
| 0.0855 | 14.0 | 1484 | 0.1371 | 0.7024 | 0.8082 | 0.7516 | 73 | 0.625 | 0.7692 | 0.6897 | 65 | 0.8457 | 0.9133 | 0.8782 | 150 | 0.75 | 0.9310 | 0.8308 | 29 | 0.6829 | 0.8235 | 0.7467 | 34 | 0.7469 | 0.8575 | 0.7984 | 0.9569 |
| 0.0814 | 15.0 | 1590 | 0.1300 | 0.7821 | 0.8356 | 0.8079 | 73 | 0.6329 | 0.7692 | 0.6944 | 65 | 0.8562 | 0.9133 | 0.8839 | 150 | 0.7027 | 0.8966 | 0.7879 | 29 | 0.7368 | 0.8235 | 0.7778 | 34 | 0.7704 | 0.8604 | 0.8129 | 0.9599 |
| 0.079 | 16.0 | 1696 | 0.1442 | 0.7439 | 0.8356 | 0.7871 | 73 | 0.6667 | 0.7692 | 0.7143 | 65 | 0.8528 | 0.9267 | 0.8882 | 150 | 0.75 | 0.9310 | 0.8308 | 29 | 0.7 | 0.8235 | 0.7568 | 34 | 0.7702 | 0.8689 | 0.8166 | 0.9589 |
| 0.0722 | 17.0 | 1802 | 0.1371 | 0.7349 | 0.8356 | 0.7821 | 73 | 0.6667 | 0.7692 | 0.7143 | 65 | 0.8415 | 0.92 | 0.8790 | 150 | 0.7576 | 0.8621 | 0.8065 | 29 | 0.725 | 0.8529 | 0.7838 | 34 | 0.7671 | 0.8632 | 0.8123 | 0.9587 |
| 0.0724 | 18.0 | 1908 | 0.1402 | 0.75 | 0.8219 | 0.7843 | 73 | 0.6190 | 0.8 | 0.6980 | 65 | 0.875 | 0.9333 | 0.9032 | 150 | 0.6389 | 0.7931 | 0.7077 | 29 | 0.7568 | 0.8235 | 0.7887 | 34 | 0.7632 | 0.8632 | 0.8102 | 0.9579 |
| 0.0704 | 19.0 | 2014 | 0.1272 | 0.7821 | 0.8356 | 0.8079 | 73 | 0.68 | 0.7846 | 0.7286 | 65 | 0.8580 | 0.9267 | 0.8910 | 150 | 0.7429 | 0.8966 | 0.8125 | 29 | 0.7368 | 0.8235 | 0.7778 | 34 | 0.7861 | 0.8689 | 0.8254 | 0.9633 |
| 0.0629 | 20.0 | 2120 | 0.1404 | 0.7229 | 0.8219 | 0.7692 | 73 | 0.6265 | 0.8 | 0.7027 | 65 | 0.8696 | 0.9333 | 0.9003 | 150 | 0.7353 | 0.8621 | 0.7937 | 29 | 0.7895 | 0.8824 | 0.8333 | 34 | 0.7694 | 0.8746 | 0.8187 | 0.9599 |
| 0.0607 | 21.0 | 2226 | 0.1343 | 0.7531 | 0.8356 | 0.7922 | 73 | 0.6711 | 0.7846 | 0.7234 | 65 | 0.8854 | 0.9267 | 0.9055 | 150 | 0.7353 | 0.8621 | 0.7937 | 29 | 0.8857 | 0.9118 | 0.8986 | 34 | 0.8016 | 0.8746 | 0.8365 | 0.9631 |
| 0.0598 | 22.0 | 2332 | 0.1399 | 0.7531 | 0.8356 | 0.7922 | 73 | 0.6623 | 0.7846 | 0.7183 | 65 | 0.8910 | 0.9267 | 0.9085 | 150 | 0.7353 | 0.8621 | 0.7937 | 29 | 0.7568 | 0.8235 | 0.7887 | 34 | 0.7896 | 0.8661 | 0.8261 | 0.9619 |
| 0.0578 | 23.0 | 2438 | 0.1296 | 0.7792 | 0.8219 | 0.8000 | 73 | 0.68 | 0.7846 | 0.7286 | 65 | 0.8861 | 0.9333 | 0.9091 | 150 | 0.7353 | 0.8621 | 0.7937 | 29 | 0.8333 | 0.8824 | 0.8571 | 34 | 0.8053 | 0.8718 | 0.8372 | 0.9628 |
| 0.0538 | 24.0 | 2544 | 0.1458 | 0.7531 | 0.8356 | 0.7922 | 73 | 0.65 | 0.8 | 0.7172 | 65 | 0.8571 | 0.92 | 0.8875 | 150 | 0.7647 | 0.8966 | 0.8254 | 29 | 0.7632 | 0.8529 | 0.8056 | 34 | 0.7766 | 0.8718 | 0.8215 | 0.9596 |
| 0.0519 | 25.0 | 2650 | 0.1594 | 0.7176 | 0.8356 | 0.7722 | 73 | 0.6667 | 0.8 | 0.7273 | 65 | 0.8650 | 0.94 | 0.9010 | 150 | 0.6944 | 0.8621 | 0.7692 | 29 | 0.8056 | 0.8529 | 0.8286 | 34 | 0.7739 | 0.8775 | 0.8224 | 0.9594 |
| 0.0513 | 26.0 | 2756 | 0.1568 | 0.7143 | 0.8219 | 0.7643 | 73 | 0.68 | 0.7846 | 0.7286 | 65 | 0.8485 | 0.9333 | 0.8889 | 150 | 0.7143 | 0.8621 | 0.7813 | 29 | 0.7 | 0.8235 | 0.7568 | 34 | 0.7619 | 0.8661 | 0.8107 | 0.9582 |
| 0.051 | 27.0 | 2862 | 0.1527 | 0.7439 | 0.8356 | 0.7871 | 73 | 0.6296 | 0.7846 | 0.6986 | 65 | 0.8634 | 0.9267 | 0.8939 | 150 | 0.7143 | 0.8621 | 0.7813 | 29 | 0.7 | 0.8235 | 0.7568 | 34 | 0.7619 | 0.8661 | 0.8107 | 0.9594 |
| 0.0479 | 28.0 | 2968 | 0.1568 | 0.7439 | 0.8356 | 0.7871 | 73 | 0.6842 | 0.8 | 0.7376 | 65 | 0.875 | 0.9333 | 0.9032 | 150 | 0.7647 | 0.8966 | 0.8254 | 29 | 0.7 | 0.8235 | 0.7568 | 34 | 0.7832 | 0.8746 | 0.8264 | 0.9596 |
| 0.0442 | 29.0 | 3074 | 0.1412 | 0.7949 | 0.8493 | 0.8212 | 73 | 0.6463 | 0.8154 | 0.7211 | 65 | 0.8580 | 0.9267 | 0.8910 | 150 | 0.7353 | 0.8621 | 0.7937 | 29 | 0.8286 | 0.8529 | 0.8406 | 34 | 0.7877 | 0.8775 | 0.8302 | 0.9611 |
| 0.0431 | 30.0 | 3180 | 0.1463 | 0.8158 | 0.8493 | 0.8322 | 73 | 0.6667 | 0.7692 | 0.7143 | 65 | 0.8805 | 0.9333 | 0.9061 | 150 | 0.7879 | 0.8966 | 0.8387 | 29 | 0.7436 | 0.8529 | 0.7945 | 34 | 0.8037 | 0.8746 | 0.8377 | 0.9623 |
| 0.0431 | 31.0 | 3286 | 0.1430 | 0.7821 | 0.8356 | 0.8079 | 73 | 0.6757 | 0.7692 | 0.7194 | 65 | 0.8634 | 0.9267 | 0.8939 | 150 | 0.8125 | 0.8966 | 0.8525 | 29 | 0.8056 | 0.8529 | 0.8286 | 34 | 0.8005 | 0.8689 | 0.8333 | 0.9631 |
| 0.0396 | 32.0 | 3392 | 0.1682 | 0.7722 | 0.8356 | 0.8026 | 73 | 0.6310 | 0.8154 | 0.7114 | 65 | 0.8634 | 0.9267 | 0.8939 | 150 | 0.7647 | 0.8966 | 0.8254 | 29 | 0.7317 | 0.8824 | 0.8 | 34 | 0.7744 | 0.8803 | 0.824 | 0.9594 |
| 0.0404 | 33.0 | 3498 | 0.1550 | 0.7922 | 0.8356 | 0.8133 | 73 | 0.7246 | 0.7692 | 0.7463 | 65 | 0.8910 | 0.9267 | 0.9085 | 150 | 0.7353 | 0.8621 | 0.7937 | 29 | 0.75 | 0.8824 | 0.8108 | 34 | 0.8112 | 0.8689 | 0.8391 | 0.9619 |
| 0.038 | 34.0 | 3604 | 0.1416 | 0.8158 | 0.8493 | 0.8322 | 73 | 0.6585 | 0.8308 | 0.7347 | 65 | 0.875 | 0.9333 | 0.9032 | 150 | 0.7879 | 0.8966 | 0.8387 | 29 | 0.8611 | 0.9118 | 0.8857 | 34 | 0.8088 | 0.8917 | 0.8482 | 0.9643 |
| 0.0389 | 35.0 | 3710 | 0.1660 | 0.7349 | 0.8356 | 0.7821 | 73 | 0.6711 | 0.7846 | 0.7234 | 65 | 0.8742 | 0.9267 | 0.8997 | 150 | 0.7059 | 0.8276 | 0.7619 | 29 | 0.6905 | 0.8529 | 0.7632 | 34 | 0.7716 | 0.8661 | 0.8161 | 0.9594 |
| 0.0359 | 36.0 | 3816 | 0.1483 | 0.7848 | 0.8493 | 0.8158 | 73 | 0.7027 | 0.8 | 0.7482 | 65 | 0.8734 | 0.92 | 0.8961 | 150 | 0.7647 | 0.8966 | 0.8254 | 29 | 0.8108 | 0.8824 | 0.8451 | 34 | 0.8063 | 0.8775 | 0.8404 | 0.9638 |
| 0.0352 | 37.0 | 3922 | 0.1701 | 0.7439 | 0.8356 | 0.7871 | 73 | 0.6933 | 0.8 | 0.7429 | 65 | 0.8734 | 0.92 | 0.8961 | 150 | 0.7647 | 0.8966 | 0.8254 | 29 | 0.8421 | 0.9412 | 0.8889 | 34 | 0.7984 | 0.8803 | 0.8374 | 0.9621 |
| 0.0342 | 38.0 | 4028 | 0.1522 | 0.8052 | 0.8493 | 0.8267 | 73 | 0.75 | 0.7846 | 0.7669 | 65 | 0.8734 | 0.92 | 0.8961 | 150 | 0.7647 | 0.8966 | 0.8254 | 29 | 0.7692 | 0.8824 | 0.8219 | 34 | 0.8165 | 0.8746 | 0.8446 | 0.9641 |
| 0.0315 | 39.0 | 4134 | 0.1590 | 0.7821 | 0.8356 | 0.8079 | 73 | 0.7013 | 0.8308 | 0.7606 | 65 | 0.8688 | 0.9267 | 0.8968 | 150 | 0.7576 | 0.8621 | 0.8065 | 29 | 0.8824 | 0.8824 | 0.8824 | 34 | 0.8089 | 0.8803 | 0.8431 | 0.9638 |
| 0.0335 | 40.0 | 4240 | 0.1513 | 0.8289 | 0.8630 | 0.8456 | 73 | 0.7067 | 0.8154 | 0.7571 | 65 | 0.8734 | 0.92 | 0.8961 | 150 | 0.8125 | 0.8966 | 0.8525 | 29 | 0.8205 | 0.9412 | 0.8767 | 34 | 0.8211 | 0.8889 | 0.8536 | 0.9653 |
| 0.0317 | 41.0 | 4346 | 0.1541 | 0.8378 | 0.8493 | 0.8435 | 73 | 0.6835 | 0.8308 | 0.75 | 65 | 0.8868 | 0.94 | 0.9126 | 150 | 0.7879 | 0.8966 | 0.8387 | 29 | 0.75 | 0.8824 | 0.8108 | 34 | 0.8130 | 0.8917 | 0.8505 | 0.9633 |
| 0.0288 | 42.0 | 4452 | 0.1681 | 0.7922 | 0.8356 | 0.8133 | 73 | 0.6279 | 0.8308 | 0.7152 | 65 | 0.8931 | 0.9467 | 0.9191 | 150 | 0.7576 | 0.8621 | 0.8065 | 29 | 0.7838 | 0.8529 | 0.8169 | 34 | 0.7934 | 0.8860 | 0.8371 | 0.9606 |
| 0.0275 | 43.0 | 4558 | 0.1761 | 0.7821 | 0.8356 | 0.8079 | 73 | 0.6279 | 0.8308 | 0.7152 | 65 | 0.8931 | 0.9467 | 0.9191 | 150 | 0.7059 | 0.8276 | 0.7619 | 29 | 0.7436 | 0.8529 | 0.7945 | 34 | 0.7828 | 0.8832 | 0.8300 | 0.9604 |
| 0.0265 | 44.0 | 4664 | 0.1796 | 0.7922 | 0.8356 | 0.8133 | 73 | 0.7051 | 0.8462 | 0.7692 | 65 | 0.8712 | 0.9467 | 0.9073 | 150 | 0.7273 | 0.8276 | 0.7742 | 29 | 0.7895 | 0.8824 | 0.8333 | 34 | 0.8021 | 0.8889 | 0.8432 | 0.9628 |
| 0.0271 | 45.0 | 4770 | 0.1760 | 0.7949 | 0.8493 | 0.8212 | 73 | 0.7067 | 0.8154 | 0.7571 | 65 | 0.8696 | 0.9333 | 0.9003 | 150 | 0.7812 | 0.8621 | 0.8197 | 29 | 0.8158 | 0.9118 | 0.8611 | 34 | 0.8099 | 0.8860 | 0.8463 | 0.9621 |
| 0.026 | 46.0 | 4876 | 0.1910 | 0.7922 | 0.8356 | 0.8133 | 73 | 0.6795 | 0.8154 | 0.7413 | 65 | 0.875 | 0.9333 | 0.9032 | 150 | 0.8125 | 0.8966 | 0.8525 | 29 | 0.7632 | 0.8529 | 0.8056 | 34 | 0.8026 | 0.8803 | 0.8397 | 0.9604 |
| 0.0264 | 47.0 | 4982 | 0.1727 | 0.8052 | 0.8493 | 0.8267 | 73 | 0.7013 | 0.8308 | 0.7606 | 65 | 0.8696 | 0.9333 | 0.9003 | 150 | 0.7576 | 0.8621 | 0.8065 | 29 | 0.7895 | 0.8824 | 0.8333 | 34 | 0.8057 | 0.8860 | 0.8440 | 0.9636 |
| 0.0252 | 48.0 | 5088 | 0.1840 | 0.7821 | 0.8356 | 0.8079 | 73 | 0.6923 | 0.8308 | 0.7552 | 65 | 0.8758 | 0.94 | 0.9068 | 150 | 0.7812 | 0.8621 | 0.8197 | 29 | 0.7632 | 0.8529 | 0.8056 | 34 | 0.8010 | 0.8832 | 0.8401 | 0.9626 |
| 0.0234 | 49.0 | 5194 | 0.1759 | 0.8133 | 0.8356 | 0.8243 | 73 | 0.7183 | 0.7846 | 0.75 | 65 | 0.875 | 0.9333 | 0.9032 | 150 | 0.8125 | 0.8966 | 0.8525 | 29 | 0.7692 | 0.8824 | 0.8219 | 34 | 0.8170 | 0.8775 | 0.8462 | 0.9641 |
| 0.0227 | 50.0 | 5300 | 0.1780 | 0.7922 | 0.8356 | 0.8133 | 73 | 0.7324 | 0.8 | 0.7647 | 65 | 0.8917 | 0.9333 | 0.9121 | 150 | 0.8387 | 0.8966 | 0.8667 | 29 | 0.8158 | 0.9118 | 0.8611 | 34 | 0.8289 | 0.8832 | 0.8552 | 0.9651 |
| 0.0246 | 51.0 | 5406 | 0.1751 | 0.8052 | 0.8493 | 0.8267 | 73 | 0.7391 | 0.7846 | 0.7612 | 65 | 0.8854 | 0.9267 | 0.9055 | 150 | 0.8125 | 0.8966 | 0.8525 | 29 | 0.8421 | 0.9412 | 0.8889 | 34 | 0.8311 | 0.8832 | 0.8564 | 0.9658 |
| 0.0225 | 52.0 | 5512 | 0.1924 | 0.7848 | 0.8493 | 0.8158 | 73 | 0.7333 | 0.8462 | 0.7857 | 65 | 0.8642 | 0.9333 | 0.8974 | 150 | 0.7576 | 0.8621 | 0.8065 | 29 | 0.8158 | 0.9118 | 0.8611 | 34 | 0.8088 | 0.8917 | 0.8482 | 0.9636 |
| 0.0217 | 53.0 | 5618 | 0.1959 | 0.7778 | 0.8630 | 0.8182 | 73 | 0.7297 | 0.8308 | 0.7770 | 65 | 0.875 | 0.9333 | 0.9032 | 150 | 0.7576 | 0.8621 | 0.8065 | 29 | 0.7073 | 0.8529 | 0.7733 | 34 | 0.7995 | 0.8860 | 0.8405 | 0.9621 |
| 0.0201 | 54.0 | 5724 | 0.2034 | 0.7922 | 0.8356 | 0.8133 | 73 | 0.6875 | 0.8462 | 0.7586 | 65 | 0.8642 | 0.9333 | 0.8974 | 150 | 0.7576 | 0.8621 | 0.8065 | 29 | 0.8158 | 0.9118 | 0.8611 | 34 | 0.8 | 0.8889 | 0.8421 | 0.9611 |
| 0.0198 | 55.0 | 5830 | 0.1859 | 0.8158 | 0.8493 | 0.8322 | 73 | 0.6986 | 0.7846 | 0.7391 | 65 | 0.8854 | 0.9267 | 0.9055 | 150 | 0.7576 | 0.8621 | 0.8065 | 29 | 0.7895 | 0.8824 | 0.8333 | 34 | 0.8143 | 0.8746 | 0.8434 | 0.9633 |
| 0.0199 | 56.0 | 5936 | 0.1812 | 0.8182 | 0.8630 | 0.8400 | 73 | 0.7324 | 0.8 | 0.7647 | 65 | 0.8805 | 0.9333 | 0.9061 | 150 | 0.7576 | 0.8621 | 0.8065 | 29 | 0.8889 | 0.9412 | 0.9143 | 34 | 0.8298 | 0.8889 | 0.8583 | 0.9663 |
| 0.0183 | 57.0 | 6042 | 0.1818 | 0.7949 | 0.8493 | 0.8212 | 73 | 0.7260 | 0.8154 | 0.7681 | 65 | 0.8758 | 0.94 | 0.9068 | 150 | 0.7812 | 0.8621 | 0.8197 | 29 | 0.8333 | 0.8824 | 0.8571 | 34 | 0.8184 | 0.8860 | 0.8509 | 0.9655 |
| 0.0204 | 58.0 | 6148 | 0.1844 | 0.775 | 0.8493 | 0.8105 | 73 | 0.7206 | 0.7538 | 0.7368 | 65 | 0.8734 | 0.92 | 0.8961 | 150 | 0.8125 | 0.8966 | 0.8525 | 29 | 0.8421 | 0.9412 | 0.8889 | 34 | 0.8165 | 0.8746 | 0.8446 | 0.9636 |
| 0.0171 | 59.0 | 6254 | 0.2010 | 0.7625 | 0.8356 | 0.7974 | 73 | 0.7391 | 0.7846 | 0.7612 | 65 | 0.9032 | 0.9333 | 0.9180 | 150 | 0.7576 | 0.8621 | 0.8065 | 29 | 0.7436 | 0.8529 | 0.7945 | 34 | 0.8138 | 0.8718 | 0.8418 | 0.9641 |
| 0.0177 | 60.0 | 6360 | 0.1953 | 0.7949 | 0.8493 | 0.8212 | 73 | 0.72 | 0.8308 | 0.7714 | 65 | 0.8974 | 0.9333 | 0.9150 | 150 | 0.7879 | 0.8966 | 0.8387 | 29 | 0.8158 | 0.9118 | 0.8611 | 34 | 0.8237 | 0.8917 | 0.8564 | 0.9651 |
| 0.0166 | 61.0 | 6466 | 0.1929 | 0.7922 | 0.8356 | 0.8133 | 73 | 0.7297 | 0.8308 | 0.7770 | 65 | 0.8642 | 0.9333 | 0.8974 | 150 | 0.7812 | 0.8621 | 0.8197 | 29 | 0.8611 | 0.9118 | 0.8857 | 34 | 0.8163 | 0.8860 | 0.8497 | 0.9646 |
| 0.0169 | 62.0 | 6572 | 0.1962 | 0.8052 | 0.8493 | 0.8267 | 73 | 0.7324 | 0.8 | 0.7647 | 65 | 0.8861 | 0.9333 | 0.9091 | 150 | 0.7576 | 0.8621 | 0.8065 | 29 | 0.7632 | 0.8529 | 0.8056 | 34 | 0.8170 | 0.8775 | 0.8462 | 0.9651 |
| 0.0158 | 63.0 | 6678 | 0.2073 | 0.8158 | 0.8493 | 0.8322 | 73 | 0.6353 | 0.8308 | 0.7200 | 65 | 0.8679 | 0.92 | 0.8932 | 150 | 0.7812 | 0.8621 | 0.8197 | 29 | 0.7895 | 0.8824 | 0.8333 | 34 | 0.7923 | 0.8803 | 0.8340 | 0.9621 |
| 0.0167 | 64.0 | 6784 | 0.1845 | 0.8182 | 0.8630 | 0.8400 | 73 | 0.75 | 0.7846 | 0.7669 | 65 | 0.8974 | 0.9333 | 0.9150 | 150 | 0.7812 | 0.8621 | 0.8197 | 29 | 0.8158 | 0.9118 | 0.8611 | 34 | 0.8356 | 0.8832 | 0.8587 | 0.9678 |
| 0.0167 | 65.0 | 6890 | 0.2060 | 0.775 | 0.8493 | 0.8105 | 73 | 0.7432 | 0.8462 | 0.7914 | 65 | 0.8742 | 0.9267 | 0.8997 | 150 | 0.7353 | 0.8621 | 0.7937 | 29 | 0.8158 | 0.9118 | 0.8611 | 34 | 0.8104 | 0.8889 | 0.8478 | 0.9636 |
| 0.0167 | 66.0 | 6996 | 0.2054 | 0.7949 | 0.8493 | 0.8212 | 73 | 0.7286 | 0.7846 | 0.7556 | 65 | 0.8758 | 0.94 | 0.9068 | 150 | 0.7812 | 0.8621 | 0.8197 | 29 | 0.7436 | 0.8529 | 0.7945 | 34 | 0.8105 | 0.8775 | 0.8427 | 0.9619 |
| 0.0148 | 67.0 | 7102 | 0.2037 | 0.7975 | 0.8630 | 0.8289 | 73 | 0.7183 | 0.7846 | 0.75 | 65 | 0.8758 | 0.94 | 0.9068 | 150 | 0.7576 | 0.8621 | 0.8065 | 29 | 0.8158 | 0.9118 | 0.8611 | 34 | 0.8141 | 0.8860 | 0.8486 | 0.9636 |
| 0.0152 | 68.0 | 7208 | 0.2017 | 0.8182 | 0.8630 | 0.8400 | 73 | 0.7353 | 0.7692 | 0.7519 | 65 | 0.8854 | 0.9267 | 0.9055 | 150 | 0.7812 | 0.8621 | 0.8197 | 29 | 0.8378 | 0.9118 | 0.8732 | 34 | 0.8302 | 0.8775 | 0.8532 | 0.9646 |
| 0.0134 | 69.0 | 7314 | 0.2069 | 0.8289 | 0.8630 | 0.8456 | 73 | 0.7042 | 0.7692 | 0.7353 | 65 | 0.8861 | 0.9333 | 0.9091 | 150 | 0.7812 | 0.8621 | 0.8197 | 29 | 0.8611 | 0.9118 | 0.8857 | 34 | 0.8284 | 0.8803 | 0.8536 | 0.9631 |
| 0.0153 | 70.0 | 7420 | 0.2087 | 0.7975 | 0.8630 | 0.8289 | 73 | 0.7397 | 0.8308 | 0.7826 | 65 | 0.875 | 0.9333 | 0.9032 | 150 | 0.7353 | 0.8621 | 0.7937 | 29 | 0.7949 | 0.9118 | 0.8493 | 34 | 0.8130 | 0.8917 | 0.8505 | 0.9641 |
| 0.0137 | 71.0 | 7526 | 0.2142 | 0.8267 | 0.8493 | 0.8378 | 73 | 0.65 | 0.8 | 0.7172 | 65 | 0.8812 | 0.94 | 0.9097 | 150 | 0.7576 | 0.8621 | 0.8065 | 29 | 0.7895 | 0.8824 | 0.8333 | 34 | 0.8031 | 0.8832 | 0.8412 | 0.9611 |
| 0.015 | 72.0 | 7632 | 0.2135 | 0.8158 | 0.8493 | 0.8322 | 73 | 0.7083 | 0.7846 | 0.7445 | 65 | 0.8805 | 0.9333 | 0.9061 | 150 | 0.7576 | 0.8621 | 0.8065 | 29 | 0.7895 | 0.8824 | 0.8333 | 34 | 0.8148 | 0.8775 | 0.8450 | 0.9609 |
| 0.013 | 73.0 | 7738 | 0.2118 | 0.8077 | 0.8630 | 0.8344 | 73 | 0.7183 | 0.7846 | 0.75 | 65 | 0.875 | 0.9333 | 0.9032 | 150 | 0.7879 | 0.8966 | 0.8387 | 29 | 0.8158 | 0.9118 | 0.8611 | 34 | 0.8184 | 0.8860 | 0.8509 | 0.9633 |
| 0.0132 | 74.0 | 7844 | 0.2089 | 0.8026 | 0.8356 | 0.8188 | 73 | 0.7083 | 0.7846 | 0.7445 | 65 | 0.8742 | 0.9267 | 0.8997 | 150 | 0.7879 | 0.8966 | 0.8387 | 29 | 0.8108 | 0.8824 | 0.8451 | 34 | 0.8143 | 0.8746 | 0.8434 | 0.9638 |
| 0.0126 | 75.0 | 7950 | 0.1995 | 0.8289 | 0.8630 | 0.8456 | 73 | 0.7286 | 0.7846 | 0.7556 | 65 | 0.9091 | 0.9333 | 0.9211 | 150 | 0.7812 | 0.8621 | 0.8197 | 29 | 0.8611 | 0.9118 | 0.8857 | 34 | 0.8424 | 0.8832 | 0.8623 | 0.9660 |
| 0.0128 | 76.0 | 8056 | 0.1982 | 0.8289 | 0.8630 | 0.8456 | 73 | 0.7286 | 0.7846 | 0.7556 | 65 | 0.9091 | 0.9333 | 0.9211 | 150 | 0.8125 | 0.8966 | 0.8525 | 29 | 0.8857 | 0.9118 | 0.8986 | 34 | 0.8474 | 0.8860 | 0.8663 | 0.9668 |
| 0.0123 | 77.0 | 8162 | 0.2078 | 0.7949 | 0.8493 | 0.8212 | 73 | 0.7123 | 0.8 | 0.7536 | 65 | 0.8797 | 0.9267 | 0.9026 | 150 | 0.7576 | 0.8621 | 0.8065 | 29 | 0.7895 | 0.8824 | 0.8333 | 34 | 0.8105 | 0.8775 | 0.8427 | 0.9633 |
| 0.013 | 78.0 | 8268 | 0.1980 | 0.8052 | 0.8493 | 0.8267 | 73 | 0.7206 | 0.7538 | 0.7368 | 65 | 0.8854 | 0.9267 | 0.9055 | 150 | 0.8125 | 0.8966 | 0.8525 | 29 | 0.8378 | 0.9118 | 0.8732 | 34 | 0.8275 | 0.8746 | 0.8504 | 0.9651 |
| 0.0122 | 79.0 | 8374 | 0.2100 | 0.7722 | 0.8356 | 0.8026 | 73 | 0.7183 | 0.7846 | 0.75 | 65 | 0.8917 | 0.9333 | 0.9121 | 150 | 0.8125 | 0.8966 | 0.8525 | 29 | 0.7895 | 0.8824 | 0.8333 | 34 | 0.8170 | 0.8775 | 0.8462 | 0.9641 |
| 0.0128 | 80.0 | 8480 | 0.2086 | 0.8158 | 0.8493 | 0.8322 | 73 | 0.7222 | 0.8 | 0.7591 | 65 | 0.875 | 0.9333 | 0.9032 | 150 | 0.8125 | 0.8966 | 0.8525 | 29 | 0.8378 | 0.9118 | 0.8732 | 34 | 0.8249 | 0.8860 | 0.8544 | 0.9633 |
| 0.0117 | 81.0 | 8586 | 0.2055 | 0.8182 | 0.8630 | 0.8400 | 73 | 0.7286 | 0.7846 | 0.7556 | 65 | 0.9038 | 0.94 | 0.9216 | 150 | 0.8125 | 0.8966 | 0.8525 | 29 | 0.8421 | 0.9412 | 0.8889 | 34 | 0.8391 | 0.8917 | 0.8646 | 0.9658 |
| 0.0122 | 82.0 | 8692 | 0.2136 | 0.7949 | 0.8493 | 0.8212 | 73 | 0.7183 | 0.7846 | 0.75 | 65 | 0.8917 | 0.9333 | 0.9121 | 150 | 0.8125 | 0.8966 | 0.8525 | 29 | 0.7895 | 0.8824 | 0.8333 | 34 | 0.8218 | 0.8803 | 0.8501 | 0.9641 |
| 0.011 | 83.0 | 8798 | 0.2200 | 0.7848 | 0.8493 | 0.8158 | 73 | 0.6901 | 0.7538 | 0.7206 | 65 | 0.8805 | 0.9333 | 0.9061 | 150 | 0.7812 | 0.8621 | 0.8197 | 29 | 0.7895 | 0.8824 | 0.8333 | 34 | 0.8074 | 0.8718 | 0.8384 | 0.9619 |
| 0.0099 | 84.0 | 8904 | 0.2109 | 0.8077 | 0.8630 | 0.8344 | 73 | 0.7183 | 0.7846 | 0.75 | 65 | 0.8917 | 0.9333 | 0.9121 | 150 | 0.8125 | 0.8966 | 0.8525 | 29 | 0.8611 | 0.9118 | 0.8857 | 34 | 0.8316 | 0.8860 | 0.8579 | 0.9648 |
| 0.0109 | 85.0 | 9010 | 0.2205 | 0.7949 | 0.8493 | 0.8212 | 73 | 0.7286 | 0.7846 | 0.7556 | 65 | 0.8861 | 0.9333 | 0.9091 | 150 | 0.8125 | 0.8966 | 0.8525 | 29 | 0.8108 | 0.8824 | 0.8451 | 34 | 0.824 | 0.8803 | 0.8512 | 0.9641 |
| 0.0107 | 86.0 | 9116 | 0.2110 | 0.8077 | 0.8630 | 0.8344 | 73 | 0.7353 | 0.7692 | 0.7519 | 65 | 0.8917 | 0.9333 | 0.9121 | 150 | 0.8125 | 0.8966 | 0.8525 | 29 | 0.8378 | 0.9118 | 0.8732 | 34 | 0.8333 | 0.8832 | 0.8575 | 0.9643 |
| 0.0101 | 87.0 | 9222 | 0.2093 | 0.8182 | 0.8630 | 0.8400 | 73 | 0.7391 | 0.7846 | 0.7612 | 65 | 0.8924 | 0.94 | 0.9156 | 150 | 0.8125 | 0.8966 | 0.8525 | 29 | 0.8857 | 0.9118 | 0.8986 | 34 | 0.8410 | 0.8889 | 0.8643 | 0.9655 |
| 0.0111 | 88.0 | 9328 | 0.2124 | 0.8182 | 0.8630 | 0.8400 | 73 | 0.7391 | 0.7846 | 0.7612 | 65 | 0.8868 | 0.94 | 0.9126 | 150 | 0.8125 | 0.8966 | 0.8525 | 29 | 0.8378 | 0.9118 | 0.8732 | 34 | 0.8342 | 0.8889 | 0.8607 | 0.9651 |
| 0.0095 | 89.0 | 9434 | 0.2123 | 0.8077 | 0.8630 | 0.8344 | 73 | 0.7286 | 0.7846 | 0.7556 | 65 | 0.8924 | 0.94 | 0.9156 | 150 | 0.8125 | 0.8966 | 0.8525 | 29 | 0.8108 | 0.8824 | 0.8451 | 34 | 0.8293 | 0.8860 | 0.8567 | 0.9653 |
| 0.0105 | 90.0 | 9540 | 0.2122 | 0.8077 | 0.8630 | 0.8344 | 73 | 0.7429 | 0.8 | 0.7704 | 65 | 0.8868 | 0.94 | 0.9126 | 150 | 0.8125 | 0.8966 | 0.8525 | 29 | 0.8333 | 0.8824 | 0.8571 | 34 | 0.832 | 0.8889 | 0.8595 | 0.9653 |
| 0.0102 | 91.0 | 9646 | 0.2248 | 0.7848 | 0.8493 | 0.8158 | 73 | 0.7297 | 0.8308 | 0.7770 | 65 | 0.8812 | 0.94 | 0.9097 | 150 | 0.8125 | 0.8966 | 0.8525 | 29 | 0.8378 | 0.9118 | 0.8732 | 34 | 0.8220 | 0.8946 | 0.8568 | 0.9636 |
| 0.0104 | 92.0 | 9752 | 0.2231 | 0.7949 | 0.8493 | 0.8212 | 73 | 0.7324 | 0.8 | 0.7647 | 65 | 0.8812 | 0.94 | 0.9097 | 150 | 0.8125 | 0.8966 | 0.8525 | 29 | 0.8108 | 0.8824 | 0.8451 | 34 | 0.8228 | 0.8860 | 0.8532 | 0.9633 |
| 0.0096 | 93.0 | 9858 | 0.2227 | 0.7949 | 0.8493 | 0.8212 | 73 | 0.7260 | 0.8154 | 0.7681 | 65 | 0.8812 | 0.94 | 0.9097 | 150 | 0.8125 | 0.8966 | 0.8525 | 29 | 0.8108 | 0.8824 | 0.8451 | 34 | 0.8211 | 0.8889 | 0.8536 | 0.9638 |
| 0.0106 | 94.0 | 9964 | 0.2312 | 0.7848 | 0.8493 | 0.8158 | 73 | 0.7260 | 0.8154 | 0.7681 | 65 | 0.8758 | 0.94 | 0.9068 | 150 | 0.7576 | 0.8621 | 0.8065 | 29 | 0.8108 | 0.8824 | 0.8451 | 34 | 0.8120 | 0.8860 | 0.8474 | 0.9616 |
| 0.0096 | 95.0 | 10070 | 0.2199 | 0.7949 | 0.8493 | 0.8212 | 73 | 0.7324 | 0.8 | 0.7647 | 65 | 0.8812 | 0.94 | 0.9097 | 150 | 0.8125 | 0.8966 | 0.8525 | 29 | 0.7838 | 0.8529 | 0.8169 | 34 | 0.8201 | 0.8832 | 0.8505 | 0.9633 |
| 0.0098 | 96.0 | 10176 | 0.2199 | 0.7949 | 0.8493 | 0.8212 | 73 | 0.7324 | 0.8 | 0.7647 | 65 | 0.8868 | 0.94 | 0.9126 | 150 | 0.8125 | 0.8966 | 0.8525 | 29 | 0.7838 | 0.8529 | 0.8169 | 34 | 0.8223 | 0.8832 | 0.8516 | 0.9641 |
| 0.0099 | 97.0 | 10282 | 0.2210 | 0.7949 | 0.8493 | 0.8212 | 73 | 0.7297 | 0.8308 | 0.7770 | 65 | 0.8924 | 0.94 | 0.9156 | 150 | 0.8125 | 0.8966 | 0.8525 | 29 | 0.7838 | 0.8529 | 0.8169 | 34 | 0.8232 | 0.8889 | 0.8548 | 0.9641 |
| 0.0094 | 98.0 | 10388 | 0.2210 | 0.7949 | 0.8493 | 0.8212 | 73 | 0.7123 | 0.8 | 0.7536 | 65 | 0.8868 | 0.94 | 0.9126 | 150 | 0.8125 | 0.8966 | 0.8525 | 29 | 0.7838 | 0.8529 | 0.8169 | 34 | 0.8179 | 0.8832 | 0.8493 | 0.9633 |
| 0.0086 | 99.0 | 10494 | 0.2204 | 0.7949 | 0.8493 | 0.8212 | 73 | 0.7361 | 0.8154 | 0.7737 | 65 | 0.8924 | 0.94 | 0.9156 | 150 | 0.8125 | 0.8966 | 0.8525 | 29 | 0.7838 | 0.8529 | 0.8169 | 34 | 0.8249 | 0.8860 | 0.8544 | 0.9638 |
| 0.0109 | 100.0 | 10600 | 0.2206 | 0.7949 | 0.8493 | 0.8212 | 73 | 0.7361 | 0.8154 | 0.7737 | 65 | 0.8924 | 0.94 | 0.9156 | 150 | 0.8125 | 0.8966 | 0.8525 | 29 | 0.7838 | 0.8529 | 0.8169 | 34 | 0.8249 | 0.8860 | 0.8544 | 0.9638 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
bella05/pogny_10_64_0.01
|
bella05
| 2024-06-03T23:50:21Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:klue/roberta-large",
"base_model:finetune:klue/roberta-large",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-03T19:35:38Z |
---
base_model: klue/roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: pogny_10_64_0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/bella05/huggingface/runs/2fqy4l1d)
# pogny_10_64_0.01
This model is a fine-tuned version of [klue/roberta-large](https://huggingface.co/klue/roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6851
- Accuracy: 0.4376
- F1: 0.2665
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 0.01
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
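As a minimal sketch, the hyperparameters above could be expressed as the following `transformers` `TrainingArguments`; the `output_dir` name is a placeholder, and everything else mirrors the list:

```python
# Hedged sketch: TrainingArguments mirroring the hyperparameters listed above.
# output_dir is a placeholder, not taken from the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="pogny_10_64_0.01",
    learning_rate=0.01,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```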
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 2.491 | 1.0 | 1205 | 2.5033 | 0.4376 | 0.2665 |
| 2.4679 | 2.0 | 2410 | 1.9460 | 0.4376 | 0.2665 |
| 2.302 | 3.0 | 3615 | 2.4098 | 0.0702 | 0.0092 |
| 2.1762 | 4.0 | 4820 | 2.2698 | 0.0545 | 0.0056 |
| 2.0639 | 5.0 | 6025 | 1.9917 | 0.4376 | 0.2665 |
| 2.0031 | 6.0 | 7230 | 1.9130 | 0.4376 | 0.2665 |
| 1.9241 | 7.0 | 8435 | 2.0131 | 0.4376 | 0.2665 |
| 1.8227 | 8.0 | 9640 | 1.8212 | 0.4376 | 0.2665 |
| 1.7854 | 9.0 | 10845 | 1.7379 | 0.4376 | 0.2665 |
| 1.7037 | 10.0 | 12050 | 1.6851 | 0.4376 | 0.2665 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.2.2
- Datasets 2.19.1
- Tokenizers 0.19.1
|
kevinvelez18/ViT_model
|
kevinvelez18
| 2024-06-03T23:46:48Z | 222 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-06-03T23:43:33Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ViT_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViT_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0252
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1492 | 3.8462 | 500 | 0.0252 | 0.9925 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
bartowski/phi3-4x4b-v1-GGUF
|
bartowski
| 2024-06-03T23:40:50Z | 192 | 0 | null |
[
"gguf",
"phi3",
"nlp",
"moe",
"text-generation",
"dataset:BEE-spoke-data/gutenberg-en-v1-clean",
"dataset:NeelNanda/pile-10k",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-06-03T23:18:02Z |
---
license: mit
tags:
- phi3
- nlp
- moe
datasets:
- BEE-spoke-data/gutenberg-en-v1-clean
- NeelNanda/pile-10k
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of phi3-4x4b-v1
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3070">b3070</a> for quantization.
Original model: https://huggingface.co/Fizzarolli/phi3-4x4b-v1
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
## Prompt format
```
<s><|user|> {prompt}<|end|><|assistant|><|end|>
```
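A minimal Python sketch that fills this template; it assumes (as is conventional) that the trailing `<|end|>` is the stop token the model emits rather than part of the input:

```python
# Hedged sketch: build an input string following the template above.
def build_prompt(prompt: str) -> str:
    # The trailing <|end|> in the template is assumed to be the model's
    # stop token, so it is not appended to the input here.
    return f"<s><|user|> {prompt}<|end|><|assistant|>"

print(build_prompt("Why is the sky blue?"))
```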
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [phi3-4x4b-v1-Q8_0.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-Q8_0.gguf) | Q8_0 | 11.76GB | Extremely high quality, generally unneeded but max available quant. |
| [phi3-4x4b-v1-Q6_K.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-Q6_K.gguf) | Q6_K | 9.08GB | Very high quality, near perfect, *recommended*. |
| [phi3-4x4b-v1-Q5_K_M.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-Q5_K_M.gguf) | Q5_K_M | 7.85GB | High quality, *recommended*. |
| [phi3-4x4b-v1-Q5_K_S.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-Q5_K_S.gguf) | Q5_K_S | 7.62GB | High quality, *recommended*. |
| [phi3-4x4b-v1-Q4_K_M.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-Q4_K_M.gguf) | Q4_K_M | 6.70GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [phi3-4x4b-v1-Q4_K_S.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-Q4_K_S.gguf) | Q4_K_S | 6.30GB | Slightly lower quality with more space savings, *recommended*. |
| [phi3-4x4b-v1-IQ4_XS.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-IQ4_XS.gguf) | IQ4_XS | 5.91GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [phi3-4x4b-v1-Q3_K_L.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-Q3_K_L.gguf) | Q3_K_L | 5.78GB | Lower quality but usable, good for low RAM availability. |
| [phi3-4x4b-v1-Q3_K_M.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-Q3_K_M.gguf) | Q3_K_M | 5.33GB | Even lower quality. |
| [phi3-4x4b-v1-IQ3_M.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-IQ3_M.gguf) | IQ3_M | 4.93GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [phi3-4x4b-v1-Q3_K_S.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-Q3_K_S.gguf) | Q3_K_S | 4.79GB | Low quality, not recommended. |
| [phi3-4x4b-v1-IQ3_XS.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-IQ3_XS.gguf) | IQ3_XS | 4.54GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [phi3-4x4b-v1-IQ3_XXS.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-IQ3_XXS.gguf) | IQ3_XXS | 4.25GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [phi3-4x4b-v1-Q2_K.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-Q2_K.gguf) | Q2_K | 4.07GB | Very low quality but surprisingly usable. |
| [phi3-4x4b-v1-IQ2_M.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-IQ2_M.gguf) | IQ2_M | 3.74GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [phi3-4x4b-v1-IQ2_S.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-IQ2_S.gguf) | IQ2_S | 3.43GB | Very low quality, uses SOTA techniques to be usable. |
| [phi3-4x4b-v1-IQ2_XS.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-IQ2_XS.gguf) | IQ2_XS | 3.34GB | Very low quality, uses SOTA techniques to be usable. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/phi3-4x4b-v1-GGUF --include "phi3-4x4b-v1-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/phi3-4x4b-v1-GGUF --include "phi3-4x4b-v1-Q8_0.gguf/*" --local-dir phi3-4x4b-v1-Q8_0
```
You can either specify a new local-dir (phi3-4x4b-v1-Q8_0) or download them all in place (./).
## Which file should I choose?
A great write-up with charts comparing the performance of the various quants is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9).
The first thing to figure out is how big a model you can run. To do this, you'll need to know how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
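As a rough illustration of this sizing heuristic, here is a hedged sketch that picks the largest quant fitting a VRAM budget; the file sizes come from the table above, while the 2GB headroom and the example VRAM figure are assumptions:

```python
# Hedged sketch: pick the largest quant that fits a VRAM budget with ~2GB
# headroom. File sizes (GB) come from the table above; vram_gb is an example.
QUANT_SIZES_GB = {
    "Q8_0": 11.76, "Q6_K": 9.08, "Q5_K_M": 7.85, "Q5_K_S": 7.62,
    "Q4_K_M": 6.70, "Q4_K_S": 6.30, "IQ4_XS": 5.91, "Q3_K_L": 5.78,
    "Q3_K_M": 5.33, "IQ3_M": 4.93, "Q3_K_S": 4.79, "IQ3_XS": 4.54,
    "IQ3_XXS": 4.25, "Q2_K": 4.07, "IQ2_M": 3.74, "IQ2_S": 3.43, "IQ2_XS": 3.34,
}

def pick_quant(vram_gb: float, headroom_gb: float = 2.0):
    budget = vram_gb - headroom_gb
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= budget}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(8.0))  # -> "IQ4_XS" (5.91GB fits an 8GB card with 2GB headroom)
```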
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is also AMD, so if you have an AMD card double-check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
sj21867/ai_art_exp3_mobilenetv2
|
sj21867
| 2024-06-03T23:25:00Z | 193 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mobilenet_v2",
"image-classification",
"generated_from_trainer",
"base_model:google/mobilenet_v2_1.0_224",
"base_model:finetune:google/mobilenet_v2_1.0_224",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-06-03T23:23:07Z |
---
license: other
base_model: google/mobilenet_v2_1.0_224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ai_art_exp3_mobilenetv2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai_art_exp3_mobilenetv2
This model is a fine-tuned version of [google/mobilenet_v2_1.0_224](https://huggingface.co/google/mobilenet_v2_1.0_224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.65
- Loss: 0.8813
- Overall Accuracy: 0.65
- Human Accuracy: 0.34
- Ld Accuracy: 0.84
- Sd Accuracy: 0.77
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss | Overall Accuracy | Human Accuracy | Ld Accuracy | Sd Accuracy |
|:-------------:|:-----:|:----:|:--------------------------------:|:---------------:|:----------------:|:--------------:|:-----------:|:-----------:|
| 1.0707 | 0.96 | 18 | 0.6333 | 0.8947 | 0.6333 | 0.3426 | 0.8485 | 0.7419 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
sj21867/ai_art_exp3_vit
|
sj21867
| 2024-06-03T23:18:54Z | 195 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-06-03T23:16:24Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ai_art_exp3_vit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai_art_exp3_vit
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.72
- Loss: 0.9013
- Overall Accuracy: 0.72
- Human Accuracy: 0.34
- Ld Accuracy: 0.9
- Sd Accuracy: 0.92
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss | Overall Accuracy | Human Accuracy | Ld Accuracy | Sd Accuracy |
|:-------------:|:-----:|:----:|:------------------:|:---------------:|:----------------:|:--------------:|:-----------:|:-----------:|
| 1.0375 | 0.96 | 18 | 0.72 | 0.9182 | 0.72 | 0.3889 | 0.9192 | 0.8925 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
radia/Phi-3-mini-128k-instruct-Q4_K_M-GGUF
|
radia
| 2024-06-03T23:15:38Z | 2 | 0 | null |
[
"gguf",
"nlp",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:microsoft/Phi-3-mini-128k-instruct",
"base_model:quantized:microsoft/Phi-3-mini-128k-instruct",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-06-03T23:15:32Z |
---
language:
- en
license: mit
tags:
- nlp
- code
- llama-cpp
- gguf-my-repo
base_model: microsoft/Phi-3-mini-128k-instruct
license_link: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
# radia/Phi-3-mini-128k-instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`microsoft/Phi-3-mini-128k-instruct`](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo radia/Phi-3-mini-128k-instruct-Q4_K_M-GGUF --hf-file phi-3-mini-128k-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo radia/Phi-3-mini-128k-instruct-Q4_K_M-GGUF --hf-file phi-3-mini-128k-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo radia/Phi-3-mini-128k-instruct-Q4_K_M-GGUF --hf-file phi-3-mini-128k-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo radia/Phi-3-mini-128k-instruct-Q4_K_M-GGUF --hf-file phi-3-mini-128k-instruct-q4_k_m.gguf -c 2048
```
|
mayabedge/whisper-ft
|
mayabedge
| 2024-06-03T23:05:31Z | 92 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-06-03T22:39:07Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Fine-tuned - NNCES
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Fine-tuned - NNCES
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1135
- Wer: 8.0963
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 1.2697 | 0.1 | 10 | 0.8252 | 40.9920 |
| 0.6597 | 0.2 | 20 | 0.5482 | 25.2371 |
| 0.4656 | 0.3 | 30 | 0.3488 | 20.0584 |
| 0.2774 | 0.4 | 40 | 0.2164 | 21.5901 |
| 0.1746 | 0.5 | 50 | 0.1770 | 19.0372 |
| 0.1826 | 0.6 | 60 | 0.1540 | 15.3902 |
| 0.1228 | 0.7 | 70 | 0.1364 | 11.4515 |
| 0.1271 | 0.8 | 80 | 0.1246 | 8.6798 |
| 0.2388 | 0.9 | 90 | 0.1165 | 8.0233 |
| 0.2584 | 1.0 | 100 | 0.1135 | 8.0963 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
datek/Qwen-Qwen1.5-1.8B-1717455654
|
datek
| 2024-06-03T23:02:36Z | 139 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-03T23:00:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
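Pending an official example from the authors, a minimal hedged sketch that should work for any `qwen2` text-generation checkpoint; the repo id comes from this card, and the generation settings are illustrative:

```python
# Hedged sketch: basic chat-style generation with this checkpoint.
# Assumes the tokenizer ships a chat template (Qwen1.5 tokenizers do).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "datek/Qwen-Qwen1.5-1.8B-1717455654"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```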
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cetusian/ner-model-furniture
|
cetusian
| 2024-06-03T23:00:08Z | 73 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-06-03T22:48:43Z |
---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: cetusian/ner-model-furniture
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# cetusian/ner-model-furniture
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3445
- Validation Loss: 0.3911
- Train Precision: 0.7212
- Train Recall: 0.7764
- Train F1: 0.7478
- Train Accuracy: 0.8465
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged sketch reconstructing the optimizer follows the list):
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 348, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
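A hedged sketch reconstructing the optimizer above with `transformers`' TF helper; `create_optimizer` bundles `AdamWeightDecay` with a polynomial-decay schedule, and the numbers mirror the config:

```python
# Hedged sketch: rebuild the AdamWeightDecay + PolynomialDecay setup above.
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=3e-05,           # initial_learning_rate
    num_train_steps=348,     # decay_steps (end_learning_rate decays to 0.0)
    num_warmup_steps=0,      # no warmup appears in the config
    weight_decay_rate=0.01,  # weight_decay_rate
)
```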
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.3467 | 0.3911 | 0.7212 | 0.7764 | 0.7478 | 0.8465 | 0 |
| 0.3445 | 0.3911 | 0.7212 | 0.7764 | 0.7478 | 0.8465 | 1 |
### Framework versions
- Transformers 4.41.1
- TensorFlow 2.15.0
- Datasets 2.19.2
- Tokenizers 0.19.1
|
mradermacher/llama-3-70B-openbio-dareties-GGUF
|
mradermacher
| 2024-06-03T22:54:25Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:toantam1290/llama-3-70B-openbio-dareties",
"base_model:quantized:toantam1290/llama-3-70B-openbio-dareties",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-03T17:54:45Z |
---
base_model: toantam1290/llama-3-70B-openbio-dareties
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/toantam1290/llama-3-70B-openbio-dareties
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
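For the split quants below, a minimal hedged sketch of the concatenation step; the filenames follow this repo's `Q6_K` parts, and plain byte concatenation (equivalent to `cat part1 part2 > out`) is assumed to be the right reassembly method for these `.partNofM` files:

```python
# Hedged sketch: reassemble a multi-part GGUF by plain byte concatenation.
import shutil

parts = [
    "llama-3-70B-openbio-dareties.Q6_K.gguf.part1of2",
    "llama-3-70B-openbio-dareties.Q6_K.gguf.part2of2",
]
with open("llama-3-70B-openbio-dareties.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```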
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-openbio-dareties-GGUF/resolve/main/llama-3-70B-openbio-dareties.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-openbio-dareties-GGUF/resolve/main/llama-3-70B-openbio-dareties.IQ3_XS.gguf) | IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-openbio-dareties-GGUF/resolve/main/llama-3-70B-openbio-dareties.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-openbio-dareties-GGUF/resolve/main/llama-3-70B-openbio-dareties.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-openbio-dareties-GGUF/resolve/main/llama-3-70B-openbio-dareties.IQ3_M.gguf) | IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-openbio-dareties-GGUF/resolve/main/llama-3-70B-openbio-dareties.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-openbio-dareties-GGUF/resolve/main/llama-3-70B-openbio-dareties.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-openbio-dareties-GGUF/resolve/main/llama-3-70B-openbio-dareties.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-openbio-dareties-GGUF/resolve/main/llama-3-70B-openbio-dareties.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-openbio-dareties-GGUF/resolve/main/llama-3-70B-openbio-dareties.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-openbio-dareties-GGUF/resolve/main/llama-3-70B-openbio-dareties.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-openbio-dareties-GGUF/resolve/main/llama-3-70B-openbio-dareties.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/llama-3-70B-openbio-dareties-GGUF/resolve/main/llama-3-70B-openbio-dareties.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/llama-3-70B-openbio-dareties-GGUF/resolve/main/llama-3-70B-openbio-dareties.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/llama-3-70B-openbio-dareties-GGUF/resolve/main/llama-3-70B-openbio-dareties.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/llama-3-70B-openbio-dareties-GGUF/resolve/main/llama-3-70B-openbio-dareties.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
wwe180/Llama3-13B-lingyang-v1-Q4_K_M-GGUF
|
wwe180
| 2024-06-03T22:51:53Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:wwe180/Llama3-13B-lingyang-v1",
"base_model:quantized:wwe180/Llama3-13B-lingyang-v1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-03T21:24:10Z |
---
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
base_model: wwe180/Llama3-13B-lingyang-v1
---
# wwe180/Llama3-13B-lingyang-v1-Q4_K_M-GGUF
This model was converted to GGUF format from [`wwe180/Llama3-13B-lingyang-v1`](https://huggingface.co/wwe180/Llama3-13B-lingyang-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/wwe180/Llama3-13B-lingyang-v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo wwe180/Llama3-13B-lingyang-v1-Q4_K_M-GGUF --hf-file Llama3-13B-lingyang-v1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo wwe180/Llama3-13B-lingyang-v1-Q4_K_M-GGUF --hf-file Llama3-13B-lingyang-v1-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo wwe180/Llama3-13B-lingyang-v1-Q4_K_M-GGUF --hf-file Llama3-13B-lingyang-v1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo wwe180/Llama3-13B-lingyang-v1-Q4_K_M-GGUF --hf-file Llama3-13B-lingyang-v1-q4_k_m.gguf -c 2048
```
|
Molkaa/mistral-7b-miniplatypus
|
Molkaa
| 2024-06-03T22:51:51Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"region:us"
] | null | 2024-05-29T19:10:41Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
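Since the card leaves this blank, a minimal hedged sketch for loading the adapter; the repo ids come from this card's metadata, and everything else is an assumption:

```python
# Hedged sketch: attach this repo's PEFT adapter to its Mistral base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.1"  # from the card's base_model
adapter_id = "Molkaa/mistral-7b-miniplatypus"   # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
```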
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|
Ariffiq99/KUCI_e_care_xlm_roberta_base_Finetuned
|
Ariffiq99
| 2024-06-03T22:49:13Z | 103 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"multiple-choice",
"generated_from_trainer",
"base_model:Ariffiq99/e_care_xlm_roberta_base_finetuned",
"base_model:finetune:Ariffiq99/e_care_xlm_roberta_base_finetuned",
"license:mit",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2024-06-03T09:35:11Z |
---
license: mit
base_model: Ariffiq99/e_care_xlm_roberta_base_finetuned
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: KUCI_e_care_xlm_roberta_base_Finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KUCI_e_care_xlm_roberta_base_Finetuned
This model is a fine-tuned version of [Ariffiq99/e_care_xlm_roberta_base_finetuned](https://huggingface.co/Ariffiq99/e_care_xlm_roberta_base_finetuned) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0348
- F1: 0.7682
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.6965 | 1.0 | 5196 | 0.6542 | 0.7443 |
| 0.5632 | 2.0 | 10392 | 0.6754 | 0.7591 |
| 0.4634 | 3.0 | 15588 | 0.6456 | 0.7680 |
| 0.3661 | 4.0 | 20784 | 0.7082 | 0.7657 |
| 0.2783 | 5.0 | 25980 | 0.7899 | 0.7678 |
| 0.2305 | 6.0 | 31176 | 0.9280 | 0.7655 |
| 0.2057 | 7.0 | 36372 | 1.0348 | 0.7682 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
animaRegem/gemma-2b-malayalam-t2-gguf
|
animaRegem
| 2024-06-03T22:49:09Z | 7 | 0 |
transformers
|
[
"transformers",
"gguf",
"gemma",
"text-generation-inference",
"unsloth",
"en",
"base_model:Telugu-LLM-Labs/Indic-gemma-2b-finetuned-sft-Navarasa-2.0",
"base_model:quantized:Telugu-LLM-Labs/Indic-gemma-2b-finetuned-sft-Navarasa-2.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-03T21:37:07Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- gguf
base_model: Telugu-LLM-Labs/Indic-gemma-2b-finetuned-sft-Navarasa-2.0
---
# Uploaded model
- **Developed by:** animaRegem
- **License:** apache-2.0
- **Finetuned from model :** Telugu-LLM-Labs/Indic-gemma-2b-finetuned-sft-Navarasa-2.0
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
wwe180/Llama3-13B-lingyang-v1
|
wwe180
| 2024-06-03T22:47:07Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"Llama3",
"conversational",
"base_model:wwe180/Llama3-13B-lingyang-v1",
"base_model:finetune:wwe180/Llama3-13B-lingyang-v1",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-03T20:25:39Z |
---
base_model:
- wwe180/Llama3-13B-lingyang-v1
library_name: transformers
tags:
- mergekit
- merge
- Llama3
license:
- other
---
# After initial testing, the results are good: stronger than Llama-3-8B!
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method, with [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) as the base.
### Models Merged
The following models were included in the merge:
* [openchat/openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522) + [hfl/llama-3-chinese-8b-instruct-v2-lora](https://huggingface.co/hfl/llama-3-chinese-8b-instruct-v2-lora)
* [Sao10K/L3-8B-Stheno-v3.1](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1) + [Jiar/Llama-3-8B-Chinese](https://huggingface.co/Jiar/Llama-3-8B-Chinese)
* [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) + [camillop/Meta-Llama-3-8B-ORPO-ITA-llama-adapters](https://huggingface.co/camillop/Meta-Llama-3-8B-ORPO-ITA-llama-adapters)
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Llama3-13B-lingyang-v1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Run generation on the chat-formatted prompt (settings are illustrative).
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"])
```
## Statement:
Llama3-13B-lingyang-v1 does not represent the views and positions of the model developers. We will not be liable for any problems arising from the use of the Llama3-13B-lingyang-v1 open-source model, including but not limited to data security issues, public opinion risks, or any risks and problems arising from the misdirection, misuse, dissemination, or improper use of the model.
|
kaushiksiva07/Mistral-7B-Instruct-v0.2-Q4_0-GGUF
|
kaushiksiva07
| 2024-06-03T22:45:50Z | 40 | 0 | null |
[
"gguf",
"finetuned",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:quantized:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-06-03T22:45:38Z |
---
license: apache-2.0
tags:
- finetuned
- llama-cpp
- gguf-my-repo
base_model: mistralai/Mistral-7B-Instruct-v0.2
pipeline_tag: text-generation
inference: true
widget:
- messages:
- role: user
content: What is your favorite condiment?
---
# kaushiksiva07/Mistral-7B-Instruct-v0.2-Q4_0-GGUF
This model was converted to GGUF format from [`mistralai/Mistral-7B-Instruct-v0.2`](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo kaushiksiva07/Mistral-7B-Instruct-v0.2-Q4_0-GGUF --hf-file mistral-7b-instruct-v0.2-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo kaushiksiva07/Mistral-7B-Instruct-v0.2-Q4_0-GGUF --hf-file mistral-7b-instruct-v0.2-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo kaushiksiva07/Mistral-7B-Instruct-v0.2-Q4_0-GGUF --hf-file mistral-7b-instruct-v0.2-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo kaushiksiva07/Mistral-7B-Instruct-v0.2-Q4_0-GGUF --hf-file mistral-7b-instruct-v0.2-q4_0.gguf -c 2048
```
|
shane062/whisper-small-finetuned-300
|
shane062
| 2024-06-03T22:43:21Z | 90 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:audiofolder",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-06-03T04:06:31Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- wer
model-index:
- name: whisper-small-finetuned-300
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: audiofolder
type: audiofolder
config: default
split: test
args: default
metrics:
- name: Wer
type: wer
value: 64.86486486486487
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-finetuned-300
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7359
- Wer Ortho: 64.8649
- Wer: 64.8649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 30
- training_steps: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.5121 | 20.0 | 60 | 1.3011 | 64.8649 | 64.8649 |
| 0.0001 | 40.0 | 120 | 0.7236 | 64.8649 | 64.8649 |
| 0.0 | 60.0 | 180 | 0.7314 | 64.8649 | 64.8649 |
| 0.0 | 80.0 | 240 | 0.7340 | 64.8649 | 64.8649 |
| 0.0 | 100.0 | 300 | 0.7359 | 64.8649 | 64.8649 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
hdve/google-gemma-7b-1717454317
|
hdve
| 2024-06-03T22:41:22Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-03T22:38:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
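Pending official instructions, a minimal loading sketch is given below. It assumes this checkpoint follows the standard transformers causal-LM API for gemma models; nothing here is confirmed by the authors:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed usage: load the checkpoint as a standard causal LM.
model_id = "hdve/google-gemma-7b-1717454317"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```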
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
enriquesaou/debug_seq2seq_squad
|
enriquesaou
| 2024-06-03T22:38:04Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:squad_v2",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-03T20:36:52Z |
---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: debug_seq2seq_squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/favcowboy/huggingface/runs/wdlupjr7)
# debug_seq2seq_squad
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7565
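Since the base model is T5 fine-tuned on SQuAD v2, a plausible usage sketch is question answering phrased as text-to-text. The `question: ... context: ...` input format below is an assumption; the preprocessing used in training is not documented in this card:
```python
from transformers import pipeline

# Assumed input format ("question: ... context: ...") — treat this as a
# sketch, not a spec, since the card does not document the preprocessing.
qa = pipeline("text2text-generation", model="enriquesaou/debug_seq2seq_squad")
prompt = (
    "question: Where is the Eiffel Tower? "
    "context: The Eiffel Tower is a wrought-iron lattice tower in Paris, France."
)
print(qa(prompt, max_new_tokens=32)[0]["generated_text"])
```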
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
animaRegem/gemma-2b-malayalam-t2-model-adaptors
|
animaRegem
| 2024-06-03T22:37:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:Telugu-LLM-Labs/Indic-gemma-2b-finetuned-sft-Navarasa-2.0",
"base_model:finetune:Telugu-LLM-Labs/Indic-gemma-2b-finetuned-sft-Navarasa-2.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-03T21:27:08Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: Telugu-LLM-Labs/Indic-gemma-2b-finetuned-sft-Navarasa-2.0
---
# Uploaded model
- **Developed by:** animaRegem
- **License:** apache-2.0
- **Finetuned from model:** Telugu-LLM-Labs/Indic-gemma-2b-finetuned-sft-Navarasa-2.0
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
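If, as the repository name suggests, this upload contains LoRA adaptors rather than merged weights, they would typically be applied on top of the base model with PEFT. A minimal sketch under that assumption:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the repo holds PEFT/LoRA adapters for the Navarasa base model.
base_id = "Telugu-LLM-Labs/Indic-gemma-2b-finetuned-sft-Navarasa-2.0"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(
    base, "animaRegem/gemma-2b-malayalam-t2-model-adaptors"
)
tokenizer = AutoTokenizer.from_pretrained(base_id)
```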
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Magedyoussef86/Maged
|
Magedyoussef86
| 2024-06-03T22:33:39Z | 0 | 0 | null |
[
"license:artistic-2.0",
"region:us"
] | null | 2024-06-03T22:33:38Z |
---
license: artistic-2.0
---
|