modelId: string | author: string | last_modified: timestamp[us, tz=UTC] | downloads: int64 | likes: int64 | library_name: string | tags: list | pipeline_tag: string | createdAt: timestamp[us, tz=UTC] | card: string
---|---|---|---|---|---|---|---|---|---|
ferrazzipietro/Qwen1.5-14B-Chat__adapters_en.layer1_4_torch.bfloat16_64_64_0.05_16_0.0002 | ferrazzipietro | 2024-03-08T04:10:11Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-03-08T04:08:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| cxhoang/blip2-flickr8k-finetune | cxhoang | 2024-03-08T04:05:50Z | 0 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:ybelkada/blip2-opt-2.7b-fp16-sharded", "base_model:adapter:ybelkada/blip2-opt-2.7b-fp16-sharded", "region:us"] | null | 2024-02-14T00:09:37Z |
---
library_name: peft
base_model: ybelkada/blip2-opt-2.7b-fp16-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
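The card leaves this section blank, but the repo metadata does identify `ybelkada/blip2-opt-2.7b-fp16-sharded` as the base model for this PEFT adapter. A minimal loading sketch under that assumption (the processor source and dtype are guesses, not part of the card):

```python
# Minimal sketch, assuming this repo holds a PEFT adapter for the
# ybelkada/blip2-opt-2.7b-fp16-sharded base model (per the card metadata).
import torch
from peft import PeftModel
from transformers import AutoProcessor, Blip2ForConditionalGeneration

base_id = "ybelkada/blip2-opt-2.7b-fp16-sharded"
adapter_id = "cxhoang/blip2-flickr8k-finetune"

processor = AutoProcessor.from_pretrained(base_id)  # assumption: the base repo ships a processor
base = Blip2ForConditionalGeneration.from_pretrained(base_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapter weights
```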
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2
| frankenmerger/delta-4b-notso-base | frankenmerger | 2024-03-08T04:04:06Z | 66 | 0 | transformers | ["transformers", "safetensors", "phi", "text-generation", "conversational", "custom_code", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-03-06T18:41:10Z |
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- conversational
---
## 💻 Usage
```python
# Install dependencies first: pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "gmonsoon/Delta-4B-notso-base"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
| InnerI/InnerILLM-OpenPipe-Nous-Yarn-Mistral-optimized-1228-7B-slerp | InnerI | 2024-03-08T04:01:27Z | 52 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "OpenPipe/mistral-ft-optimized-1218", "NousResearch/Yarn-Mistral-7b-128k", "base_model:NousResearch/Yarn-Mistral-7b-128k", "base_model:merge:NousResearch/Yarn-Mistral-7b-128k", "base_model:OpenPipe/mistral-ft-optimized-1218", "base_model:merge:OpenPipe/mistral-ft-optimized-1218", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-02-13T03:31:40Z |
---
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1218
- NousResearch/Yarn-Mistral-7b-128k
base_model:
- OpenPipe/mistral-ft-optimized-1218
- NousResearch/Yarn-Mistral-7b-128k
license: apache-2.0
---
# InnerILLM-OpenPipe-Nous-Yarn-Mistral-optimized-1228-7B-slerp
InnerILLM-OpenPipe-Nous-Yarn-Mistral-optimized-1228-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [NousResearch/Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: OpenPipe/mistral-ft-optimized-1218
layer_range: [0, 32]
- model: NousResearch/Yarn-Mistral-7b-128k
layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
# Install dependencies first: pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "InnerI/InnerILLM-OpenPipe-Nous-Yarn-Mistral-optimized-1228-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
| OwOOwO/eacc_contTrain_m6_2 | OwOOwO | 2024-03-08T03:52:45Z | 89 | 0 | transformers | ["transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-03-08T03:50:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| salmaafifi98/t5-small-finetuned-xsum | salmaafifi98 | 2024-03-08T03:46:12Z | 46 | 0 | transformers | ["transformers", "tf", "tensorboard", "t5", "text2text-generation", "generated_from_keras_callback", "base_model:Rocketknight1/t5-small-finetuned-xsum", "base_model:finetune:Rocketknight1/t5-small-finetuned-xsum", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2024-03-03T02:06:33Z |
---
license: apache-2.0
base_model: Rocketknight1/t5-small-finetuned-xsum
tags:
- generated_from_keras_callback
model-index:
- name: salmaafifi98/t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# salmaafifi98/t5-small-finetuned-xsum
This model is a fine-tuned version of [Rocketknight1/t5-small-finetuned-xsum](https://huggingface.co/Rocketknight1/t5-small-finetuned-xsum) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.5775
- Validation Loss: 2.3341
- Train Rouge1: 30.2930
- Train Rouge2: 9.1969
- Train Rougel: 24.0331
- Train Rougelsum: 24.0378
- Train Gen Len: 18.7949
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
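Although the card gives no usage example, this is a T5 checkpoint exported for TensorFlow, so a minimal inference sketch might look like the following (the `summarize:` prefix and generation settings are assumptions based on standard T5 usage, not taken from the card):

```python
# Minimal sketch: summarization with the TF weights of this checkpoint.
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

repo = "salmaafifi98/t5-small-finetuned-xsum"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSeq2SeqLM.from_pretrained(repo)

# "summarize:" is the conventional T5 task prefix (an assumption here).
text = "summarize: The tower is 324 metres tall, about the same height as an 81-storey building."
inputs = tokenizer(text, return_tensors="tf", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```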
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 2.5775 | 2.3341 | 30.2930 | 9.1969 | 24.0331 | 24.0378 | 18.7949 | 0 |
### Framework versions
- Transformers 4.38.2
- TensorFlow 2.15.0
- Datasets 2.18.0
- Tokenizers 0.15.2
| humung/komt-mistral-7b-v1-vlending-cs-v0.1 | humung | 2024-03-08T03:40:58Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-03-08T03:40:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| debasishdas/llama2-7b-chat-finetuned-legal | debasishdas | 2024-03-08T03:36:52Z | 0 | 0 | null | ["safetensors", "generated_from_trainer", "region:us"] | null | 2024-03-07T08:55:42Z |
---
tags:
- generated_from_trainer
model-index:
- name: llama2-7b-chat-finetuned-legal
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-7b-chat-finetuned-legal
This model was trained from scratch on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
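For illustration, the settings above map roughly onto `transformers.TrainingArguments` as sketched below. The actual training script is not included in the card, so treat this as a hypothetical reconstruction:

```python
# Hypothetical reconstruction of the reported hyperparameters with the HF Trainer API.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="llama2-7b-chat-finetuned-legal",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default optimizer setup.
)
```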
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| ferrazzipietro/Qwen1.5-14B-Chat__adapters_en.layer1_4_torch.bfloat16_64_64_0.05_4_0.0002 | ferrazzipietro | 2024-03-08T03:30:52Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-03-08T03:29:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| KhangSimple/output | KhangSimple | 2024-03-08T03:27:55Z | 7 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:sentence-transformers/all-MiniLM-L6-v2", "base_model:finetune:sentence-transformers/all-MiniLM-L6-v2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-03-08T02:29:51Z |
---
license: apache-2.0
base_model: sentence-transformers/all-MiniLM-L6-v2
tags:
- generated_from_trainer
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
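Given the `text-classification` pipeline tag, a minimal usage sketch could look like this (the example input is illustrative; the label set comes from the repo's config and is not documented here):

```python
# Minimal sketch: run the fine-tuned classifier through the pipeline API.
from transformers import pipeline

classifier = pipeline("text-classification", model="KhangSimple/output")
print(classifier("I really enjoyed this movie!"))  # returns [{'label': ..., 'score': ...}]
```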
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| migtissera/Tess-10.7B-v1.5b | migtissera | 2024-03-08T03:13:32Z | 179 | 13 | transformers | ["transformers", "pytorch", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-28T06:34:30Z |
---
license: apache-2.0
---
<br>

<br>
Tess, short for Tesoro (Treasure in Italian), is a general-purpose Large Language Model series. Tess-10.7B-v1.5b was trained on the SOLAR-10.7B base.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
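A minimal sketch of applying this format with `transformers` (the system message, device placement, and generation length below are assumptions):

```python
# Minimal sketch: build a prompt in the SYSTEM/USER/ASSISTANT format above.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "migtissera/Tess-10.7B-v1.5b"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")  # needs accelerate

prompt = (
    "SYSTEM: You are a helpful assistant.\n"
    "USER: What is the capital of Italy?\n"
    "ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```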
| OwOOwO/eacc_contTrain_m2_55_rt | OwOOwO | 2024-03-08T03:08:04Z | 4 | 0 | transformers | ["transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-03-08T03:05:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| OwOOwO/eacc_contTrain_c1 | OwOOwO | 2024-03-08T02:41:46Z | 89 | 0 | transformers | ["transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-03-08T02:39:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| ferrazzipietro/Qwen1.5-14B-Chat__adapters_en.layer1_4_torch.bfloat16_64_32_0.01_4_0.0002 | ferrazzipietro | 2024-03-08T02:31:36Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-03-08T02:30:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| AyeSee/roberta-large-lora-token-classification_v1 | AyeSee | 2024-03-08T01:35:39Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-03-07T13:48:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
daze-unlv/axolotl-medmcqa-2-epoch
|
daze-unlv
| 2024-03-08T01:28:30Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"mistral",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-03-07T22:54:28Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: lora-out/medmcqa-2-epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: mistralai/Mistral-7B-v0.1
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: daze-unlv/medmcqa_axolotl
type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0
output_dir: ./lora-out/medmcqa-2-epoch
eval_sample_packing: false
adapter: lora
lora_model_dir:
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
- gate_proj
- down_proj
- up_proj
- q_proj
- v_proj
- k_proj
- o_proj
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16: false
tf32: true
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: false
sdp_attention: true
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
# lora-out/medmcqa-2-epoch
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the daze-unlv/medmcqa_axolotl dataset (per the axolotl config above).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.9.1.dev0
- Transformers 4.39.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.0
|
ahmedbaig/DataMeshGroup_ML_Practice
|
ahmedbaig
| 2024-03-08T01:25:29Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-03-07T06:02:58Z |
# AI Development
## Getting started
To make it easy for you to get started with GitLab, here's a list of recommended next steps.
Already a pro? Just edit this README.md and make it your own. Want to make it easy? [Use the template at the bottom](#editing-this-readme)!
## Add your files
- [ ] [Create](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#create-a-file) or [upload](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#upload-a-file) files
- [ ] [Add files using the command line](https://docs.gitlab.com/ee/gitlab-basics/add-file.html#add-a-file-using-the-command-line) or push an existing Git repository with the following command:
```
cd existing_repo
git remote add origin https://code.dmgsecure.io/Ahmed_Baig/ai-development.git
git branch -M main
git push -uf origin main
```
## Integrate with your tools
- [ ] [Set up project integrations](https://code.dmgsecure.io/Ahmed_Baig/ai-development/-/settings/integrations)
## Collaborate with your team
- [ ] [Invite team members and collaborators](https://docs.gitlab.com/ee/user/project/members/)
- [ ] [Create a new merge request](https://docs.gitlab.com/ee/user/project/merge_requests/creating_merge_requests.html)
- [ ] [Automatically close issues from merge requests](https://docs.gitlab.com/ee/user/project/issues/managing_issues.html#closing-issues-automatically)
- [ ] [Enable merge request approvals](https://docs.gitlab.com/ee/user/project/merge_requests/approvals/)
- [ ] [Set auto-merge](https://docs.gitlab.com/ee/user/project/merge_requests/merge_when_pipeline_succeeds.html)
## Test and Deploy
Use the built-in continuous integration in GitLab.
- [ ] [Get started with GitLab CI/CD](https://docs.gitlab.com/ee/ci/quick_start/index.html)
- [ ] [Analyze your code for known vulnerabilities with Static Application Security Testing (SAST)](https://docs.gitlab.com/ee/user/application_security/sast/)
- [ ] [Deploy to Kubernetes, Amazon EC2, or Amazon ECS using Auto Deploy](https://docs.gitlab.com/ee/topics/autodevops/requirements.html)
- [ ] [Use pull-based deployments for improved Kubernetes management](https://docs.gitlab.com/ee/user/clusters/agent/)
- [ ] [Set up protected environments](https://docs.gitlab.com/ee/ci/environments/protected_environments.html)
***
# Editing this README
When you're ready to make this README your own, just edit this file and use the handy template below (or feel free to structure it however you want - this is just a starting point!). Thanks to [makeareadme.com](https://www.makeareadme.com/) for this template.
## Suggestions for a good README
Every project is different, so consider which of these sections apply to yours. The sections used in the template are suggestions for most open source projects. Also keep in mind that while a README can be too long and detailed, too long is better than too short. If you think your README is too long, consider utilizing another form of documentation rather than cutting out information.
## Name
Choose a self-explaining name for your project.
## Description
Let people know what your project can do specifically. Provide context and add a link to any reference visitors might be unfamiliar with. A list of Features or a Background subsection can also be added here. If there are alternatives to your project, this is a good place to list differentiating factors.
## Badges
On some READMEs, you may see small images that convey metadata, such as whether or not all the tests are passing for the project. You can use Shields to add some to your README. Many services also have instructions for adding a badge.
## Visuals
Depending on what you are making, it can be a good idea to include screenshots or even a video (you'll frequently see GIFs rather than actual videos). Tools like ttygif can help, but check out Asciinema for a more sophisticated method.
## Installation
Within a particular ecosystem, there may be a common way of installing things, such as using Yarn, NuGet, or Homebrew. However, consider the possibility that whoever is reading your README is a novice and would like more guidance. Listing specific steps helps remove ambiguity and gets people using your project as quickly as possible. If it only runs in a specific context like a particular programming language version or operating system or has dependencies that have to be installed manually, also add a Requirements subsection.
## Usage
Use examples liberally, and show the expected output if you can. It's helpful to have inline the smallest example of usage that you can demonstrate, while providing links to more sophisticated examples if they are too long to reasonably include in the README.
## Support
Tell people where they can go to for help. It can be any combination of an issue tracker, a chat room, an email address, etc.
## Roadmap
If you have ideas for releases in the future, it is a good idea to list them in the README.
## Contributing
State if you are open to contributions and what your requirements are for accepting them.
For people who want to make changes to your project, it's helpful to have some documentation on how to get started. Perhaps there is a script that they should run or some environment variables that they need to set. Make these steps explicit. These instructions could also be useful to your future self.
You can also document commands to lint the code or run tests. These steps help to ensure high code quality and reduce the likelihood that the changes inadvertently break something. Having instructions for running tests is especially helpful if it requires external setup, such as starting a Selenium server for testing in a browser.
## Authors and acknowledgment
Show your appreciation to those who have contributed to the project.
## License
For open source projects, say how it is licensed.
## Project status
If you have run out of energy or time for your project, put a note at the top of the README saying that development has slowed down or stopped completely. Someone may choose to fork your project or volunteer to step in as a maintainer or owner, allowing your project to keep going. You can also make an explicit request for maintainers.
|
lilsomnus/SatAI-ft
|
lilsomnus
| 2024-03-08T01:21:48Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-03-08T01:21:45Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ
model-index:
- name: SatAI-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SatAI-ft
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
|
CreitinGameplays/elisa-chan-phi-2-super
|
CreitinGameplays
| 2024-03-08T01:20:13Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"conversational",
"custom_code",
"en",
"dataset:CreitinGameplays/elisa-chan-v2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T18:07:06Z |
---
datasets:
- CreitinGameplays/elisa-chan-v2
language:
- en
widget:
- text: Hello, who are you?
example_title: Identity
- text: What can you do?
example_title: Capabilities
- text: How does a human brain work?
example_title: Question
---
This is the "abacaj/phi-2-super" model fine-tuned using my own dataset.
Prompt example:
```
prompt = "<|endoftext|>[INST] Hello there! [/INST]"
```
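A minimal generation sketch (ours, not the author's snippet) that applies the prompt format above; `trust_remote_code=True` is assumed because the repo is tagged `custom_code`.

```python
# Minimal sketch: apply the documented prompt format to the fine-tuned model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CreitinGameplays/elisa-chan-phi-2-super"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "<|endoftext|>[INST] Hello there! [/INST]"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```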
|
daze-unlv/medmcqa-alignment-lora-7b-2-epoch
|
daze-unlv
| 2024-03-08T01:18:13Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"mistral",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-03-06T18:56:58Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: medmcqa-alignment-lora-7b-2-epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medmcqa-alignment-lora-7b-2-epoch
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0 | 1.0 | 2601 | nan |
| 0.0 | 2.0 | 5202 | nan |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
|
sumedhuv/bert-relext
|
sumedhuv
| 2024-03-08T01:13:37Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-08T01:04:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
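In the absence of an official snippet, a minimal hedged sketch: the repo is tagged `text-classification`, so the standard pipeline should work; the example sentence is ours, and the card does not document the label set.

```python
# Minimal sketch, assuming a standard text-classification head.
from transformers import pipeline

classifier = pipeline("text-classification", model="sumedhuv/bert-relext")
print(classifier("Barack Obama was born in Honolulu."))  # example input (ours)
```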
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
daze-unlv/medmcqa-alignment-lora-7b-4-epoch
|
daze-unlv
| 2024-03-08T01:05:47Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"mistral",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-03-07T07:49:03Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: medmcqa-alignment-lora-7b-4-epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medmcqa-alignment-lora-7b-4-epoch
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0 | 1.0 | 2601 | nan |
| 0.0 | 2.0 | 5203 | nan |
| 0.0 | 3.0 | 7804 | nan |
| 0.0 | 4.0 | 10404 | nan |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
|
dolainu/Nyanners_lora_Vtuber
|
dolainu
| 2024-03-08T00:59:52Z | 1 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stablediffusionapi/anylora-checkpoint",
"base_model:adapter:stablediffusionapi/anylora-checkpoint",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2024-03-08T00:59:41Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: <lora:NyanV0.1:0.8>, nyanners1st, medium hair, purple eyes, 1girl, sitting
parameters:
negative_prompt: ' easynegative'
output:
url: images/09210-3549732032.png
- text: >-
<lora:NyanV0.1:0.8>, nyanners2st, long hair, purple eyes, 1girl, sitting,
nude, bed, <lora:Smooth belly_v1.3.2:0.8>, petite, nsfw
parameters:
negative_prompt: ' easynegative'
output:
url: images/09218-3924406483.png
- text: >-
<lora:NyanV0.1:0.87>, nyanners2st, long hair, purple eyes, 1girl, sitting,
bed, <lora:Smooth belly_v1.3.2:0.8>, petite
parameters:
negative_prompt: ' easynegative'
output:
url: images/09222-462619498.png
- text: >-
<lora:NyanV0.1:0.87>, nyanners1st, medium hair, purple eyes, 1girl, sitting,
bed, <lora:Smooth belly_v1.3.2:0.8>, petite
parameters:
negative_prompt: ' easynegative'
output:
url: images/09226-797583566.png
base_model: stablediffusionapi/anylora-checkpoint
instance_prompt: null
license: apache-2.0
---
# Nyanners
<Gallery />
## Model description
Tested at a LoRA strength of 0.87.
Prompts:
- short hair ver.: "nyanners1st, medium hair, purple eyes"
- long hair ver.: "nyanners2st, long hair, purple eyes"
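A hedged usage sketch with 🧨 diffusers (ours, not the author's): the base checkpoint comes from this card's metadata, and the tested 0.87 strength is passed as the LoRA scale.

```python
# Minimal sketch: load the base checkpoint, attach this LoRA, and generate.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stablediffusionapi/anylora-checkpoint", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("dolainu/Nyanners_lora_Vtuber")

image = pipe(
    "nyanners1st, medium hair, purple eyes, 1girl, sitting",
    negative_prompt="easynegative",
    cross_attention_kwargs={"scale": 0.87},  # tested LoRA strength
).images[0]
image.save("nyanners.png")
```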
## Download model
Weights for this model are available in Safetensors format.
[Download](/dolainu/Nyanners_lora_Vtuber/tree/main) them in the Files & versions tab.
|
keanurefresh/73981
|
keanurefresh
| 2024-03-08T00:44:46Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-03-08T00:00:03Z |
Include in your prompt: <lora:facialized:1>, cum, facial. You might want to include in your negative prompt: cum on breasts, cum on body.
The model works best with low steps and CFG; I get good results with 10 steps and a CFG of 3 or 4.
|
zabir735/clip-seed-vit-4
|
zabir735
| 2024-03-08T00:41:07Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"clip",
"zero-shot-image-classification",
"generated_from_trainer",
"base_model:openai/clip-vit-base-patch16",
"base_model:finetune:openai/clip-vit-base-patch16",
"endpoints_compatible",
"region:us"
] |
zero-shot-image-classification
| 2024-03-07T23:36:17Z |
---
base_model: openai/clip-vit-base-patch16
tags:
- generated_from_trainer
model-index:
- name: clip-seed-vit-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-seed-vit-4
This model is a fine-tuned version of [openai/clip-vit-base-patch16](https://huggingface.co/openai/clip-vit-base-patch16) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2432
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.2.0+cpu
- Datasets 2.16.1
- Tokenizers 0.15.1
|
mfidabel/Modelo_3_Whisper_Tiny
|
mfidabel
| 2024-03-08T00:35:21Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:openai/whisper-tiny",
"base_model:adapter:openai/whisper-tiny",
"license:apache-2.0",
"region:us"
] | null | 2024-03-07T20:36:38Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: openai/whisper-tiny
model-index:
- name: Modelo_3_Whisper_Tiny
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Modelo_3_Whisper_Tiny
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4710
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9233 | 1.0 | 1295 | 0.7396 |
| 0.8323 | 2.0 | 2590 | 0.6168 |
| 0.6931 | 3.0 | 3885 | 0.5398 |
| 0.5671 | 4.0 | 5180 | 0.4955 |
| 0.489 | 5.0 | 6475 | 0.4710 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.0+cu118
- Datasets 2.16.1
- Tokenizers 0.15.2
|
farid1088/RoBERTa-legal-de-cased_German_legal_SQuAD_100
|
farid1088
| 2024-03-08T00:30:54Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-03-05T20:49:10Z |
---
tags:
- generated_from_trainer
model-index:
- name: RoBERTa-legal-de-cased_German_legal_SQuAD_100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RoBERTa-legal-de-cased_German_legal_SQuAD_100
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3939
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 160
- eval_batch_size: 40
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 2 | 6.2702 |
| No log | 2.0 | 4 | 6.2057 |
| No log | 3.0 | 6 | 6.0590 |
| No log | 4.0 | 8 | 5.8979 |
| No log | 5.0 | 10 | 5.4639 |
| No log | 6.0 | 12 | 5.3637 |
| No log | 7.0 | 14 | 5.1792 |
| No log | 8.0 | 16 | 4.9258 |
| No log | 9.0 | 18 | 4.7628 |
| No log | 10.0 | 20 | 4.5534 |
| No log | 11.0 | 22 | 4.3370 |
| No log | 12.0 | 24 | 4.1347 |
| No log | 13.0 | 26 | 3.9543 |
| No log | 14.0 | 28 | 3.7819 |
| No log | 15.0 | 30 | 3.6555 |
| No log | 16.0 | 32 | 3.5673 |
| No log | 17.0 | 34 | 3.4768 |
| No log | 18.0 | 36 | 3.3835 |
| No log | 19.0 | 38 | 3.3112 |
| No log | 20.0 | 40 | 3.2279 |
| No log | 21.0 | 42 | 3.1581 |
| No log | 22.0 | 44 | 3.0989 |
| No log | 23.0 | 46 | 3.0178 |
| No log | 24.0 | 48 | 2.9702 |
| No log | 25.0 | 50 | 2.9084 |
| No log | 26.0 | 52 | 2.8226 |
| No log | 27.0 | 54 | 2.8405 |
| No log | 28.0 | 56 | 2.8029 |
| No log | 29.0 | 58 | 2.6979 |
| No log | 30.0 | 60 | 2.7140 |
| No log | 31.0 | 62 | 2.6985 |
| No log | 32.0 | 64 | 2.6223 |
| No log | 33.0 | 66 | 2.6349 |
| No log | 34.0 | 68 | 2.5541 |
| No log | 35.0 | 70 | 2.4758 |
| No log | 36.0 | 72 | 2.4601 |
| No log | 37.0 | 74 | 2.4836 |
| No log | 38.0 | 76 | 2.3613 |
| No log | 39.0 | 78 | 2.2917 |
| No log | 40.0 | 80 | 2.3154 |
| No log | 41.0 | 82 | 2.2682 |
| No log | 42.0 | 84 | 2.2784 |
| No log | 43.0 | 86 | 2.2534 |
| No log | 44.0 | 88 | 2.1457 |
| No log | 45.0 | 90 | 2.1808 |
| No log | 46.0 | 92 | 2.2528 |
| No log | 47.0 | 94 | 2.1585 |
| No log | 48.0 | 96 | 2.0309 |
| No log | 49.0 | 98 | 2.0622 |
| No log | 50.0 | 100 | 2.0533 |
| No log | 51.0 | 102 | 1.9610 |
| No log | 52.0 | 104 | 1.9597 |
| No log | 53.0 | 106 | 1.8926 |
| No log | 54.0 | 108 | 1.8149 |
| No log | 55.0 | 110 | 1.7849 |
| No log | 56.0 | 112 | 1.8135 |
| No log | 57.0 | 114 | 1.8190 |
| No log | 58.0 | 116 | 1.8126 |
| No log | 59.0 | 118 | 1.8007 |
| No log | 60.0 | 120 | 1.7200 |
| No log | 61.0 | 122 | 1.6408 |
| No log | 62.0 | 124 | 1.6524 |
| No log | 63.0 | 126 | 1.6697 |
| No log | 64.0 | 128 | 1.6660 |
| No log | 65.0 | 130 | 1.5907 |
| No log | 66.0 | 132 | 1.5765 |
| No log | 67.0 | 134 | 1.5575 |
| No log | 68.0 | 136 | 1.5455 |
| No log | 69.0 | 138 | 1.5267 |
| No log | 70.0 | 140 | 1.4875 |
| No log | 71.0 | 142 | 1.4474 |
| No log | 72.0 | 144 | 1.4436 |
| No log | 73.0 | 146 | 1.4609 |
| No log | 74.0 | 148 | 1.4983 |
| No log | 75.0 | 150 | 1.4903 |
| No log | 76.0 | 152 | 1.4506 |
| No log | 77.0 | 154 | 1.3982 |
| No log | 78.0 | 156 | 1.3735 |
| No log | 79.0 | 158 | 1.3670 |
| No log | 80.0 | 160 | 1.3977 |
| No log | 81.0 | 162 | 1.4478 |
| No log | 82.0 | 164 | 1.4565 |
| No log | 83.0 | 166 | 1.4186 |
| No log | 84.0 | 168 | 1.3839 |
| No log | 85.0 | 170 | 1.3633 |
| No log | 86.0 | 172 | 1.3686 |
| No log | 87.0 | 174 | 1.3873 |
| No log | 88.0 | 176 | 1.3998 |
| No log | 89.0 | 178 | 1.4084 |
| No log | 90.0 | 180 | 1.4076 |
| No log | 91.0 | 182 | 1.3899 |
| No log | 92.0 | 184 | 1.3820 |
| No log | 93.0 | 186 | 1.3821 |
| No log | 94.0 | 188 | 1.3837 |
| No log | 95.0 | 190 | 1.3902 |
| No log | 96.0 | 192 | 1.3930 |
| No log | 97.0 | 194 | 1.3938 |
| No log | 98.0 | 196 | 1.3954 |
| No log | 99.0 | 198 | 1.3950 |
| No log | 100.0 | 200 | 1.3939 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.7
- Tokenizers 0.15.0
|
farid1088/RoBERTa-legal-de-cased_German_legal_SQuAD_17
|
farid1088
| 2024-03-08T00:24:34Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-03-05T20:47:41Z |
---
tags:
- generated_from_trainer
model-index:
- name: RoBERTa-legal-de-cased_German_legal_SQuAD_17
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RoBERTa-legal-de-cased_German_legal_SQuAD_17
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 4.9667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 160
- eval_batch_size: 40
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 17
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 2 | 6.2537 |
| No log | 2.0 | 4 | 6.3434 |
| No log | 3.0 | 6 | 6.2256 |
| No log | 4.0 | 8 | 6.0253 |
| No log | 5.0 | 10 | 5.8198 |
| No log | 6.0 | 12 | 5.5768 |
| No log | 7.0 | 14 | 5.4665 |
| No log | 8.0 | 16 | 5.4053 |
| No log | 9.0 | 18 | 5.3656 |
| No log | 10.0 | 20 | 5.3181 |
| No log | 11.0 | 22 | 5.2573 |
| No log | 12.0 | 24 | 5.1785 |
| No log | 13.0 | 26 | 5.1147 |
| No log | 14.0 | 28 | 5.0536 |
| No log | 15.0 | 30 | 5.0101 |
| No log | 16.0 | 32 | 4.9799 |
| No log | 17.0 | 34 | 4.9667 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.7
- Tokenizers 0.15.0
|
ColeD0/Claud.ai-2
|
ColeD0
| 2024-03-08T00:19:56Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-03-07T19:53:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
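As no snippet is provided, a minimal hedged sketch: the repo is tagged `4-bit`/`bitsandbytes`, so we assume the saved quantization config is picked up automatically on a CUDA GPU with the bitsandbytes package installed.

```python
# Minimal sketch (ours): loads the 4-bit Gemma checkpoint via transformers;
# the saved bitsandbytes quantization config is assumed to load automatically.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ColeD0/Claud.ai-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```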
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ktm379/code-llama-7b-train_epoch3
|
ktm379
| 2024-03-08T00:19:26Z | 2 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:TinyPixel/Llama-2-7B-bf16-sharded",
"base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded",
"region:us"
] | null | 2024-03-07T17:33:19Z |
---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: TinyPixel/Llama-2-7B-bf16-sharded
model-index:
- name: code-llama-7b-train_epoch3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# code-llama-7b-train_epoch3
This model is a fine-tuned version of [TinyPixel/Llama-2-7B-bf16-sharded](https://huggingface.co/TinyPixel/Llama-2-7B-bf16-sharded) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2
|
shleeeee/mistral-ko-tech-science-v1
|
shleeeee
| 2024-03-08T00:18:13Z | 2,267 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"ko",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-10T05:02:38Z |
---
license: other
language:
- ko
pipeline_tag: text-generation
---
# Model Card for mistral-ko-tech-science-v1
It is a fine-tuned version of the Mistral-7B model, trained on Korean data.
## Model Details
* **Model Developers** : shleeeee(Seunghyeon Lee) , oopsung(Sungwoo Park)
|
OwOOwO/eacc_contTrain_m2_25
|
OwOOwO
| 2024-03-08T00:16:21Z | 89 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-08T00:13:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
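As no snippet is provided, a minimal hedged sketch: the repo is tagged `conversational`, so we assume the tokenizer ships a chat template.

```python
# Minimal sketch (ours, not the author's): chat-style generation with the
# tokenizer's chat template, which we assume is present.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OwOOwO/eacc_contTrain_m2_25"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

chat = [{"role": "user", "content": "What can you do?"}]
input_ids = tokenizer.apply_chat_template(
    chat, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```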
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
shleeeee/mistral-7b-ko-dpo-v1
|
shleeeee
| 2024-03-08T00:15:53Z | 103 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"ko",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-02T07:08:11Z |
---
license: other
language:
- ko
pipeline_tag: text-generation
---
# Model Card for mistral-7b-ko-dpo-v1
It is a fine-tuned version of the Mistral-7B model, trained on Korean data.
## Model Details
* **Model Developers** : shleeeee(Seunghyeon Lee) , oopsung(Sungwoo Park)
* **Input** : Models input text only.
* **Output** : Models generate text only.
* **Base Model** : mistralai/Mistral-7B-v0.1
* **Training** : SFT and DPO were used to train the model.
|
shleeeee/mistral-ko-7b-tech
|
shleeeee
| 2024-03-08T00:14:25Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"finetune",
"ko",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-29T15:35:30Z |
---
language:
- ko
pipeline_tag: text-generation
tags:
- finetune
license: other
---
# Model Card for mistral-ko-7b-tech
It is a fine-tuned version of the Mistral-7B model, trained on Korean data.
## Model Details
* **Model Developers** : shleeeee(Seunghyeon Lee), oopsung(Sungwoo Park)
* **Repository** : To be added
* **Model Architecture** : mistral-ko-7b-tech is a fine-tuned version of Mistral-7B-v0.1.
* **Lora target modules** : q_proj, k_proj, v_proj, o_proj, gate_proj
* **train_batch** : 4
* **Max_step** : 500
## Dataset
Korean Custom Dataset (2000)
## Prompt template: Mistral
```
<s>[INST]{['instruction']}[/INST]{['output']}</s>
```
## Usage
```
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("shleeeee/mistral-ko-7b-tech")
model = AutoModelForCausalLM.from_pretrained("shleeeee/mistral-ko-7b-tech")
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="shleeeee/mistral-ko-7b-tech")
```
## Evaluation

|
shleeeee/mistral-ko-OpenOrca-2000
|
shleeeee
| 2024-03-08T00:11:32Z | 95 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"finetune",
"ko",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-04T13:17:54Z |
---
language:
- ko
pipeline_tag: text-generation
tags:
- finetune
---
# Model Card for mistral-ko-OpenOrca-2000
It is a fine-tuned version of the Mistral-7B model, trained on Korean data.
## Model Details
* **Model Developers** : shleeeee(Seunghyeon Lee), oopsung(Sungwoo Park)
* **Repository** : To be added
* **Model Architecture** : shleeeee/mistral-ko-OpenOrca-2000 is a fine-tuned version of Mistral-7B-v0.1.
* **Lora target modules** : q_proj, k_proj, v_proj, o_proj, gate_proj
* **train_batch** : 4
* **epochs** : 2
## Dataset
2000 examples from the ko-OpenOrca dataset
## Prompt template: Mistral
```
<s>[INST]{['instruction']}[/INST]{['output']}</s>
```
## Usage
```
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("shleeeee/mistral-ko-OpenOrca-2000")
model = AutoModelForCausalLM.from_pretrained("shleeeee/mistral-ko-OpenOrca-2000")
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="shleeeee/mistral-ko-OpenOrca-2000")
```
## Evaluation
To be added
|
shleeeee/mistral-ko-7b-wiki-neft
|
shleeeee
| 2024-03-08T00:11:04Z | 2,286 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"finetune",
"ko",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-29T04:46:44Z |
---
language:
- ko
pipeline_tag: text-generation
tags:
- finetune
---
# Model Card for mistral-ko-7b-wiki-neft
It is a fine-tuned version of the Mistral-7B model, trained on Korean data with NEFT (noisy embedding fine-tuning).
## Model Details
* **Model Developers** : shleeeee(Seunghyeon Lee), oopsung(Sungwoo Park)
* **Repository** : To be added
* **Model Architecture** : mistral-ko-7b-wiki-neft is a fine-tuned version of Mistral-7B-v0.1.
* **Lora target modules** : q_proj, k_proj, v_proj, o_proj, gate_proj
* **train_batch** : 4
* **neftune_noise_alpha** : 5
* **Max_step** : 1000
## Dataset
Korean Custom Dataset
## Prompt template: Mistral
```
<s>[INST]{['instruction']}[/INST]{['output']}</s>
```
## Usage
```
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("shleeeee/mistral-ko-7b-wiki-neft")
model = AutoModelForCausalLM.from_pretrained("shleeeee/mistral-ko-7b-wiki-neft")
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="shleeeee/mistral-ko-7b-wiki-neft")
```
## Evaluation

|
shleeeee/mistral-7b-wiki
|
shleeeee
| 2024-03-08T00:10:36Z | 2,270 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"finetune",
"ko",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-28T13:00:50Z |
---
language:
- ko
pipeline_tag: text-generation
tags:
- finetune
---
# Model Card for mistral-7b-wiki
It is a fine-tuned version of the Mistral-7B model, trained on Korean data.
## Model Details
* **Model Developers** : shleeeee(Seunghyeon Lee) , oopsung(Sungwoo Park)
* **Repository** : To be added
* **Model Architecture** : mistral-7b-wiki is a fine-tuned version of Mistral-7B-v0.1.
* **Lora target modules** : q_proj, k_proj, v_proj, o_proj, gate_proj
* **train_batch** : 2
* **Max_step** : 500
## Dataset
Korean Custom Dataset
## Prompt template: Mistral
```
<s>[INST]{['instruction']}[/INST]{['output']}</s>
```
## Usage
```
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("shleeeee/mistral-7b-wiki")
model = AutoModelForCausalLM.from_pretrained("shleeeee/mistral-7b-wiki")
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="shleeeee/mistral-7b-wiki")
```
## Evaluation

|
Ai-Marshal/Sentiment_Classification_2024-03-07
|
Ai-Marshal
| 2024-03-08T00:08:39Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mixtral-8x7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-03-07T23:24:46Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
datasets:
- generator
model-index:
- name: Sentiment_Classification_2024-03-07
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Sentiment_Classification_2024-03-07
This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5049
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.03
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5879 | 1.0 | 559 | 0.5304 |
| 0.5272 | 2.0 | 1118 | 0.5142 |
| 0.5564 | 3.0 | 1677 | 0.5083 |
| 0.5207 | 4.0 | 2236 | 0.5057 |
| 0.5123 | 5.0 | 2795 | 0.5049 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Holarissun/phi2-airl_sft-tldr-seqsampler
|
Holarissun
| 2024-03-08T00:06:49Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-03-08T00:06:42Z |
---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: phi2-airl_sft-tldr-seqsampler
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi2-airl_sft-tldr-seqsampler
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
somskat/distilbert-base-uncased-finetuned-ner
|
somskat
| 2024-03-08T00:04:43Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:Jing1113/distilbert-base-uncased-finetuned-srl",
"base_model:finetune:Jing1113/distilbert-base-uncased-finetuned-srl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-03-07T23:55:19Z |
---
license: apache-2.0
base_model: Jing1113/distilbert-base-uncased-finetuned-srl
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [Jing1113/distilbert-base-uncased-finetuned-srl](https://huggingface.co/Jing1113/distilbert-base-uncased-finetuned-srl) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0986
- Precision: 0.8664
- Recall: 0.8732
- F1: 0.8698
- Accuracy: 0.9737
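A minimal usage sketch with the 🤗 `pipeline` API follows; the entity label set is not documented, so the example input and output are purely illustrative:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="somskat/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)
print(ner("Barack Obama visited Paris in 2015."))
```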
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0566 | 1.0 | 2531 | 0.0963 | 0.8531 | 0.8727 | 0.8628 | 0.9720 |
| 0.0464 | 2.0 | 5062 | 0.0956 | 0.8591 | 0.8735 | 0.8662 | 0.9729 |
| 0.0389 | 3.0 | 7593 | 0.0986 | 0.8664 | 0.8732 | 0.8698 | 0.9737 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.14.1
|
OwOOwO/eacc_contTrain_m2_55_orig
|
OwOOwO
| 2024-03-08T00:01:33Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T23:59:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
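In lieu of the missing snippet, a minimal sketch, assuming the checkpoint loads as a standard Gemma causal LM with a chat template (as the `gemma` and `conversational` tags suggest):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "OwOOwO/eacc_contTrain_m2_55_orig"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# Format the conversation with the tokenizer's chat template, then generate.
messages = [{"role": "user", "content": "Hello! What can you do?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```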
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
imsarfaroz/fine-tuned-albert-tweets
|
imsarfaroz
| 2024-03-07T23:58:52Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"albert",
"text-classification",
"generated_from_trainer",
"base_model:albert/albert-base-v2",
"base_model:finetune:albert/albert-base-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-07T23:47:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: albert-base-v2
model-index:
- name: fine-tuned-albert-tweets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-albert-tweets
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6212
- Accuracy: 0.6785
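A minimal usage sketch follows; the label names are not documented, so expect generic `LABEL_0`/`LABEL_1` outputs unless the model config says otherwise:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="imsarfaroz/fine-tuned-albert-tweets")
print(clf("Loving the new update!"))  # e.g. [{'label': 'LABEL_1', 'score': ...}]
```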
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 179 | 0.6264 | 0.6377 |
| No log | 2.0 | 358 | 0.6212 | 0.6785 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ferrazzipietro/Qwen1.5-14B-Chat__adapters_en.layer1_4_torch.bfloat16_32_64_0.05_8_0.0002
|
ferrazzipietro
| 2024-03-07T23:56:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T23:55:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
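In lieu of the missing snippet, a minimal loading sketch; the repo name suggests these are LoRA-style adapters for Qwen/Qwen1.5-14B-Chat, which is an assumption, not something the card states:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen1.5-14B-Chat"  # assumed base model, per the repo name
adapter_id = "ferrazzipietro/Qwen1.5-14B-Chat__adapters_en.layer1_4_torch.bfloat16_32_64_0.05_8_0.0002"

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)
```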
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
s14pe/poca-SoccerTwos
|
s14pe
| 2024-03-07T23:54:13Z | 21 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2024-03-07T23:53:39Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* explaining how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: s14pe/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
adityahrudayam/T5_qa_model
|
adityahrudayam
| 2024-03-07T23:52:21Z | 32 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"question-answering",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-03-07T23:42:04Z |
---
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
model-index:
- name: T5_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5_qa_model
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | nan |
| No log | 2.0 | 2 | nan |
| No log | 3.0 | 3 | nan |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
andysalerno/openchat-nectar-0.1
|
andysalerno
| 2024-03-07T23:45:02Z | 13 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:berkeley-nest/Nectar",
"base_model:openchat/openchat-3.5-0106",
"base_model:finetune:openchat/openchat-3.5-0106",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-11T08:02:43Z |
---
license: apache-2.0
datasets:
- berkeley-nest/Nectar
base_model: openchat/openchat-3.5-0106
model-index:
- name: openchat-nectar-0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.21
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=andysalerno/openchat-nectar-0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 82.99
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=andysalerno/openchat-nectar-0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.17
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=andysalerno/openchat-nectar-0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 54.22
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=andysalerno/openchat-nectar-0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.37
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=andysalerno/openchat-nectar-0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.67
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=andysalerno/openchat-nectar-0.1
name: Open LLM Leaderboard
---
This is openchat/openchat-3.5-0106, tuned with DPO on a tiny subset of the Nectar dataset. Training ran for only 200 steps, nowhere near a full epoch.
Careful attention was paid to making sure the chat template was followed properly. A minimal sketch of the recipe appears after the version list below.
Summary of versions:
**[openchat-nectar-0.1](https://huggingface.co/andysalerno/openchat-nectar-0.1)**
- 200 steps, no filtering on Nectar dataset, 5e-5 learning rate
**[openchat-nectar-0.2](https://huggingface.co/andysalerno/openchat-nectar-0.2)**
- Empty repo from a failed training run; ignore it
**[openchat-nectar-0.3](https://huggingface.co/andysalerno/openchat-nectar-0.3)**
- 500 steps, no filtering on the Nectar dataset, 5e-5 learning rate (same as 0.1 but with more steps)
**[openchat-nectar-0.4](https://huggingface.co/andysalerno/openchat-nectar-0.4)**
- 500 steps; filtered the dataset to multi-turn chat examples only; used the 4th-ranked response as "rejected" instead of the 3rd; filtered out good_natured=False; 5e-5 learning rate
**[openchat-nectar-0.5](https://huggingface.co/andysalerno/openchat-nectar-0.5)**
- 5000 steps (over a full epoch); same filtering as 0.4; 5e-6 learning rate. Same as 0.4 but with 10x the steps and 1/10th the learning rate
**[openchat-nectar-0.6](https://huggingface.co/andysalerno/openchat-nectar-0.6)**
- 500 steps; same filtering as 0.4; 5e-5 learning rate. Same as 0.5 but with 1/10th the steps and 10x the learning rate
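As a rough illustration of the recipe above, a minimal sketch built on trl's `DPOTrainer`; the Nectar field names and the exact trainer signature are assumptions (both vary across versions), and this is not the author's actual training script:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "openchat/openchat-3.5-0106"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Nectar rows hold a prompt plus ranked answers; pair the top answer (chosen)
# against a lower-ranked one (rejected) to form DPO preference pairs.
nectar = load_dataset("berkeley-nest/Nectar", split="train[:2000]")

def to_pair(row):
    ranked = sorted(row["answers"], key=lambda a: a["rank"])
    return {
        "prompt": row["prompt"],
        "chosen": ranked[0]["answer"],
        "rejected": ranked[2]["answer"],  # 3rd-ranked, as in 0.1; 0.4+ used the 4th
    }

pairs = nectar.map(to_pair, remove_columns=nectar.column_names)

trainer = DPOTrainer(
    model=model,
    args=TrainingArguments(output_dir="openchat-nectar", max_steps=200, learning_rate=5e-5),
    beta=0.1,
    train_dataset=pairs,
    tokenizer=tokenizer,
)
trainer.train()
```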
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_andysalerno__openchat-nectar-0.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |69.94|
|AI2 Reasoning Challenge (25-Shot)|66.21|
|HellaSwag (10-Shot) |82.99|
|MMLU (5-Shot) |65.17|
|TruthfulQA (0-shot) |54.22|
|Winogrande (5-shot) |81.37|
|GSM8k (5-shot) |69.67|
|
OwOOwO/eacc_contTrain_m1
|
OwOOwO
| 2024-03-07T23:38:01Z | 89 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T23:35:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
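In lieu of the missing snippet, a minimal text-generation sketch, assuming the checkpoint works with the standard `pipeline` API (as the tags suggest):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="OwOOwO/eacc_contTrain_m1")
print(generator("The key idea behind reinforcement learning is", max_new_tokens=48)[0]["generated_text"])
```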
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ferrazzipietro/Qwen1.5-14B-Chat__adapters_en.layer1_4_torch.bfloat16_32_64_0.05_4_0.0002
|
ferrazzipietro
| 2024-03-07T23:36:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T23:36:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
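In lieu of the missing snippet, a sketch that merges the (assumed) LoRA adapter into its presumed Qwen/Qwen1.5-14B-Chat base for adapter-free deployment; neither the base model nor the adapter type is stated in the card:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-14B-Chat", torch_dtype="auto")
adapter_id = "ferrazzipietro/Qwen1.5-14B-Chat__adapters_en.layer1_4_torch.bfloat16_32_64_0.05_4_0.0002"

# Fold the adapter weights into the base model, then save a standalone checkpoint.
merged = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()
merged.save_pretrained("qwen1.5-14b-chat-merged")
```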
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
farid1088/GQA_RoBERTa_legal_SQuAD_complete_augmented_2000
|
farid1088
| 2024-03-07T23:29:54Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-03-07T21:28:30Z |
---
tags:
- generated_from_trainer
model-index:
- name: GQA_RoBERTa_legal_SQuAD_complete_augmented_2000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GQA_RoBERTa_legal_SQuAD_complete_augmented_2000
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1761
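A minimal usage sketch, assuming an extractive QA head (as the `roberta`/`question-answering` tags suggest); the legal-flavored example text is illustrative only:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="farid1088/GQA_RoBERTa_legal_SQuAD_complete_augmented_2000",
)
result = qa(
    question="Who bears the burden of proof?",
    context="Under the statute, the claimant bears the burden of proof.",
)
print(result["answer"], result["score"])
```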
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 1.0 | 4 | 3.7756 |
| No log | 2.0 | 8 | 3.1205 |
| No log | 3.0 | 12 | 2.7419 |
| No log | 4.0 | 16 | 2.3978 |
| No log | 5.0 | 20 | 2.0572 |
| No log | 6.0 | 24 | 1.9690 |
| No log | 7.0 | 28 | 1.6922 |
| No log | 8.0 | 32 | 1.4999 |
| No log | 9.0 | 36 | 1.4624 |
| No log | 10.0 | 40 | 1.1915 |
| No log | 11.0 | 44 | 1.1501 |
| No log | 12.0 | 48 | 0.9852 |
| No log | 13.0 | 52 | 0.9573 |
| No log | 14.0 | 56 | 0.9131 |
| No log | 15.0 | 60 | 0.8843 |
| No log | 16.0 | 64 | 0.7765 |
| No log | 17.0 | 68 | 0.7787 |
| No log | 18.0 | 72 | 0.7613 |
| No log | 19.0 | 76 | 0.7610 |
| No log | 20.0 | 80 | 0.7447 |
| No log | 21.0 | 84 | 0.7049 |
| No log | 22.0 | 88 | 0.7030 |
| No log | 23.0 | 92 | 0.7066 |
| No log | 24.0 | 96 | 0.7073 |
| No log | 25.0 | 100 | 0.7238 |
| No log | 26.0 | 104 | 0.7560 |
| No log | 27.0 | 108 | 0.7350 |
| No log | 28.0 | 112 | 0.7325 |
| No log | 29.0 | 116 | 0.7513 |
| No log | 30.0 | 120 | 0.7656 |
| No log | 31.0 | 124 | 0.7594 |
| No log | 32.0 | 128 | 0.7744 |
| No log | 33.0 | 132 | 0.7835 |
| No log | 34.0 | 136 | 0.7608 |
| No log | 35.0 | 140 | 0.7423 |
| No log | 36.0 | 144 | 0.7543 |
| No log | 37.0 | 148 | 0.7305 |
| No log | 38.0 | 152 | 0.7398 |
| No log | 39.0 | 156 | 0.7364 |
| No log | 40.0 | 160 | 0.7313 |
| No log | 41.0 | 164 | 0.7163 |
| No log | 42.0 | 168 | 0.7181 |
| No log | 43.0 | 172 | 0.7243 |
| No log | 44.0 | 176 | 0.7259 |
| No log | 45.0 | 180 | 0.7980 |
| No log | 46.0 | 184 | 0.7784 |
| No log | 47.0 | 188 | 0.7271 |
| No log | 48.0 | 192 | 0.7014 |
| No log | 49.0 | 196 | 0.7110 |
| No log | 50.0 | 200 | 0.7621 |
| No log | 51.0 | 204 | 0.7851 |
| No log | 52.0 | 208 | 0.7917 |
| No log | 53.0 | 212 | 0.7877 |
| No log | 54.0 | 216 | 0.8123 |
| No log | 55.0 | 220 | 0.8462 |
| No log | 56.0 | 224 | 0.8405 |
| No log | 57.0 | 228 | 0.8330 |
| No log | 58.0 | 232 | 0.8115 |
| No log | 59.0 | 236 | 0.8067 |
| No log | 60.0 | 240 | 0.8457 |
| No log | 61.0 | 244 | 0.9419 |
| No log | 62.0 | 248 | 0.9387 |
| No log | 63.0 | 252 | 0.9612 |
| No log | 64.0 | 256 | 0.9213 |
| No log | 65.0 | 260 | 0.9035 |
| No log | 66.0 | 264 | 0.8863 |
| No log | 67.0 | 268 | 0.8914 |
| No log | 68.0 | 272 | 0.9060 |
| No log | 69.0 | 276 | 0.9424 |
| No log | 70.0 | 280 | 0.9367 |
| No log | 71.0 | 284 | 0.9201 |
| No log | 72.0 | 288 | 0.9070 |
| No log | 73.0 | 292 | 0.9037 |
| No log | 74.0 | 296 | 0.9116 |
| No log | 75.0 | 300 | 0.9108 |
| No log | 76.0 | 304 | 0.9139 |
| No log | 77.0 | 308 | 0.9506 |
| No log | 78.0 | 312 | 0.9703 |
| No log | 79.0 | 316 | 0.9848 |
| No log | 80.0 | 320 | 0.9586 |
| No log | 81.0 | 324 | 0.9591 |
| No log | 82.0 | 328 | 0.9678 |
| No log | 83.0 | 332 | 0.9951 |
| No log | 84.0 | 336 | 0.9788 |
| No log | 85.0 | 340 | 0.9374 |
| No log | 86.0 | 344 | 0.9085 |
| No log | 87.0 | 348 | 0.8789 |
| No log | 88.0 | 352 | 0.8838 |
| No log | 89.0 | 356 | 0.8711 |
| No log | 90.0 | 360 | 0.8792 |
| No log | 91.0 | 364 | 0.8904 |
| No log | 92.0 | 368 | 0.9014 |
| No log | 93.0 | 372 | 0.9518 |
| No log | 94.0 | 376 | 0.9872 |
| No log | 95.0 | 380 | 0.9193 |
| No log | 96.0 | 384 | 0.8909 |
| No log | 97.0 | 388 | 0.8989 |
| No log | 98.0 | 392 | 0.9064 |
| No log | 99.0 | 396 | 0.9341 |
| No log | 100.0 | 400 | 0.9550 |
| No log | 101.0 | 404 | 0.9706 |
| No log | 102.0 | 408 | 1.0495 |
| No log | 103.0 | 412 | 1.0350 |
| No log | 104.0 | 416 | 0.9688 |
| No log | 105.0 | 420 | 0.9610 |
| No log | 106.0 | 424 | 0.9537 |
| No log | 107.0 | 428 | 0.9579 |
| No log | 108.0 | 432 | 0.9877 |
| No log | 109.0 | 436 | 1.0223 |
| No log | 110.0 | 440 | 1.0488 |
| No log | 111.0 | 444 | 1.0673 |
| No log | 112.0 | 448 | 0.9968 |
| No log | 113.0 | 452 | 1.0307 |
| No log | 114.0 | 456 | 1.0888 |
| No log | 115.0 | 460 | 1.0773 |
| No log | 116.0 | 464 | 1.0990 |
| No log | 117.0 | 468 | 1.1120 |
| No log | 118.0 | 472 | 1.0821 |
| No log | 119.0 | 476 | 1.0407 |
| No log | 120.0 | 480 | 1.0365 |
| No log | 121.0 | 484 | 1.0269 |
| No log | 122.0 | 488 | 0.9804 |
| No log | 123.0 | 492 | 0.9752 |
| No log | 124.0 | 496 | 0.9785 |
| 0.9513 | 125.0 | 500 | 0.9739 |
| 0.9513 | 126.0 | 504 | 0.9894 |
| 0.9513 | 127.0 | 508 | 1.0625 |
| 0.9513 | 128.0 | 512 | 1.0423 |
| 0.9513 | 129.0 | 516 | 1.0479 |
| 0.9513 | 130.0 | 520 | 1.0725 |
| 0.9513 | 131.0 | 524 | 1.1035 |
| 0.9513 | 132.0 | 528 | 1.0921 |
| 0.9513 | 133.0 | 532 | 0.9806 |
| 0.9513 | 134.0 | 536 | 0.9012 |
| 0.9513 | 135.0 | 540 | 0.9527 |
| 0.9513 | 136.0 | 544 | 1.0029 |
| 0.9513 | 137.0 | 548 | 1.0212 |
| 0.9513 | 138.0 | 552 | 1.0392 |
| 0.9513 | 139.0 | 556 | 0.9753 |
| 0.9513 | 140.0 | 560 | 0.9817 |
| 0.9513 | 141.0 | 564 | 0.9755 |
| 0.9513 | 142.0 | 568 | 0.9933 |
| 0.9513 | 143.0 | 572 | 1.0276 |
| 0.9513 | 144.0 | 576 | 1.0285 |
| 0.9513 | 145.0 | 580 | 1.0276 |
| 0.9513 | 146.0 | 584 | 1.0582 |
| 0.9513 | 147.0 | 588 | 1.0810 |
| 0.9513 | 148.0 | 592 | 1.0618 |
| 0.9513 | 149.0 | 596 | 1.0152 |
| 0.9513 | 150.0 | 600 | 1.0553 |
| 0.9513 | 151.0 | 604 | 1.0921 |
| 0.9513 | 152.0 | 608 | 1.0401 |
| 0.9513 | 153.0 | 612 | 0.9760 |
| 0.9513 | 154.0 | 616 | 0.9576 |
| 0.9513 | 155.0 | 620 | 0.9523 |
| 0.9513 | 156.0 | 624 | 0.9901 |
| 0.9513 | 157.0 | 628 | 0.9793 |
| 0.9513 | 158.0 | 632 | 0.9726 |
| 0.9513 | 159.0 | 636 | 0.9676 |
| 0.9513 | 160.0 | 640 | 1.0070 |
| 0.9513 | 161.0 | 644 | 1.0107 |
| 0.9513 | 162.0 | 648 | 1.0067 |
| 0.9513 | 163.0 | 652 | 1.0042 |
| 0.9513 | 164.0 | 656 | 0.9888 |
| 0.9513 | 165.0 | 660 | 0.9758 |
| 0.9513 | 166.0 | 664 | 0.9983 |
| 0.9513 | 167.0 | 668 | 1.0273 |
| 0.9513 | 168.0 | 672 | 1.0220 |
| 0.9513 | 169.0 | 676 | 1.0063 |
| 0.9513 | 170.0 | 680 | 0.9852 |
| 0.9513 | 171.0 | 684 | 1.0590 |
| 0.9513 | 172.0 | 688 | 1.1016 |
| 0.9513 | 173.0 | 692 | 1.0622 |
| 0.9513 | 174.0 | 696 | 1.0408 |
| 0.9513 | 175.0 | 700 | 1.0156 |
| 0.9513 | 176.0 | 704 | 1.0073 |
| 0.9513 | 177.0 | 708 | 1.0284 |
| 0.9513 | 178.0 | 712 | 1.0398 |
| 0.9513 | 179.0 | 716 | 0.9925 |
| 0.9513 | 180.0 | 720 | 1.0192 |
| 0.9513 | 181.0 | 724 | 1.0434 |
| 0.9513 | 182.0 | 728 | 1.0429 |
| 0.9513 | 183.0 | 732 | 1.0614 |
| 0.9513 | 184.0 | 736 | 1.0663 |
| 0.9513 | 185.0 | 740 | 1.0529 |
| 0.9513 | 186.0 | 744 | 1.0479 |
| 0.9513 | 187.0 | 748 | 1.0352 |
| 0.9513 | 188.0 | 752 | 1.0374 |
| 0.9513 | 189.0 | 756 | 1.0061 |
| 0.9513 | 190.0 | 760 | 0.9905 |
| 0.9513 | 191.0 | 764 | 0.9959 |
| 0.9513 | 192.0 | 768 | 1.0204 |
| 0.9513 | 193.0 | 772 | 1.0509 |
| 0.9513 | 194.0 | 776 | 1.0616 |
| 0.9513 | 195.0 | 780 | 1.0709 |
| 0.9513 | 196.0 | 784 | 1.0794 |
| 0.9513 | 197.0 | 788 | 1.0797 |
| 0.9513 | 198.0 | 792 | 1.0722 |
| 0.9513 | 199.0 | 796 | 1.0697 |
| 0.9513 | 200.0 | 800 | 1.0759 |
| 0.9513 | 201.0 | 804 | 1.0787 |
| 0.9513 | 202.0 | 808 | 1.1036 |
| 0.9513 | 203.0 | 812 | 1.1021 |
| 0.9513 | 204.0 | 816 | 1.1088 |
| 0.9513 | 205.0 | 820 | 1.1201 |
| 0.9513 | 206.0 | 824 | 1.1168 |
| 0.9513 | 207.0 | 828 | 1.1030 |
| 0.9513 | 208.0 | 832 | 1.0986 |
| 0.9513 | 209.0 | 836 | 1.0953 |
| 0.9513 | 210.0 | 840 | 1.0708 |
| 0.9513 | 211.0 | 844 | 1.0704 |
| 0.9513 | 212.0 | 848 | 1.0681 |
| 0.9513 | 213.0 | 852 | 1.0676 |
| 0.9513 | 214.0 | 856 | 1.0789 |
| 0.9513 | 215.0 | 860 | 1.1193 |
| 0.9513 | 216.0 | 864 | 1.1378 |
| 0.9513 | 217.0 | 868 | 1.1566 |
| 0.9513 | 218.0 | 872 | 1.1650 |
| 0.9513 | 219.0 | 876 | 1.1268 |
| 0.9513 | 220.0 | 880 | 1.1152 |
| 0.9513 | 221.0 | 884 | 1.0909 |
| 0.9513 | 222.0 | 888 | 1.0778 |
| 0.9513 | 223.0 | 892 | 1.0819 |
| 0.9513 | 224.0 | 896 | 1.1042 |
| 0.9513 | 225.0 | 900 | 1.1532 |
| 0.9513 | 226.0 | 904 | 1.1695 |
| 0.9513 | 227.0 | 908 | 1.1730 |
| 0.9513 | 228.0 | 912 | 1.1549 |
| 0.9513 | 229.0 | 916 | 1.1318 |
| 0.9513 | 230.0 | 920 | 1.1319 |
| 0.9513 | 231.0 | 924 | 1.1306 |
| 0.9513 | 232.0 | 928 | 1.1583 |
| 0.9513 | 233.0 | 932 | 1.1915 |
| 0.9513 | 234.0 | 936 | 1.2038 |
| 0.9513 | 235.0 | 940 | 1.1877 |
| 0.9513 | 236.0 | 944 | 1.1775 |
| 0.9513 | 237.0 | 948 | 1.1820 |
| 0.9513 | 238.0 | 952 | 1.1885 |
| 0.9513 | 239.0 | 956 | 1.2012 |
| 0.9513 | 240.0 | 960 | 1.2013 |
| 0.9513 | 241.0 | 964 | 1.1876 |
| 0.9513 | 242.0 | 968 | 1.1801 |
| 0.9513 | 243.0 | 972 | 1.1799 |
| 0.9513 | 244.0 | 976 | 1.1711 |
| 0.9513 | 245.0 | 980 | 1.1550 |
| 0.9513 | 246.0 | 984 | 1.1499 |
| 0.9513 | 247.0 | 988 | 1.1303 |
| 0.9513 | 248.0 | 992 | 1.1138 |
| 0.9513 | 249.0 | 996 | 1.1351 |
| 0.4059 | 250.0 | 1000 | 1.1635 |
| 0.4059 | 251.0 | 1004 | 1.1975 |
| 0.4059 | 252.0 | 1008 | 1.2352 |
| 0.4059 | 253.0 | 1012 | 1.2442 |
| 0.4059 | 254.0 | 1016 | 1.2108 |
| 0.4059 | 255.0 | 1020 | 1.1813 |
| 0.4059 | 256.0 | 1024 | 1.1469 |
| 0.4059 | 257.0 | 1028 | 1.0936 |
| 0.4059 | 258.0 | 1032 | 1.0322 |
| 0.4059 | 259.0 | 1036 | 1.0076 |
| 0.4059 | 260.0 | 1040 | 1.0304 |
| 0.4059 | 261.0 | 1044 | 1.0946 |
| 0.4059 | 262.0 | 1048 | 1.1132 |
| 0.4059 | 263.0 | 1052 | 1.1231 |
| 0.4059 | 264.0 | 1056 | 1.1268 |
| 0.4059 | 265.0 | 1060 | 1.1290 |
| 0.4059 | 266.0 | 1064 | 1.1261 |
| 0.4059 | 267.0 | 1068 | 1.1095 |
| 0.4059 | 268.0 | 1072 | 1.0643 |
| 0.4059 | 269.0 | 1076 | 1.0283 |
| 0.4059 | 270.0 | 1080 | 1.0181 |
| 0.4059 | 271.0 | 1084 | 1.0670 |
| 0.4059 | 272.0 | 1088 | 1.1049 |
| 0.4059 | 273.0 | 1092 | 1.1309 |
| 0.4059 | 274.0 | 1096 | 1.1533 |
| 0.4059 | 275.0 | 1100 | 1.1767 |
| 0.4059 | 276.0 | 1104 | 1.1846 |
| 0.4059 | 277.0 | 1108 | 1.1899 |
| 0.4059 | 278.0 | 1112 | 1.1834 |
| 0.4059 | 279.0 | 1116 | 1.2054 |
| 0.4059 | 280.0 | 1120 | 1.1807 |
| 0.4059 | 281.0 | 1124 | 1.1238 |
| 0.4059 | 282.0 | 1128 | 1.0955 |
| 0.4059 | 283.0 | 1132 | 1.0557 |
| 0.4059 | 284.0 | 1136 | 1.0615 |
| 0.4059 | 285.0 | 1140 | 1.0758 |
| 0.4059 | 286.0 | 1144 | 1.1007 |
| 0.4059 | 287.0 | 1148 | 1.1431 |
| 0.4059 | 288.0 | 1152 | 1.1335 |
| 0.4059 | 289.0 | 1156 | 1.0713 |
| 0.4059 | 290.0 | 1160 | 1.0302 |
| 0.4059 | 291.0 | 1164 | 1.0070 |
| 0.4059 | 292.0 | 1168 | 1.0587 |
| 0.4059 | 293.0 | 1172 | 1.1093 |
| 0.4059 | 294.0 | 1176 | 1.1549 |
| 0.4059 | 295.0 | 1180 | 1.1744 |
| 0.4059 | 296.0 | 1184 | 1.1590 |
| 0.4059 | 297.0 | 1188 | 1.0999 |
| 0.4059 | 298.0 | 1192 | 1.0508 |
| 0.4059 | 299.0 | 1196 | 1.0082 |
| 0.4059 | 300.0 | 1200 | 1.0266 |
| 0.4059 | 301.0 | 1204 | 1.0897 |
| 0.4059 | 302.0 | 1208 | 1.2008 |
| 0.4059 | 303.0 | 1212 | 1.2833 |
| 0.4059 | 304.0 | 1216 | 1.2775 |
| 0.4059 | 305.0 | 1220 | 1.2754 |
| 0.4059 | 306.0 | 1224 | 1.2059 |
| 0.4059 | 307.0 | 1228 | 1.1187 |
| 0.4059 | 308.0 | 1232 | 1.1612 |
| 0.4059 | 309.0 | 1236 | 1.1794 |
| 0.4059 | 310.0 | 1240 | 1.1969 |
| 0.4059 | 311.0 | 1244 | 1.1991 |
| 0.4059 | 312.0 | 1248 | 1.1921 |
| 0.4059 | 313.0 | 1252 | 1.2148 |
| 0.4059 | 314.0 | 1256 | 1.2524 |
| 0.4059 | 315.0 | 1260 | 1.2606 |
| 0.4059 | 316.0 | 1264 | 1.2423 |
| 0.4059 | 317.0 | 1268 | 1.1989 |
| 0.4059 | 318.0 | 1272 | 1.1552 |
| 0.4059 | 319.0 | 1276 | 1.1222 |
| 0.4059 | 320.0 | 1280 | 1.1219 |
| 0.4059 | 321.0 | 1284 | 1.1678 |
| 0.4059 | 322.0 | 1288 | 1.1853 |
| 0.4059 | 323.0 | 1292 | 1.1274 |
| 0.4059 | 324.0 | 1296 | 1.0615 |
| 0.4059 | 325.0 | 1300 | 1.1044 |
| 0.4059 | 326.0 | 1304 | 1.1874 |
| 0.4059 | 327.0 | 1308 | 1.1911 |
| 0.4059 | 328.0 | 1312 | 1.1513 |
| 0.4059 | 329.0 | 1316 | 1.0682 |
| 0.4059 | 330.0 | 1320 | 1.0366 |
| 0.4059 | 331.0 | 1324 | 1.0736 |
| 0.4059 | 332.0 | 1328 | 1.1319 |
| 0.4059 | 333.0 | 1332 | 1.1256 |
| 0.4059 | 334.0 | 1336 | 1.0977 |
| 0.4059 | 335.0 | 1340 | 1.0509 |
| 0.4059 | 336.0 | 1344 | 1.0081 |
| 0.4059 | 337.0 | 1348 | 1.0239 |
| 0.4059 | 338.0 | 1352 | 1.0681 |
| 0.4059 | 339.0 | 1356 | 1.1298 |
| 0.4059 | 340.0 | 1360 | 1.1369 |
| 0.4059 | 341.0 | 1364 | 1.0729 |
| 0.4059 | 342.0 | 1368 | 0.9855 |
| 0.4059 | 343.0 | 1372 | 0.9409 |
| 0.4059 | 344.0 | 1376 | 0.9527 |
| 0.4059 | 345.0 | 1380 | 1.0270 |
| 0.4059 | 346.0 | 1384 | 1.0781 |
| 0.4059 | 347.0 | 1388 | 1.1151 |
| 0.4059 | 348.0 | 1392 | 1.1403 |
| 0.4059 | 349.0 | 1396 | 1.1603 |
| 0.4059 | 350.0 | 1400 | 1.1856 |
| 0.4059 | 351.0 | 1404 | 1.1898 |
| 0.4059 | 352.0 | 1408 | 1.1933 |
| 0.4059 | 353.0 | 1412 | 1.2285 |
| 0.4059 | 354.0 | 1416 | 1.2589 |
| 0.4059 | 355.0 | 1420 | 1.2458 |
| 0.4059 | 356.0 | 1424 | 1.2131 |
| 0.4059 | 357.0 | 1428 | 1.2127 |
| 0.4059 | 358.0 | 1432 | 1.2372 |
| 0.4059 | 359.0 | 1436 | 1.2434 |
| 0.4059 | 360.0 | 1440 | 1.2399 |
| 0.4059 | 361.0 | 1444 | 1.2213 |
| 0.4059 | 362.0 | 1448 | 1.1881 |
| 0.4059 | 363.0 | 1452 | 1.1636 |
| 0.4059 | 364.0 | 1456 | 1.1456 |
| 0.4059 | 365.0 | 1460 | 1.1520 |
| 0.4059 | 366.0 | 1464 | 1.1635 |
| 0.4059 | 367.0 | 1468 | 1.1836 |
| 0.4059 | 368.0 | 1472 | 1.1956 |
| 0.4059 | 369.0 | 1476 | 1.2053 |
| 0.4059 | 370.0 | 1480 | 1.2042 |
| 0.4059 | 371.0 | 1484 | 1.1728 |
| 0.4059 | 372.0 | 1488 | 1.1536 |
| 0.4059 | 373.0 | 1492 | 1.1376 |
| 0.4059 | 374.0 | 1496 | 1.1239 |
| 0.4026 | 375.0 | 1500 | 1.1201 |
| 0.4026 | 376.0 | 1504 | 1.1128 |
| 0.4026 | 377.0 | 1508 | 1.1067 |
| 0.4026 | 378.0 | 1512 | 1.1073 |
| 0.4026 | 379.0 | 1516 | 1.1112 |
| 0.4026 | 380.0 | 1520 | 1.1212 |
| 0.4026 | 381.0 | 1524 | 1.1387 |
| 0.4026 | 382.0 | 1528 | 1.1460 |
| 0.4026 | 383.0 | 1532 | 1.1238 |
| 0.4026 | 384.0 | 1536 | 1.1028 |
| 0.4026 | 385.0 | 1540 | 1.1051 |
| 0.4026 | 386.0 | 1544 | 1.1086 |
| 0.4026 | 387.0 | 1548 | 1.0921 |
| 0.4026 | 388.0 | 1552 | 1.0765 |
| 0.4026 | 389.0 | 1556 | 1.0831 |
| 0.4026 | 390.0 | 1560 | 1.0897 |
| 0.4026 | 391.0 | 1564 | 1.0915 |
| 0.4026 | 392.0 | 1568 | 1.0901 |
| 0.4026 | 393.0 | 1572 | 1.0891 |
| 0.4026 | 394.0 | 1576 | 1.0918 |
| 0.4026 | 395.0 | 1580 | 1.0979 |
| 0.4026 | 396.0 | 1584 | 1.0970 |
| 0.4026 | 397.0 | 1588 | 1.0804 |
| 0.4026 | 398.0 | 1592 | 1.0838 |
| 0.4026 | 399.0 | 1596 | 1.0858 |
| 0.4026 | 400.0 | 1600 | 1.0962 |
| 0.4026 | 401.0 | 1604 | 1.1256 |
| 0.4026 | 402.0 | 1608 | 1.1424 |
| 0.4026 | 403.0 | 1612 | 1.1586 |
| 0.4026 | 404.0 | 1616 | 1.1724 |
| 0.4026 | 405.0 | 1620 | 1.1751 |
| 0.4026 | 406.0 | 1624 | 1.1961 |
| 0.4026 | 407.0 | 1628 | 1.2155 |
| 0.4026 | 408.0 | 1632 | 1.2273 |
| 0.4026 | 409.0 | 1636 | 1.2307 |
| 0.4026 | 410.0 | 1640 | 1.2315 |
| 0.4026 | 411.0 | 1644 | 1.2128 |
| 0.4026 | 412.0 | 1648 | 1.1893 |
| 0.4026 | 413.0 | 1652 | 1.1579 |
| 0.4026 | 414.0 | 1656 | 1.1366 |
| 0.4026 | 415.0 | 1660 | 1.1357 |
| 0.4026 | 416.0 | 1664 | 1.1407 |
| 0.4026 | 417.0 | 1668 | 1.1430 |
| 0.4026 | 418.0 | 1672 | 1.1448 |
| 0.4026 | 419.0 | 1676 | 1.1484 |
| 0.4026 | 420.0 | 1680 | 1.1536 |
| 0.4026 | 421.0 | 1684 | 1.1489 |
| 0.4026 | 422.0 | 1688 | 1.1727 |
| 0.4026 | 423.0 | 1692 | 1.1906 |
| 0.4026 | 424.0 | 1696 | 1.1960 |
| 0.4026 | 425.0 | 1700 | 1.1939 |
| 0.4026 | 426.0 | 1704 | 1.1789 |
| 0.4026 | 427.0 | 1708 | 1.1635 |
| 0.4026 | 428.0 | 1712 | 1.1499 |
| 0.4026 | 429.0 | 1716 | 1.1432 |
| 0.4026 | 430.0 | 1720 | 1.1382 |
| 0.4026 | 431.0 | 1724 | 1.1275 |
| 0.4026 | 432.0 | 1728 | 1.1173 |
| 0.4026 | 433.0 | 1732 | 1.1088 |
| 0.4026 | 434.0 | 1736 | 1.0911 |
| 0.4026 | 435.0 | 1740 | 1.0853 |
| 0.4026 | 436.0 | 1744 | 1.0861 |
| 0.4026 | 437.0 | 1748 | 1.1100 |
| 0.4026 | 438.0 | 1752 | 1.1545 |
| 0.4026 | 439.0 | 1756 | 1.1714 |
| 0.4026 | 440.0 | 1760 | 1.1520 |
| 0.4026 | 441.0 | 1764 | 1.1242 |
| 0.4026 | 442.0 | 1768 | 1.1029 |
| 0.4026 | 443.0 | 1772 | 1.0844 |
| 0.4026 | 444.0 | 1776 | 1.0676 |
| 0.4026 | 445.0 | 1780 | 1.0830 |
| 0.4026 | 446.0 | 1784 | 1.0936 |
| 0.4026 | 447.0 | 1788 | 1.0992 |
| 0.4026 | 448.0 | 1792 | 1.1024 |
| 0.4026 | 449.0 | 1796 | 1.1005 |
| 0.4026 | 450.0 | 1800 | 1.0968 |
| 0.4026 | 451.0 | 1804 | 1.0915 |
| 0.4026 | 452.0 | 1808 | 1.0914 |
| 0.4026 | 453.0 | 1812 | 1.0897 |
| 0.4026 | 454.0 | 1816 | 1.0799 |
| 0.4026 | 455.0 | 1820 | 1.1148 |
| 0.4026 | 456.0 | 1824 | 1.1440 |
| 0.4026 | 457.0 | 1828 | 1.1571 |
| 0.4026 | 458.0 | 1832 | 1.1594 |
| 0.4026 | 459.0 | 1836 | 1.1520 |
| 0.4026 | 460.0 | 1840 | 1.1392 |
| 0.4026 | 461.0 | 1844 | 1.1145 |
| 0.4026 | 462.0 | 1848 | 1.1045 |
| 0.4026 | 463.0 | 1852 | 1.0923 |
| 0.4026 | 464.0 | 1856 | 1.0772 |
| 0.4026 | 465.0 | 1860 | 1.0652 |
| 0.4026 | 466.0 | 1864 | 1.0405 |
| 0.4026 | 467.0 | 1868 | 1.0121 |
| 0.4026 | 468.0 | 1872 | 1.0254 |
| 0.4026 | 469.0 | 1876 | 1.1054 |
| 0.4026 | 470.0 | 1880 | 1.1700 |
| 0.4026 | 471.0 | 1884 | 1.1976 |
| 0.4026 | 472.0 | 1888 | 1.1985 |
| 0.4026 | 473.0 | 1892 | 1.2013 |
| 0.4026 | 474.0 | 1896 | 1.1945 |
| 0.4026 | 475.0 | 1900 | 1.1819 |
| 0.4026 | 476.0 | 1904 | 1.1745 |
| 0.4026 | 477.0 | 1908 | 1.1637 |
| 0.4026 | 478.0 | 1912 | 1.1613 |
| 0.4026 | 479.0 | 1916 | 1.2205 |
| 0.4026 | 480.0 | 1920 | 1.3217 |
| 0.4026 | 481.0 | 1924 | 1.3495 |
| 0.4026 | 482.0 | 1928 | 1.3611 |
| 0.4026 | 483.0 | 1932 | 1.3540 |
| 0.4026 | 484.0 | 1936 | 1.3446 |
| 0.4026 | 485.0 | 1940 | 1.3276 |
| 0.4026 | 486.0 | 1944 | 1.2940 |
| 0.4026 | 487.0 | 1948 | 1.2593 |
| 0.4026 | 488.0 | 1952 | 1.2319 |
| 0.4026 | 489.0 | 1956 | 1.2247 |
| 0.4026 | 490.0 | 1960 | 1.2264 |
| 0.4026 | 491.0 | 1964 | 1.2378 |
| 0.4026 | 492.0 | 1968 | 1.2434 |
| 0.4026 | 493.0 | 1972 | 1.2530 |
| 0.4026 | 494.0 | 1976 | 1.2621 |
| 0.4026 | 495.0 | 1980 | 1.2628 |
| 0.4026 | 496.0 | 1984 | 1.2380 |
| 0.4026 | 497.0 | 1988 | 1.2284 |
| 0.4026 | 498.0 | 1992 | 1.2583 |
| 0.4026 | 499.0 | 1996 | 1.2241 |
| 0.4132 | 500.0 | 2000 | 1.2637 |
| 0.4132 | 501.0 | 2004 | 1.2356 |
| 0.4132 | 502.0 | 2008 | 1.1919 |
| 0.4132 | 503.0 | 2012 | 1.1615 |
| 0.4132 | 504.0 | 2016 | 1.1739 |
| 0.4132 | 505.0 | 2020 | 1.1578 |
| 0.4132 | 506.0 | 2024 | 1.1376 |
| 0.4132 | 507.0 | 2028 | 1.1027 |
| 0.4132 | 508.0 | 2032 | 1.0491 |
| 0.4132 | 509.0 | 2036 | 1.0300 |
| 0.4132 | 510.0 | 2040 | 1.0555 |
| 0.4132 | 511.0 | 2044 | 1.0936 |
| 0.4132 | 512.0 | 2048 | 1.1107 |
| 0.4132 | 513.0 | 2052 | 1.1290 |
| 0.4132 | 514.0 | 2056 | 1.1403 |
| 0.4132 | 515.0 | 2060 | 1.1134 |
| 0.4132 | 516.0 | 2064 | 1.0623 |
| 0.4132 | 517.0 | 2068 | 1.1057 |
| 0.4132 | 518.0 | 2072 | 1.0797 |
| 0.4132 | 519.0 | 2076 | 1.1629 |
| 0.4132 | 520.0 | 2080 | 1.2167 |
| 0.4132 | 521.0 | 2084 | 1.2047 |
| 0.4132 | 522.0 | 2088 | 1.1083 |
| 0.4132 | 523.0 | 2092 | 1.0418 |
| 0.4132 | 524.0 | 2096 | 1.0102 |
| 0.4132 | 525.0 | 2100 | 1.0244 |
| 0.4132 | 526.0 | 2104 | 1.1072 |
| 0.4132 | 527.0 | 2108 | 1.1927 |
| 0.4132 | 528.0 | 2112 | 1.2431 |
| 0.4132 | 529.0 | 2116 | 1.2620 |
| 0.4132 | 530.0 | 2120 | 1.2626 |
| 0.4132 | 531.0 | 2124 | 1.2374 |
| 0.4132 | 532.0 | 2128 | 1.2128 |
| 0.4132 | 533.0 | 2132 | 1.1929 |
| 0.4132 | 534.0 | 2136 | 1.1825 |
| 0.4132 | 535.0 | 2140 | 1.1820 |
| 0.4132 | 536.0 | 2144 | 1.1747 |
| 0.4132 | 537.0 | 2148 | 1.1500 |
| 0.4132 | 538.0 | 2152 | 1.1300 |
| 0.4132 | 539.0 | 2156 | 1.1154 |
| 0.4132 | 540.0 | 2160 | 1.1131 |
| 0.4132 | 541.0 | 2164 | 1.2039 |
| 0.4132 | 542.0 | 2168 | 1.2969 |
| 0.4132 | 543.0 | 2172 | 1.3467 |
| 0.4132 | 544.0 | 2176 | 1.3269 |
| 0.4132 | 545.0 | 2180 | 1.2708 |
| 0.4132 | 546.0 | 2184 | 1.2328 |
| 0.4132 | 547.0 | 2188 | 1.2018 |
| 0.4132 | 548.0 | 2192 | 1.2414 |
| 0.4132 | 549.0 | 2196 | 1.3077 |
| 0.4132 | 550.0 | 2200 | 1.3456 |
| 0.4132 | 551.0 | 2204 | 1.3697 |
| 0.4132 | 552.0 | 2208 | 1.3549 |
| 0.4132 | 553.0 | 2212 | 1.3114 |
| 0.4132 | 554.0 | 2216 | 1.2546 |
| 0.4132 | 555.0 | 2220 | 1.1885 |
| 0.4132 | 556.0 | 2224 | 1.1551 |
| 0.4132 | 557.0 | 2228 | 1.1560 |
| 0.4132 | 558.0 | 2232 | 1.1636 |
| 0.4132 | 559.0 | 2236 | 1.1683 |
| 0.4132 | 560.0 | 2240 | 1.1802 |
| 0.4132 | 561.0 | 2244 | 1.1915 |
| 0.4132 | 562.0 | 2248 | 1.2013 |
| 0.4132 | 563.0 | 2252 | 1.2959 |
| 0.4132 | 564.0 | 2256 | 1.3462 |
| 0.4132 | 565.0 | 2260 | 1.3304 |
| 0.4132 | 566.0 | 2264 | 1.2797 |
| 0.4132 | 567.0 | 2268 | 1.2271 |
| 0.4132 | 568.0 | 2272 | 1.1545 |
| 0.4132 | 569.0 | 2276 | 1.0932 |
| 0.4132 | 570.0 | 2280 | 1.0846 |
| 0.4132 | 571.0 | 2284 | 1.1062 |
| 0.4132 | 572.0 | 2288 | 1.1248 |
| 0.4132 | 573.0 | 2292 | 1.1334 |
| 0.4132 | 574.0 | 2296 | 1.1361 |
| 0.4132 | 575.0 | 2300 | 1.1488 |
| 0.4132 | 576.0 | 2304 | 1.1842 |
| 0.4132 | 577.0 | 2308 | 1.2073 |
| 0.4132 | 578.0 | 2312 | 1.2114 |
| 0.4132 | 579.0 | 2316 | 1.2072 |
| 0.4132 | 580.0 | 2320 | 1.2062 |
| 0.4132 | 581.0 | 2324 | 1.2102 |
| 0.4132 | 582.0 | 2328 | 1.1919 |
| 0.4132 | 583.0 | 2332 | 1.1725 |
| 0.4132 | 584.0 | 2336 | 1.1534 |
| 0.4132 | 585.0 | 2340 | 1.1383 |
| 0.4132 | 586.0 | 2344 | 1.1390 |
| 0.4132 | 587.0 | 2348 | 1.1535 |
| 0.4132 | 588.0 | 2352 | 1.1533 |
| 0.4132 | 589.0 | 2356 | 1.1464 |
| 0.4132 | 590.0 | 2360 | 1.1425 |
| 0.4132 | 591.0 | 2364 | 1.1457 |
| 0.4132 | 592.0 | 2368 | 1.1446 |
| 0.4132 | 593.0 | 2372 | 1.1400 |
| 0.4132 | 594.0 | 2376 | 1.1323 |
| 0.4132 | 595.0 | 2380 | 1.1214 |
| 0.4132 | 596.0 | 2384 | 1.1196 |
| 0.4132 | 597.0 | 2388 | 1.1202 |
| 0.4132 | 598.0 | 2392 | 1.1111 |
| 0.4132 | 599.0 | 2396 | 1.1033 |
| 0.4132 | 600.0 | 2400 | 1.0880 |
| 0.4132 | 601.0 | 2404 | 1.0803 |
| 0.4132 | 602.0 | 2408 | 1.1013 |
| 0.4132 | 603.0 | 2412 | 1.1340 |
| 0.4132 | 604.0 | 2416 | 1.1478 |
| 0.4132 | 605.0 | 2420 | 1.1489 |
| 0.4132 | 606.0 | 2424 | 1.1421 |
| 0.4132 | 607.0 | 2428 | 1.1339 |
| 0.4132 | 608.0 | 2432 | 1.1218 |
| 0.4132 | 609.0 | 2436 | 1.1091 |
| 0.4132 | 610.0 | 2440 | 1.1061 |
| 0.4132 | 611.0 | 2444 | 1.0998 |
| 0.4132 | 612.0 | 2448 | 1.1126 |
| 0.4132 | 613.0 | 2452 | 1.1213 |
| 0.4132 | 614.0 | 2456 | 1.1272 |
| 0.4132 | 615.0 | 2460 | 1.1455 |
| 0.4132 | 616.0 | 2464 | 1.1578 |
| 0.4132 | 617.0 | 2468 | 1.1805 |
| 0.4132 | 618.0 | 2472 | 1.2011 |
| 0.4132 | 619.0 | 2476 | 1.2163 |
| 0.4132 | 620.0 | 2480 | 1.2338 |
| 0.4132 | 621.0 | 2484 | 1.2324 |
| 0.4132 | 622.0 | 2488 | 1.2222 |
| 0.4132 | 623.0 | 2492 | 1.1981 |
| 0.4132 | 624.0 | 2496 | 1.1771 |
| 0.4061 | 625.0 | 2500 | 1.1522 |
| 0.4061 | 626.0 | 2504 | 1.1489 |
| 0.4061 | 627.0 | 2508 | 1.1523 |
| 0.4061 | 628.0 | 2512 | 1.1616 |
| 0.4061 | 629.0 | 2516 | 1.1826 |
| 0.4061 | 630.0 | 2520 | 1.2340 |
| 0.4061 | 631.0 | 2524 | 1.2748 |
| 0.4061 | 632.0 | 2528 | 1.2921 |
| 0.4061 | 633.0 | 2532 | 1.2943 |
| 0.4061 | 634.0 | 2536 | 1.2903 |
| 0.4061 | 635.0 | 2540 | 1.2727 |
| 0.4061 | 636.0 | 2544 | 1.2437 |
| 0.4061 | 637.0 | 2548 | 1.2215 |
| 0.4061 | 638.0 | 2552 | 1.2745 |
| 0.4061 | 639.0 | 2556 | 1.3062 |
| 0.4061 | 640.0 | 2560 | 1.3212 |
| 0.4061 | 641.0 | 2564 | 1.3231 |
| 0.4061 | 642.0 | 2568 | 1.3165 |
| 0.4061 | 643.0 | 2572 | 1.2992 |
| 0.4061 | 644.0 | 2576 | 1.2758 |
| 0.4061 | 645.0 | 2580 | 1.2506 |
| 0.4061 | 646.0 | 2584 | 1.2508 |
| 0.4061 | 647.0 | 2588 | 1.2453 |
| 0.4061 | 648.0 | 2592 | 1.2296 |
| 0.4061 | 649.0 | 2596 | 1.2141 |
| 0.4061 | 650.0 | 2600 | 1.2024 |
| 0.4061 | 651.0 | 2604 | 1.1930 |
| 0.4061 | 652.0 | 2608 | 1.2219 |
| 0.4061 | 653.0 | 2612 | 1.2306 |
| 0.4061 | 654.0 | 2616 | 1.2269 |
| 0.4061 | 655.0 | 2620 | 1.2037 |
| 0.4061 | 656.0 | 2624 | 1.1795 |
| 0.4061 | 657.0 | 2628 | 1.1435 |
| 0.4061 | 658.0 | 2632 | 1.1146 |
| 0.4061 | 659.0 | 2636 | 1.0946 |
| 0.4061 | 660.0 | 2640 | 1.0931 |
| 0.4061 | 661.0 | 2644 | 1.1798 |
| 0.4061 | 662.0 | 2648 | 1.1944 |
| 0.4061 | 663.0 | 2652 | 1.1942 |
| 0.4061 | 664.0 | 2656 | 1.2285 |
| 0.4061 | 665.0 | 2660 | 1.3122 |
| 0.4061 | 666.0 | 2664 | 1.3508 |
| 0.4061 | 667.0 | 2668 | 1.3625 |
| 0.4061 | 668.0 | 2672 | 1.3328 |
| 0.4061 | 669.0 | 2676 | 1.2849 |
| 0.4061 | 670.0 | 2680 | 1.2284 |
| 0.4061 | 671.0 | 2684 | 1.1931 |
| 0.4061 | 672.0 | 2688 | 1.1913 |
| 0.4061 | 673.0 | 2692 | 1.2059 |
| 0.4061 | 674.0 | 2696 | 1.2328 |
| 0.4061 | 675.0 | 2700 | 1.2668 |
| 0.4061 | 676.0 | 2704 | 1.2732 |
| 0.4061 | 677.0 | 2708 | 1.2647 |
| 0.4061 | 678.0 | 2712 | 1.2574 |
| 0.4061 | 679.0 | 2716 | 1.2319 |
| 0.4061 | 680.0 | 2720 | 1.2031 |
| 0.4061 | 681.0 | 2724 | 1.2425 |
| 0.4061 | 682.0 | 2728 | 1.2883 |
| 0.4061 | 683.0 | 2732 | 1.3076 |
| 0.4061 | 684.0 | 2736 | 1.3102 |
| 0.4061 | 685.0 | 2740 | 1.3046 |
| 0.4061 | 686.0 | 2744 | 1.2982 |
| 0.4061 | 687.0 | 2748 | 1.2846 |
| 0.4061 | 688.0 | 2752 | 1.2751 |
| 0.4061 | 689.0 | 2756 | 1.2671 |
| 0.4061 | 690.0 | 2760 | 1.2551 |
| 0.4061 | 691.0 | 2764 | 1.2444 |
| 0.4061 | 692.0 | 2768 | 1.2144 |
| 0.4061 | 693.0 | 2772 | 1.1945 |
| 0.4061 | 694.0 | 2776 | 1.1846 |
| 0.4061 | 695.0 | 2780 | 1.1939 |
| 0.4061 | 696.0 | 2784 | 1.1949 |
| 0.4061 | 697.0 | 2788 | 1.2070 |
| 0.4061 | 698.0 | 2792 | 1.2194 |
| 0.4061 | 699.0 | 2796 | 1.2330 |
| 0.4061 | 700.0 | 2800 | 1.2461 |
| 0.4061 | 701.0 | 2804 | 1.2499 |
| 0.4061 | 702.0 | 2808 | 1.2419 |
| 0.4061 | 703.0 | 2812 | 1.2619 |
| 0.4061 | 704.0 | 2816 | 1.2295 |
| 0.4061 | 705.0 | 2820 | 1.2170 |
| 0.4061 | 706.0 | 2824 | 1.2960 |
| 0.4061 | 707.0 | 2828 | 1.3246 |
| 0.4061 | 708.0 | 2832 | 1.3304 |
| 0.4061 | 709.0 | 2836 | 1.3395 |
| 0.4061 | 710.0 | 2840 | 1.3449 |
| 0.4061 | 711.0 | 2844 | 1.3399 |
| 0.4061 | 712.0 | 2848 | 1.3301 |
| 0.4061 | 713.0 | 2852 | 1.3168 |
| 0.4061 | 714.0 | 2856 | 1.3108 |
| 0.4061 | 715.0 | 2860 | 1.3146 |
| 0.4061 | 716.0 | 2864 | 1.3229 |
| 0.4061 | 717.0 | 2868 | 1.3482 |
| 0.4061 | 718.0 | 2872 | 1.3742 |
| 0.4061 | 719.0 | 2876 | 1.3829 |
| 0.4061 | 720.0 | 2880 | 1.3847 |
| 0.4061 | 721.0 | 2884 | 1.3867 |
| 0.4061 | 722.0 | 2888 | 1.3857 |
| 0.4061 | 723.0 | 2892 | 1.3810 |
| 0.4061 | 724.0 | 2896 | 1.3730 |
| 0.4061 | 725.0 | 2900 | 1.3631 |
| 0.4061 | 726.0 | 2904 | 1.3527 |
| 0.4061 | 727.0 | 2908 | 1.3418 |
| 0.4061 | 728.0 | 2912 | 1.3186 |
| 0.4061 | 729.0 | 2916 | 1.3084 |
| 0.4061 | 730.0 | 2920 | 1.3000 |
| 0.4061 | 731.0 | 2924 | 1.2873 |
| 0.4061 | 732.0 | 2928 | 1.2775 |
| 0.4061 | 733.0 | 2932 | 1.2699 |
| 0.4061 | 734.0 | 2936 | 1.2703 |
| 0.4061 | 735.0 | 2940 | 1.2799 |
| 0.4061 | 736.0 | 2944 | 1.2905 |
| 0.4061 | 737.0 | 2948 | 1.3006 |
| 0.4061 | 738.0 | 2952 | 1.3002 |
| 0.4061 | 739.0 | 2956 | 1.2978 |
| 0.4061 | 740.0 | 2960 | 1.2848 |
| 0.4061 | 741.0 | 2964 | 1.2631 |
| 0.4061 | 742.0 | 2968 | 1.2506 |
| 0.4061 | 743.0 | 2972 | 1.2557 |
| 0.4061 | 744.0 | 2976 | 1.2643 |
| 0.4061 | 745.0 | 2980 | 1.2719 |
| 0.4061 | 746.0 | 2984 | 1.2731 |
| 0.4061 | 747.0 | 2988 | 1.3278 |
| 0.4061 | 748.0 | 2992 | 1.3545 |
| 0.4061 | 749.0 | 2996 | 1.3598 |
| 0.4016 | 750.0 | 3000 | 1.3552 |
| 0.4016 | 751.0 | 3004 | 1.3679 |
| 0.4016 | 752.0 | 3008 | 1.3758 |
| 0.4016 | 753.0 | 3012 | 1.3602 |
| 0.4016 | 754.0 | 3016 | 1.3482 |
| 0.4016 | 755.0 | 3020 | 1.3237 |
| 0.4016 | 756.0 | 3024 | 1.3004 |
| 0.4016 | 757.0 | 3028 | 1.2859 |
| 0.4016 | 758.0 | 3032 | 1.2923 |
| 0.4016 | 759.0 | 3036 | 1.3164 |
| 0.4016 | 760.0 | 3040 | 1.3224 |
| 0.4016 | 761.0 | 3044 | 1.3039 |
| 0.4016 | 762.0 | 3048 | 1.2589 |
| 0.4016 | 763.0 | 3052 | 1.1517 |
| 0.4016 | 764.0 | 3056 | 1.0966 |
| 0.4016 | 765.0 | 3060 | 1.1509 |
| 0.4016 | 766.0 | 3064 | 1.2219 |
| 0.4016 | 767.0 | 3068 | 1.2252 |
| 0.4016 | 768.0 | 3072 | 1.2120 |
| 0.4016 | 769.0 | 3076 | 1.1997 |
| 0.4016 | 770.0 | 3080 | 1.1788 |
| 0.4016 | 771.0 | 3084 | 1.1522 |
| 0.4016 | 772.0 | 3088 | 1.1402 |
| 0.4016 | 773.0 | 3092 | 1.1456 |
| 0.4016 | 774.0 | 3096 | 1.1622 |
| 0.4016 | 775.0 | 3100 | 1.1761 |
| 0.4016 | 776.0 | 3104 | 1.1781 |
| 0.4016 | 777.0 | 3108 | 1.1733 |
| 0.4016 | 778.0 | 3112 | 1.1608 |
| 0.4016 | 779.0 | 3116 | 1.1462 |
| 0.4016 | 780.0 | 3120 | 1.1350 |
| 0.4016 | 781.0 | 3124 | 1.1381 |
| 0.4016 | 782.0 | 3128 | 1.1442 |
| 0.4016 | 783.0 | 3132 | 1.1534 |
| 0.4016 | 784.0 | 3136 | 1.1221 |
| 0.4016 | 785.0 | 3140 | 1.1822 |
| 0.4016 | 786.0 | 3144 | 1.2308 |
| 0.4016 | 787.0 | 3148 | 1.2633 |
| 0.4016 | 788.0 | 3152 | 1.2659 |
| 0.4016 | 789.0 | 3156 | 1.2471 |
| 0.4016 | 790.0 | 3160 | 1.1818 |
| 0.4016 | 791.0 | 3164 | 1.1384 |
| 0.4016 | 792.0 | 3168 | 1.1248 |
| 0.4016 | 793.0 | 3172 | 1.1100 |
| 0.4016 | 794.0 | 3176 | 1.1004 |
| 0.4016 | 795.0 | 3180 | 1.1016 |
| 0.4016 | 796.0 | 3184 | 1.1277 |
| 0.4016 | 797.0 | 3188 | 1.1689 |
| 0.4016 | 798.0 | 3192 | 1.1946 |
| 0.4016 | 799.0 | 3196 | 1.2127 |
| 0.4016 | 800.0 | 3200 | 1.2245 |
| 0.4016 | 801.0 | 3204 | 1.2228 |
| 0.4016 | 802.0 | 3208 | 1.2164 |
| 0.4016 | 803.0 | 3212 | 1.2172 |
| 0.4016 | 804.0 | 3216 | 1.2180 |
| 0.4016 | 805.0 | 3220 | 1.2165 |
| 0.4016 | 806.0 | 3224 | 1.2123 |
| 0.4016 | 807.0 | 3228 | 1.2098 |
| 0.4016 | 808.0 | 3232 | 1.2090 |
| 0.4016 | 809.0 | 3236 | 1.2058 |
| 0.4016 | 810.0 | 3240 | 1.2009 |
| 0.4016 | 811.0 | 3244 | 1.2007 |
| 0.4016 | 812.0 | 3248 | 1.2076 |
| 0.4016 | 813.0 | 3252 | 1.2389 |
| 0.4016 | 814.0 | 3256 | 1.2485 |
| 0.4016 | 815.0 | 3260 | 1.2495 |
| 0.4016 | 816.0 | 3264 | 1.2480 |
| 0.4016 | 817.0 | 3268 | 1.2444 |
| 0.4016 | 818.0 | 3272 | 1.2378 |
| 0.4016 | 819.0 | 3276 | 1.2285 |
| 0.4016 | 820.0 | 3280 | 1.2135 |
| 0.4016 | 821.0 | 3284 | 1.1896 |
| 0.4016 | 822.0 | 3288 | 1.1637 |
| 0.4016 | 823.0 | 3292 | 1.1443 |
| 0.4016 | 824.0 | 3296 | 1.1267 |
| 0.4016 | 825.0 | 3300 | 1.1119 |
| 0.4016 | 826.0 | 3304 | 1.1052 |
| 0.4016 | 827.0 | 3308 | 1.1026 |
| 0.4016 | 828.0 | 3312 | 1.1021 |
| 0.4016 | 829.0 | 3316 | 1.1042 |
| 0.4016 | 830.0 | 3320 | 1.1077 |
| 0.4016 | 831.0 | 3324 | 1.1123 |
| 0.4016 | 832.0 | 3328 | 1.1195 |
| 0.4016 | 833.0 | 3332 | 1.1204 |
| 0.4016 | 834.0 | 3336 | 1.1215 |
| 0.4016 | 835.0 | 3340 | 1.1350 |
| 0.4016 | 836.0 | 3344 | 1.1476 |
| 0.4016 | 837.0 | 3348 | 1.1558 |
| 0.4016 | 838.0 | 3352 | 1.1687 |
| 0.4016 | 839.0 | 3356 | 1.1715 |
| 0.4016 | 840.0 | 3360 | 1.1797 |
| 0.4016 | 841.0 | 3364 | 1.2209 |
| 0.4016 | 842.0 | 3368 | 1.2569 |
| 0.4016 | 843.0 | 3372 | 1.2802 |
| 0.4016 | 844.0 | 3376 | 1.3029 |
| 0.4016 | 845.0 | 3380 | 1.2870 |
| 0.4016 | 846.0 | 3384 | 1.1964 |
| 0.4016 | 847.0 | 3388 | 1.1334 |
| 0.4016 | 848.0 | 3392 | 1.1218 |
| 0.4016 | 849.0 | 3396 | 1.1278 |
| 0.4016 | 850.0 | 3400 | 1.1315 |
| 0.4016 | 851.0 | 3404 | 1.1784 |
| 0.4016 | 852.0 | 3408 | 1.2120 |
| 0.4016 | 853.0 | 3412 | 1.2280 |
| 0.4016 | 854.0 | 3416 | 1.2320 |
| 0.4016 | 855.0 | 3420 | 1.1869 |
| 0.4016 | 856.0 | 3424 | 1.1227 |
| 0.4016 | 857.0 | 3428 | 1.0755 |
| 0.4016 | 858.0 | 3432 | 1.0452 |
| 0.4016 | 859.0 | 3436 | 1.0299 |
| 0.4016 | 860.0 | 3440 | 1.0241 |
| 0.4016 | 861.0 | 3444 | 1.0236 |
| 0.4016 | 862.0 | 3448 | 1.0262 |
| 0.4016 | 863.0 | 3452 | 1.0287 |
| 0.4016 | 864.0 | 3456 | 1.0308 |
| 0.4016 | 865.0 | 3460 | 1.0330 |
| 0.4016 | 866.0 | 3464 | 1.0352 |
| 0.4016 | 867.0 | 3468 | 1.0370 |
| 0.4016 | 868.0 | 3472 | 1.0386 |
| 0.4016 | 869.0 | 3476 | 1.0386 |
| 0.4016 | 870.0 | 3480 | 1.0296 |
| 0.4016 | 871.0 | 3484 | 1.0207 |
| 0.4016 | 872.0 | 3488 | 1.0171 |
| 0.4016 | 873.0 | 3492 | 1.0158 |
| 0.4016 | 874.0 | 3496 | 1.0149 |
| 0.4014 | 875.0 | 3500 | 1.0150 |
| 0.4014 | 876.0 | 3504 | 1.0162 |
| 0.4014 | 877.0 | 3508 | 1.0176 |
| 0.4014 | 878.0 | 3512 | 1.0295 |
| 0.4014 | 879.0 | 3516 | 1.0410 |
| 0.4014 | 880.0 | 3520 | 1.0489 |
| 0.4014 | 881.0 | 3524 | 1.0540 |
| 0.4014 | 882.0 | 3528 | 1.0578 |
| 0.4014 | 883.0 | 3532 | 1.0607 |
| 0.4014 | 884.0 | 3536 | 1.0630 |
| 0.4014 | 885.0 | 3540 | 1.0675 |
| 0.4014 | 886.0 | 3544 | 1.0700 |
| 0.4014 | 887.0 | 3548 | 1.0726 |
| 0.4014 | 888.0 | 3552 | 1.0851 |
| 0.4014 | 889.0 | 3556 | 1.0946 |
| 0.4014 | 890.0 | 3560 | 1.1003 |
| 0.4014 | 891.0 | 3564 | 1.0967 |
| 0.4014 | 892.0 | 3568 | 1.0899 |
| 0.4014 | 893.0 | 3572 | 1.0831 |
| 0.4014 | 894.0 | 3576 | 1.0767 |
| 0.4014 | 895.0 | 3580 | 1.0696 |
| 0.4014 | 896.0 | 3584 | 1.0664 |
| 0.4014 | 897.0 | 3588 | 1.0691 |
| 0.4014 | 898.0 | 3592 | 1.0772 |
| 0.4014 | 899.0 | 3596 | 1.0807 |
| 0.4014 | 900.0 | 3600 | 1.0831 |
| 0.4014 | 901.0 | 3604 | 1.0822 |
| 0.4014 | 902.0 | 3608 | 1.0792 |
| 0.4014 | 903.0 | 3612 | 1.0659 |
| 0.4014 | 904.0 | 3616 | 1.0539 |
| 0.4014 | 905.0 | 3620 | 1.0426 |
| 0.4014 | 906.0 | 3624 | 1.0392 |
| 0.4014 | 907.0 | 3628 | 1.0473 |
| 0.4014 | 908.0 | 3632 | 1.0532 |
| 0.4014 | 909.0 | 3636 | 1.0545 |
| 0.4014 | 910.0 | 3640 | 1.0536 |
| 0.4014 | 911.0 | 3644 | 1.0540 |
| 0.4014 | 912.0 | 3648 | 1.0546 |
| 0.4014 | 913.0 | 3652 | 1.0587 |
| 0.4014 | 914.0 | 3656 | 1.0701 |
| 0.4014 | 915.0 | 3660 | 1.0807 |
| 0.4014 | 916.0 | 3664 | 1.0884 |
| 0.4014 | 917.0 | 3668 | 1.0956 |
| 0.4014 | 918.0 | 3672 | 1.1019 |
| 0.4014 | 919.0 | 3676 | 1.1053 |
| 0.4014 | 920.0 | 3680 | 1.1067 |
| 0.4014 | 921.0 | 3684 | 1.1044 |
| 0.4014 | 922.0 | 3688 | 1.1030 |
| 0.4014 | 923.0 | 3692 | 1.1033 |
| 0.4014 | 924.0 | 3696 | 1.1041 |
| 0.4014 | 925.0 | 3700 | 1.1068 |
| 0.4014 | 926.0 | 3704 | 1.1116 |
| 0.4014 | 927.0 | 3708 | 1.1157 |
| 0.4014 | 928.0 | 3712 | 1.1195 |
| 0.4014 | 929.0 | 3716 | 1.1245 |
| 0.4014 | 930.0 | 3720 | 1.1271 |
| 0.4014 | 931.0 | 3724 | 1.1289 |
| 0.4014 | 932.0 | 3728 | 1.1316 |
| 0.4014 | 933.0 | 3732 | 1.1340 |
| 0.4014 | 934.0 | 3736 | 1.1367 |
| 0.4014 | 935.0 | 3740 | 1.1425 |
| 0.4014 | 936.0 | 3744 | 1.1488 |
| 0.4014 | 937.0 | 3748 | 1.1515 |
| 0.4014 | 938.0 | 3752 | 1.1503 |
| 0.4014 | 939.0 | 3756 | 1.1478 |
| 0.4014 | 940.0 | 3760 | 1.1487 |
| 0.4014 | 941.0 | 3764 | 1.1488 |
| 0.4014 | 942.0 | 3768 | 1.1488 |
| 0.4014 | 943.0 | 3772 | 1.1493 |
| 0.4014 | 944.0 | 3776 | 1.1358 |
| 0.4014 | 945.0 | 3780 | 1.0983 |
| 0.4014 | 946.0 | 3784 | 1.0740 |
| 0.4014 | 947.0 | 3788 | 1.0641 |
| 0.4014 | 948.0 | 3792 | 1.0617 |
| 0.4014 | 949.0 | 3796 | 1.0639 |
| 0.4014 | 950.0 | 3800 | 1.0667 |
| 0.4014 | 951.0 | 3804 | 1.0778 |
| 0.4014 | 952.0 | 3808 | 1.0883 |
| 0.4014 | 953.0 | 3812 | 1.1023 |
| 0.4014 | 954.0 | 3816 | 1.1139 |
| 0.4014 | 955.0 | 3820 | 1.1205 |
| 0.4014 | 956.0 | 3824 | 1.1238 |
| 0.4014 | 957.0 | 3828 | 1.1264 |
| 0.4014 | 958.0 | 3832 | 1.1328 |
| 0.4014 | 959.0 | 3836 | 1.1374 |
| 0.4014 | 960.0 | 3840 | 1.1400 |
| 0.4014 | 961.0 | 3844 | 1.1397 |
| 0.4014 | 962.0 | 3848 | 1.1388 |
| 0.4014 | 963.0 | 3852 | 1.1385 |
| 0.4014 | 964.0 | 3856 | 1.1390 |
| 0.4014 | 965.0 | 3860 | 1.1397 |
| 0.4014 | 966.0 | 3864 | 1.1413 |
| 0.4014 | 967.0 | 3868 | 1.1471 |
| 0.4014 | 968.0 | 3872 | 1.1519 |
| 0.4014 | 969.0 | 3876 | 1.1541 |
| 0.4014 | 970.0 | 3880 | 1.1526 |
| 0.4014 | 971.0 | 3884 | 1.1506 |
| 0.4014 | 972.0 | 3888 | 1.1494 |
| 0.4014 | 973.0 | 3892 | 1.1484 |
| 0.4014 | 974.0 | 3896 | 1.1436 |
| 0.4014 | 975.0 | 3900 | 1.1406 |
| 0.4014 | 976.0 | 3904 | 1.1369 |
| 0.4014 | 977.0 | 3908 | 1.1329 |
| 0.4014 | 978.0 | 3912 | 1.1309 |
| 0.4014 | 979.0 | 3916 | 1.1291 |
| 0.4014 | 980.0 | 3920 | 1.1285 |
| 0.4014 | 981.0 | 3924 | 1.1298 |
| 0.4014 | 982.0 | 3928 | 1.1328 |
| 0.4014 | 983.0 | 3932 | 1.1266 |
| 0.4014 | 984.0 | 3936 | 1.1233 |
| 0.4014 | 985.0 | 3940 | 1.1279 |
| 0.4014 | 986.0 | 3944 | 1.1331 |
| 0.4014 | 987.0 | 3948 | 1.1367 |
| 0.4014 | 988.0 | 3952 | 1.1336 |
| 0.4014 | 989.0 | 3956 | 1.1305 |
| 0.4014 | 990.0 | 3960 | 1.1284 |
| 0.4014 | 991.0 | 3964 | 1.1270 |
| 0.4014 | 992.0 | 3968 | 1.1256 |
| 0.4014 | 993.0 | 3972 | 1.1231 |
| 0.4014 | 994.0 | 3976 | 1.1220 |
| 0.4014 | 995.0 | 3980 | 1.1229 |
| 0.4014 | 996.0 | 3984 | 1.1074 |
| 0.4014 | 997.0 | 3988 | 1.1741 |
| 0.4014 | 998.0 | 3992 | 1.2255 |
| 0.4014 | 999.0 | 3996 | 1.2600 |
| 0.4025 | 1000.0 | 4000 | 1.2943 |
| 0.4025 | 1001.0 | 4004 | 1.3115 |
| 0.4025 | 1002.0 | 4008 | 1.3149 |
| 0.4025 | 1003.0 | 4012 | 1.2950 |
| 0.4025 | 1004.0 | 4016 | 1.2578 |
| 0.4025 | 1005.0 | 4020 | 1.2230 |
| 0.4025 | 1006.0 | 4024 | 1.1886 |
| 0.4025 | 1007.0 | 4028 | 1.1686 |
| 0.4025 | 1008.0 | 4032 | 1.1784 |
| 0.4025 | 1009.0 | 4036 | 1.1909 |
| 0.4025 | 1010.0 | 4040 | 1.1984 |
| 0.4025 | 1011.0 | 4044 | 1.2013 |
| 0.4025 | 1012.0 | 4048 | 1.2029 |
| 0.4025 | 1013.0 | 4052 | 1.2016 |
| 0.4025 | 1014.0 | 4056 | 1.1755 |
| 0.4025 | 1015.0 | 4060 | 1.0993 |
| 0.4025 | 1016.0 | 4064 | 1.0576 |
| 0.4025 | 1017.0 | 4068 | 1.0620 |
| 0.4025 | 1018.0 | 4072 | 1.0791 |
| 0.4025 | 1019.0 | 4076 | 1.0938 |
| 0.4025 | 1020.0 | 4080 | 1.1000 |
| 0.4025 | 1021.0 | 4084 | 1.1049 |
| 0.4025 | 1022.0 | 4088 | 1.1093 |
| 0.4025 | 1023.0 | 4092 | 1.1115 |
| 0.4025 | 1024.0 | 4096 | 1.1253 |
| 0.4025 | 1025.0 | 4100 | 1.1377 |
| 0.4025 | 1026.0 | 4104 | 1.1378 |
| 0.4025 | 1027.0 | 4108 | 1.1303 |
| 0.4025 | 1028.0 | 4112 | 1.1133 |
| 0.4025 | 1029.0 | 4116 | 1.0965 |
| 0.4025 | 1030.0 | 4120 | 1.0833 |
| 0.4025 | 1031.0 | 4124 | 1.0750 |
| 0.4025 | 1032.0 | 4128 | 1.0715 |
| 0.4025 | 1033.0 | 4132 | 1.0742 |
| 0.4025 | 1034.0 | 4136 | 1.0822 |
| 0.4025 | 1035.0 | 4140 | 1.0887 |
| 0.4025 | 1036.0 | 4144 | 1.0935 |
| 0.4025 | 1037.0 | 4148 | 1.0960 |
| 0.4025 | 1038.0 | 4152 | 1.0993 |
| 0.4025 | 1039.0 | 4156 | 1.1041 |
| 0.4025 | 1040.0 | 4160 | 1.1087 |
| 0.4025 | 1041.0 | 4164 | 1.1171 |
| 0.4025 | 1042.0 | 4168 | 1.1270 |
| 0.4025 | 1043.0 | 4172 | 1.1340 |
| 0.4025 | 1044.0 | 4176 | 1.1404 |
| 0.4025 | 1045.0 | 4180 | 1.1455 |
| 0.4025 | 1046.0 | 4184 | 1.1466 |
| 0.4025 | 1047.0 | 4188 | 1.1479 |
| 0.4025 | 1048.0 | 4192 | 1.1482 |
| 0.4025 | 1049.0 | 4196 | 1.1489 |
| 0.4025 | 1050.0 | 4200 | 1.1486 |
| 0.4025 | 1051.0 | 4204 | 1.1477 |
| 0.4025 | 1052.0 | 4208 | 1.1471 |
| 0.4025 | 1053.0 | 4212 | 1.1478 |
| 0.4025 | 1054.0 | 4216 | 1.1483 |
| 0.4025 | 1055.0 | 4220 | 1.1424 |
| 0.4025 | 1056.0 | 4224 | 1.1357 |
| 0.4025 | 1057.0 | 4228 | 1.1308 |
| 0.4025 | 1058.0 | 4232 | 1.1275 |
| 0.4025 | 1059.0 | 4236 | 1.1346 |
| 0.4025 | 1060.0 | 4240 | 1.1628 |
| 0.4025 | 1061.0 | 4244 | 1.1450 |
| 0.4025 | 1062.0 | 4248 | 1.1331 |
| 0.4025 | 1063.0 | 4252 | 1.1271 |
| 0.4025 | 1064.0 | 4256 | 1.1263 |
| 0.4025 | 1065.0 | 4260 | 1.1266 |
| 0.4025 | 1066.0 | 4264 | 1.1259 |
| 0.4025 | 1067.0 | 4268 | 1.1255 |
| 0.4025 | 1068.0 | 4272 | 1.1248 |
| 0.4025 | 1069.0 | 4276 | 1.1228 |
| 0.4025 | 1070.0 | 4280 | 1.1207 |
| 0.4025 | 1071.0 | 4284 | 1.1215 |
| 0.4025 | 1072.0 | 4288 | 1.1191 |
| 0.4025 | 1073.0 | 4292 | 1.1177 |
| 0.4025 | 1074.0 | 4296 | 1.1179 |
| 0.4025 | 1075.0 | 4300 | 1.1181 |
| 0.4025 | 1076.0 | 4304 | 1.1181 |
| 0.4025 | 1077.0 | 4308 | 1.1172 |
| 0.4025 | 1078.0 | 4312 | 1.1154 |
| 0.4025 | 1079.0 | 4316 | 1.1134 |
| 0.4025 | 1080.0 | 4320 | 1.1121 |
| 0.4025 | 1081.0 | 4324 | 1.1111 |
| 0.4025 | 1082.0 | 4328 | 1.1102 |
| 0.4025 | 1083.0 | 4332 | 1.1102 |
| 0.4025 | 1084.0 | 4336 | 1.1109 |
| 0.4025 | 1085.0 | 4340 | 1.1119 |
| 0.4025 | 1086.0 | 4344 | 1.1126 |
| 0.4025 | 1087.0 | 4348 | 1.1129 |
| 0.4025 | 1088.0 | 4352 | 1.1131 |
| 0.4025 | 1089.0 | 4356 | 1.1131 |
| 0.4025 | 1090.0 | 4360 | 1.1129 |
| 0.4025 | 1091.0 | 4364 | 1.1130 |
| 0.4025 | 1092.0 | 4368 | 1.0967 |
| 0.4025 | 1093.0 | 4372 | 1.0824 |
| 0.4025 | 1094.0 | 4376 | 1.0799 |
| 0.4025 | 1095.0 | 4380 | 1.0830 |
| 0.4025 | 1096.0 | 4384 | 1.0894 |
| 0.4025 | 1097.0 | 4388 | 1.0983 |
| 0.4025 | 1098.0 | 4392 | 1.1050 |
| 0.4025 | 1099.0 | 4396 | 1.1161 |
| 0.4025 | 1100.0 | 4400 | 1.1332 |
| 0.4025 | 1101.0 | 4404 | 1.1434 |
| 0.4025 | 1102.0 | 4408 | 1.1527 |
| 0.4025 | 1103.0 | 4412 | 1.1581 |
| 0.4025 | 1104.0 | 4416 | 1.1606 |
| 0.4025 | 1105.0 | 4420 | 1.1648 |
| 0.4025 | 1106.0 | 4424 | 1.1656 |
| 0.4025 | 1107.0 | 4428 | 1.1644 |
| 0.4025 | 1108.0 | 4432 | 1.1646 |
| 0.4025 | 1109.0 | 4436 | 1.1654 |
| 0.4025 | 1110.0 | 4440 | 1.1610 |
| 0.4025 | 1111.0 | 4444 | 1.1545 |
| 0.4025 | 1112.0 | 4448 | 1.1492 |
| 0.4025 | 1113.0 | 4452 | 1.1442 |
| 0.4025 | 1114.0 | 4456 | 1.1438 |
| 0.4025 | 1115.0 | 4460 | 1.1538 |
| 0.4025 | 1116.0 | 4464 | 1.1623 |
| 0.4025 | 1117.0 | 4468 | 1.1693 |
| 0.4025 | 1118.0 | 4472 | 1.1743 |
| 0.4025 | 1119.0 | 4476 | 1.1749 |
| 0.4025 | 1120.0 | 4480 | 1.1382 |
| 0.4025 | 1121.0 | 4484 | 1.1209 |
| 0.4025 | 1122.0 | 4488 | 1.1680 |
| 0.4025 | 1123.0 | 4492 | 1.2175 |
| 0.4025 | 1124.0 | 4496 | 1.2453 |
| 0.4015 | 1125.0 | 4500 | 1.2393 |
| 0.4015 | 1126.0 | 4504 | 1.2185 |
| 0.4015 | 1127.0 | 4508 | 1.1926 |
| 0.4015 | 1128.0 | 4512 | 1.1660 |
| 0.4015 | 1129.0 | 4516 | 1.1457 |
| 0.4015 | 1130.0 | 4520 | 1.1286 |
| 0.4015 | 1131.0 | 4524 | 1.1176 |
| 0.4015 | 1132.0 | 4528 | 1.1100 |
| 0.4015 | 1133.0 | 4532 | 1.1023 |
| 0.4015 | 1134.0 | 4536 | 1.0997 |
| 0.4015 | 1135.0 | 4540 | 1.0973 |
| 0.4015 | 1136.0 | 4544 | 1.0962 |
| 0.4015 | 1137.0 | 4548 | 1.0984 |
| 0.4015 | 1138.0 | 4552 | 1.1027 |
| 0.4015 | 1139.0 | 4556 | 1.1081 |
| 0.4015 | 1140.0 | 4560 | 1.1123 |
| 0.4015 | 1141.0 | 4564 | 1.1148 |
| 0.4015 | 1142.0 | 4568 | 1.1128 |
| 0.4015 | 1143.0 | 4572 | 1.1084 |
| 0.4015 | 1144.0 | 4576 | 1.1048 |
| 0.4015 | 1145.0 | 4580 | 1.0997 |
| 0.4015 | 1146.0 | 4584 | 1.1051 |
| 0.4015 | 1147.0 | 4588 | 1.1135 |
| 0.4015 | 1148.0 | 4592 | 1.1169 |
| 0.4015 | 1149.0 | 4596 | 1.1196 |
| 0.4015 | 1150.0 | 4600 | 1.1214 |
| 0.4015 | 1151.0 | 4604 | 1.1132 |
| 0.4015 | 1152.0 | 4608 | 1.1172 |
| 0.4015 | 1153.0 | 4612 | 1.1228 |
| 0.4015 | 1154.0 | 4616 | 1.1291 |
| 0.4015 | 1155.0 | 4620 | 1.1335 |
| 0.4015 | 1156.0 | 4624 | 1.1364 |
| 0.4015 | 1157.0 | 4628 | 1.1378 |
| 0.4015 | 1158.0 | 4632 | 1.1378 |
| 0.4015 | 1159.0 | 4636 | 1.1380 |
| 0.4015 | 1160.0 | 4640 | 1.1300 |
| 0.4015 | 1161.0 | 4644 | 1.1238 |
| 0.4015 | 1162.0 | 4648 | 1.1207 |
| 0.4015 | 1163.0 | 4652 | 1.1203 |
| 0.4015 | 1164.0 | 4656 | 1.1198 |
| 0.4015 | 1165.0 | 4660 | 1.1092 |
| 0.4015 | 1166.0 | 4664 | 1.1052 |
| 0.4015 | 1167.0 | 4668 | 1.1309 |
| 0.4015 | 1168.0 | 4672 | 1.1826 |
| 0.4015 | 1169.0 | 4676 | 1.1280 |
| 0.4015 | 1170.0 | 4680 | 1.1234 |
| 0.4015 | 1171.0 | 4684 | 1.1804 |
| 0.4015 | 1172.0 | 4688 | 1.2199 |
| 0.4015 | 1173.0 | 4692 | 1.2259 |
| 0.4015 | 1174.0 | 4696 | 1.2267 |
| 0.4015 | 1175.0 | 4700 | 1.2261 |
| 0.4015 | 1176.0 | 4704 | 1.2248 |
| 0.4015 | 1177.0 | 4708 | 1.2086 |
| 0.4015 | 1178.0 | 4712 | 1.1969 |
| 0.4015 | 1179.0 | 4716 | 1.1937 |
| 0.4015 | 1180.0 | 4720 | 1.1915 |
| 0.4015 | 1181.0 | 4724 | 1.1917 |
| 0.4015 | 1182.0 | 4728 | 1.1925 |
| 0.4015 | 1183.0 | 4732 | 1.2010 |
| 0.4015 | 1184.0 | 4736 | 1.2017 |
| 0.4015 | 1185.0 | 4740 | 1.1974 |
| 0.4015 | 1186.0 | 4744 | 1.1934 |
| 0.4015 | 1187.0 | 4748 | 1.1915 |
| 0.4015 | 1188.0 | 4752 | 1.1902 |
| 0.4015 | 1189.0 | 4756 | 1.1896 |
| 0.4015 | 1190.0 | 4760 | 1.1888 |
| 0.4015 | 1191.0 | 4764 | 1.1806 |
| 0.4015 | 1192.0 | 4768 | 1.1684 |
| 0.4015 | 1193.0 | 4772 | 1.1584 |
| 0.4015 | 1194.0 | 4776 | 1.1505 |
| 0.4015 | 1195.0 | 4780 | 1.1480 |
| 0.4015 | 1196.0 | 4784 | 1.1483 |
| 0.4015 | 1197.0 | 4788 | 1.1506 |
| 0.4015 | 1198.0 | 4792 | 1.1532 |
| 0.4015 | 1199.0 | 4796 | 1.1542 |
| 0.4015 | 1200.0 | 4800 | 1.1539 |
| 0.4015 | 1201.0 | 4804 | 1.1521 |
| 0.4015 | 1202.0 | 4808 | 1.1509 |
| 0.4015 | 1203.0 | 4812 | 1.1495 |
| 0.4015 | 1204.0 | 4816 | 1.1499 |
| 0.4015 | 1205.0 | 4820 | 1.1519 |
| 0.4015 | 1206.0 | 4824 | 1.1538 |
| 0.4015 | 1207.0 | 4828 | 1.1569 |
| 0.4015 | 1208.0 | 4832 | 1.1558 |
| 0.4015 | 1209.0 | 4836 | 1.1562 |
| 0.4015 | 1210.0 | 4840 | 1.1556 |
| 0.4015 | 1211.0 | 4844 | 1.1548 |
| 0.4015 | 1212.0 | 4848 | 1.1574 |
| 0.4015 | 1213.0 | 4852 | 1.1591 |
| 0.4015 | 1214.0 | 4856 | 1.1590 |
| 0.4015 | 1215.0 | 4860 | 1.1575 |
| 0.4015 | 1216.0 | 4864 | 1.1385 |
| 0.4015 | 1217.0 | 4868 | 1.1270 |
| 0.4015 | 1218.0 | 4872 | 1.1209 |
| 0.4015 | 1219.0 | 4876 | 1.1201 |
| 0.4015 | 1220.0 | 4880 | 1.1297 |
| 0.4015 | 1221.0 | 4884 | 1.1371 |
| 0.4015 | 1222.0 | 4888 | 1.1426 |
| 0.4015 | 1223.0 | 4892 | 1.1456 |
| 0.4015 | 1224.0 | 4896 | 1.1458 |
| 0.4015 | 1225.0 | 4900 | 1.1463 |
| 0.4015 | 1226.0 | 4904 | 1.1458 |
| 0.4015 | 1227.0 | 4908 | 1.1445 |
| 0.4015 | 1228.0 | 4912 | 1.1438 |
| 0.4015 | 1229.0 | 4916 | 1.1434 |
| 0.4015 | 1230.0 | 4920 | 1.1434 |
| 0.4015 | 1231.0 | 4924 | 1.1420 |
| 0.4015 | 1232.0 | 4928 | 1.1431 |
| 0.4015 | 1233.0 | 4932 | 1.1469 |
| 0.4015 | 1234.0 | 4936 | 1.1481 |
| 0.4015 | 1235.0 | 4940 | 1.1464 |
| 0.4015 | 1236.0 | 4944 | 1.1433 |
| 0.4015 | 1237.0 | 4948 | 1.1392 |
| 0.4015 | 1238.0 | 4952 | 1.1353 |
| 0.4015 | 1239.0 | 4956 | 1.1318 |
| 0.4015 | 1240.0 | 4960 | 1.1300 |
| 0.4015 | 1241.0 | 4964 | 1.1287 |
| 0.4015 | 1242.0 | 4968 | 1.1837 |
| 0.4015 | 1243.0 | 4972 | 1.2690 |
| 0.4015 | 1244.0 | 4976 | 1.3062 |
| 0.4015 | 1245.0 | 4980 | 1.3034 |
| 0.4015 | 1246.0 | 4984 | 1.2571 |
| 0.4015 | 1247.0 | 4988 | 1.2178 |
| 0.4015 | 1248.0 | 4992 | 1.1835 |
| 0.4015 | 1249.0 | 4996 | 1.1600 |
| 0.4008 | 1250.0 | 5000 | 1.1461 |
| 0.4008 | 1251.0 | 5004 | 1.1375 |
| 0.4008 | 1252.0 | 5008 | 1.1322 |
| 0.4008 | 1253.0 | 5012 | 1.1299 |
| 0.4008 | 1254.0 | 5016 | 1.1389 |
| 0.4008 | 1255.0 | 5020 | 1.1511 |
| 0.4008 | 1256.0 | 5024 | 1.1566 |
| 0.4008 | 1257.0 | 5028 | 1.1594 |
| 0.4008 | 1258.0 | 5032 | 1.1602 |
| 0.4008 | 1259.0 | 5036 | 1.1609 |
| 0.4008 | 1260.0 | 5040 | 1.1610 |
| 0.4008 | 1261.0 | 5044 | 1.1608 |
| 0.4008 | 1262.0 | 5048 | 1.1597 |
| 0.4008 | 1263.0 | 5052 | 1.1590 |
| 0.4008 | 1264.0 | 5056 | 1.1597 |
| 0.4008 | 1265.0 | 5060 | 1.1603 |
| 0.4008 | 1266.0 | 5064 | 1.1604 |
| 0.4008 | 1267.0 | 5068 | 1.1602 |
| 0.4008 | 1268.0 | 5072 | 1.1598 |
| 0.4008 | 1269.0 | 5076 | 1.1579 |
| 0.4008 | 1270.0 | 5080 | 1.1565 |
| 0.4008 | 1271.0 | 5084 | 1.1558 |
| 0.4008 | 1272.0 | 5088 | 1.1548 |
| 0.4008 | 1273.0 | 5092 | 1.1559 |
| 0.4008 | 1274.0 | 5096 | 1.1588 |
| 0.4008 | 1275.0 | 5100 | 1.1622 |
| 0.4008 | 1276.0 | 5104 | 1.1649 |
| 0.4008 | 1277.0 | 5108 | 1.1670 |
| 0.4008 | 1278.0 | 5112 | 1.1698 |
| 0.4008 | 1279.0 | 5116 | 1.1725 |
| 0.4008 | 1280.0 | 5120 | 1.1868 |
| 0.4008 | 1281.0 | 5124 | 1.2203 |
| 0.4008 | 1282.0 | 5128 | 1.2401 |
| 0.4008 | 1283.0 | 5132 | 1.2493 |
| 0.4008 | 1284.0 | 5136 | 1.2511 |
| 0.4008 | 1285.0 | 5140 | 1.2476 |
| 0.4008 | 1286.0 | 5144 | 1.2440 |
| 0.4008 | 1287.0 | 5148 | 1.2408 |
| 0.4008 | 1288.0 | 5152 | 1.2389 |
| 0.4008 | 1289.0 | 5156 | 1.2452 |
| 0.4008 | 1290.0 | 5160 | 1.2512 |
| 0.4008 | 1291.0 | 5164 | 1.2502 |
| 0.4008 | 1292.0 | 5168 | 1.2396 |
| 0.4008 | 1293.0 | 5172 | 1.2263 |
| 0.4008 | 1294.0 | 5176 | 1.2149 |
| 0.4008 | 1295.0 | 5180 | 1.2061 |
| 0.4008 | 1296.0 | 5184 | 1.1999 |
| 0.4008 | 1297.0 | 5188 | 1.1953 |
| 0.4008 | 1298.0 | 5192 | 1.1914 |
| 0.4008 | 1299.0 | 5196 | 1.1855 |
| 0.4008 | 1300.0 | 5200 | 1.1795 |
| 0.4008 | 1301.0 | 5204 | 1.1830 |
| 0.4008 | 1302.0 | 5208 | 1.1923 |
| 0.4008 | 1303.0 | 5212 | 1.2020 |
| 0.4008 | 1304.0 | 5216 | 1.2060 |
| 0.4008 | 1305.0 | 5220 | 1.2277 |
| 0.4008 | 1306.0 | 5224 | 1.2438 |
| 0.4008 | 1307.0 | 5228 | 1.2499 |
| 0.4008 | 1308.0 | 5232 | 1.2500 |
| 0.4008 | 1309.0 | 5236 | 1.2497 |
| 0.4008 | 1310.0 | 5240 | 1.2522 |
| 0.4008 | 1311.0 | 5244 | 1.2541 |
| 0.4008 | 1312.0 | 5248 | 1.2537 |
| 0.4008 | 1313.0 | 5252 | 1.2522 |
| 0.4008 | 1314.0 | 5256 | 1.2485 |
| 0.4008 | 1315.0 | 5260 | 1.2415 |
| 0.4008 | 1316.0 | 5264 | 1.2388 |
| 0.4008 | 1317.0 | 5268 | 1.2365 |
| 0.4008 | 1318.0 | 5272 | 1.2348 |
| 0.4008 | 1319.0 | 5276 | 1.2331 |
| 0.4008 | 1320.0 | 5280 | 1.2321 |
| 0.4008 | 1321.0 | 5284 | 1.2298 |
| 0.4008 | 1322.0 | 5288 | 1.2291 |
| 0.4008 | 1323.0 | 5292 | 1.2288 |
| 0.4008 | 1324.0 | 5296 | 1.2259 |
| 0.4008 | 1325.0 | 5300 | 1.2227 |
| 0.4008 | 1326.0 | 5304 | 1.2183 |
| 0.4008 | 1327.0 | 5308 | 1.2139 |
| 0.4008 | 1328.0 | 5312 | 1.2110 |
| 0.4008 | 1329.0 | 5316 | 1.2143 |
| 0.4008 | 1330.0 | 5320 | 1.2166 |
| 0.4008 | 1331.0 | 5324 | 1.2170 |
| 0.4008 | 1332.0 | 5328 | 1.2170 |
| 0.4008 | 1333.0 | 5332 | 1.2179 |
| 0.4008 | 1334.0 | 5336 | 1.2179 |
| 0.4008 | 1335.0 | 5340 | 1.2162 |
| 0.4008 | 1336.0 | 5344 | 1.2154 |
| 0.4008 | 1337.0 | 5348 | 1.2187 |
| 0.4008 | 1338.0 | 5352 | 1.2213 |
| 0.4008 | 1339.0 | 5356 | 1.2225 |
| 0.4008 | 1340.0 | 5360 | 1.2231 |
| 0.4008 | 1341.0 | 5364 | 1.2304 |
| 0.4008 | 1342.0 | 5368 | 1.2316 |
| 0.4008 | 1343.0 | 5372 | 1.2299 |
| 0.4008 | 1344.0 | 5376 | 1.2254 |
| 0.4008 | 1345.0 | 5380 | 1.2162 |
| 0.4008 | 1346.0 | 5384 | 1.2209 |
| 0.4008 | 1347.0 | 5388 | 1.2183 |
| 0.4008 | 1348.0 | 5392 | 1.2093 |
| 0.4008 | 1349.0 | 5396 | 1.1974 |
| 0.4008 | 1350.0 | 5400 | 1.1941 |
| 0.4008 | 1351.0 | 5404 | 1.1966 |
| 0.4008 | 1352.0 | 5408 | 1.2073 |
| 0.4008 | 1353.0 | 5412 | 1.2096 |
| 0.4008 | 1354.0 | 5416 | 1.2137 |
| 0.4008 | 1355.0 | 5420 | 1.2198 |
| 0.4008 | 1356.0 | 5424 | 1.2200 |
| 0.4008 | 1357.0 | 5428 | 1.2225 |
| 0.4008 | 1358.0 | 5432 | 1.2242 |
| 0.4008 | 1359.0 | 5436 | 1.2235 |
| 0.4008 | 1360.0 | 5440 | 1.2221 |
| 0.4008 | 1361.0 | 5444 | 1.2212 |
| 0.4008 | 1362.0 | 5448 | 1.2151 |
| 0.4008 | 1363.0 | 5452 | 1.2104 |
| 0.4008 | 1364.0 | 5456 | 1.2369 |
| 0.4008 | 1365.0 | 5460 | 1.2581 |
| 0.4008 | 1366.0 | 5464 | 1.2742 |
| 0.4008 | 1367.0 | 5468 | 1.2864 |
| 0.4008 | 1368.0 | 5472 | 1.2911 |
| 0.4008 | 1369.0 | 5476 | 1.2839 |
| 0.4008 | 1370.0 | 5480 | 1.2776 |
| 0.4008 | 1371.0 | 5484 | 1.2769 |
| 0.4008 | 1372.0 | 5488 | 1.2795 |
| 0.4008 | 1373.0 | 5492 | 1.2875 |
| 0.4008 | 1374.0 | 5496 | 1.2917 |
| 0.4015 | 1375.0 | 5500 | 1.2912 |
| 0.4015 | 1376.0 | 5504 | 1.2882 |
| 0.4015 | 1377.0 | 5508 | 1.2835 |
| 0.4015 | 1378.0 | 5512 | 1.2786 |
| 0.4015 | 1379.0 | 5516 | 1.2770 |
| 0.4015 | 1380.0 | 5520 | 1.2903 |
| 0.4015 | 1381.0 | 5524 | 1.2977 |
| 0.4015 | 1382.0 | 5528 | 1.3009 |
| 0.4015 | 1383.0 | 5532 | 1.3018 |
| 0.4015 | 1384.0 | 5536 | 1.3013 |
| 0.4015 | 1385.0 | 5540 | 1.2998 |
| 0.4015 | 1386.0 | 5544 | 1.2951 |
| 0.4015 | 1387.0 | 5548 | 1.2918 |
| 0.4015 | 1388.0 | 5552 | 1.2899 |
| 0.4015 | 1389.0 | 5556 | 1.2895 |
| 0.4015 | 1390.0 | 5560 | 1.2881 |
| 0.4015 | 1391.0 | 5564 | 1.2862 |
| 0.4015 | 1392.0 | 5568 | 1.2841 |
| 0.4015 | 1393.0 | 5572 | 1.2819 |
| 0.4015 | 1394.0 | 5576 | 1.2798 |
| 0.4015 | 1395.0 | 5580 | 1.2772 |
| 0.4015 | 1396.0 | 5584 | 1.2705 |
| 0.4015 | 1397.0 | 5588 | 1.2660 |
| 0.4015 | 1398.0 | 5592 | 1.2614 |
| 0.4015 | 1399.0 | 5596 | 1.2573 |
| 0.4015 | 1400.0 | 5600 | 1.2546 |
| 0.4015 | 1401.0 | 5604 | 1.2531 |
| 0.4015 | 1402.0 | 5608 | 1.2521 |
| 0.4015 | 1403.0 | 5612 | 1.2500 |
| 0.4015 | 1404.0 | 5616 | 1.2508 |
| 0.4015 | 1405.0 | 5620 | 1.2504 |
| 0.4015 | 1406.0 | 5624 | 1.2504 |
| 0.4015 | 1407.0 | 5628 | 1.2498 |
| 0.4015 | 1408.0 | 5632 | 1.2506 |
| 0.4015 | 1409.0 | 5636 | 1.2501 |
| 0.4015 | 1410.0 | 5640 | 1.2494 |
| 0.4015 | 1411.0 | 5644 | 1.2472 |
| 0.4015 | 1412.0 | 5648 | 1.2456 |
| 0.4015 | 1413.0 | 5652 | 1.2446 |
| 0.4015 | 1414.0 | 5656 | 1.2436 |
| 0.4015 | 1415.0 | 5660 | 1.2433 |
| 0.4015 | 1416.0 | 5664 | 1.2426 |
| 0.4015 | 1417.0 | 5668 | 1.2430 |
| 0.4015 | 1418.0 | 5672 | 1.2423 |
| 0.4015 | 1419.0 | 5676 | 1.2421 |
| 0.4015 | 1420.0 | 5680 | 1.2426 |
| 0.4015 | 1421.0 | 5684 | 1.2434 |
| 0.4015 | 1422.0 | 5688 | 1.2442 |
| 0.4015 | 1423.0 | 5692 | 1.2458 |
| 0.4015 | 1424.0 | 5696 | 1.2465 |
| 0.4015 | 1425.0 | 5700 | 1.2464 |
| 0.4015 | 1426.0 | 5704 | 1.2464 |
| 0.4015 | 1427.0 | 5708 | 1.2456 |
| 0.4015 | 1428.0 | 5712 | 1.2452 |
| 0.4015 | 1429.0 | 5716 | 1.2433 |
| 0.4015 | 1430.0 | 5720 | 1.2398 |
| 0.4015 | 1431.0 | 5724 | 1.2345 |
| 0.4015 | 1432.0 | 5728 | 1.2310 |
| 0.4015 | 1433.0 | 5732 | 1.2283 |
| 0.4015 | 1434.0 | 5736 | 1.2254 |
| 0.4015 | 1435.0 | 5740 | 1.2245 |
| 0.4015 | 1436.0 | 5744 | 1.2243 |
| 0.4015 | 1437.0 | 5748 | 1.2281 |
| 0.4015 | 1438.0 | 5752 | 1.2306 |
| 0.4015 | 1439.0 | 5756 | 1.2311 |
| 0.4015 | 1440.0 | 5760 | 1.2309 |
| 0.4015 | 1441.0 | 5764 | 1.2304 |
| 0.4015 | 1442.0 | 5768 | 1.2311 |
| 0.4015 | 1443.0 | 5772 | 1.2319 |
| 0.4015 | 1444.0 | 5776 | 1.2317 |
| 0.4015 | 1445.0 | 5780 | 1.2316 |
| 0.4015 | 1446.0 | 5784 | 1.2310 |
| 0.4015 | 1447.0 | 5788 | 1.2289 |
| 0.4015 | 1448.0 | 5792 | 1.2265 |
| 0.4015 | 1449.0 | 5796 | 1.2239 |
| 0.4015 | 1450.0 | 5800 | 1.2194 |
| 0.4015 | 1451.0 | 5804 | 1.2156 |
| 0.4015 | 1452.0 | 5808 | 1.2129 |
| 0.4015 | 1453.0 | 5812 | 1.2106 |
| 0.4015 | 1454.0 | 5816 | 1.2093 |
| 0.4015 | 1455.0 | 5820 | 1.2084 |
| 0.4015 | 1456.0 | 5824 | 1.2084 |
| 0.4015 | 1457.0 | 5828 | 1.2071 |
| 0.4015 | 1458.0 | 5832 | 1.2051 |
| 0.4015 | 1459.0 | 5836 | 1.2022 |
| 0.4015 | 1460.0 | 5840 | 1.2007 |
| 0.4015 | 1461.0 | 5844 | 1.1995 |
| 0.4015 | 1462.0 | 5848 | 1.2008 |
| 0.4015 | 1463.0 | 5852 | 1.2019 |
| 0.4015 | 1464.0 | 5856 | 1.2022 |
| 0.4015 | 1465.0 | 5860 | 1.2017 |
| 0.4015 | 1466.0 | 5864 | 1.2005 |
| 0.4015 | 1467.0 | 5868 | 1.1990 |
| 0.4015 | 1468.0 | 5872 | 1.1974 |
| 0.4015 | 1469.0 | 5876 | 1.1966 |
| 0.4015 | 1470.0 | 5880 | 1.1973 |
| 0.4015 | 1471.0 | 5884 | 1.1988 |
| 0.4015 | 1472.0 | 5888 | 1.1995 |
| 0.4015 | 1473.0 | 5892 | 1.1972 |
| 0.4015 | 1474.0 | 5896 | 1.1946 |
| 0.4015 | 1475.0 | 5900 | 1.1937 |
| 0.4015 | 1476.0 | 5904 | 1.1935 |
| 0.4015 | 1477.0 | 5908 | 1.1945 |
| 0.4015 | 1478.0 | 5912 | 1.1963 |
| 0.4015 | 1479.0 | 5916 | 1.1971 |
| 0.4015 | 1480.0 | 5920 | 1.1973 |
| 0.4015 | 1481.0 | 5924 | 1.1968 |
| 0.4015 | 1482.0 | 5928 | 1.1970 |
| 0.4015 | 1483.0 | 5932 | 1.1981 |
| 0.4015 | 1484.0 | 5936 | 1.2011 |
| 0.4015 | 1485.0 | 5940 | 1.2031 |
| 0.4015 | 1486.0 | 5944 | 1.2038 |
| 0.4015 | 1487.0 | 5948 | 1.2041 |
| 0.4015 | 1488.0 | 5952 | 1.2046 |
| 0.4015 | 1489.0 | 5956 | 1.2054 |
| 0.4015 | 1490.0 | 5960 | 1.2053 |
| 0.4015 | 1491.0 | 5964 | 1.2047 |
| 0.4015 | 1492.0 | 5968 | 1.2043 |
| 0.4015 | 1493.0 | 5972 | 1.2037 |
| 0.4015 | 1494.0 | 5976 | 1.2039 |
| 0.4015 | 1495.0 | 5980 | 1.2042 |
| 0.4015 | 1496.0 | 5984 | 1.2033 |
| 0.4015 | 1497.0 | 5988 | 1.2028 |
| 0.4015 | 1498.0 | 5992 | 1.2025 |
| 0.4015 | 1499.0 | 5996 | 1.2027 |
| 0.4005 | 1500.0 | 6000 | 1.2024 |
| 0.4005 | 1501.0 | 6004 | 1.2017 |
| 0.4005 | 1502.0 | 6008 | 1.2016 |
| 0.4005 | 1503.0 | 6012 | 1.2028 |
| 0.4005 | 1504.0 | 6016 | 1.2034 |
| 0.4005 | 1505.0 | 6020 | 1.2017 |
| 0.4005 | 1506.0 | 6024 | 1.2009 |
| 0.4005 | 1507.0 | 6028 | 1.2023 |
| 0.4005 | 1508.0 | 6032 | 1.2039 |
| 0.4005 | 1509.0 | 6036 | 1.2052 |
| 0.4005 | 1510.0 | 6040 | 1.2066 |
| 0.4005 | 1511.0 | 6044 | 1.2072 |
| 0.4005 | 1512.0 | 6048 | 1.2076 |
| 0.4005 | 1513.0 | 6052 | 1.2075 |
| 0.4005 | 1514.0 | 6056 | 1.2071 |
| 0.4005 | 1515.0 | 6060 | 1.2070 |
| 0.4005 | 1516.0 | 6064 | 1.2072 |
| 0.4005 | 1517.0 | 6068 | 1.2076 |
| 0.4005 | 1518.0 | 6072 | 1.2063 |
| 0.4005 | 1519.0 | 6076 | 1.2048 |
| 0.4005 | 1520.0 | 6080 | 1.2035 |
| 0.4005 | 1521.0 | 6084 | 1.2034 |
| 0.4005 | 1522.0 | 6088 | 1.2024 |
| 0.4005 | 1523.0 | 6092 | 1.2014 |
| 0.4005 | 1524.0 | 6096 | 1.2002 |
| 0.4005 | 1525.0 | 6100 | 1.2007 |
| 0.4005 | 1526.0 | 6104 | 1.2013 |
| 0.4005 | 1527.0 | 6108 | 1.2028 |
| 0.4005 | 1528.0 | 6112 | 1.2047 |
| 0.4005 | 1529.0 | 6116 | 1.2052 |
| 0.4005 | 1530.0 | 6120 | 1.2029 |
| 0.4005 | 1531.0 | 6124 | 1.1988 |
| 0.4005 | 1532.0 | 6128 | 1.1963 |
| 0.4005 | 1533.0 | 6132 | 1.1948 |
| 0.4005 | 1534.0 | 6136 | 1.2572 |
| 0.4005 | 1535.0 | 6140 | 1.3083 |
| 0.4005 | 1536.0 | 6144 | 1.3353 |
| 0.4005 | 1537.0 | 6148 | 1.3495 |
| 0.4005 | 1538.0 | 6152 | 1.3553 |
| 0.4005 | 1539.0 | 6156 | 1.3575 |
| 0.4005 | 1540.0 | 6160 | 1.3562 |
| 0.4005 | 1541.0 | 6164 | 1.3531 |
| 0.4005 | 1542.0 | 6168 | 1.3512 |
| 0.4005 | 1543.0 | 6172 | 1.3500 |
| 0.4005 | 1544.0 | 6176 | 1.3490 |
| 0.4005 | 1545.0 | 6180 | 1.3482 |
| 0.4005 | 1546.0 | 6184 | 1.3469 |
| 0.4005 | 1547.0 | 6188 | 1.3453 |
| 0.4005 | 1548.0 | 6192 | 1.3416 |
| 0.4005 | 1549.0 | 6196 | 1.3357 |
| 0.4005 | 1550.0 | 6200 | 1.3297 |
| 0.4005 | 1551.0 | 6204 | 1.3243 |
| 0.4005 | 1552.0 | 6208 | 1.3198 |
| 0.4005 | 1553.0 | 6212 | 1.3167 |
| 0.4005 | 1554.0 | 6216 | 1.3153 |
| 0.4005 | 1555.0 | 6220 | 1.3178 |
| 0.4005 | 1556.0 | 6224 | 1.3195 |
| 0.4005 | 1557.0 | 6228 | 1.3196 |
| 0.4005 | 1558.0 | 6232 | 1.3191 |
| 0.4005 | 1559.0 | 6236 | 1.3161 |
| 0.4005 | 1560.0 | 6240 | 1.3133 |
| 0.4005 | 1561.0 | 6244 | 1.3188 |
| 0.4005 | 1562.0 | 6248 | 1.3219 |
| 0.4005 | 1563.0 | 6252 | 1.3229 |
| 0.4005 | 1564.0 | 6256 | 1.3212 |
| 0.4005 | 1565.0 | 6260 | 1.3197 |
| 0.4005 | 1566.0 | 6264 | 1.3178 |
| 0.4005 | 1567.0 | 6268 | 1.3158 |
| 0.4005 | 1568.0 | 6272 | 1.3133 |
| 0.4005 | 1569.0 | 6276 | 1.2699 |
| 0.4005 | 1570.0 | 6280 | 1.2334 |
| 0.4005 | 1571.0 | 6284 | 1.2064 |
| 0.4005 | 1572.0 | 6288 | 1.1874 |
| 0.4005 | 1573.0 | 6292 | 1.1745 |
| 0.4005 | 1574.0 | 6296 | 1.1676 |
| 0.4005 | 1575.0 | 6300 | 1.1638 |
| 0.4005 | 1576.0 | 6304 | 1.1626 |
| 0.4005 | 1577.0 | 6308 | 1.1644 |
| 0.4005 | 1578.0 | 6312 | 1.1544 |
| 0.4005 | 1579.0 | 6316 | 1.1388 |
| 0.4005 | 1580.0 | 6320 | 1.1285 |
| 0.4005 | 1581.0 | 6324 | 1.1222 |
| 0.4005 | 1582.0 | 6328 | 1.1200 |
| 0.4005 | 1583.0 | 6332 | 1.1229 |
| 0.4005 | 1584.0 | 6336 | 1.1250 |
| 0.4005 | 1585.0 | 6340 | 1.1318 |
| 0.4005 | 1586.0 | 6344 | 1.1341 |
| 0.4005 | 1587.0 | 6348 | 1.1354 |
| 0.4005 | 1588.0 | 6352 | 1.1353 |
| 0.4005 | 1589.0 | 6356 | 1.1354 |
| 0.4005 | 1590.0 | 6360 | 1.1357 |
| 0.4005 | 1591.0 | 6364 | 1.1355 |
| 0.4005 | 1592.0 | 6368 | 1.1338 |
| 0.4005 | 1593.0 | 6372 | 1.1318 |
| 0.4005 | 1594.0 | 6376 | 1.1298 |
| 0.4005 | 1595.0 | 6380 | 1.1265 |
| 0.4005 | 1596.0 | 6384 | 1.1231 |
| 0.4005 | 1597.0 | 6388 | 1.1209 |
| 0.4005 | 1598.0 | 6392 | 1.1193 |
| 0.4005 | 1599.0 | 6396 | 1.1188 |
| 0.4005 | 1600.0 | 6400 | 1.1357 |
| 0.4005 | 1601.0 | 6404 | 1.1445 |
| 0.4005 | 1602.0 | 6408 | 1.1491 |
| 0.4005 | 1603.0 | 6412 | 1.1495 |
| 0.4005 | 1604.0 | 6416 | 1.1489 |
| 0.4005 | 1605.0 | 6420 | 1.1499 |
| 0.4005 | 1606.0 | 6424 | 1.1537 |
| 0.4005 | 1607.0 | 6428 | 1.1544 |
| 0.4005 | 1608.0 | 6432 | 1.1567 |
| 0.4005 | 1609.0 | 6436 | 1.1581 |
| 0.4005 | 1610.0 | 6440 | 1.1583 |
| 0.4005 | 1611.0 | 6444 | 1.1580 |
| 0.4005 | 1612.0 | 6448 | 1.1578 |
| 0.4005 | 1613.0 | 6452 | 1.1684 |
| 0.4005 | 1614.0 | 6456 | 1.1755 |
| 0.4005 | 1615.0 | 6460 | 1.1773 |
| 0.4005 | 1616.0 | 6464 | 1.1752 |
| 0.4005 | 1617.0 | 6468 | 1.1739 |
| 0.4005 | 1618.0 | 6472 | 1.1721 |
| 0.4005 | 1619.0 | 6476 | 1.1710 |
| 0.4005 | 1620.0 | 6480 | 1.1708 |
| 0.4005 | 1621.0 | 6484 | 1.1690 |
| 0.4005 | 1622.0 | 6488 | 1.1667 |
| 0.4005 | 1623.0 | 6492 | 1.1625 |
| 0.4005 | 1624.0 | 6496 | 1.1594 |
| 0.4004 | 1625.0 | 6500 | 1.1572 |
| 0.4004 | 1626.0 | 6504 | 1.1549 |
| 0.4004 | 1627.0 | 6508 | 1.1524 |
| 0.4004 | 1628.0 | 6512 | 1.1513 |
| 0.4004 | 1629.0 | 6516 | 1.1508 |
| 0.4004 | 1630.0 | 6520 | 1.1507 |
| 0.4004 | 1631.0 | 6524 | 1.1514 |
| 0.4004 | 1632.0 | 6528 | 1.1496 |
| 0.4004 | 1633.0 | 6532 | 1.1472 |
| 0.4004 | 1634.0 | 6536 | 1.1463 |
| 0.4004 | 1635.0 | 6540 | 1.1457 |
| 0.4004 | 1636.0 | 6544 | 1.1459 |
| 0.4004 | 1637.0 | 6548 | 1.1460 |
| 0.4004 | 1638.0 | 6552 | 1.1470 |
| 0.4004 | 1639.0 | 6556 | 1.1465 |
| 0.4004 | 1640.0 | 6560 | 1.1463 |
| 0.4004 | 1641.0 | 6564 | 1.1468 |
| 0.4004 | 1642.0 | 6568 | 1.1471 |
| 0.4004 | 1643.0 | 6572 | 1.1464 |
| 0.4004 | 1644.0 | 6576 | 1.1461 |
| 0.4004 | 1645.0 | 6580 | 1.1466 |
| 0.4004 | 1646.0 | 6584 | 1.1476 |
| 0.4004 | 1647.0 | 6588 | 1.1477 |
| 0.4004 | 1648.0 | 6592 | 1.1476 |
| 0.4004 | 1649.0 | 6596 | 1.1481 |
| 0.4004 | 1650.0 | 6600 | 1.1645 |
| 0.4004 | 1651.0 | 6604 | 1.1910 |
| 0.4004 | 1652.0 | 6608 | 1.2079 |
| 0.4004 | 1653.0 | 6612 | 1.2180 |
| 0.4004 | 1654.0 | 6616 | 1.2234 |
| 0.4004 | 1655.0 | 6620 | 1.2256 |
| 0.4004 | 1656.0 | 6624 | 1.2252 |
| 0.4004 | 1657.0 | 6628 | 1.2233 |
| 0.4004 | 1658.0 | 6632 | 1.2203 |
| 0.4004 | 1659.0 | 6636 | 1.2179 |
| 0.4004 | 1660.0 | 6640 | 1.2146 |
| 0.4004 | 1661.0 | 6644 | 1.2111 |
| 0.4004 | 1662.0 | 6648 | 1.2098 |
| 0.4004 | 1663.0 | 6652 | 1.2081 |
| 0.4004 | 1664.0 | 6656 | 1.2055 |
| 0.4004 | 1665.0 | 6660 | 1.1987 |
| 0.4004 | 1666.0 | 6664 | 1.1908 |
| 0.4004 | 1667.0 | 6668 | 1.1863 |
| 0.4004 | 1668.0 | 6672 | 1.1831 |
| 0.4004 | 1669.0 | 6676 | 1.1824 |
| 0.4004 | 1670.0 | 6680 | 1.1804 |
| 0.4004 | 1671.0 | 6684 | 1.1798 |
| 0.4004 | 1672.0 | 6688 | 1.1807 |
| 0.4004 | 1673.0 | 6692 | 1.1830 |
| 0.4004 | 1674.0 | 6696 | 1.1838 |
| 0.4004 | 1675.0 | 6700 | 1.1842 |
| 0.4004 | 1676.0 | 6704 | 1.1839 |
| 0.4004 | 1677.0 | 6708 | 1.1832 |
| 0.4004 | 1678.0 | 6712 | 1.1821 |
| 0.4004 | 1679.0 | 6716 | 1.1809 |
| 0.4004 | 1680.0 | 6720 | 1.1799 |
| 0.4004 | 1681.0 | 6724 | 1.1793 |
| 0.4004 | 1682.0 | 6728 | 1.1780 |
| 0.4004 | 1683.0 | 6732 | 1.1765 |
| 0.4004 | 1684.0 | 6736 | 1.1746 |
| 0.4004 | 1685.0 | 6740 | 1.1736 |
| 0.4004 | 1686.0 | 6744 | 1.1737 |
| 0.4004 | 1687.0 | 6748 | 1.1750 |
| 0.4004 | 1688.0 | 6752 | 1.1762 |
| 0.4004 | 1689.0 | 6756 | 1.1767 |
| 0.4004 | 1690.0 | 6760 | 1.1776 |
| 0.4004 | 1691.0 | 6764 | 1.1783 |
| 0.4004 | 1692.0 | 6768 | 1.1797 |
| 0.4004 | 1693.0 | 6772 | 1.1809 |
| 0.4004 | 1694.0 | 6776 | 1.1814 |
| 0.4004 | 1695.0 | 6780 | 1.1826 |
| 0.4004 | 1696.0 | 6784 | 1.1843 |
| 0.4004 | 1697.0 | 6788 | 1.1839 |
| 0.4004 | 1698.0 | 6792 | 1.1827 |
| 0.4004 | 1699.0 | 6796 | 1.1809 |
| 0.4004 | 1700.0 | 6800 | 1.1802 |
| 0.4004 | 1701.0 | 6804 | 1.1792 |
| 0.4004 | 1702.0 | 6808 | 1.1789 |
| 0.4004 | 1703.0 | 6812 | 1.1785 |
| 0.4004 | 1704.0 | 6816 | 1.1786 |
| 0.4004 | 1705.0 | 6820 | 1.1774 |
| 0.4004 | 1706.0 | 6824 | 1.1759 |
| 0.4004 | 1707.0 | 6828 | 1.1745 |
| 0.4004 | 1708.0 | 6832 | 1.1737 |
| 0.4004 | 1709.0 | 6836 | 1.1730 |
| 0.4004 | 1710.0 | 6840 | 1.1725 |
| 0.4004 | 1711.0 | 6844 | 1.1828 |
| 0.4004 | 1712.0 | 6848 | 1.1921 |
| 0.4004 | 1713.0 | 6852 | 1.1985 |
| 0.4004 | 1714.0 | 6856 | 1.2017 |
| 0.4004 | 1715.0 | 6860 | 1.2036 |
| 0.4004 | 1716.0 | 6864 | 1.2047 |
| 0.4004 | 1717.0 | 6868 | 1.2047 |
| 0.4004 | 1718.0 | 6872 | 1.2048 |
| 0.4004 | 1719.0 | 6876 | 1.2044 |
| 0.4004 | 1720.0 | 6880 | 1.2031 |
| 0.4004 | 1721.0 | 6884 | 1.2019 |
| 0.4004 | 1722.0 | 6888 | 1.2012 |
| 0.4004 | 1723.0 | 6892 | 1.2003 |
| 0.4004 | 1724.0 | 6896 | 1.1991 |
| 0.4004 | 1725.0 | 6900 | 1.1993 |
| 0.4004 | 1726.0 | 6904 | 1.1991 |
| 0.4004 | 1727.0 | 6908 | 1.1984 |
| 0.4004 | 1728.0 | 6912 | 1.1980 |
| 0.4004 | 1729.0 | 6916 | 1.1972 |
| 0.4004 | 1730.0 | 6920 | 1.1966 |
| 0.4004 | 1731.0 | 6924 | 1.1963 |
| 0.4004 | 1732.0 | 6928 | 1.1960 |
| 0.4004 | 1733.0 | 6932 | 1.1964 |
| 0.4004 | 1734.0 | 6936 | 1.1965 |
| 0.4004 | 1735.0 | 6940 | 1.1961 |
| 0.4004 | 1736.0 | 6944 | 1.1961 |
| 0.4004 | 1737.0 | 6948 | 1.1961 |
| 0.4004 | 1738.0 | 6952 | 1.1952 |
| 0.4004 | 1739.0 | 6956 | 1.1941 |
| 0.4004 | 1740.0 | 6960 | 1.1927 |
| 0.4004 | 1741.0 | 6964 | 1.1918 |
| 0.4004 | 1742.0 | 6968 | 1.1915 |
| 0.4004 | 1743.0 | 6972 | 1.1917 |
| 0.4004 | 1744.0 | 6976 | 1.1916 |
| 0.4004 | 1745.0 | 6980 | 1.1904 |
| 0.4004 | 1746.0 | 6984 | 1.1885 |
| 0.4004 | 1747.0 | 6988 | 1.1858 |
| 0.4004 | 1748.0 | 6992 | 1.1834 |
| 0.4004 | 1749.0 | 6996 | 1.1813 |
| 0.401 | 1750.0 | 7000 | 1.1793 |
| 0.401 | 1751.0 | 7004 | 1.1773 |
| 0.401 | 1752.0 | 7008 | 1.1912 |
| 0.401 | 1753.0 | 7012 | 1.1996 |
| 0.401 | 1754.0 | 7016 | 1.2069 |
| 0.401 | 1755.0 | 7020 | 1.2124 |
| 0.401 | 1756.0 | 7024 | 1.2148 |
| 0.401 | 1757.0 | 7028 | 1.2169 |
| 0.401 | 1758.0 | 7032 | 1.2179 |
| 0.401 | 1759.0 | 7036 | 1.2280 |
| 0.401 | 1760.0 | 7040 | 1.2425 |
| 0.401 | 1761.0 | 7044 | 1.2519 |
| 0.401 | 1762.0 | 7048 | 1.2579 |
| 0.401 | 1763.0 | 7052 | 1.2617 |
| 0.401 | 1764.0 | 7056 | 1.2642 |
| 0.401 | 1765.0 | 7060 | 1.2660 |
| 0.401 | 1766.0 | 7064 | 1.2669 |
| 0.401 | 1767.0 | 7068 | 1.2672 |
| 0.401 | 1768.0 | 7072 | 1.2671 |
| 0.401 | 1769.0 | 7076 | 1.2670 |
| 0.401 | 1770.0 | 7080 | 1.2663 |
| 0.401 | 1771.0 | 7084 | 1.2653 |
| 0.401 | 1772.0 | 7088 | 1.2647 |
| 0.401 | 1773.0 | 7092 | 1.2646 |
| 0.401 | 1774.0 | 7096 | 1.2632 |
| 0.401 | 1775.0 | 7100 | 1.2631 |
| 0.401 | 1776.0 | 7104 | 1.2633 |
| 0.401 | 1777.0 | 7108 | 1.2632 |
| 0.401 | 1778.0 | 7112 | 1.2627 |
| 0.401 | 1779.0 | 7116 | 1.2621 |
| 0.401 | 1780.0 | 7120 | 1.2621 |
| 0.401 | 1781.0 | 7124 | 1.2613 |
| 0.401 | 1782.0 | 7128 | 1.2605 |
| 0.401 | 1783.0 | 7132 | 1.2607 |
| 0.401 | 1784.0 | 7136 | 1.2611 |
| 0.401 | 1785.0 | 7140 | 1.2613 |
| 0.401 | 1786.0 | 7144 | 1.2615 |
| 0.401 | 1787.0 | 7148 | 1.2603 |
| 0.401 | 1788.0 | 7152 | 1.2549 |
| 0.401 | 1789.0 | 7156 | 1.2472 |
| 0.401 | 1790.0 | 7160 | 1.2418 |
| 0.401 | 1791.0 | 7164 | 1.2381 |
| 0.401 | 1792.0 | 7168 | 1.2356 |
| 0.401 | 1793.0 | 7172 | 1.2338 |
| 0.401 | 1794.0 | 7176 | 1.2328 |
| 0.401 | 1795.0 | 7180 | 1.2314 |
| 0.401 | 1796.0 | 7184 | 1.2304 |
| 0.401 | 1797.0 | 7188 | 1.2291 |
| 0.401 | 1798.0 | 7192 | 1.2275 |
| 0.401 | 1799.0 | 7196 | 1.2232 |
| 0.401 | 1800.0 | 7200 | 1.2205 |
| 0.401 | 1801.0 | 7204 | 1.2190 |
| 0.401 | 1802.0 | 7208 | 1.2192 |
| 0.401 | 1803.0 | 7212 | 1.2199 |
| 0.401 | 1804.0 | 7216 | 1.2199 |
| 0.401 | 1805.0 | 7220 | 1.2201 |
| 0.401 | 1806.0 | 7224 | 1.2204 |
| 0.401 | 1807.0 | 7228 | 1.2204 |
| 0.401 | 1808.0 | 7232 | 1.2202 |
| 0.401 | 1809.0 | 7236 | 1.2199 |
| 0.401 | 1810.0 | 7240 | 1.2195 |
| 0.401 | 1811.0 | 7244 | 1.2194 |
| 0.401 | 1812.0 | 7248 | 1.2195 |
| 0.401 | 1813.0 | 7252 | 1.2191 |
| 0.401 | 1814.0 | 7256 | 1.2185 |
| 0.401 | 1815.0 | 7260 | 1.2183 |
| 0.401 | 1816.0 | 7264 | 1.2184 |
| 0.401 | 1817.0 | 7268 | 1.2186 |
| 0.401 | 1818.0 | 7272 | 1.2190 |
| 0.401 | 1819.0 | 7276 | 1.2189 |
| 0.401 | 1820.0 | 7280 | 1.2186 |
| 0.401 | 1821.0 | 7284 | 1.2183 |
| 0.401 | 1822.0 | 7288 | 1.2191 |
| 0.401 | 1823.0 | 7292 | 1.2202 |
| 0.401 | 1824.0 | 7296 | 1.2214 |
| 0.401 | 1825.0 | 7300 | 1.2223 |
| 0.401 | 1826.0 | 7304 | 1.2224 |
| 0.401 | 1827.0 | 7308 | 1.2203 |
| 0.401 | 1828.0 | 7312 | 1.2192 |
| 0.401 | 1829.0 | 7316 | 1.2193 |
| 0.401 | 1830.0 | 7320 | 1.2190 |
| 0.401 | 1831.0 | 7324 | 1.2184 |
| 0.401 | 1832.0 | 7328 | 1.2176 |
| 0.401 | 1833.0 | 7332 | 1.2078 |
| 0.401 | 1834.0 | 7336 | 1.2013 |
| 0.401 | 1835.0 | 7340 | 1.1970 |
| 0.401 | 1836.0 | 7344 | 1.1946 |
| 0.401 | 1837.0 | 7348 | 1.1931 |
| 0.401 | 1838.0 | 7352 | 1.1918 |
| 0.401 | 1839.0 | 7356 | 1.1913 |
| 0.401 | 1840.0 | 7360 | 1.1914 |
| 0.401 | 1841.0 | 7364 | 1.1920 |
| 0.401 | 1842.0 | 7368 | 1.1927 |
| 0.401 | 1843.0 | 7372 | 1.1929 |
| 0.401 | 1844.0 | 7376 | 1.1928 |
| 0.401 | 1845.0 | 7380 | 1.1923 |
| 0.401 | 1846.0 | 7384 | 1.1920 |
| 0.401 | 1847.0 | 7388 | 1.1924 |
| 0.401 | 1848.0 | 7392 | 1.1927 |
| 0.401 | 1849.0 | 7396 | 1.1930 |
| 0.401 | 1850.0 | 7400 | 1.1929 |
| 0.401 | 1851.0 | 7404 | 1.1927 |
| 0.401 | 1852.0 | 7408 | 1.1921 |
| 0.401 | 1853.0 | 7412 | 1.1916 |
| 0.401 | 1854.0 | 7416 | 1.1914 |
| 0.401 | 1855.0 | 7420 | 1.1913 |
| 0.401 | 1856.0 | 7424 | 1.1914 |
| 0.401 | 1857.0 | 7428 | 1.1913 |
| 0.401 | 1858.0 | 7432 | 1.1909 |
| 0.401 | 1859.0 | 7436 | 1.1907 |
| 0.401 | 1860.0 | 7440 | 1.1907 |
| 0.401 | 1861.0 | 7444 | 1.1906 |
| 0.401 | 1862.0 | 7448 | 1.1903 |
| 0.401 | 1863.0 | 7452 | 1.1902 |
| 0.401 | 1864.0 | 7456 | 1.1926 |
| 0.401 | 1865.0 | 7460 | 1.1959 |
| 0.401 | 1866.0 | 7464 | 1.1985 |
| 0.401 | 1867.0 | 7468 | 1.2005 |
| 0.401 | 1868.0 | 7472 | 1.2018 |
| 0.401 | 1869.0 | 7476 | 1.2014 |
| 0.401 | 1870.0 | 7480 | 1.2009 |
| 0.401 | 1871.0 | 7484 | 1.2010 |
| 0.401 | 1872.0 | 7488 | 1.2009 |
| 0.401 | 1873.0 | 7492 | 1.2003 |
| 0.401 | 1874.0 | 7496 | 1.1998 |
| 0.4005 | 1875.0 | 7500 | 1.1991 |
| 0.4005 | 1876.0 | 7504 | 1.1985 |
| 0.4005 | 1877.0 | 7508 | 1.1982 |
| 0.4005 | 1878.0 | 7512 | 1.1978 |
| 0.4005 | 1879.0 | 7516 | 1.1976 |
| 0.4005 | 1880.0 | 7520 | 1.1963 |
| 0.4005 | 1881.0 | 7524 | 1.1952 |
| 0.4005 | 1882.0 | 7528 | 1.1948 |
| 0.4005 | 1883.0 | 7532 | 1.1940 |
| 0.4005 | 1884.0 | 7536 | 1.1932 |
| 0.4005 | 1885.0 | 7540 | 1.1927 |
| 0.4005 | 1886.0 | 7544 | 1.1924 |
| 0.4005 | 1887.0 | 7548 | 1.1916 |
| 0.4005 | 1888.0 | 7552 | 1.1905 |
| 0.4005 | 1889.0 | 7556 | 1.1893 |
| 0.4005 | 1890.0 | 7560 | 1.1883 |
| 0.4005 | 1891.0 | 7564 | 1.1873 |
| 0.4005 | 1892.0 | 7568 | 1.1865 |
| 0.4005 | 1893.0 | 7572 | 1.1862 |
| 0.4005 | 1894.0 | 7576 | 1.1853 |
| 0.4005 | 1895.0 | 7580 | 1.1847 |
| 0.4005 | 1896.0 | 7584 | 1.1843 |
| 0.4005 | 1897.0 | 7588 | 1.1842 |
| 0.4005 | 1898.0 | 7592 | 1.1848 |
| 0.4005 | 1899.0 | 7596 | 1.1855 |
| 0.4005 | 1900.0 | 7600 | 1.1866 |
| 0.4005 | 1901.0 | 7604 | 1.1875 |
| 0.4005 | 1902.0 | 7608 | 1.1883 |
| 0.4005 | 1903.0 | 7612 | 1.1892 |
| 0.4005 | 1904.0 | 7616 | 1.1896 |
| 0.4005 | 1905.0 | 7620 | 1.1896 |
| 0.4005 | 1906.0 | 7624 | 1.1895 |
| 0.4005 | 1907.0 | 7628 | 1.1892 |
| 0.4005 | 1908.0 | 7632 | 1.1890 |
| 0.4005 | 1909.0 | 7636 | 1.1892 |
| 0.4005 | 1910.0 | 7640 | 1.1892 |
| 0.4005 | 1911.0 | 7644 | 1.1888 |
| 0.4005 | 1912.0 | 7648 | 1.1884 |
| 0.4005 | 1913.0 | 7652 | 1.1881 |
| 0.4005 | 1914.0 | 7656 | 1.1876 |
| 0.4005 | 1915.0 | 7660 | 1.1870 |
| 0.4005 | 1916.0 | 7664 | 1.1866 |
| 0.4005 | 1917.0 | 7668 | 1.1865 |
| 0.4005 | 1918.0 | 7672 | 1.1863 |
| 0.4005 | 1919.0 | 7676 | 1.1863 |
| 0.4005 | 1920.0 | 7680 | 1.1848 |
| 0.4005 | 1921.0 | 7684 | 1.1799 |
| 0.4005 | 1922.0 | 7688 | 1.1758 |
| 0.4005 | 1923.0 | 7692 | 1.1711 |
| 0.4005 | 1924.0 | 7696 | 1.1681 |
| 0.4005 | 1925.0 | 7700 | 1.1661 |
| 0.4005 | 1926.0 | 7704 | 1.1651 |
| 0.4005 | 1927.0 | 7708 | 1.1649 |
| 0.4005 | 1928.0 | 7712 | 1.1646 |
| 0.4005 | 1929.0 | 7716 | 1.1639 |
| 0.4005 | 1930.0 | 7720 | 1.1634 |
| 0.4005 | 1931.0 | 7724 | 1.1628 |
| 0.4005 | 1932.0 | 7728 | 1.1627 |
| 0.4005 | 1933.0 | 7732 | 1.1624 |
| 0.4005 | 1934.0 | 7736 | 1.1620 |
| 0.4005 | 1935.0 | 7740 | 1.1619 |
| 0.4005 | 1936.0 | 7744 | 1.1618 |
| 0.4005 | 1937.0 | 7748 | 1.1618 |
| 0.4005 | 1938.0 | 7752 | 1.1618 |
| 0.4005 | 1939.0 | 7756 | 1.1632 |
| 0.4005 | 1940.0 | 7760 | 1.1642 |
| 0.4005 | 1941.0 | 7764 | 1.1649 |
| 0.4005 | 1942.0 | 7768 | 1.1653 |
| 0.4005 | 1943.0 | 7772 | 1.1657 |
| 0.4005 | 1944.0 | 7776 | 1.1660 |
| 0.4005 | 1945.0 | 7780 | 1.1657 |
| 0.4005 | 1946.0 | 7784 | 1.1653 |
| 0.4005 | 1947.0 | 7788 | 1.1650 |
| 0.4005 | 1948.0 | 7792 | 1.1648 |
| 0.4005 | 1949.0 | 7796 | 1.1646 |
| 0.4005 | 1950.0 | 7800 | 1.1644 |
| 0.4005 | 1951.0 | 7804 | 1.1642 |
| 0.4005 | 1952.0 | 7808 | 1.1637 |
| 0.4005 | 1953.0 | 7812 | 1.1635 |
| 0.4005 | 1954.0 | 7816 | 1.1633 |
| 0.4005 | 1955.0 | 7820 | 1.1631 |
| 0.4005 | 1956.0 | 7824 | 1.1629 |
| 0.4005 | 1957.0 | 7828 | 1.1628 |
| 0.4005 | 1958.0 | 7832 | 1.1628 |
| 0.4005 | 1959.0 | 7836 | 1.1628 |
| 0.4005 | 1960.0 | 7840 | 1.1629 |
| 0.4005 | 1961.0 | 7844 | 1.1631 |
| 0.4005 | 1962.0 | 7848 | 1.1633 |
| 0.4005 | 1963.0 | 7852 | 1.1634 |
| 0.4005 | 1964.0 | 7856 | 1.1634 |
| 0.4005 | 1965.0 | 7860 | 1.1666 |
| 0.4005 | 1966.0 | 7864 | 1.1694 |
| 0.4005 | 1967.0 | 7868 | 1.1712 |
| 0.4005 | 1968.0 | 7872 | 1.1723 |
| 0.4005 | 1969.0 | 7876 | 1.1733 |
| 0.4005 | 1970.0 | 7880 | 1.1740 |
| 0.4005 | 1971.0 | 7884 | 1.1742 |
| 0.4005 | 1972.0 | 7888 | 1.1745 |
| 0.4005 | 1973.0 | 7892 | 1.1747 |
| 0.4005 | 1974.0 | 7896 | 1.1752 |
| 0.4005 | 1975.0 | 7900 | 1.1760 |
| 0.4005 | 1976.0 | 7904 | 1.1766 |
| 0.4005 | 1977.0 | 7908 | 1.1769 |
| 0.4005 | 1978.0 | 7912 | 1.1771 |
| 0.4005 | 1979.0 | 7916 | 1.1773 |
| 0.4005 | 1980.0 | 7920 | 1.1774 |
| 0.4005 | 1981.0 | 7924 | 1.1773 |
| 0.4005 | 1982.0 | 7928 | 1.1773 |
| 0.4005 | 1983.0 | 7932 | 1.1771 |
| 0.4005 | 1984.0 | 7936 | 1.1768 |
| 0.4005 | 1985.0 | 7940 | 1.1762 |
| 0.4005 | 1986.0 | 7944 | 1.1758 |
| 0.4005 | 1987.0 | 7948 | 1.1756 |
| 0.4005 | 1988.0 | 7952 | 1.1754 |
| 0.4005 | 1989.0 | 7956 | 1.1753 |
| 0.4005 | 1990.0 | 7960 | 1.1754 |
| 0.4005 | 1991.0 | 7964 | 1.1757 |
| 0.4005 | 1992.0 | 7968 | 1.1759 |
| 0.4005 | 1993.0 | 7972 | 1.1760 |
| 0.4005 | 1994.0 | 7976 | 1.1761 |
| 0.4005 | 1995.0 | 7980 | 1.1761 |
| 0.4005 | 1996.0 | 7984 | 1.1761 |
| 0.4005 | 1997.0 | 7988 | 1.1761 |
| 0.4005 | 1998.0 | 7992 | 1.1761 |
| 0.4005 | 1999.0 | 7996 | 1.1761 |
| 0.4011 | 2000.0 | 8000 | 1.1761 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.7
- Tokenizers 0.15.0
|
timothy-geiger/distilhubert-finetuned-gtzan
|
timothy-geiger
| 2024-03-07T23:25:42Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-03-06T17:44:21Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.87
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0924
- Accuracy: 0.87
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
- mixed_precision_training: Native AMP
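A minimal `TrainingArguments` sketch consistent with the list above (the output directory name is an assumption; the Adam betas and epsilon match the `Trainer` defaults):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilhubert-finetuned-gtzan",  # hypothetical name
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=15,
    fp16=True,  # "Native AMP" mixed-precision training
)
```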
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7495 | 1.0 | 450 | 1.7168 | 0.52 |
| 1.1633 | 2.0 | 900 | 1.0515 | 0.66 |
| 0.3792 | 3.0 | 1350 | 0.7312 | 0.73 |
| 0.5365 | 4.0 | 1800 | 0.9707 | 0.75 |
| 0.0234 | 5.0 | 2250 | 1.1124 | 0.75 |
| 0.0039 | 6.0 | 2700 | 0.9717 | 0.82 |
| 0.1781 | 7.0 | 3150 | 1.0491 | 0.82 |
| 0.0009 | 8.0 | 3600 | 1.1946 | 0.83 |
| 0.0007 | 9.0 | 4050 | 1.1116 | 0.84 |
| 0.0004 | 10.0 | 4500 | 1.0814 | 0.85 |
| 0.0004 | 11.0 | 4950 | 1.1160 | 0.85 |
| 0.0003 | 12.0 | 5400 | 1.1082 | 0.85 |
| 0.0003 | 13.0 | 5850 | 1.1311 | 0.86 |
| 0.0002 | 14.0 | 6300 | 1.1159 | 0.86 |
| 0.0003 | 15.0 | 6750 | 1.0924 | 0.87 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Alpaca69B/phi2-2b-absa
|
Alpaca69B
| 2024-03-07T23:21:02Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-30T02:56:41Z |
---
library_name: transformers
tags: []
---
# phi2-2b-absa: Fine-Tuned Aspect-Based Sentiment Analysis Model
## Model Description
The **phi2-2b-absa** model is a fine-tuned aspect-based sentiment analysis (ABSA) model based on Microsoft's Phi-2. It was trained on the **semeval2016-full-absa-reviews-english-translated-resampled** dataset and predicts the sentiment expressed toward each aspect mentioned in a given sentence.
## Fine-Tuning Details
The fine-tuning process can be reviewed on [Google Colab](https://colab.research.google.com/drive/1n3ykETLpHQPXwPhUcOe-z9cG3ThrDkSi?usp=sharing); illustrative code sketches of the configuration these parameters imply follow the lists below.
### Dataset
- **Name:** semeval2016-full-absa-reviews-english-translated-resampled
- **Description:** Annotated dataset for ABSA containing sentences, aspects, sentiments, and additional contextual text. It is split into train and test sets.
### Model Architecture
- **Base Model:** Microsoft Phi-2
- **Fine-Tuned Model:** phi2-2b-absa
### Fine-Tuning Parameters
- **LoRA Attention Dimension (lora_r):** 64
- **LoRA Scaling Parameter (lora_alpha):** 16
- **LoRA Dropout Probability (lora_dropout):** 0.1
### BitsAndBytes Quantization
- **Activate 4-bit Precision:** True
- **Compute Dtype for 4-bit Models:** float16
- **Quantization Type:** nf4
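A hedged sketch of the quantization and LoRA configuration objects these values imply; the `bias` and `task_type` settings are assumptions not stated in the card:
```python
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # activate 4-bit precision
    bnb_4bit_compute_dtype=torch.float16,  # compute dtype for 4-bit models
    bnb_4bit_quant_type="nf4",             # quantization type
)

peft_config = LoraConfig(
    r=64,              # LoRA attention dimension
    lora_alpha=16,     # LoRA scaling parameter
    lora_dropout=0.1,  # LoRA dropout probability
    bias="none",            # assumption
    task_type="CAUSAL_LM",  # assumption
)
```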
### Training Parameters
- **Number of Training Epochs:** 1
- **Batch Size per GPU for Training:** 4
- **Batch Size per GPU for Evaluation:** 4
- **Gradient Accumulation Steps:** 1
- **Learning Rate:** 2e-4
- **Weight Decay:** 0.001
- **Optimizer:** PagedAdamW (32-bit)
- **Learning Rate Scheduler:** Cosine
### SFT Parameters
- **Maximum Sequence Length:** None
- **Packing:** False
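Putting the training and SFT parameters together, a sketch of the likely `SFTTrainer` setup; `model`, `tokenizer`, `dataset`, `dataset_text_field`, and the output directory are assumptions:
```python
from transformers import TrainingArguments
from trl import SFTTrainer

training_args = TrainingArguments(
    output_dir="phi2-2b-absa",      # hypothetical
    num_train_epochs=1,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=1,
    learning_rate=2e-4,
    weight_decay=0.001,
    optim="paged_adamw_32bit",      # PagedAdamW (32-bit)
    lr_scheduler_type="cosine",
)

trainer = SFTTrainer(
    model=model,                    # quantized base model (see configs above)
    args=training_args,
    train_dataset=dataset,          # hypothetical dataset object
    peft_config=peft_config,
    dataset_text_field="text",      # assumption
    tokenizer=tokenizer,
    max_seq_length=None,
    packing=False,
)
trainer.train()
```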
## How to Use
```python
from transformers import AutoTokenizer, pipeline
import torch

# Load this card's checkpoint; Phi-2 ships custom modeling code,
# hence trust_remote_code=True.
model = "Alpaca69B/phi2-2b-absa"
tokenizer = AutoTokenizer.from_pretrained(model, trust_remote_code=True)

# Avoid shadowing the imported `pipeline` function; use device_map="auto"
# (not device="auto") so accelerate places the model on available devices.
generator = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

input_sentence = "the first thing that attracts attention is the warm reception and the smiling receptionists."
sequences = generator(
    f'### Human: {input_sentence} ### Assistant: aspect:',
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200,
)
print(sequences[0]['generated_text'])
```
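Note that `generated_text` contains the prompt together with the completion, so the predicted aspect and its sentiment appear after the `### Assistant: aspect:` marker.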
Example inference runs can be found on [Google Colab](https://colab.research.google.com/drive/1eKdZYYWiivyeCQDsocGBstVODMLZyT-_?usp=sharing)
## Acknowledgments
- The fine-tuning process and model development were performed by Ben Kampmann.
|
sarak7/H4_39_769_v1
|
sarak7
| 2024-03-07T23:19:46Z | 175 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T23:18:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
imsarfaroz/fine-tuned-albert-emotion
|
imsarfaroz
| 2024-03-07T23:19:19Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"albert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:albert/albert-base-v2",
"base_model:finetune:albert/albert-base-v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-07T14:58:37Z |
---
license: apache-2.0
base_model: albert-base-v2
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: fine-tuned-albert-tweets
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9305
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-albert-tweets
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1757
- Accuracy: 0.9305
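A minimal inference sketch using the standard `transformers` text-classification pipeline (the repo id is this model's Hub id; the example sentence is illustrative):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint and classify the emotion of a sentence
classifier = pipeline("text-classification", model="imsarfaroz/fine-tuned-albert-emotion")
print(classifier("I'm so happy with how this turned out!"))
```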
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3202 | 1.0 | 1000 | 0.2518 | 0.912 |
| 0.1537 | 2.0 | 2000 | 0.1757 | 0.9305 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
dolainu/Nyanners_loraXL_Vtuber
|
dolainu
| 2024-03-07T23:15:36Z | 4 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2024-03-07T22:36:54Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
<lora:NyanXL_V1_50se:0.87>, nyanners1st, purple eyes, petite, closed mouth,
smug
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, bad anatomy, bad proportions, deformed, deformed
anatomy, deformed fingers, realistic
output:
url: images/09104-3065517054.png
- text: >-
<lora:NyanXL_V1_50se:0.87>, nyanners1st, purple eyes, petite, closed mouth,
smug, shirt lift, bed, legs up, pussy, hugging own legs
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, bad anatomy, bad proportions, deformed, deformed
anatomy, deformed fingers, realistic
output:
url: images/09101-2699702684.png
- text: >-
<lora:NyanXL_V1_50se:0.87>, nyanners1st, purple eyes, petite, closed mouth,
smug, shirt lift, bed, masturbating, pussy
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, bad anatomy, bad proportions, deformed, deformed
anatomy, deformed fingers, realistic
output:
url: images/09094-3843125815.png
- text: >-
score_9, score_8_up, score_7_up, <lora:NyanXL_V1_50se:0.87>, nyanners1st,
purple eyes, petite, closed mouth, smug, sitting, table, drink, hand on
cheek, looking at viewer, resting head
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, bad anatomy, bad proportions, deformed, deformed
anatomy, deformed fingers
output:
url: images/09082-3207580297.png
- text: >-
score_9, score_8_up, score_7_up, <lora:NyanXL_V1_50se:0.87>, nyanners1st,
medium hair, purple eyes, petite, closed mouth, smug, sitting, table, drink,
hand on cheek, looking at viewer, resting head
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, bad anatomy, bad proportions, deformed, deformed
anatomy, deformed fingers
output:
url: images/00028-4212975261.png
- text: >-
score_9, score_8_up, score_7_up, <lora:NyanXL_V1_50se:0.87>, nyanners1st,
medium hair, purple eyes, petite, closed mouth, smug, sitting, table, drink,
hand on cheek, looking at viewer, resting head
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, bad anatomy, bad proportions, deformed, deformed
anatomy, deformed fingers
output:
url: images/09078-3444304162.png
- text: >-
score_9, score_8_up, score_7_up, <lora:NyanXL_V1_50se:0.87>, nyanners1st,
purple eyes, petite, closed mouth, smug, kneeling, shirt lift
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, bad anatomy, bad proportions, deformed, deformed
anatomy, deformed fingers
output:
url: images/09085-1394382796.png
- text: >-
score_9, score_8_up, score_7_up, <lora:NyanXL_V1_50se:1>, nyanners2st, long
hair, kneeling, closed mouth, shirt lift
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, realistic, bad anatomy, bad proportions,
deformed, deformed anatomy, deformed fingers, motion lines
output:
url: images/09018-3981033982.png
- text: >-
score_9, score_8_up, score_7_up, <lora:NyanXL_V1_50se:0.87>, nyanners2st,
long hair, closed mouth, shirt lift, smug, petite, lying, bed
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, realistic, bad anatomy, bad proportions,
deformed, deformed anatomy, deformed fingers, motion lines
output:
url: images/09035-1317649319.png
- text: >-
score_9, score_8_up, score_7_up, <lora:NyanXL_V1_50se:0.87>, nyanners2st,
long hair, kneeling, closed mouth, shirt lift, smug, petite
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, realistic, bad anatomy, bad proportions,
deformed, deformed anatomy, deformed fingers, motion lines
output:
url: images/09026-560715627.png
- text: >-
score_9, score_8_up, score_7_up, <lora:NyanXL_V1_50se:0.87>, nyanners2st,
long hair, kneeling, closed mouth, shirt lift, smug, petite
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, realistic, bad anatomy, bad proportions,
deformed, deformed anatomy, deformed fingers, motion lines
output:
url: images/09024-4125556276.png
- text: >-
score_9, score_8_up, score_7_up, <lora:NyanXL_V1_50se:0.87>, nyanners2st,
long hair, purple eyes, petite, closed mouth, smug, sitting, table, drink,
hand on cheek, looking at viewer, resting head
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, bad anatomy, bad proportions, deformed, deformed
anatomy, deformed fingers
output:
url: images/09077-834539960.png
base_model: stablediffusionapi/pony-diffusion-v6-xl
instance_prompt: null
license: apache-2.0
---
# Nyanners
<Gallery />
## Model description
Works best with Pony Diffusion V6 XL. Tested at 0.87 strength.
Prompts:
- Short hair ver.: "nyanners1st, purple eyes" (optional: "medium hair")
- Long hair ver.: "nyanners2st, long hair, purple eyes"
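A minimal loading sketch with diffusers, assuming the SDXL pipeline for the Pony Diffusion V6 XL base listed in the metadata; the prompt is illustrative, and you may need to pass `weight_name=` if diffusers cannot locate the LoRA safetensors automatically:
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Pony Diffusion V6 XL is SDXL-based, so we assume the SDXL pipeline here
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stablediffusionapi/pony-diffusion-v6-xl", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("dolainu/Nyanners_loraXL_Vtuber")  # pass weight_name=... if needed

image = pipe(
    "nyanners1st, purple eyes, petite, closed mouth, smug",
    cross_attention_kwargs={"scale": 0.87},  # the card recommends 0.87 LoRA strength
).images[0]
image.save("nyanners.png")
```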
## Download model
Weights for this model are available in Safetensors format.
[Download](/dolainu/Nyanners_loraXL_Vtuber/tree/main) them in the Files & versions tab.
|
ArtMindia/artmindia3k
|
ArtMindia
| 2024-03-07T23:07:06Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"pytorch",
"mistral",
"question-answering",
"en",
"license:apache-2.0",
"region:us"
] |
question-answering
| 2023-10-28T00:43:48Z |
---
license: apache-2.0
language:
- en
library_name: adapter-transformers
metrics:
- accuracy
pipeline_tag: question-answering
---
This is just a test card with a few thousand rows of data. I wish I had more to add but that is all.
How much do you need? Here it is.
This model is not too short.
|
AIdenU/LLAMA-2-13b-ko-Y24-DPO_v2.0
|
AIdenU
| 2024-03-07T23:05:28Z | 216 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"llama2",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-19T00:55:41Z |
---
license: apache-2.0
language:
- ko
pipeline_tag: text-generation
tags:
- llama2
---
### BaseModel
- [AIdenU/LLAMA-2-13b-ko-Y24_v2.0](https://huggingface.co/AIdenU/LLAMA-2-13b-ko-Y24_v2.0)
### Model Generation
```
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("AIdenU/LLAMA-2-13b-ko-Y24-DPO_v2.0", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("AIdenU/LLAMA-2-13b-ko-Y24-DPO_v2.0", use_fast=True)
systemPrompt = "당신은 유능한 AI입니다."
prompt = "지렁이도 밟으면 꿈틀하나요?"
outputs = model.generate(
**tokenizer(
f"[INST] <<SYS>>\n{systemPrompt}\n<</SYS>>\n\n{prompt} [/INST] ",
return_tensors='pt'
).to('cuda'),
max_new_tokens=256,
temperature=0.2,
top_p=1,
do_sample=True
)
print(tokenizer.decode(outputs[0]))
```
|
iestynmullinor/roberta-reranker-fever-better
|
iestynmullinor
| 2024-03-07T23:05:15Z | 91 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-07T23:04:56Z |
---
license: mit
base_model: FacebookAI/roberta-base
tags:
- generated_from_trainer
model-index:
- name: roberta-reranker-fever-better
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-reranker-fever-better
This model is a fine-tuned version of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0209
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0444 | 1.0 | 12500 | 0.0209 |
| 0.0001 | 2.0 | 25000 | 0.0278 |
| 0.0 | 3.0 | 37500 | 0.0266 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.1
|
dolainu/Pipkin_Pippa_lora_Vtuber
|
dolainu
| 2024-03-07T23:05:07Z | 4 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2024-03-07T23:04:57Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
<lora:PippaV0.15E:0.7>, pipkin_pippa, pink eyes, <lora:hyperrefiner_v090:1>,
sitting
parameters:
negative_prompt: easynegative, bad-hands-5
output:
url: images/03865-278470680.png
- text: >-
<lora:PippaV0.15E:0.7>, pipkin_pippa, pink eyes, <lora:hyperrefiner_v090:1>,
(nude), petite, shiny skin, <lora:Smooth belly_v1.3.2:0.6>
parameters:
negative_prompt: easynegative, bad-hands-5
output:
url: images/03859-2653556988.png
- text: >-
<lora:PippaV0.15E:0.7>, pipkin_pippa, pink eyes, <lora:hyperrefiner_v090:1>,
(nude), petite, shiny skin, embarrassed
parameters:
negative_prompt: easynegative, bad-hands-5
output:
url: images/03856-508608459.png
- text: >-
<lora:PippaV0.15E:0.7>, pipkin_pippa, pink eyes, <lora:hyperrefiner_v090:1>,
1girl, sitting, [pregnant]
parameters:
negative_prompt: easynegative
output:
url: images/03729-1697240416.png
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: null
license: apache-2.0
---
# Pipkin Pippa
<Gallery />
## Model description
Tested at 0.7-0.8 strength.
Trigger Words: pipkin_pippa, pink eyes
For NSFW I recommend including "petite" in your prompt, otherwise the proportions may be slightly off.
## Download model
Weights for this model are available in Safetensors format.
[Download](/dolainu/Pipkin_Pippa_lora_Vtuber/tree/main) them in the Files & versions tab.
|
AIdenU/Gemma-7b-ko-Y24_v2.0
|
AIdenU
| 2024-03-07T23:04:05Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-05T00:12:43Z |
---
license: apache-2.0
language:
- ko
pipeline_tag: text-generation
tags:
- gemma
---
### BaseModel
- [google/gemma-7b](https://huggingface.co/google/gemma-7b)
### Model Generation
```
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("AIdenU/Gemma-7b-ko-Y24_v2.0", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("AIdenU/Gemma-7b-ko-Y24_v2.0", use_fast=True)
systemPrompt = "당신은 유능한 AI입니다."
prompt = "지렁이도 밟으면 꿈틀하나요?"
outputs = model.generate(
**tokenizer(
f"### instruction: {system}\n{prompt} \n### output: ",
return_tensors='pt'
).to('cuda'),
max_new_tokens=256,
temperature=0.2,
top_p=1,
do_sample=True
)
print(tokenizer.decode(outputs[0]))
```
|
AIdenU/LLAMA-2-13b-ko-Y24_v2.0
|
AIdenU
| 2024-03-07T23:01:30Z | 239 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"llama2",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T23:18:08Z |
---
license: apache-2.0
language:
- ko
pipeline_tag: text-generation
tags:
- llama2
---
### BaseModel
- [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf)
### Model Generation
```
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("AIdenU/LLAMA-2-13b-ko-Y24_v2.0", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("AIdenU/LLAMA-2-13b-ko-Y24_v2.0", use_fast=True)
systemPrompt = "당신은 유능한 AI입니다."
prompt = "지렁이도 밟으면 꿈틀하나요?"
outputs = model.generate(
**tokenizer(
f"[INST] <<SYS>>\n{systemPrompt}\n<</SYS>>\n\n{prompt} [/INST] ",
return_tensors='pt'
).to('cuda'),
max_new_tokens=256,
temperature=0.2,
top_p=1,
do_sample=True
)
print(tokenizer.decode(outputs[0]))
```
|
dagbs/gemma-2b-it_oasst2_chatML_Cluster2_aya_multilingual-GGUF
|
dagbs
| 2024-03-07T22:46:21Z | 366 | 2 | null |
[
"gguf",
"bg",
"ca",
"cs",
"da",
"de",
"en",
"es",
"fr",
"hr",
"hu",
"it",
"nl",
"pl",
"pt",
"ro",
"ru",
"sl",
"sr",
"sv",
"uk",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-07T21:33:13Z |
---
license: apache-2.0
language:
- bg
- ca
- cs
- da
- de
- en
- es
- fr
- hr
- hu
- it
- nl
- pl
- pt
- ro
- ru
- sl
- sr
- sv
- uk
---
# gemma-2b-it_oasst2_chatML_Cluster2_aya_multilingual - GGUF
Original Model: [NickyNicky/gemma-2b-it_oasst2_chatML_Cluster2_aya_multilingual](https://huggingface.co/NickyNicky/gemma-2b-it_oasst2_chatML_Cluster2_aya_multilingual)

|
ferrazzipietro/Qwen1.5-14B-Chat__adapters_en.layer1_4_torch.bfloat16_32_32_0.01_4_0.0002
|
ferrazzipietro
| 2024-03-07T22:39:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T22:39:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MaziyarPanahi/zephyr-7b-gemma-sft-v0.1-GGUF
|
MaziyarPanahi
| 2024-03-07T22:33:24Z | 54 | 2 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"tensorboard",
"safetensors",
"gemma",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"en",
"dataset:HuggingFaceH4/deita-10k-v0-sft",
"base_model:google/gemma-7b",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:HuggingFaceH4/zephyr-7b-gemma-sft-v0.1",
"base_model:quantized:HuggingFaceH4/zephyr-7b-gemma-sft-v0.1"
] |
text-generation
| 2024-03-07T22:03:29Z |
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- tensorboard
- safetensors
- gemma
- text-generation
- alignment-handbook
- trl
- sft
- generated_from_trainer
- conversational
- en
- dataset:HuggingFaceH4/deita-10k-v0-sft
- base_model:google/gemma-7b
- license:other
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: zephyr-7b-gemma-sft-v0.1-GGUF
base_model: HuggingFaceH4/zephyr-7b-gemma-sft-v0.1
inference: false
model_creator: HuggingFaceH4
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/zephyr-7b-gemma-sft-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/zephyr-7b-gemma-sft-v0.1-GGUF)
- Model creator: [HuggingFaceH4](https://huggingface.co/HuggingFaceH4)
- Original model: [HuggingFaceH4/zephyr-7b-gemma-sft-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-sft-v0.1)
## Description
[MaziyarPanahi/zephyr-7b-gemma-sft-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/zephyr-7b-gemma-sft-v0.1-GGUF) contains GGUF format model files for [HuggingFaceH4/zephyr-7b-gemma-sft-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-sft-v0.1).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. (Worked example: 256 weights x 4 bits = 128 bytes of quants, plus 16 six-bit scales/mins = 12 bytes and 4 bytes of fp16 super-block scale/min, i.e. 144 bytes for 256 weights = 4.5 bpw.)
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
</details>
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/zephyr-7b-gemma-sft-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/zephyr-7b-gemma-sft-v0.1-GGUF) and below it, a specific filename to download, such as: zephyr-7b-gemma-sft-v0.1-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/zephyr-7b-gemma-sft-v0.1-GGUF zephyr-7b-gemma-sft-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download [MaziyarPanahi/zephyr-7b-gemma-sft-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/zephyr-7b-gemma-sft-v0.1-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/zephyr-7b-gemma-sft-v0.1-GGUF zephyr-7b-gemma-sft-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m zephyr-7b-gemma-sft-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./zephyr-7b-gemma-sft-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./zephyr-7b-gemma-sft-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
|
Maqqq/OpenHermes-2.5-Mistral-7B-15
|
Maqqq
| 2024-03-07T22:26:33Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T21:55:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
babakroshanikhiavi/q-FrozenLake-v1-4x4-noSlippery
|
babakroshanikhiavi
| 2024-03-07T22:23:50Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-07T22:20:59Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# load_from_hub is the helper from the Hugging Face Deep RL course (it downloads and unpickles the model dict)
model = load_from_hub(repo_id="babakroshanikhiavi/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
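Continuing from the snippet above, a minimal greedy-evaluation sketch; it assumes the downloaded dict stores the learned table under a `qtable` key, as in the Deep RL course notebooks (an assumption, not verified against this repo):
```python
import numpy as np

state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    # Pick the greedy action from the (assumed) Q-table
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```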
|
ferrazzipietro/Qwen1.5-14B-Chat__adapters_en.layer1_4_torch.bfloat16_32_32_0.05_16_0.0002
|
ferrazzipietro
| 2024-03-07T22:20:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T17:23:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nisten/smaugzilla-77b
|
nisten
| 2024-03-07T22:19:39Z | 50 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"base_model:abacusai/Smaug-72B-v0.1",
"base_model:finetune:abacusai/Smaug-72B-v0.1",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-26T01:17:55Z |
---
base_model:
- abacusai/Smaug-72B-v0.1
library_name: transformers
tags:
- mergekit
- merge
license: mit
---
# SMAUGZILLA-77B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
The base model was [Smaug-72B-v0.1](https://huggingface.co/abacusai/Smaug-72B-v0.1).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* /home/ubuntu/nvm/smaug
* /home/ubuntu/nvm/minismaug
|
Arczisan/christy-doa
|
Arczisan
| 2024-03-07T22:17:25Z | 2 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"region:us"
] |
text-to-image
| 2024-03-07T22:17:21Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/chisty.png
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: null
---
# Dead or Alive - Christy
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Arczisan/christy-doa/tree/main) them in the Files & versions tab.
|
Arczisan/royal-gown
|
Arczisan
| 2024-03-07T22:05:54Z | 5 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"region:us"
] |
text-to-image
| 2024-03-07T22:05:50Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: "UNICODE\0\0(\0(\0m\0a\0s\0t\0e\0r\0p\0i\0e\0c\0e\0,\0b\0e\0s\0t\0 \0q\0u\0a\0l\0i\0t\0y\0,\0e\0d\0g\0Q\0u\0a\0l\0i\0t\0y\0)\0)\0,\0s\0m\0i\0l\0e\0,\01\0g\0i\0r\0l\0,\0s\0o\0l\0o\0,\0s\0t\0a\0n\0d\0i\0n\0g\0,\0p\0o\0s\0i\0n\0g\0,\0"
output:
url: >-
images/18489-484510326-((masterpiece,best
quality,edgQuality)),smile,1girl,solo,standing,posing,_ballgown, edgPetal,
a woman wearing a ballgown made of.jpeg
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: null
---
# Royal Gown
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Arczisan/royal-gown/tree/main) them in the Files & versions tab.
|
HuggingFaceM4/siglip-so400m-14-980-flash-attn2-navit
|
HuggingFaceM4
| 2024-03-07T22:05:47Z | 8,619 | 43 |
transformers
|
[
"transformers",
"safetensors",
"siglip",
"zero-shot-image-classification",
"custom_code",
"arxiv:2307.06304",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
zero-shot-image-classification
| 2024-01-30T19:31:08Z |
---
license: apache-2.0
---
Same as https://huggingface.co/HuggingFaceM4/siglip-so400m-14-384-flash-attn2 with two changes:
- increase max resolution to 980 x 980 (instead of 384 x 384) by interpolating the position embeddings
- implement the strategy in [NaViT](https://arxiv.org/abs/2307.06304) to allow (a) variable-resolution images and (b) aspect-ratio-preserved images
These changes only apply to the vision tower. No changes to the text tower.
The implementation is fully backward compatible with `https://huggingface.co/HuggingFaceM4/siglip-so400m-14-384-flash-attn2` -> just don't specify the `patch_attention_mask`.
Usage:
```python
import torch
from modeling_siglip import SiglipVisionModel
DEVICE = torch.device("cuda:0")
PATCH_SIZE = 14
pixel_values = torch.randn(2, 3, 28, 42, dtype=torch.bfloat16, device=DEVICE)
pixel_attention_mask = [
    # Image 1 (28 x 42 pixels): only the top 14 rows are real pixels, the bottom 14 rows are padding
    [[1] * 42] * 14 + [[0] * 42] * 14,
    # Image 2 (28 x 42 pixels): only the left 28 columns are real pixels, the right 14 columns are padding
    [[1] * 28 + [0] * 14] * 28,
]
pixel_attention_mask = torch.tensor(pixel_attention_mask, dtype=torch.bool, device=DEVICE)
# Unfold the pixel-level mask into non-overlapping PATCH_SIZE x PATCH_SIZE tiles,
# then mark a patch as valid if any pixel in its tile is valid
patches_subgrid = pixel_attention_mask.unfold(
    dimension=1, size=PATCH_SIZE, step=PATCH_SIZE
).unfold(dimension=2, size=PATCH_SIZE, step=PATCH_SIZE)
patch_attention_mask = (patches_subgrid.sum(dim=(-1, -2)) > 0).bool()
model = SiglipVisionModel.from_pretrained("HuggingFaceM4/siglip-so400m-14-980-flash-attn2-navit", _flash_attn_2_enabled=True)
model.train()
model.vision_model.to(DEVICE, dtype=torch.bfloat16)
output = model.vision_model(pixel_values=pixel_values, patch_attention_mask=patch_attention_mask)
```
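For intuition: after unfolding with `PATCH_SIZE = 14`, the two 28 x 42 pixel masks above reduce to 2 x 3 patch grids. A quick sanity check, reusing the tensors defined in the snippet:
```python
# Expected shape: (batch, height_patches, width_patches) = (2, 2, 3)
print(patch_attention_mask.shape)
print(patch_attention_mask[0])  # image 1: only the top row of patches is valid
print(patch_attention_mask[1])  # image 2: only the left two columns of patches are valid
```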
|
6001k1d/ppo-Huggy
|
6001k1d
| 2024-03-07T22:00:21Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-03-06T18:19:18Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: 6001k1d/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Maqqq/OpenHermes-2.5-Mistral-7B-14
|
Maqqq
| 2024-03-07T21:58:58Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T19:57:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
twhoool02/Llama-2-7b-hf-AWQ
|
twhoool02
| 2024-03-07T21:58:42Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"AWQ",
"llama-2",
"en",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:quantized:meta-llama/Llama-2-7b-hf",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2024-03-03T21:37:40Z |
---
language: en
license: other
tags:
- facebook
- meta
- AWQ
- llama-2
- llama
base_model: meta-llama/Llama-2-7b-hf
model_name: Llama-2-7b-hf-AWQ
library:
- Transformers
- AWQ
arxiv: https://arxiv.org/abs/2306.00978
model_type: llama
pipeline_tag: text-generation
quantized_by: twhoool02
---
# Model Card for Llama-2-7b-hf-AWQ
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is an AWQ-quantized version of the meta-llama/Llama-2-7b-hf model.
- **Developed by:** Ted Whooley
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** llama
- **Language(s) (NLP):** en
- **License:** other
- **Finetuned from model [optional]:** meta-llama/Llama-2-7b-hf
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
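Pending an official example, a minimal sketch: recent `transformers` releases can load AWQ checkpoints directly when the `autoawq` package is installed, assuming the repo ships a standard `quantization_config`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "twhoool02/Llama-2-7b-hf-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# AWQ weights are detected via the checkpoint's quantization_config;
# requires `pip install autoawq` and a CUDA GPU
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```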
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
farid1088/GQA_BERT_legal_SQuAD_complete_augmented_1000
|
farid1088
| 2024-03-07T21:47:44Z | 35 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-03-05T19:45:34Z |
---
tags:
- generated_from_trainer
model-index:
- name: GQA_BERT_legal_SQuAD_complete_augmented_1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GQA_BERT_legal_SQuAD_complete_augmented_1000
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1590
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 160
- eval_batch_size: 40
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
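For orientation, the settings above map onto `transformers.TrainingArguments` roughly as follows — a sketch, not the exact training script; `output_dir` is illustrative:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GQA_BERT_legal_SQuAD_complete_augmented_1000",  # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=160,
    per_device_eval_batch_size=40,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1000,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-8 is the Trainer default optimizer
)
```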
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 1.0 | 3 | 5.1193 |
| No log | 2.0 | 6 | 4.5799 |
| No log | 3.0 | 9 | 3.9567 |
| No log | 4.0 | 12 | 3.6237 |
| No log | 5.0 | 15 | 3.1782 |
| No log | 6.0 | 18 | 2.8046 |
| No log | 7.0 | 21 | 2.5128 |
| No log | 8.0 | 24 | 2.2374 |
| No log | 9.0 | 27 | 2.0316 |
| No log | 10.0 | 30 | 1.8078 |
| No log | 11.0 | 33 | 1.6466 |
| No log | 12.0 | 36 | 1.4863 |
| No log | 13.0 | 39 | 1.3314 |
| No log | 14.0 | 42 | 1.2554 |
| No log | 15.0 | 45 | 1.1996 |
| No log | 16.0 | 48 | 1.1472 |
| No log | 17.0 | 51 | 1.1241 |
| No log | 18.0 | 54 | 1.1021 |
| No log | 19.0 | 57 | 1.0756 |
| No log | 20.0 | 60 | 1.0455 |
| No log | 21.0 | 63 | 1.0384 |
| No log | 22.0 | 66 | 1.0371 |
| No log | 23.0 | 69 | 1.0400 |
| No log | 24.0 | 72 | 1.0343 |
| No log | 25.0 | 75 | 1.0282 |
| No log | 26.0 | 78 | 1.0215 |
| No log | 27.0 | 81 | 1.0235 |
| No log | 28.0 | 84 | 1.0281 |
| No log | 29.0 | 87 | 1.0335 |
| No log | 30.0 | 90 | 1.0226 |
| No log | 31.0 | 93 | 1.0375 |
| No log | 32.0 | 96 | 1.0567 |
| No log | 33.0 | 99 | 1.0612 |
| No log | 34.0 | 102 | 1.0516 |
| No log | 35.0 | 105 | 1.0574 |
| No log | 36.0 | 108 | 1.0527 |
| No log | 37.0 | 111 | 1.0364 |
| No log | 38.0 | 114 | 1.0571 |
| No log | 39.0 | 117 | 1.0338 |
| No log | 40.0 | 120 | 1.0056 |
| No log | 41.0 | 123 | 1.0375 |
| No log | 42.0 | 126 | 1.0606 |
| No log | 43.0 | 129 | 1.0335 |
| No log | 44.0 | 132 | 1.0536 |
| No log | 45.0 | 135 | 1.0963 |
| No log | 46.0 | 138 | 1.0876 |
| No log | 47.0 | 141 | 1.0824 |
| No log | 48.0 | 144 | 1.0867 |
| No log | 49.0 | 147 | 1.0898 |
| No log | 50.0 | 150 | 1.0947 |
| No log | 51.0 | 153 | 1.0833 |
| No log | 52.0 | 156 | 1.0795 |
| No log | 53.0 | 159 | 1.0890 |
| No log | 54.0 | 162 | 1.1027 |
| No log | 55.0 | 165 | 1.0710 |
| No log | 56.0 | 168 | 1.1119 |
| No log | 57.0 | 171 | 1.1167 |
| No log | 58.0 | 174 | 1.1019 |
| No log | 59.0 | 177 | 1.1102 |
| No log | 60.0 | 180 | 1.1040 |
| No log | 61.0 | 183 | 1.0985 |
| No log | 62.0 | 186 | 1.1447 |
| No log | 63.0 | 189 | 1.1205 |
| No log | 64.0 | 192 | 1.1286 |
| No log | 65.0 | 195 | 1.1538 |
| No log | 66.0 | 198 | 1.1575 |
| No log | 67.0 | 201 | 1.1532 |
| No log | 68.0 | 204 | 1.1379 |
| No log | 69.0 | 207 | 1.1971 |
| No log | 70.0 | 210 | 1.1726 |
| No log | 71.0 | 213 | 1.1800 |
| No log | 72.0 | 216 | 1.1861 |
| No log | 73.0 | 219 | 1.1463 |
| No log | 74.0 | 222 | 1.1704 |
| No log | 75.0 | 225 | 1.1275 |
| No log | 76.0 | 228 | 1.0847 |
| No log | 77.0 | 231 | 1.1109 |
| No log | 78.0 | 234 | 1.1514 |
| No log | 79.0 | 237 | 1.1237 |
| No log | 80.0 | 240 | 1.1396 |
| No log | 81.0 | 243 | 1.1494 |
| No log | 82.0 | 246 | 1.0609 |
| No log | 83.0 | 249 | 1.1209 |
| No log | 84.0 | 252 | 1.1821 |
| No log | 85.0 | 255 | 1.0816 |
| No log | 86.0 | 258 | 1.1173 |
| No log | 87.0 | 261 | 1.1777 |
| No log | 88.0 | 264 | 1.1400 |
| No log | 89.0 | 267 | 1.2374 |
| No log | 90.0 | 270 | 1.2227 |
| No log | 91.0 | 273 | 1.1647 |
| No log | 92.0 | 276 | 1.3076 |
| No log | 93.0 | 279 | 1.2866 |
| No log | 94.0 | 282 | 1.1507 |
| No log | 95.0 | 285 | 1.1742 |
| No log | 96.0 | 288 | 1.2750 |
| No log | 97.0 | 291 | 1.1480 |
| No log | 98.0 | 294 | 1.0779 |
| No log | 99.0 | 297 | 1.1850 |
| No log | 100.0 | 300 | 1.1745 |
| No log | 101.0 | 303 | 1.0987 |
| No log | 102.0 | 306 | 1.1721 |
| No log | 103.0 | 309 | 1.1679 |
| No log | 104.0 | 312 | 1.1257 |
| No log | 105.0 | 315 | 1.1888 |
| No log | 106.0 | 318 | 1.2515 |
| No log | 107.0 | 321 | 1.1134 |
| No log | 108.0 | 324 | 1.0962 |
| No log | 109.0 | 327 | 1.1823 |
| No log | 110.0 | 330 | 1.2550 |
| No log | 111.0 | 333 | 1.1812 |
| No log | 112.0 | 336 | 1.1463 |
| No log | 113.0 | 339 | 1.2233 |
| No log | 114.0 | 342 | 1.2499 |
| No log | 115.0 | 345 | 1.1986 |
| No log | 116.0 | 348 | 1.2197 |
| No log | 117.0 | 351 | 1.1806 |
| No log | 118.0 | 354 | 1.2562 |
| No log | 119.0 | 357 | 1.1836 |
| No log | 120.0 | 360 | 1.1398 |
| No log | 121.0 | 363 | 1.1737 |
| No log | 122.0 | 366 | 1.1796 |
| No log | 123.0 | 369 | 1.1494 |
| No log | 124.0 | 372 | 1.1725 |
| No log | 125.0 | 375 | 1.1785 |
| No log | 126.0 | 378 | 1.1925 |
| No log | 127.0 | 381 | 1.2297 |
| No log | 128.0 | 384 | 1.1730 |
| No log | 129.0 | 387 | 1.2427 |
| No log | 130.0 | 390 | 1.3419 |
| No log | 131.0 | 393 | 1.2353 |
| No log | 132.0 | 396 | 1.1667 |
| No log | 133.0 | 399 | 1.2442 |
| No log | 134.0 | 402 | 1.3210 |
| No log | 135.0 | 405 | 1.3273 |
| No log | 136.0 | 408 | 1.2971 |
| No log | 137.0 | 411 | 1.3059 |
| No log | 138.0 | 414 | 1.2881 |
| No log | 139.0 | 417 | 1.2596 |
| No log | 140.0 | 420 | 1.2907 |
| No log | 141.0 | 423 | 1.4198 |
| No log | 142.0 | 426 | 1.2816 |
| No log | 143.0 | 429 | 1.2489 |
| No log | 144.0 | 432 | 1.2540 |
| No log | 145.0 | 435 | 1.2669 |
| No log | 146.0 | 438 | 1.1884 |
| No log | 147.0 | 441 | 1.1789 |
| No log | 148.0 | 444 | 1.1828 |
| No log | 149.0 | 447 | 1.2189 |
| No log | 150.0 | 450 | 1.2064 |
| No log | 151.0 | 453 | 1.1665 |
| No log | 152.0 | 456 | 1.1841 |
| No log | 153.0 | 459 | 1.2595 |
| No log | 154.0 | 462 | 1.2757 |
| No log | 155.0 | 465 | 1.2431 |
| No log | 156.0 | 468 | 1.2333 |
| No log | 157.0 | 471 | 1.2409 |
| No log | 158.0 | 474 | 1.2384 |
| No log | 159.0 | 477 | 1.1822 |
| No log | 160.0 | 480 | 1.1944 |
| No log | 161.0 | 483 | 1.1245 |
| No log | 162.0 | 486 | 1.1272 |
| No log | 163.0 | 489 | 1.1473 |
| No log | 164.0 | 492 | 1.1434 |
| No log | 165.0 | 495 | 1.2481 |
| No log | 166.0 | 498 | 1.2696 |
| 1.011 | 167.0 | 501 | 1.1787 |
| 1.011 | 168.0 | 504 | 1.1010 |
| 1.011 | 169.0 | 507 | 1.2381 |
| 1.011 | 170.0 | 510 | 1.3548 |
| 1.011 | 171.0 | 513 | 1.2571 |
| 1.011 | 172.0 | 516 | 1.1513 |
| 1.011 | 173.0 | 519 | 1.2023 |
| 1.011 | 174.0 | 522 | 1.3151 |
| 1.011 | 175.0 | 525 | 1.2439 |
| 1.011 | 176.0 | 528 | 1.1787 |
| 1.011 | 177.0 | 531 | 1.2082 |
| 1.011 | 178.0 | 534 | 1.1628 |
| 1.011 | 179.0 | 537 | 1.1537 |
| 1.011 | 180.0 | 540 | 1.1693 |
| 1.011 | 181.0 | 543 | 1.2020 |
| 1.011 | 182.0 | 546 | 1.2659 |
| 1.011 | 183.0 | 549 | 1.2523 |
| 1.011 | 184.0 | 552 | 1.2889 |
| 1.011 | 185.0 | 555 | 1.2769 |
| 1.011 | 186.0 | 558 | 1.1812 |
| 1.011 | 187.0 | 561 | 1.2795 |
| 1.011 | 188.0 | 564 | 1.3695 |
| 1.011 | 189.0 | 567 | 1.2992 |
| 1.011 | 190.0 | 570 | 1.1889 |
| 1.011 | 191.0 | 573 | 1.2498 |
| 1.011 | 192.0 | 576 | 1.3926 |
| 1.011 | 193.0 | 579 | 1.3189 |
| 1.011 | 194.0 | 582 | 1.2336 |
| 1.011 | 195.0 | 585 | 1.2429 |
| 1.011 | 196.0 | 588 | 1.3424 |
| 1.011 | 197.0 | 591 | 1.2656 |
| 1.011 | 198.0 | 594 | 1.1868 |
| 1.011 | 199.0 | 597 | 1.2422 |
| 1.011 | 200.0 | 600 | 1.3169 |
| 1.011 | 201.0 | 603 | 1.2882 |
| 1.011 | 202.0 | 606 | 1.2062 |
| 1.011 | 203.0 | 609 | 1.2312 |
| 1.011 | 204.0 | 612 | 1.2377 |
| 1.011 | 205.0 | 615 | 1.1899 |
| 1.011 | 206.0 | 618 | 1.2552 |
| 1.011 | 207.0 | 621 | 1.2026 |
| 1.011 | 208.0 | 624 | 1.1862 |
| 1.011 | 209.0 | 627 | 1.1977 |
| 1.011 | 210.0 | 630 | 1.1801 |
| 1.011 | 211.0 | 633 | 1.1900 |
| 1.011 | 212.0 | 636 | 1.3119 |
| 1.011 | 213.0 | 639 | 1.3340 |
| 1.011 | 214.0 | 642 | 1.2406 |
| 1.011 | 215.0 | 645 | 1.2233 |
| 1.011 | 216.0 | 648 | 1.2326 |
| 1.011 | 217.0 | 651 | 1.2835 |
| 1.011 | 218.0 | 654 | 1.2755 |
| 1.011 | 219.0 | 657 | 1.2479 |
| 1.011 | 220.0 | 660 | 1.2538 |
| 1.011 | 221.0 | 663 | 1.3152 |
| 1.011 | 222.0 | 666 | 1.2363 |
| 1.011 | 223.0 | 669 | 1.1776 |
| 1.011 | 224.0 | 672 | 1.2833 |
| 1.011 | 225.0 | 675 | 1.3166 |
| 1.011 | 226.0 | 678 | 1.1878 |
| 1.011 | 227.0 | 681 | 1.1190 |
| 1.011 | 228.0 | 684 | 1.2481 |
| 1.011 | 229.0 | 687 | 1.3140 |
| 1.011 | 230.0 | 690 | 1.1750 |
| 1.011 | 231.0 | 693 | 1.1255 |
| 1.011 | 232.0 | 696 | 1.1758 |
| 1.011 | 233.0 | 699 | 1.2614 |
| 1.011 | 234.0 | 702 | 1.2244 |
| 1.011 | 235.0 | 705 | 1.2471 |
| 1.011 | 236.0 | 708 | 1.2735 |
| 1.011 | 237.0 | 711 | 1.2557 |
| 1.011 | 238.0 | 714 | 1.1484 |
| 1.011 | 239.0 | 717 | 1.1619 |
| 1.011 | 240.0 | 720 | 1.2471 |
| 1.011 | 241.0 | 723 | 1.2403 |
| 1.011 | 242.0 | 726 | 1.2039 |
| 1.011 | 243.0 | 729 | 1.2126 |
| 1.011 | 244.0 | 732 | 1.2413 |
| 1.011 | 245.0 | 735 | 1.2021 |
| 1.011 | 246.0 | 738 | 1.1651 |
| 1.011 | 247.0 | 741 | 1.1945 |
| 1.011 | 248.0 | 744 | 1.1920 |
| 1.011 | 249.0 | 747 | 1.1741 |
| 1.011 | 250.0 | 750 | 1.1544 |
| 1.011 | 251.0 | 753 | 1.1757 |
| 1.011 | 252.0 | 756 | 1.2283 |
| 1.011 | 253.0 | 759 | 1.1520 |
| 1.011 | 254.0 | 762 | 1.1001 |
| 1.011 | 255.0 | 765 | 1.1319 |
| 1.011 | 256.0 | 768 | 1.2306 |
| 1.011 | 257.0 | 771 | 1.2233 |
| 1.011 | 258.0 | 774 | 1.1541 |
| 1.011 | 259.0 | 777 | 1.2105 |
| 1.011 | 260.0 | 780 | 1.2084 |
| 1.011 | 261.0 | 783 | 1.1656 |
| 1.011 | 262.0 | 786 | 1.1414 |
| 1.011 | 263.0 | 789 | 1.1738 |
| 1.011 | 264.0 | 792 | 1.2153 |
| 1.011 | 265.0 | 795 | 1.2846 |
| 1.011 | 266.0 | 798 | 1.2991 |
| 1.011 | 267.0 | 801 | 1.2698 |
| 1.011 | 268.0 | 804 | 1.2488 |
| 1.011 | 269.0 | 807 | 1.3379 |
| 1.011 | 270.0 | 810 | 1.3777 |
| 1.011 | 271.0 | 813 | 1.3034 |
| 1.011 | 272.0 | 816 | 1.2925 |
| 1.011 | 273.0 | 819 | 1.3598 |
| 1.011 | 274.0 | 822 | 1.3181 |
| 1.011 | 275.0 | 825 | 1.1246 |
| 1.011 | 276.0 | 828 | 1.0685 |
| 1.011 | 277.0 | 831 | 1.1546 |
| 1.011 | 278.0 | 834 | 1.2029 |
| 1.011 | 279.0 | 837 | 1.2433 |
| 1.011 | 280.0 | 840 | 1.3051 |
| 1.011 | 281.0 | 843 | 1.3124 |
| 1.011 | 282.0 | 846 | 1.2468 |
| 1.011 | 283.0 | 849 | 1.2155 |
| 1.011 | 284.0 | 852 | 1.2340 |
| 1.011 | 285.0 | 855 | 1.2815 |
| 1.011 | 286.0 | 858 | 1.3270 |
| 1.011 | 287.0 | 861 | 1.3363 |
| 1.011 | 288.0 | 864 | 1.3316 |
| 1.011 | 289.0 | 867 | 1.3048 |
| 1.011 | 290.0 | 870 | 1.2741 |
| 1.011 | 291.0 | 873 | 1.2994 |
| 1.011 | 292.0 | 876 | 1.3124 |
| 1.011 | 293.0 | 879 | 1.2510 |
| 1.011 | 294.0 | 882 | 1.2051 |
| 1.011 | 295.0 | 885 | 1.2394 |
| 1.011 | 296.0 | 888 | 1.2530 |
| 1.011 | 297.0 | 891 | 1.2104 |
| 1.011 | 298.0 | 894 | 1.1540 |
| 1.011 | 299.0 | 897 | 1.1353 |
| 1.011 | 300.0 | 900 | 1.1786 |
| 1.011 | 301.0 | 903 | 1.2086 |
| 1.011 | 302.0 | 906 | 1.2456 |
| 1.011 | 303.0 | 909 | 1.2706 |
| 1.011 | 304.0 | 912 | 1.3325 |
| 1.011 | 305.0 | 915 | 1.2892 |
| 1.011 | 306.0 | 918 | 1.2357 |
| 1.011 | 307.0 | 921 | 1.2447 |
| 1.011 | 308.0 | 924 | 1.3212 |
| 1.011 | 309.0 | 927 | 1.2885 |
| 1.011 | 310.0 | 930 | 1.2718 |
| 1.011 | 311.0 | 933 | 1.3002 |
| 1.011 | 312.0 | 936 | 1.2508 |
| 1.011 | 313.0 | 939 | 1.3075 |
| 1.011 | 314.0 | 942 | 1.3327 |
| 1.011 | 315.0 | 945 | 1.2802 |
| 1.011 | 316.0 | 948 | 1.1862 |
| 1.011 | 317.0 | 951 | 1.2011 |
| 1.011 | 318.0 | 954 | 1.2118 |
| 1.011 | 319.0 | 957 | 1.2621 |
| 1.011 | 320.0 | 960 | 1.3194 |
| 1.011 | 321.0 | 963 | 1.3178 |
| 1.011 | 322.0 | 966 | 1.3375 |
| 1.011 | 323.0 | 969 | 1.2784 |
| 1.011 | 324.0 | 972 | 1.2132 |
| 1.011 | 325.0 | 975 | 1.1895 |
| 1.011 | 326.0 | 978 | 1.2521 |
| 1.011 | 327.0 | 981 | 1.3398 |
| 1.011 | 328.0 | 984 | 1.2295 |
| 1.011 | 329.0 | 987 | 1.1417 |
| 1.011 | 330.0 | 990 | 1.1594 |
| 1.011 | 331.0 | 993 | 1.3099 |
| 1.011 | 332.0 | 996 | 1.3461 |
| 1.011 | 333.0 | 999 | 1.1772 |
| 0.5949 | 334.0 | 1002 | 1.0855 |
| 0.5949 | 335.0 | 1005 | 1.1587 |
| 0.5949 | 336.0 | 1008 | 1.2964 |
| 0.5949 | 337.0 | 1011 | 1.2694 |
| 0.5949 | 338.0 | 1014 | 1.2127 |
| 0.5949 | 339.0 | 1017 | 1.1815 |
| 0.5949 | 340.0 | 1020 | 1.1907 |
| 0.5949 | 341.0 | 1023 | 1.2233 |
| 0.5949 | 342.0 | 1026 | 1.1774 |
| 0.5949 | 343.0 | 1029 | 1.1496 |
| 0.5949 | 344.0 | 1032 | 1.1266 |
| 0.5949 | 345.0 | 1035 | 1.1182 |
| 0.5949 | 346.0 | 1038 | 1.1806 |
| 0.5949 | 347.0 | 1041 | 1.1528 |
| 0.5949 | 348.0 | 1044 | 1.1292 |
| 0.5949 | 349.0 | 1047 | 1.1044 |
| 0.5949 | 350.0 | 1050 | 1.1721 |
| 0.5949 | 351.0 | 1053 | 1.2474 |
| 0.5949 | 352.0 | 1056 | 1.2158 |
| 0.5949 | 353.0 | 1059 | 1.1859 |
| 0.5949 | 354.0 | 1062 | 1.1686 |
| 0.5949 | 355.0 | 1065 | 1.1558 |
| 0.5949 | 356.0 | 1068 | 1.1703 |
| 0.5949 | 357.0 | 1071 | 1.1504 |
| 0.5949 | 358.0 | 1074 | 1.1182 |
| 0.5949 | 359.0 | 1077 | 1.1441 |
| 0.5949 | 360.0 | 1080 | 1.1833 |
| 0.5949 | 361.0 | 1083 | 1.2068 |
| 0.5949 | 362.0 | 1086 | 1.1335 |
| 0.5949 | 363.0 | 1089 | 1.0419 |
| 0.5949 | 364.0 | 1092 | 1.0862 |
| 0.5949 | 365.0 | 1095 | 1.2001 |
| 0.5949 | 366.0 | 1098 | 1.2415 |
| 0.5949 | 367.0 | 1101 | 1.1930 |
| 0.5949 | 368.0 | 1104 | 1.1220 |
| 0.5949 | 369.0 | 1107 | 1.0307 |
| 0.5949 | 370.0 | 1110 | 1.0519 |
| 0.5949 | 371.0 | 1113 | 1.0704 |
| 0.5949 | 372.0 | 1116 | 1.0365 |
| 0.5949 | 373.0 | 1119 | 1.0087 |
| 0.5949 | 374.0 | 1122 | 0.9999 |
| 0.5949 | 375.0 | 1125 | 1.1072 |
| 0.5949 | 376.0 | 1128 | 1.1944 |
| 0.5949 | 377.0 | 1131 | 1.1771 |
| 0.5949 | 378.0 | 1134 | 1.1325 |
| 0.5949 | 379.0 | 1137 | 1.0533 |
| 0.5949 | 380.0 | 1140 | 1.0384 |
| 0.5949 | 381.0 | 1143 | 1.0943 |
| 0.5949 | 382.0 | 1146 | 1.2307 |
| 0.5949 | 383.0 | 1149 | 1.2407 |
| 0.5949 | 384.0 | 1152 | 1.1655 |
| 0.5949 | 385.0 | 1155 | 1.0630 |
| 0.5949 | 386.0 | 1158 | 1.0247 |
| 0.5949 | 387.0 | 1161 | 1.1115 |
| 0.5949 | 388.0 | 1164 | 1.1942 |
| 0.5949 | 389.0 | 1167 | 1.2013 |
| 0.5949 | 390.0 | 1170 | 1.1971 |
| 0.5949 | 391.0 | 1173 | 1.1855 |
| 0.5949 | 392.0 | 1176 | 1.1868 |
| 0.5949 | 393.0 | 1179 | 1.1613 |
| 0.5949 | 394.0 | 1182 | 1.1488 |
| 0.5949 | 395.0 | 1185 | 1.1648 |
| 0.5949 | 396.0 | 1188 | 1.1688 |
| 0.5949 | 397.0 | 1191 | 1.1526 |
| 0.5949 | 398.0 | 1194 | 1.1241 |
| 0.5949 | 399.0 | 1197 | 1.0765 |
| 0.5949 | 400.0 | 1200 | 1.0983 |
| 0.5949 | 401.0 | 1203 | 1.1407 |
| 0.5949 | 402.0 | 1206 | 1.1459 |
| 0.5949 | 403.0 | 1209 | 1.1184 |
| 0.5949 | 404.0 | 1212 | 1.0951 |
| 0.5949 | 405.0 | 1215 | 1.0853 |
| 0.5949 | 406.0 | 1218 | 1.0962 |
| 0.5949 | 407.0 | 1221 | 1.1022 |
| 0.5949 | 408.0 | 1224 | 1.0818 |
| 0.5949 | 409.0 | 1227 | 1.0528 |
| 0.5949 | 410.0 | 1230 | 1.0500 |
| 0.5949 | 411.0 | 1233 | 1.0918 |
| 0.5949 | 412.0 | 1236 | 1.0624 |
| 0.5949 | 413.0 | 1239 | 1.0452 |
| 0.5949 | 414.0 | 1242 | 1.0485 |
| 0.5949 | 415.0 | 1245 | 1.0624 |
| 0.5949 | 416.0 | 1248 | 1.1062 |
| 0.5949 | 417.0 | 1251 | 1.1095 |
| 0.5949 | 418.0 | 1254 | 1.0988 |
| 0.5949 | 419.0 | 1257 | 1.0867 |
| 0.5949 | 420.0 | 1260 | 1.0873 |
| 0.5949 | 421.0 | 1263 | 1.0577 |
| 0.5949 | 422.0 | 1266 | 1.0998 |
| 0.5949 | 423.0 | 1269 | 1.1579 |
| 0.5949 | 424.0 | 1272 | 1.1373 |
| 0.5949 | 425.0 | 1275 | 1.1138 |
| 0.5949 | 426.0 | 1278 | 1.0807 |
| 0.5949 | 427.0 | 1281 | 1.0137 |
| 0.5949 | 428.0 | 1284 | 1.0083 |
| 0.5949 | 429.0 | 1287 | 1.0493 |
| 0.5949 | 430.0 | 1290 | 1.1223 |
| 0.5949 | 431.0 | 1293 | 1.1817 |
| 0.5949 | 432.0 | 1296 | 1.1060 |
| 0.5949 | 433.0 | 1299 | 1.0584 |
| 0.5949 | 434.0 | 1302 | 1.0634 |
| 0.5949 | 435.0 | 1305 | 1.1070 |
| 0.5949 | 436.0 | 1308 | 1.1183 |
| 0.5949 | 437.0 | 1311 | 1.0928 |
| 0.5949 | 438.0 | 1314 | 1.0754 |
| 0.5949 | 439.0 | 1317 | 1.0556 |
| 0.5949 | 440.0 | 1320 | 1.0317 |
| 0.5949 | 441.0 | 1323 | 1.0036 |
| 0.5949 | 442.0 | 1326 | 1.0292 |
| 0.5949 | 443.0 | 1329 | 1.1790 |
| 0.5949 | 444.0 | 1332 | 1.2675 |
| 0.5949 | 445.0 | 1335 | 1.2144 |
| 0.5949 | 446.0 | 1338 | 1.0888 |
| 0.5949 | 447.0 | 1341 | 1.0147 |
| 0.5949 | 448.0 | 1344 | 1.0074 |
| 0.5949 | 449.0 | 1347 | 1.0412 |
| 0.5949 | 450.0 | 1350 | 1.0851 |
| 0.5949 | 451.0 | 1353 | 1.1087 |
| 0.5949 | 452.0 | 1356 | 1.1293 |
| 0.5949 | 453.0 | 1359 | 1.1286 |
| 0.5949 | 454.0 | 1362 | 1.0910 |
| 0.5949 | 455.0 | 1365 | 1.0787 |
| 0.5949 | 456.0 | 1368 | 1.1053 |
| 0.5949 | 457.0 | 1371 | 1.1651 |
| 0.5949 | 458.0 | 1374 | 1.2237 |
| 0.5949 | 459.0 | 1377 | 1.2137 |
| 0.5949 | 460.0 | 1380 | 1.1833 |
| 0.5949 | 461.0 | 1383 | 1.1378 |
| 0.5949 | 462.0 | 1386 | 1.0625 |
| 0.5949 | 463.0 | 1389 | 1.0437 |
| 0.5949 | 464.0 | 1392 | 1.0432 |
| 0.5949 | 465.0 | 1395 | 1.1278 |
| 0.5949 | 466.0 | 1398 | 1.2145 |
| 0.5949 | 467.0 | 1401 | 1.2425 |
| 0.5949 | 468.0 | 1404 | 1.2570 |
| 0.5949 | 469.0 | 1407 | 1.2285 |
| 0.5949 | 470.0 | 1410 | 1.2091 |
| 0.5949 | 471.0 | 1413 | 1.1901 |
| 0.5949 | 472.0 | 1416 | 1.2086 |
| 0.5949 | 473.0 | 1419 | 1.2434 |
| 0.5949 | 474.0 | 1422 | 1.2743 |
| 0.5949 | 475.0 | 1425 | 1.2743 |
| 0.5949 | 476.0 | 1428 | 1.2380 |
| 0.5949 | 477.0 | 1431 | 1.1984 |
| 0.5949 | 478.0 | 1434 | 1.1687 |
| 0.5949 | 479.0 | 1437 | 1.1170 |
| 0.5949 | 480.0 | 1440 | 1.1537 |
| 0.5949 | 481.0 | 1443 | 1.1691 |
| 0.5949 | 482.0 | 1446 | 1.1855 |
| 0.5949 | 483.0 | 1449 | 1.2248 |
| 0.5949 | 484.0 | 1452 | 1.2351 |
| 0.5949 | 485.0 | 1455 | 1.2162 |
| 0.5949 | 486.0 | 1458 | 1.1853 |
| 0.5949 | 487.0 | 1461 | 1.1756 |
| 0.5949 | 488.0 | 1464 | 1.1743 |
| 0.5949 | 489.0 | 1467 | 1.1563 |
| 0.5949 | 490.0 | 1470 | 1.1135 |
| 0.5949 | 491.0 | 1473 | 1.1080 |
| 0.5949 | 492.0 | 1476 | 1.1491 |
| 0.5949 | 493.0 | 1479 | 1.2150 |
| 0.5949 | 494.0 | 1482 | 1.2236 |
| 0.5949 | 495.0 | 1485 | 1.1843 |
| 0.5949 | 496.0 | 1488 | 1.1380 |
| 0.5949 | 497.0 | 1491 | 1.1301 |
| 0.5949 | 498.0 | 1494 | 1.1271 |
| 0.5949 | 499.0 | 1497 | 1.1288 |
| 0.571 | 500.0 | 1500 | 1.1521 |
| 0.571 | 501.0 | 1503 | 1.1695 |
| 0.571 | 502.0 | 1506 | 1.1846 |
| 0.571 | 503.0 | 1509 | 1.1920 |
| 0.571 | 504.0 | 1512 | 1.2064 |
| 0.571 | 505.0 | 1515 | 1.1979 |
| 0.571 | 506.0 | 1518 | 1.1894 |
| 0.571 | 507.0 | 1521 | 1.1961 |
| 0.571 | 508.0 | 1524 | 1.1673 |
| 0.571 | 509.0 | 1527 | 1.1427 |
| 0.571 | 510.0 | 1530 | 1.0826 |
| 0.571 | 511.0 | 1533 | 1.0712 |
| 0.571 | 512.0 | 1536 | 1.0924 |
| 0.571 | 513.0 | 1539 | 1.1014 |
| 0.571 | 514.0 | 1542 | 1.0983 |
| 0.571 | 515.0 | 1545 | 1.1061 |
| 0.571 | 516.0 | 1548 | 1.1573 |
| 0.571 | 517.0 | 1551 | 1.1913 |
| 0.571 | 518.0 | 1554 | 1.2075 |
| 0.571 | 519.0 | 1557 | 1.2028 |
| 0.571 | 520.0 | 1560 | 1.1595 |
| 0.571 | 521.0 | 1563 | 1.1596 |
| 0.571 | 522.0 | 1566 | 1.1658 |
| 0.571 | 523.0 | 1569 | 1.1231 |
| 0.571 | 524.0 | 1572 | 1.0694 |
| 0.571 | 525.0 | 1575 | 1.0454 |
| 0.571 | 526.0 | 1578 | 1.0508 |
| 0.571 | 527.0 | 1581 | 1.0686 |
| 0.571 | 528.0 | 1584 | 1.0965 |
| 0.571 | 529.0 | 1587 | 1.1343 |
| 0.571 | 530.0 | 1590 | 1.1618 |
| 0.571 | 531.0 | 1593 | 1.1807 |
| 0.571 | 532.0 | 1596 | 1.1774 |
| 0.571 | 533.0 | 1599 | 1.1541 |
| 0.571 | 534.0 | 1602 | 1.1005 |
| 0.571 | 535.0 | 1605 | 1.0268 |
| 0.571 | 536.0 | 1608 | 0.9818 |
| 0.571 | 537.0 | 1611 | 0.9613 |
| 0.571 | 538.0 | 1614 | 0.9870 |
| 0.571 | 539.0 | 1617 | 1.0816 |
| 0.571 | 540.0 | 1620 | 1.1666 |
| 0.571 | 541.0 | 1623 | 1.2139 |
| 0.571 | 542.0 | 1626 | 1.2001 |
| 0.571 | 543.0 | 1629 | 1.1493 |
| 0.571 | 544.0 | 1632 | 1.1064 |
| 0.571 | 545.0 | 1635 | 1.0677 |
| 0.571 | 546.0 | 1638 | 1.0399 |
| 0.571 | 547.0 | 1641 | 1.0606 |
| 0.571 | 548.0 | 1644 | 1.0844 |
| 0.571 | 549.0 | 1647 | 1.0929 |
| 0.571 | 550.0 | 1650 | 1.1143 |
| 0.571 | 551.0 | 1653 | 1.1430 |
| 0.571 | 552.0 | 1656 | 1.1389 |
| 0.571 | 553.0 | 1659 | 1.1146 |
| 0.571 | 554.0 | 1662 | 1.0844 |
| 0.571 | 555.0 | 1665 | 1.0515 |
| 0.571 | 556.0 | 1668 | 0.9798 |
| 0.571 | 557.0 | 1671 | 0.9584 |
| 0.571 | 558.0 | 1674 | 0.9507 |
| 0.571 | 559.0 | 1677 | 0.9697 |
| 0.571 | 560.0 | 1680 | 1.0111 |
| 0.571 | 561.0 | 1683 | 1.1366 |
| 0.571 | 562.0 | 1686 | 1.1931 |
| 0.571 | 563.0 | 1689 | 1.2054 |
| 0.571 | 564.0 | 1692 | 1.1996 |
| 0.571 | 565.0 | 1695 | 1.1912 |
| 0.571 | 566.0 | 1698 | 1.1710 |
| 0.571 | 567.0 | 1701 | 1.1521 |
| 0.571 | 568.0 | 1704 | 1.1221 |
| 0.571 | 569.0 | 1707 | 1.0651 |
| 0.571 | 570.0 | 1710 | 1.0452 |
| 0.571 | 571.0 | 1713 | 1.0838 |
| 0.571 | 572.0 | 1716 | 1.1103 |
| 0.571 | 573.0 | 1719 | 1.1390 |
| 0.571 | 574.0 | 1722 | 1.1774 |
| 0.571 | 575.0 | 1725 | 1.1868 |
| 0.571 | 576.0 | 1728 | 1.1772 |
| 0.571 | 577.0 | 1731 | 1.1650 |
| 0.571 | 578.0 | 1734 | 1.1581 |
| 0.571 | 579.0 | 1737 | 1.1599 |
| 0.571 | 580.0 | 1740 | 1.1636 |
| 0.571 | 581.0 | 1743 | 1.1610 |
| 0.571 | 582.0 | 1746 | 1.1555 |
| 0.571 | 583.0 | 1749 | 1.1448 |
| 0.571 | 584.0 | 1752 | 1.1494 |
| 0.571 | 585.0 | 1755 | 1.1522 |
| 0.571 | 586.0 | 1758 | 1.1509 |
| 0.571 | 587.0 | 1761 | 1.1568 |
| 0.571 | 588.0 | 1764 | 1.1691 |
| 0.571 | 589.0 | 1767 | 1.1693 |
| 0.571 | 590.0 | 1770 | 1.1546 |
| 0.571 | 591.0 | 1773 | 1.1497 |
| 0.571 | 592.0 | 1776 | 1.1415 |
| 0.571 | 593.0 | 1779 | 1.1379 |
| 0.571 | 594.0 | 1782 | 1.1385 |
| 0.571 | 595.0 | 1785 | 1.1376 |
| 0.571 | 596.0 | 1788 | 1.1376 |
| 0.571 | 597.0 | 1791 | 1.1265 |
| 0.571 | 598.0 | 1794 | 1.1118 |
| 0.571 | 599.0 | 1797 | 1.1027 |
| 0.571 | 600.0 | 1800 | 1.0991 |
| 0.571 | 601.0 | 1803 | 1.1160 |
| 0.571 | 602.0 | 1806 | 1.1335 |
| 0.571 | 603.0 | 1809 | 1.1405 |
| 0.571 | 604.0 | 1812 | 1.1459 |
| 0.571 | 605.0 | 1815 | 1.1514 |
| 0.571 | 606.0 | 1818 | 1.1628 |
| 0.571 | 607.0 | 1821 | 1.1777 |
| 0.571 | 608.0 | 1824 | 1.1711 |
| 0.571 | 609.0 | 1827 | 1.1633 |
| 0.571 | 610.0 | 1830 | 1.1522 |
| 0.571 | 611.0 | 1833 | 1.1396 |
| 0.571 | 612.0 | 1836 | 1.1313 |
| 0.571 | 613.0 | 1839 | 1.1266 |
| 0.571 | 614.0 | 1842 | 1.1233 |
| 0.571 | 615.0 | 1845 | 1.1166 |
| 0.571 | 616.0 | 1848 | 1.1213 |
| 0.571 | 617.0 | 1851 | 1.1258 |
| 0.571 | 618.0 | 1854 | 1.1342 |
| 0.571 | 619.0 | 1857 | 1.1537 |
| 0.571 | 620.0 | 1860 | 1.1591 |
| 0.571 | 621.0 | 1863 | 1.1463 |
| 0.571 | 622.0 | 1866 | 1.1240 |
| 0.571 | 623.0 | 1869 | 1.1186 |
| 0.571 | 624.0 | 1872 | 1.1201 |
| 0.571 | 625.0 | 1875 | 1.1328 |
| 0.571 | 626.0 | 1878 | 1.1401 |
| 0.571 | 627.0 | 1881 | 1.1478 |
| 0.571 | 628.0 | 1884 | 1.1560 |
| 0.571 | 629.0 | 1887 | 1.1570 |
| 0.571 | 630.0 | 1890 | 1.1550 |
| 0.571 | 631.0 | 1893 | 1.1539 |
| 0.571 | 632.0 | 1896 | 1.1528 |
| 0.571 | 633.0 | 1899 | 1.1419 |
| 0.571 | 634.0 | 1902 | 1.1359 |
| 0.571 | 635.0 | 1905 | 1.1231 |
| 0.571 | 636.0 | 1908 | 1.1170 |
| 0.571 | 637.0 | 1911 | 1.1108 |
| 0.571 | 638.0 | 1914 | 1.1065 |
| 0.571 | 639.0 | 1917 | 1.1016 |
| 0.571 | 640.0 | 1920 | 1.1157 |
| 0.571 | 641.0 | 1923 | 1.1263 |
| 0.571 | 642.0 | 1926 | 1.1291 |
| 0.571 | 643.0 | 1929 | 1.1231 |
| 0.571 | 644.0 | 1932 | 1.1157 |
| 0.571 | 645.0 | 1935 | 1.1399 |
| 0.571 | 646.0 | 1938 | 1.1908 |
| 0.571 | 647.0 | 1941 | 1.2225 |
| 0.571 | 648.0 | 1944 | 1.2391 |
| 0.571 | 649.0 | 1947 | 1.2375 |
| 0.571 | 650.0 | 1950 | 1.2170 |
| 0.571 | 651.0 | 1953 | 1.1856 |
| 0.571 | 652.0 | 1956 | 1.1502 |
| 0.571 | 653.0 | 1959 | 1.1326 |
| 0.571 | 654.0 | 1962 | 1.1034 |
| 0.571 | 655.0 | 1965 | 1.0623 |
| 0.571 | 656.0 | 1968 | 1.0506 |
| 0.571 | 657.0 | 1971 | 1.0749 |
| 0.571 | 658.0 | 1974 | 1.2005 |
| 0.571 | 659.0 | 1977 | 1.2534 |
| 0.571 | 660.0 | 1980 | 1.2685 |
| 0.571 | 661.0 | 1983 | 1.2609 |
| 0.571 | 662.0 | 1986 | 1.2400 |
| 0.571 | 663.0 | 1989 | 1.2247 |
| 0.571 | 664.0 | 1992 | 1.2150 |
| 0.571 | 665.0 | 1995 | 1.2068 |
| 0.571 | 666.0 | 1998 | 1.1900 |
| 0.5609 | 667.0 | 2001 | 1.1792 |
| 0.5609 | 668.0 | 2004 | 1.1798 |
| 0.5609 | 669.0 | 2007 | 1.1843 |
| 0.5609 | 670.0 | 2010 | 1.1988 |
| 0.5609 | 671.0 | 2013 | 1.2119 |
| 0.5609 | 672.0 | 2016 | 1.2242 |
| 0.5609 | 673.0 | 2019 | 1.2244 |
| 0.5609 | 674.0 | 2022 | 1.2116 |
| 0.5609 | 675.0 | 2025 | 1.1945 |
| 0.5609 | 676.0 | 2028 | 1.1792 |
| 0.5609 | 677.0 | 2031 | 1.1733 |
| 0.5609 | 678.0 | 2034 | 1.1772 |
| 0.5609 | 679.0 | 2037 | 1.1895 |
| 0.5609 | 680.0 | 2040 | 1.2009 |
| 0.5609 | 681.0 | 2043 | 1.2075 |
| 0.5609 | 682.0 | 2046 | 1.2042 |
| 0.5609 | 683.0 | 2049 | 1.2065 |
| 0.5609 | 684.0 | 2052 | 1.2109 |
| 0.5609 | 685.0 | 2055 | 1.2103 |
| 0.5609 | 686.0 | 2058 | 1.2022 |
| 0.5609 | 687.0 | 2061 | 1.1938 |
| 0.5609 | 688.0 | 2064 | 1.1807 |
| 0.5609 | 689.0 | 2067 | 1.1710 |
| 0.5609 | 690.0 | 2070 | 1.1687 |
| 0.5609 | 691.0 | 2073 | 1.1635 |
| 0.5609 | 692.0 | 2076 | 1.1547 |
| 0.5609 | 693.0 | 2079 | 1.1375 |
| 0.5609 | 694.0 | 2082 | 1.1281 |
| 0.5609 | 695.0 | 2085 | 1.1188 |
| 0.5609 | 696.0 | 2088 | 1.1105 |
| 0.5609 | 697.0 | 2091 | 1.1131 |
| 0.5609 | 698.0 | 2094 | 1.1272 |
| 0.5609 | 699.0 | 2097 | 1.1351 |
| 0.5609 | 700.0 | 2100 | 1.1479 |
| 0.5609 | 701.0 | 2103 | 1.1571 |
| 0.5609 | 702.0 | 2106 | 1.1723 |
| 0.5609 | 703.0 | 2109 | 1.1871 |
| 0.5609 | 704.0 | 2112 | 1.1956 |
| 0.5609 | 705.0 | 2115 | 1.1998 |
| 0.5609 | 706.0 | 2118 | 1.2008 |
| 0.5609 | 707.0 | 2121 | 1.1992 |
| 0.5609 | 708.0 | 2124 | 1.1948 |
| 0.5609 | 709.0 | 2127 | 1.1771 |
| 0.5609 | 710.0 | 2130 | 1.1540 |
| 0.5609 | 711.0 | 2133 | 1.1320 |
| 0.5609 | 712.0 | 2136 | 1.1108 |
| 0.5609 | 713.0 | 2139 | 1.0930 |
| 0.5609 | 714.0 | 2142 | 1.0885 |
| 0.5609 | 715.0 | 2145 | 1.1121 |
| 0.5609 | 716.0 | 2148 | 1.1600 |
| 0.5609 | 717.0 | 2151 | 1.1982 |
| 0.5609 | 718.0 | 2154 | 1.2199 |
| 0.5609 | 719.0 | 2157 | 1.2274 |
| 0.5609 | 720.0 | 2160 | 1.2191 |
| 0.5609 | 721.0 | 2163 | 1.2108 |
| 0.5609 | 722.0 | 2166 | 1.2185 |
| 0.5609 | 723.0 | 2169 | 1.2203 |
| 0.5609 | 724.0 | 2172 | 1.2209 |
| 0.5609 | 725.0 | 2175 | 1.2235 |
| 0.5609 | 726.0 | 2178 | 1.2229 |
| 0.5609 | 727.0 | 2181 | 1.2314 |
| 0.5609 | 728.0 | 2184 | 1.2341 |
| 0.5609 | 729.0 | 2187 | 1.2352 |
| 0.5609 | 730.0 | 2190 | 1.2300 |
| 0.5609 | 731.0 | 2193 | 1.2216 |
| 0.5609 | 732.0 | 2196 | 1.2104 |
| 0.5609 | 733.0 | 2199 | 1.1965 |
| 0.5609 | 734.0 | 2202 | 1.1908 |
| 0.5609 | 735.0 | 2205 | 1.1752 |
| 0.5609 | 736.0 | 2208 | 1.1486 |
| 0.5609 | 737.0 | 2211 | 1.1308 |
| 0.5609 | 738.0 | 2214 | 1.1216 |
| 0.5609 | 739.0 | 2217 | 1.1571 |
| 0.5609 | 740.0 | 2220 | 1.1847 |
| 0.5609 | 741.0 | 2223 | 1.2001 |
| 0.5609 | 742.0 | 2226 | 1.1996 |
| 0.5609 | 743.0 | 2229 | 1.1964 |
| 0.5609 | 744.0 | 2232 | 1.1955 |
| 0.5609 | 745.0 | 2235 | 1.1885 |
| 0.5609 | 746.0 | 2238 | 1.1838 |
| 0.5609 | 747.0 | 2241 | 1.1820 |
| 0.5609 | 748.0 | 2244 | 1.1838 |
| 0.5609 | 749.0 | 2247 | 1.1891 |
| 0.5609 | 750.0 | 2250 | 1.1866 |
| 0.5609 | 751.0 | 2253 | 1.1797 |
| 0.5609 | 752.0 | 2256 | 1.1721 |
| 0.5609 | 753.0 | 2259 | 1.1611 |
| 0.5609 | 754.0 | 2262 | 1.1519 |
| 0.5609 | 755.0 | 2265 | 1.1407 |
| 0.5609 | 756.0 | 2268 | 1.1216 |
| 0.5609 | 757.0 | 2271 | 1.0963 |
| 0.5609 | 758.0 | 2274 | 1.0794 |
| 0.5609 | 759.0 | 2277 | 1.0741 |
| 0.5609 | 760.0 | 2280 | 1.0928 |
| 0.5609 | 761.0 | 2283 | 1.1165 |
| 0.5609 | 762.0 | 2286 | 1.1480 |
| 0.5609 | 763.0 | 2289 | 1.1834 |
| 0.5609 | 764.0 | 2292 | 1.2018 |
| 0.5609 | 765.0 | 2295 | 1.2098 |
| 0.5609 | 766.0 | 2298 | 1.2140 |
| 0.5609 | 767.0 | 2301 | 1.2204 |
| 0.5609 | 768.0 | 2304 | 1.2251 |
| 0.5609 | 769.0 | 2307 | 1.2276 |
| 0.5609 | 770.0 | 2310 | 1.2279 |
| 0.5609 | 771.0 | 2313 | 1.2264 |
| 0.5609 | 772.0 | 2316 | 1.2230 |
| 0.5609 | 773.0 | 2319 | 1.2169 |
| 0.5609 | 774.0 | 2322 | 1.1973 |
| 0.5609 | 775.0 | 2325 | 1.1658 |
| 0.5609 | 776.0 | 2328 | 1.1314 |
| 0.5609 | 777.0 | 2331 | 1.0967 |
| 0.5609 | 778.0 | 2334 | 1.0720 |
| 0.5609 | 779.0 | 2337 | 1.0603 |
| 0.5609 | 780.0 | 2340 | 1.0603 |
| 0.5609 | 781.0 | 2343 | 1.0969 |
| 0.5609 | 782.0 | 2346 | 1.1418 |
| 0.5609 | 783.0 | 2349 | 1.1850 |
| 0.5609 | 784.0 | 2352 | 1.2089 |
| 0.5609 | 785.0 | 2355 | 1.2210 |
| 0.5609 | 786.0 | 2358 | 1.2242 |
| 0.5609 | 787.0 | 2361 | 1.2232 |
| 0.5609 | 788.0 | 2364 | 1.2249 |
| 0.5609 | 789.0 | 2367 | 1.2262 |
| 0.5609 | 790.0 | 2370 | 1.2225 |
| 0.5609 | 791.0 | 2373 | 1.2153 |
| 0.5609 | 792.0 | 2376 | 1.2055 |
| 0.5609 | 793.0 | 2379 | 1.2109 |
| 0.5609 | 794.0 | 2382 | 1.2173 |
| 0.5609 | 795.0 | 2385 | 1.2182 |
| 0.5609 | 796.0 | 2388 | 1.2156 |
| 0.5609 | 797.0 | 2391 | 1.2170 |
| 0.5609 | 798.0 | 2394 | 1.2110 |
| 0.5609 | 799.0 | 2397 | 1.1985 |
| 0.5609 | 800.0 | 2400 | 1.1834 |
| 0.5609 | 801.0 | 2403 | 1.1597 |
| 0.5609 | 802.0 | 2406 | 1.1457 |
| 0.5609 | 803.0 | 2409 | 1.1432 |
| 0.5609 | 804.0 | 2412 | 1.1409 |
| 0.5609 | 805.0 | 2415 | 1.1360 |
| 0.5609 | 806.0 | 2418 | 1.1461 |
| 0.5609 | 807.0 | 2421 | 1.1626 |
| 0.5609 | 808.0 | 2424 | 1.1707 |
| 0.5609 | 809.0 | 2427 | 1.1774 |
| 0.5609 | 810.0 | 2430 | 1.1812 |
| 0.5609 | 811.0 | 2433 | 1.1855 |
| 0.5609 | 812.0 | 2436 | 1.1865 |
| 0.5609 | 813.0 | 2439 | 1.1843 |
| 0.5609 | 814.0 | 2442 | 1.1849 |
| 0.5609 | 815.0 | 2445 | 1.1902 |
| 0.5609 | 816.0 | 2448 | 1.1900 |
| 0.5609 | 817.0 | 2451 | 1.1887 |
| 0.5609 | 818.0 | 2454 | 1.1885 |
| 0.5609 | 819.0 | 2457 | 1.1884 |
| 0.5609 | 820.0 | 2460 | 1.1831 |
| 0.5609 | 821.0 | 2463 | 1.1759 |
| 0.5609 | 822.0 | 2466 | 1.1766 |
| 0.5609 | 823.0 | 2469 | 1.1818 |
| 0.5609 | 824.0 | 2472 | 1.1898 |
| 0.5609 | 825.0 | 2475 | 1.1949 |
| 0.5609 | 826.0 | 2478 | 1.1970 |
| 0.5609 | 827.0 | 2481 | 1.2047 |
| 0.5609 | 828.0 | 2484 | 1.2086 |
| 0.5609 | 829.0 | 2487 | 1.2049 |
| 0.5609 | 830.0 | 2490 | 1.1998 |
| 0.5609 | 831.0 | 2493 | 1.1987 |
| 0.5609 | 832.0 | 2496 | 1.2038 |
| 0.5609 | 833.0 | 2499 | 1.2137 |
| 0.5599 | 834.0 | 2502 | 1.2197 |
| 0.5599 | 835.0 | 2505 | 1.2264 |
| 0.5599 | 836.0 | 2508 | 1.2293 |
| 0.5599 | 837.0 | 2511 | 1.2281 |
| 0.5599 | 838.0 | 2514 | 1.2262 |
| 0.5599 | 839.0 | 2517 | 1.2209 |
| 0.5599 | 840.0 | 2520 | 1.2126 |
| 0.5599 | 841.0 | 2523 | 1.2055 |
| 0.5599 | 842.0 | 2526 | 1.1972 |
| 0.5599 | 843.0 | 2529 | 1.1880 |
| 0.5599 | 844.0 | 2532 | 1.1799 |
| 0.5599 | 845.0 | 2535 | 1.1767 |
| 0.5599 | 846.0 | 2538 | 1.1792 |
| 0.5599 | 847.0 | 2541 | 1.1769 |
| 0.5599 | 848.0 | 2544 | 1.1715 |
| 0.5599 | 849.0 | 2547 | 1.1668 |
| 0.5599 | 850.0 | 2550 | 1.1641 |
| 0.5599 | 851.0 | 2553 | 1.1639 |
| 0.5599 | 852.0 | 2556 | 1.1642 |
| 0.5599 | 853.0 | 2559 | 1.1589 |
| 0.5599 | 854.0 | 2562 | 1.1554 |
| 0.5599 | 855.0 | 2565 | 1.1556 |
| 0.5599 | 856.0 | 2568 | 1.1517 |
| 0.5599 | 857.0 | 2571 | 1.1528 |
| 0.5599 | 858.0 | 2574 | 1.1559 |
| 0.5599 | 859.0 | 2577 | 1.1622 |
| 0.5599 | 860.0 | 2580 | 1.1635 |
| 0.5599 | 861.0 | 2583 | 1.1649 |
| 0.5599 | 862.0 | 2586 | 1.1650 |
| 0.5599 | 863.0 | 2589 | 1.1639 |
| 0.5599 | 864.0 | 2592 | 1.1631 |
| 0.5599 | 865.0 | 2595 | 1.1627 |
| 0.5599 | 866.0 | 2598 | 1.1555 |
| 0.5599 | 867.0 | 2601 | 1.1510 |
| 0.5599 | 868.0 | 2604 | 1.1513 |
| 0.5599 | 869.0 | 2607 | 1.1551 |
| 0.5599 | 870.0 | 2610 | 1.1630 |
| 0.5599 | 871.0 | 2613 | 1.1689 |
| 0.5599 | 872.0 | 2616 | 1.1722 |
| 0.5599 | 873.0 | 2619 | 1.1720 |
| 0.5599 | 874.0 | 2622 | 1.1710 |
| 0.5599 | 875.0 | 2625 | 1.1698 |
| 0.5599 | 876.0 | 2628 | 1.1643 |
| 0.5599 | 877.0 | 2631 | 1.1576 |
| 0.5599 | 878.0 | 2634 | 1.1508 |
| 0.5599 | 879.0 | 2637 | 1.1444 |
| 0.5599 | 880.0 | 2640 | 1.1430 |
| 0.5599 | 881.0 | 2643 | 1.1444 |
| 0.5599 | 882.0 | 2646 | 1.1451 |
| 0.5599 | 883.0 | 2649 | 1.1463 |
| 0.5599 | 884.0 | 2652 | 1.1518 |
| 0.5599 | 885.0 | 2655 | 1.1551 |
| 0.5599 | 886.0 | 2658 | 1.1562 |
| 0.5599 | 887.0 | 2661 | 1.1597 |
| 0.5599 | 888.0 | 2664 | 1.1637 |
| 0.5599 | 889.0 | 2667 | 1.1693 |
| 0.5599 | 890.0 | 2670 | 1.1743 |
| 0.5599 | 891.0 | 2673 | 1.1783 |
| 0.5599 | 892.0 | 2676 | 1.1831 |
| 0.5599 | 893.0 | 2679 | 1.1885 |
| 0.5599 | 894.0 | 2682 | 1.1921 |
| 0.5599 | 895.0 | 2685 | 1.1906 |
| 0.5599 | 896.0 | 2688 | 1.1873 |
| 0.5599 | 897.0 | 2691 | 1.1866 |
| 0.5599 | 898.0 | 2694 | 1.1871 |
| 0.5599 | 899.0 | 2697 | 1.1870 |
| 0.5599 | 900.0 | 2700 | 1.1876 |
| 0.5599 | 901.0 | 2703 | 1.1920 |
| 0.5599 | 902.0 | 2706 | 1.1956 |
| 0.5599 | 903.0 | 2709 | 1.1966 |
| 0.5599 | 904.0 | 2712 | 1.1961 |
| 0.5599 | 905.0 | 2715 | 1.1954 |
| 0.5599 | 906.0 | 2718 | 1.1927 |
| 0.5599 | 907.0 | 2721 | 1.1884 |
| 0.5599 | 908.0 | 2724 | 1.1816 |
| 0.5599 | 909.0 | 2727 | 1.1754 |
| 0.5599 | 910.0 | 2730 | 1.1705 |
| 0.5599 | 911.0 | 2733 | 1.1666 |
| 0.5599 | 912.0 | 2736 | 1.1652 |
| 0.5599 | 913.0 | 2739 | 1.1631 |
| 0.5599 | 914.0 | 2742 | 1.1676 |
| 0.5599 | 915.0 | 2745 | 1.1710 |
| 0.5599 | 916.0 | 2748 | 1.1729 |
| 0.5599 | 917.0 | 2751 | 1.1759 |
| 0.5599 | 918.0 | 2754 | 1.1776 |
| 0.5599 | 919.0 | 2757 | 1.1790 |
| 0.5599 | 920.0 | 2760 | 1.1799 |
| 0.5599 | 921.0 | 2763 | 1.1785 |
| 0.5599 | 922.0 | 2766 | 1.1753 |
| 0.5599 | 923.0 | 2769 | 1.1689 |
| 0.5599 | 924.0 | 2772 | 1.1657 |
| 0.5599 | 925.0 | 2775 | 1.1680 |
| 0.5599 | 926.0 | 2778 | 1.1640 |
| 0.5599 | 927.0 | 2781 | 1.1617 |
| 0.5599 | 928.0 | 2784 | 1.1589 |
| 0.5599 | 929.0 | 2787 | 1.1561 |
| 0.5599 | 930.0 | 2790 | 1.1537 |
| 0.5599 | 931.0 | 2793 | 1.1529 |
| 0.5599 | 932.0 | 2796 | 1.1534 |
| 0.5599 | 933.0 | 2799 | 1.1594 |
| 0.5599 | 934.0 | 2802 | 1.1654 |
| 0.5599 | 935.0 | 2805 | 1.1697 |
| 0.5599 | 936.0 | 2808 | 1.1726 |
| 0.5599 | 937.0 | 2811 | 1.1753 |
| 0.5599 | 938.0 | 2814 | 1.1780 |
| 0.5599 | 939.0 | 2817 | 1.1814 |
| 0.5599 | 940.0 | 2820 | 1.1828 |
| 0.5599 | 941.0 | 2823 | 1.1848 |
| 0.5599 | 942.0 | 2826 | 1.1851 |
| 0.5599 | 943.0 | 2829 | 1.1853 |
| 0.5599 | 944.0 | 2832 | 1.1857 |
| 0.5599 | 945.0 | 2835 | 1.1850 |
| 0.5599 | 946.0 | 2838 | 1.1836 |
| 0.5599 | 947.0 | 2841 | 1.1799 |
| 0.5599 | 948.0 | 2844 | 1.1758 |
| 0.5599 | 949.0 | 2847 | 1.1722 |
| 0.5599 | 950.0 | 2850 | 1.1687 |
| 0.5599 | 951.0 | 2853 | 1.1677 |
| 0.5599 | 952.0 | 2856 | 1.1661 |
| 0.5599 | 953.0 | 2859 | 1.1668 |
| 0.5599 | 954.0 | 2862 | 1.1685 |
| 0.5599 | 955.0 | 2865 | 1.1672 |
| 0.5599 | 956.0 | 2868 | 1.1631 |
| 0.5599 | 957.0 | 2871 | 1.1587 |
| 0.5599 | 958.0 | 2874 | 1.1575 |
| 0.5599 | 959.0 | 2877 | 1.1557 |
| 0.5599 | 960.0 | 2880 | 1.1551 |
| 0.5599 | 961.0 | 2883 | 1.1556 |
| 0.5599 | 962.0 | 2886 | 1.1539 |
| 0.5599 | 963.0 | 2889 | 1.1509 |
| 0.5599 | 964.0 | 2892 | 1.1482 |
| 0.5599 | 965.0 | 2895 | 1.1465 |
| 0.5599 | 966.0 | 2898 | 1.1474 |
| 0.5599 | 967.0 | 2901 | 1.1487 |
| 0.5599 | 968.0 | 2904 | 1.1494 |
| 0.5599 | 969.0 | 2907 | 1.1508 |
| 0.5599 | 970.0 | 2910 | 1.1519 |
| 0.5599 | 971.0 | 2913 | 1.1508 |
| 0.5599 | 972.0 | 2916 | 1.1490 |
| 0.5599 | 973.0 | 2919 | 1.1471 |
| 0.5599 | 974.0 | 2922 | 1.1463 |
| 0.5599 | 975.0 | 2925 | 1.1459 |
| 0.5599 | 976.0 | 2928 | 1.1457 |
| 0.5599 | 977.0 | 2931 | 1.1460 |
| 0.5599 | 978.0 | 2934 | 1.1469 |
| 0.5599 | 979.0 | 2937 | 1.1485 |
| 0.5599 | 980.0 | 2940 | 1.1500 |
| 0.5599 | 981.0 | 2943 | 1.1509 |
| 0.5599 | 982.0 | 2946 | 1.1515 |
| 0.5599 | 983.0 | 2949 | 1.1516 |
| 0.5599 | 984.0 | 2952 | 1.1520 |
| 0.5599 | 985.0 | 2955 | 1.1524 |
| 0.5599 | 986.0 | 2958 | 1.1517 |
| 0.5599 | 987.0 | 2961 | 1.1524 |
| 0.5599 | 988.0 | 2964 | 1.1530 |
| 0.5599 | 989.0 | 2967 | 1.1537 |
| 0.5599 | 990.0 | 2970 | 1.1542 |
| 0.5599 | 991.0 | 2973 | 1.1550 |
| 0.5599 | 992.0 | 2976 | 1.1558 |
| 0.5599 | 993.0 | 2979 | 1.1566 |
| 0.5599 | 994.0 | 2982 | 1.1573 |
| 0.5599 | 995.0 | 2985 | 1.1580 |
| 0.5599 | 996.0 | 2988 | 1.1585 |
| 0.5599 | 997.0 | 2991 | 1.1589 |
| 0.5599 | 998.0 | 2994 | 1.1589 |
| 0.5599 | 999.0 | 2997 | 1.1590 |
| 0.5597 | 1000.0 | 3000 | 1.1590 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.7
- Tokenizers 0.15.0
|
Nusrat1234/Mistral-7B-User-Profile
|
Nusrat1234
| 2024-03-07T21:44:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T21:43:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ferrazzipietro/Qwen1.5-14B-Chat__adapters_en.layer1_4_torch.bfloat16_32_32_0.05_4_0.0002
|
ferrazzipietro
| 2024-03-07T21:43:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T16:45:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gotzmann/v0.8-adapter
|
gotzmann
| 2024-03-07T21:39:56Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:gotzmann/uni",
"base_model:adapter:gotzmann/uni",
"license:other",
"region:us"
] | null | 2024-03-07T21:38:22Z |
---
license: other
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: gotzmann/uni
model-index:
- name: exported
results: []
---
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1.0
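Note that the reported total_train_batch_size follows from the other settings: 1 sample per device × 4 gradient-accumulation steps = 4. As a sketch, the equivalent `transformers.TrainingArguments` (with an illustrative `output_dir`):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="exported",          # illustrative
    learning_rate=3e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size: 1 * 4 = 4
    lr_scheduler_type="cosine",
    warmup_steps=100,
    num_train_epochs=1.0,
)
```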
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
sharukat/so_mpnet-base_question_classifier
|
sharukat
| 2024-03-07T21:35:42Z | 46 | 0 |
setfit
|
[
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:flax-sentence-embeddings/stackoverflow_mpnet-base",
"base_model:finetune:flax-sentence-embeddings/stackoverflow_mpnet-base",
"model-index",
"region:us"
] |
text-classification
| 2024-03-03T05:22:06Z |
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
- precision
- recall
- f1
widget:
- text: 'I''m trying to take a dataframe and convert them to tensors to train a model
in keras. I think it''s being triggered when I am converting my Y label to a tensor:
I''m getting the following error when casting y_train to tensor from slices: In
the tutorials this seems to work but I think those tutorials are doing multiclass
classifications whereas I''m doing a regression so y_train is a series not multiple
columns. Any suggestions of what I can do?'
- text: My weights are defined as I want to use the weights decay so I add, for example,
the argument to the tf.get_variable. Now I'm wondering if during the evaluation
phase this is still correct or maybe I have to set the regularizer factor to 0.
There is also another argument trainable. The documentation says If True also
add the variable to the graph collection GraphKeys.TRAINABLE_VARIABLES. which
is not clear to me. Should I use it? Can someone explain to me if the weights
decay effects in a sort of wrong way the evaluation step? How can I solve in that
case?
- text: 'Maybe I''m confused about what "inner" and "outer" tensor dimensions are,
but the documentation for tf.matmul puzzles me: Isn''t it the case that R-rank
arguments need to have matching (or no) R-2 outer dimensions, and that (as in
normal matrix multiplication) the Rth, inner dimension of the first argument must
match the R-1st dimension of the second. That is, in The outer dimensions a, ...,
z must be identical to a'', ..., z'' (or not exist), and x and x'' must match
(while p and q can be anything). Or put another way, shouldn''t the docs say:'
- text: 'I am using tf.data with reinitializable iterator to handle training and dev
set data. For each epoch, I initialize the training data set. The official documentation
has similar structure. I think this is not efficient especially if the training
set is large. Some of the resources I found online has sess.run(train_init_op,
feed_dict={X: X_train, Y: Y_train}) before the for loop to avoid this issue. But
then we can''t process the dev set after each epoch; we can only process it after
we are done iterating over epochs epochs. Is there a way to efficiently process
the dev set after each epoch?'
- text: 'Why is the pred variable being calculated before any of the training iterations
occur? I would expect that a pred would be generated (through the RNN() function)
during each pass through of the data for every iteration? There must be something
I am missing. Is pred something like a function object? I have looked at the docs
for tf.matmul() and that returns a tensor, not a function. Full source: https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/recurrent_network.py
Here is the code:'
pipeline_tag: text-classification
inference: true
base_model: flax-sentence-embeddings/stackoverflow_mpnet-base
model-index:
- name: SetFit with flax-sentence-embeddings/stackoverflow_mpnet-base
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.81875
name: Accuracy
- type: precision
value: 0.8248924988055423
name: Precision
- type: recall
value: 0.81875
name: Recall
- type: f1
value: 0.8178892421209625
name: F1
---
# SetFit with flax-sentence-embeddings/stackoverflow_mpnet-base
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [flax-sentence-embeddings/stackoverflow_mpnet-base](https://huggingface.co/flax-sentence-embeddings/stackoverflow_mpnet-base) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [flax-sentence-embeddings/stackoverflow_mpnet-base](https://huggingface.co/flax-sentence-embeddings/stackoverflow_mpnet-base)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1 | <ul><li>'In tf.gradients, there is a keyword argument grad_ys Why is grads_ys needed here? The docs here is implicit. Could you please give some specific purpose and code? And my example code for tf.gradients is'</li><li>'I am coding a Convolutional Neural Network to classify images in TensorFlow but there is a problem: When I try to feed my NumPy array of flattened images (3 channels with RGB values from 0 to 255) to a tf.estimator.inputs.numpy_input_fn I get the following error: My numpy_imput_fn looks like this: In the documentation for the function it is said that x should be a dict of NumPy array:'</li><li>'I am trying to use tf.pad. Here is my attempt to pad the tensor to length 20, with values 10. I get this error message I am looking at the documentation https://www.tensorflow.org/api_docs/python/tf/pad But I am unable to figure out how to shape the pad value'</li></ul> |
| 0 | <ul><li>"I am trying to use tf.train.shuffle_batch to consume batches of data from a TFRecord file using TensorFlow 1.0. The relevant functions are: The code enters through examine_batches(), having been handed the output of batch_generator(). batch_generator() calls tfrecord_to_graph_ops() and the problem is in that function, I believe. I am calling on a file with 1,000 bytes (numbers 0-9). If I call eval() on this in a Session, it shows me all 1,000 elements. But if I try to put it in a batch generator, it crashes. If I don't reshape targets, I get an error like ValueError: All shapes must be fully defined when tf.train.shuffle_batch is called. If I call targets.set_shape([1]), reminiscent of Google's CIFAR-10 example code, I get an error like Invalid argument: Shape mismatch in tuple component 0. Expected [1], got [1000] in tf.train.shuffle_batch. I also tried using tf.strided_slice to cut a chunk of the raw data - this doesn't crash but it results in just getting the first event over and over again. What is the right way to do this? To pull batches from a TFRecord file? Note, I could manually write a function that chopped up the raw byte data and did some sort of batching - especially easy if I am using the feed_dict approach to getting data into the graph - but I am trying to learn how to use TensorFlow's TFRecord files and how to use their built in batching functions. Thanks!"</li><li>"I am fairly new to TF and ML in general, so I have relied heavily on the documentation and tutorials provided by TF. I have been following along with the Tensorflow 2.0 Objection Detection API tutorial to the letter and have encountered an issue while training: everytime I run the training script model_main_tf2.py, it always hangs after the output: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2) after a number of depreciation warnings. I have tried many different ways of fixing this, including modifying the train script and pipeline.config files. My dataset isn't very large, less than 100 images with a max of 15 labels per image. useful info: Python 3.8.0 Tensorflow 2.4.4 (Non GPU) Windows 10 Pro Any and all help is appreciated!"</li><li>'I found two solutions to calculate FLOPS of Keras models (TF 2.x): [1] https://github.com/tensorflow/tensorflow/issues/32809#issuecomment-849439287 [2] https://github.com/tensorflow/tensorflow/issues/32809#issuecomment-841975359 At first glance, both seem to work perfectly when testing with tf.keras.applications.ResNet50(). The resulting FLOPS are identical and correspond to the FLOPS of the ResNet paper. But then I built a small GRU model and found different FLOPS for the two methods: This results in the following numbers: 13206 for method [1] and 18306 for method [2]. That is really confusing... Does anyone know how to correctly calculate FLOPS of recurrent Keras models in TF 2.x? EDIT I found another information: [3] https://github.com/tensorflow/tensorflow/issues/36391#issuecomment-596055100 When adding this argument to convert_variables_to_constants_v2, the outputs of [1] and [2] are the same when using my GRU example. The tensorflow documentation explains this argument as follows (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/framework/convert_to_constants.py): Can someone try to explain this?'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy | Precision | Recall | F1 |
|:--------|:---------|:----------|:-------|:-------|
| **all** | 0.8187 | 0.8249 | 0.8187 | 0.8179 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("sharukat/so_mpnet-base_question_classifier")
# Run inference
preds = model("I'm trying to take a dataframe and convert them to tensors to train a model in keras. I think it's being triggered when I am converting my Y label to a tensor: I'm getting the following error when casting y_train to tensor from slices: In the tutorials this seems to work but I think those tutorials are doing multiclass classifications whereas I'm doing a regression so y_train is a series not multiple columns. Any suggestions of what I can do?")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:---------|:----|
| Word count | 12 | 128.0219 | 907 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 320 |
| 1 | 320 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (1, 16)
- max_steps: -1
- sampling_strategy: unique
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- max_length: 256
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
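For orientation (a sketch, not from the original card), the hyperparameters above map directly onto `setfit.TrainingArguments`; the two-element tuples are (embedding phase, classifier phase) values. The two-example dataset below is hypothetical, since the real training data is not published here.
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical toy dataset with the "text"/"label" columns SetFit expects.
train_ds = Dataset.from_dict({
    "text": ["How do I pad a tensor in TensorFlow?", "Why does my training loop hang?"],
    "label": [1, 0],
})

model = SetFitModel.from_pretrained("flax-sentence-embeddings/stackoverflow_mpnet-base")

args = TrainingArguments(
    batch_size=(8, 8),            # (embedding phase, classifier phase)
    num_epochs=(1, 16),
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    max_length=256,
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()
```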
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:---------:|:-------------:|:---------------:|
| 0.0000 | 1 | 0.3266 | - |
| **1.0** | **25640** | **0.0** | **0.2863** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.13
- SetFit: 1.0.3
- Sentence Transformers: 2.5.1
- Transformers: 4.38.1
- PyTorch: 2.1.2
- Datasets: 2.18.0
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
cmu-lti/sotopia-pi-mistral-7b-SR
|
cmu-lti
| 2024-03-07T21:32:29Z | 90 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-03-07T21:26:43Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
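As a minimal sketch (not from the original card), this configuration corresponds to the following `transformers.BitsAndBytesConfig`; the base model id is an assumption, since the card only names the adapter:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

# "mistralai/Mistral-7B-Instruct-v0.1" is an assumed base; the card does not name one.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.1", quantization_config=bnb_config
)
model = PeftModel.from_pretrained(base, "cmu-lti/sotopia-pi-mistral-7b-SR")
```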
### Framework versions
- PEFT 0.5.0
|
pypy/VGMShield
|
pypy
| 2024-03-07T21:32:06Z | 0 | 3 | null |
[
"Fake Video Detection",
"Fake Video Source Tracing",
"video-classification",
"dataset:OpenGVLab/InternVid",
"dataset:TempoFunk/webvid-10M",
"arxiv:2402.13126",
"license:apache-2.0",
"region:us"
] |
video-classification
| 2024-02-21T20:23:29Z |
---
license: apache-2.0
datasets:
- OpenGVLab/InternVid
- TempoFunk/webvid-10M
pipeline_tag: video-classification
tags:
- Fake Video Detection
- Fake Video Source Tracing
---
<div style="text-align: center;">
<img src="./symbol.png" alt="symbol" style="height: 100px;"/>
</div>
# VGMShield: Mitigating Misuse of Video Generative Models
This repository provides pre-trained checkpoints for evaluating our detection and source tracing models. Our paper can be found [here](https://arxiv.org/abs/2402.13126).
**Detection Models** (label 0 = true, label 1 = false):
- [I3D](./detect/i3d/invid_i3d_i2v_i2v_best_model.pth)
- [MAE](./detect/mae/invid_mae_i2v_i2v_best_model.pth)
- [XCLIP](./detect/xclip/invid_xclip_i2v_i2v_best_model.pth)
- [MAE-sora](./detect/mae/detection_ft_sora.pt)
**Source Tracing Models**
> Labels: 0 Hotshot-xl, 1 i2vgen-xl (i2v), 2 i2vgen-xl (t2v), 3 LaVie, 4 SEINE, 5 Show-1, 6 Stable Video Diffusion, 7 VideoCrafter (i2v), 8 VideoCrafter (t2v)
- [I3D](./source_tracing/i3d/invid_i3d_st_best_model.pth)-based source tracing model
- [MAE](./source_tracing/mae/invid_mae_st_best_model.pth)-based source tracing model
- [XCLIP](./source_tracing/xclip/invid_xclip_st_best_model.pth)-based source tracing model
- [MAE](./source_tracing/mae/source_tracing_ft_sora.pt)-based source tracing model fine-tuned for Sora (Sora is label 9)
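A minimal sketch (an assumption, not from the original card): the checkpoints are standard PyTorch weight files, so they can be inspected with `torch.load`; the matching I3D / MAE / XCLIP video-classifier architectures come from the authors' code and are assumed to be available separately.
```python
import torch

# Inspect the raw weights; instantiating the classifier itself requires the
# matching architecture from the authors' repository.
weights = torch.load("detect/mae/invid_mae_i2v_i2v_best_model.pth", map_location="cpu")
print(type(weights))
# model.load_state_dict(weights)  # hypothetical: needs the matching model class
```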
|
cmu-lti/sotopia-pi-mistral-7b-BC
|
cmu-lti
| 2024-03-07T21:30:09Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-03-07T21:24:20Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0
|
farid1088/GQA_RoBERTa_legal_SQuAD_complete_augmented_1000
|
farid1088
| 2024-03-07T21:28:15Z | 27 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-03-06T02:07:37Z |
---
tags:
- generated_from_trainer
model-index:
- name: GQA_RoBERTa_legal_SQuAD_complete_augmented_1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GQA_RoBERTa_legal_SQuAD_complete_augmented_1000
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2040
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
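For orientation (a sketch, not part of the original card), these values correspond to the following 🤗 `TrainingArguments`; the output directory name is illustrative:
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GQA_RoBERTa_legal_SQuAD_complete_augmented_1000",  # illustrative
    learning_rate=2e-05,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1000,
)
```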
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 1.0 | 4 | 3.7757 |
| No log | 2.0 | 8 | 3.1210 |
| No log | 3.0 | 12 | 2.7424 |
| No log | 4.0 | 16 | 2.3990 |
| No log | 5.0 | 20 | 2.0583 |
| No log | 6.0 | 24 | 1.9699 |
| No log | 7.0 | 28 | 1.6942 |
| No log | 8.0 | 32 | 1.5022 |
| No log | 9.0 | 36 | 1.4585 |
| No log | 10.0 | 40 | 1.1937 |
| No log | 11.0 | 44 | 1.1496 |
| No log | 12.0 | 48 | 0.9856 |
| No log | 13.0 | 52 | 0.9389 |
| No log | 14.0 | 56 | 0.9621 |
| No log | 15.0 | 60 | 0.8580 |
| No log | 16.0 | 64 | 0.8093 |
| No log | 17.0 | 68 | 0.7783 |
| No log | 18.0 | 72 | 0.7656 |
| No log | 19.0 | 76 | 0.7793 |
| No log | 20.0 | 80 | 0.7327 |
| No log | 21.0 | 84 | 0.7109 |
| No log | 22.0 | 88 | 0.7120 |
| No log | 23.0 | 92 | 0.7099 |
| No log | 24.0 | 96 | 0.7191 |
| No log | 25.0 | 100 | 0.7350 |
| No log | 26.0 | 104 | 0.7634 |
| No log | 27.0 | 108 | 0.7498 |
| No log | 28.0 | 112 | 0.7353 |
| No log | 29.0 | 116 | 0.7319 |
| No log | 30.0 | 120 | 0.7603 |
| No log | 31.0 | 124 | 0.7701 |
| No log | 32.0 | 128 | 0.7818 |
| No log | 33.0 | 132 | 0.7904 |
| No log | 34.0 | 136 | 0.7580 |
| No log | 35.0 | 140 | 0.7640 |
| No log | 36.0 | 144 | 0.7558 |
| No log | 37.0 | 148 | 0.7470 |
| No log | 38.0 | 152 | 0.7730 |
| No log | 39.0 | 156 | 0.7450 |
| No log | 40.0 | 160 | 0.7516 |
| No log | 41.0 | 164 | 0.7475 |
| No log | 42.0 | 168 | 0.7306 |
| No log | 43.0 | 172 | 0.7488 |
| No log | 44.0 | 176 | 0.7604 |
| No log | 45.0 | 180 | 0.8035 |
| No log | 46.0 | 184 | 0.7837 |
| No log | 47.0 | 188 | 0.7307 |
| No log | 48.0 | 192 | 0.6987 |
| No log | 49.0 | 196 | 0.7281 |
| No log | 50.0 | 200 | 0.7453 |
| No log | 51.0 | 204 | 0.7811 |
| No log | 52.0 | 208 | 0.7951 |
| No log | 53.0 | 212 | 0.7833 |
| No log | 54.0 | 216 | 0.7961 |
| No log | 55.0 | 220 | 0.8255 |
| No log | 56.0 | 224 | 0.8038 |
| No log | 57.0 | 228 | 0.8384 |
| No log | 58.0 | 232 | 0.8412 |
| No log | 59.0 | 236 | 0.8206 |
| No log | 60.0 | 240 | 0.8224 |
| No log | 61.0 | 244 | 0.8638 |
| No log | 62.0 | 248 | 0.9014 |
| No log | 63.0 | 252 | 0.9255 |
| No log | 64.0 | 256 | 0.9019 |
| No log | 65.0 | 260 | 0.8741 |
| No log | 66.0 | 264 | 0.8442 |
| No log | 67.0 | 268 | 0.8526 |
| No log | 68.0 | 272 | 0.8702 |
| No log | 69.0 | 276 | 0.9321 |
| No log | 70.0 | 280 | 0.9450 |
| No log | 71.0 | 284 | 0.8868 |
| No log | 72.0 | 288 | 0.8622 |
| No log | 73.0 | 292 | 0.8586 |
| No log | 74.0 | 296 | 0.8935 |
| No log | 75.0 | 300 | 0.9010 |
| No log | 76.0 | 304 | 0.8703 |
| No log | 77.0 | 308 | 0.8726 |
| No log | 78.0 | 312 | 0.9113 |
| No log | 79.0 | 316 | 0.9175 |
| No log | 80.0 | 320 | 0.9173 |
| No log | 81.0 | 324 | 0.9550 |
| No log | 82.0 | 328 | 0.9649 |
| No log | 83.0 | 332 | 0.9917 |
| No log | 84.0 | 336 | 0.9783 |
| No log | 85.0 | 340 | 0.9558 |
| No log | 86.0 | 344 | 0.9425 |
| No log | 87.0 | 348 | 0.9323 |
| No log | 88.0 | 352 | 0.9471 |
| No log | 89.0 | 356 | 0.9749 |
| No log | 90.0 | 360 | 0.9638 |
| No log | 91.0 | 364 | 0.9881 |
| No log | 92.0 | 368 | 0.9697 |
| No log | 93.0 | 372 | 0.9189 |
| No log | 94.0 | 376 | 0.9036 |
| No log | 95.0 | 380 | 0.8745 |
| No log | 96.0 | 384 | 0.8811 |
| No log | 97.0 | 388 | 0.8967 |
| No log | 98.0 | 392 | 0.9032 |
| No log | 99.0 | 396 | 0.9201 |
| No log | 100.0 | 400 | 0.9524 |
| No log | 101.0 | 404 | 0.9983 |
| No log | 102.0 | 408 | 0.9742 |
| No log | 103.0 | 412 | 0.9834 |
| No log | 104.0 | 416 | 0.9480 |
| No log | 105.0 | 420 | 0.9367 |
| No log | 106.0 | 424 | 0.9340 |
| No log | 107.0 | 428 | 0.9454 |
| No log | 108.0 | 432 | 0.9553 |
| No log | 109.0 | 436 | 0.9694 |
| No log | 110.0 | 440 | 0.9696 |
| No log | 111.0 | 444 | 0.9280 |
| No log | 112.0 | 448 | 0.9166 |
| No log | 113.0 | 452 | 0.9406 |
| No log | 114.0 | 456 | 0.9372 |
| No log | 115.0 | 460 | 0.9147 |
| No log | 116.0 | 464 | 0.9267 |
| No log | 117.0 | 468 | 0.9665 |
| No log | 118.0 | 472 | 1.0231 |
| No log | 119.0 | 476 | 1.0291 |
| No log | 120.0 | 480 | 0.9973 |
| No log | 121.0 | 484 | 0.9516 |
| No log | 122.0 | 488 | 0.9134 |
| No log | 123.0 | 492 | 0.8852 |
| No log | 124.0 | 496 | 0.8535 |
| 0.9595 | 125.0 | 500 | 0.9003 |
| 0.9595 | 126.0 | 504 | 0.9523 |
| 0.9595 | 127.0 | 508 | 0.9925 |
| 0.9595 | 128.0 | 512 | 0.9736 |
| 0.9595 | 129.0 | 516 | 0.9584 |
| 0.9595 | 130.0 | 520 | 0.9625 |
| 0.9595 | 131.0 | 524 | 0.9533 |
| 0.9595 | 132.0 | 528 | 0.9774 |
| 0.9595 | 133.0 | 532 | 0.9898 |
| 0.9595 | 134.0 | 536 | 0.9657 |
| 0.9595 | 135.0 | 540 | 0.9627 |
| 0.9595 | 136.0 | 544 | 1.0049 |
| 0.9595 | 137.0 | 548 | 1.0241 |
| 0.9595 | 138.0 | 552 | 1.0184 |
| 0.9595 | 139.0 | 556 | 1.0387 |
| 0.9595 | 140.0 | 560 | 1.0528 |
| 0.9595 | 141.0 | 564 | 1.0510 |
| 0.9595 | 142.0 | 568 | 1.0153 |
| 0.9595 | 143.0 | 572 | 0.9628 |
| 0.9595 | 144.0 | 576 | 0.9999 |
| 0.9595 | 145.0 | 580 | 1.0139 |
| 0.9595 | 146.0 | 584 | 1.0149 |
| 0.9595 | 147.0 | 588 | 1.0016 |
| 0.9595 | 148.0 | 592 | 0.9516 |
| 0.9595 | 149.0 | 596 | 0.9290 |
| 0.9595 | 150.0 | 600 | 0.9084 |
| 0.9595 | 151.0 | 604 | 0.8736 |
| 0.9595 | 152.0 | 608 | 0.8832 |
| 0.9595 | 153.0 | 612 | 0.9093 |
| 0.9595 | 154.0 | 616 | 0.9489 |
| 0.9595 | 155.0 | 620 | 0.9548 |
| 0.9595 | 156.0 | 624 | 0.8944 |
| 0.9595 | 157.0 | 628 | 0.8681 |
| 0.9595 | 158.0 | 632 | 0.8733 |
| 0.9595 | 159.0 | 636 | 0.8852 |
| 0.9595 | 160.0 | 640 | 0.9133 |
| 0.9595 | 161.0 | 644 | 0.8900 |
| 0.9595 | 162.0 | 648 | 0.8863 |
| 0.9595 | 163.0 | 652 | 0.8928 |
| 0.9595 | 164.0 | 656 | 0.8959 |
| 0.9595 | 165.0 | 660 | 0.9163 |
| 0.9595 | 166.0 | 664 | 0.9739 |
| 0.9595 | 167.0 | 668 | 1.0204 |
| 0.9595 | 168.0 | 672 | 1.0059 |
| 0.9595 | 169.0 | 676 | 0.9578 |
| 0.9595 | 170.0 | 680 | 0.9313 |
| 0.9595 | 171.0 | 684 | 0.9084 |
| 0.9595 | 172.0 | 688 | 0.9836 |
| 0.9595 | 173.0 | 692 | 1.0601 |
| 0.9595 | 174.0 | 696 | 1.0884 |
| 0.9595 | 175.0 | 700 | 1.0779 |
| 0.9595 | 176.0 | 704 | 1.0599 |
| 0.9595 | 177.0 | 708 | 1.0422 |
| 0.9595 | 178.0 | 712 | 1.0271 |
| 0.9595 | 179.0 | 716 | 1.0100 |
| 0.9595 | 180.0 | 720 | 0.9945 |
| 0.9595 | 181.0 | 724 | 1.0018 |
| 0.9595 | 182.0 | 728 | 1.0234 |
| 0.9595 | 183.0 | 732 | 1.0380 |
| 0.9595 | 184.0 | 736 | 1.0525 |
| 0.9595 | 185.0 | 740 | 1.0420 |
| 0.9595 | 186.0 | 744 | 1.0325 |
| 0.9595 | 187.0 | 748 | 1.0125 |
| 0.9595 | 188.0 | 752 | 0.9891 |
| 0.9595 | 189.0 | 756 | 0.9515 |
| 0.9595 | 190.0 | 760 | 0.9495 |
| 0.9595 | 191.0 | 764 | 0.9642 |
| 0.9595 | 192.0 | 768 | 0.9876 |
| 0.9595 | 193.0 | 772 | 0.9985 |
| 0.9595 | 194.0 | 776 | 1.0227 |
| 0.9595 | 195.0 | 780 | 1.0730 |
| 0.9595 | 196.0 | 784 | 1.0871 |
| 0.9595 | 197.0 | 788 | 1.0918 |
| 0.9595 | 198.0 | 792 | 1.1092 |
| 0.9595 | 199.0 | 796 | 1.0989 |
| 0.9595 | 200.0 | 800 | 1.0992 |
| 0.9595 | 201.0 | 804 | 1.1034 |
| 0.9595 | 202.0 | 808 | 1.0881 |
| 0.9595 | 203.0 | 812 | 1.0707 |
| 0.9595 | 204.0 | 816 | 1.0777 |
| 0.9595 | 205.0 | 820 | 1.0758 |
| 0.9595 | 206.0 | 824 | 1.0684 |
| 0.9595 | 207.0 | 828 | 1.0629 |
| 0.9595 | 208.0 | 832 | 1.0659 |
| 0.9595 | 209.0 | 836 | 1.0585 |
| 0.9595 | 210.0 | 840 | 1.0132 |
| 0.9595 | 211.0 | 844 | 0.9791 |
| 0.9595 | 212.0 | 848 | 0.9761 |
| 0.9595 | 213.0 | 852 | 1.0348 |
| 0.9595 | 214.0 | 856 | 1.0910 |
| 0.9595 | 215.0 | 860 | 1.1354 |
| 0.9595 | 216.0 | 864 | 1.1348 |
| 0.9595 | 217.0 | 868 | 1.0884 |
| 0.9595 | 218.0 | 872 | 1.0430 |
| 0.9595 | 219.0 | 876 | 1.0202 |
| 0.9595 | 220.0 | 880 | 1.0097 |
| 0.9595 | 221.0 | 884 | 1.0151 |
| 0.9595 | 222.0 | 888 | 1.0096 |
| 0.9595 | 223.0 | 892 | 1.0302 |
| 0.9595 | 224.0 | 896 | 1.0635 |
| 0.9595 | 225.0 | 900 | 1.0611 |
| 0.9595 | 226.0 | 904 | 1.0548 |
| 0.9595 | 227.0 | 908 | 1.1173 |
| 0.9595 | 228.0 | 912 | 1.1561 |
| 0.9595 | 229.0 | 916 | 1.1550 |
| 0.9595 | 230.0 | 920 | 1.0254 |
| 0.9595 | 231.0 | 924 | 0.9364 |
| 0.9595 | 232.0 | 928 | 0.9316 |
| 0.9595 | 233.0 | 932 | 0.9717 |
| 0.9595 | 234.0 | 936 | 1.0406 |
| 0.9595 | 235.0 | 940 | 1.0643 |
| 0.9595 | 236.0 | 944 | 1.1092 |
| 0.9595 | 237.0 | 948 | 1.1197 |
| 0.9595 | 238.0 | 952 | 1.1270 |
| 0.9595 | 239.0 | 956 | 1.1300 |
| 0.9595 | 240.0 | 960 | 1.0921 |
| 0.9595 | 241.0 | 964 | 1.0446 |
| 0.9595 | 242.0 | 968 | 1.0234 |
| 0.9595 | 243.0 | 972 | 1.0067 |
| 0.9595 | 244.0 | 976 | 1.0324 |
| 0.9595 | 245.0 | 980 | 1.0434 |
| 0.9595 | 246.0 | 984 | 1.0502 |
| 0.9595 | 247.0 | 988 | 1.0618 |
| 0.9595 | 248.0 | 992 | 1.1352 |
| 0.9595 | 249.0 | 996 | 1.1672 |
| 0.4061 | 250.0 | 1000 | 1.1700 |
| 0.4061 | 251.0 | 1004 | 1.1416 |
| 0.4061 | 252.0 | 1008 | 1.1198 |
| 0.4061 | 253.0 | 1012 | 1.1226 |
| 0.4061 | 254.0 | 1016 | 1.1220 |
| 0.4061 | 255.0 | 1020 | 1.1317 |
| 0.4061 | 256.0 | 1024 | 1.1390 |
| 0.4061 | 257.0 | 1028 | 1.1069 |
| 0.4061 | 258.0 | 1032 | 1.0700 |
| 0.4061 | 259.0 | 1036 | 1.0657 |
| 0.4061 | 260.0 | 1040 | 1.0839 |
| 0.4061 | 261.0 | 1044 | 1.1030 |
| 0.4061 | 262.0 | 1048 | 1.1005 |
| 0.4061 | 263.0 | 1052 | 1.0882 |
| 0.4061 | 264.0 | 1056 | 1.0740 |
| 0.4061 | 265.0 | 1060 | 1.0710 |
| 0.4061 | 266.0 | 1064 | 1.0775 |
| 0.4061 | 267.0 | 1068 | 1.0908 |
| 0.4061 | 268.0 | 1072 | 1.1077 |
| 0.4061 | 269.0 | 1076 | 1.1204 |
| 0.4061 | 270.0 | 1080 | 1.1259 |
| 0.4061 | 271.0 | 1084 | 1.1208 |
| 0.4061 | 272.0 | 1088 | 1.1004 |
| 0.4061 | 273.0 | 1092 | 1.0761 |
| 0.4061 | 274.0 | 1096 | 1.0683 |
| 0.4061 | 275.0 | 1100 | 1.0663 |
| 0.4061 | 276.0 | 1104 | 1.0627 |
| 0.4061 | 277.0 | 1108 | 1.1069 |
| 0.4061 | 278.0 | 1112 | 1.1032 |
| 0.4061 | 279.0 | 1116 | 1.0401 |
| 0.4061 | 280.0 | 1120 | 1.0408 |
| 0.4061 | 281.0 | 1124 | 1.1004 |
| 0.4061 | 282.0 | 1128 | 1.1623 |
| 0.4061 | 283.0 | 1132 | 1.1512 |
| 0.4061 | 284.0 | 1136 | 1.1242 |
| 0.4061 | 285.0 | 1140 | 1.0919 |
| 0.4061 | 286.0 | 1144 | 1.0818 |
| 0.4061 | 287.0 | 1148 | 1.0703 |
| 0.4061 | 288.0 | 1152 | 1.0501 |
| 0.4061 | 289.0 | 1156 | 1.0347 |
| 0.4061 | 290.0 | 1160 | 1.0299 |
| 0.4061 | 291.0 | 1164 | 1.0641 |
| 0.4061 | 292.0 | 1168 | 1.0679 |
| 0.4061 | 293.0 | 1172 | 1.0680 |
| 0.4061 | 294.0 | 1176 | 1.1041 |
| 0.4061 | 295.0 | 1180 | 1.1802 |
| 0.4061 | 296.0 | 1184 | 1.1971 |
| 0.4061 | 297.0 | 1188 | 1.1793 |
| 0.4061 | 298.0 | 1192 | 1.1459 |
| 0.4061 | 299.0 | 1196 | 1.1035 |
| 0.4061 | 300.0 | 1200 | 1.0577 |
| 0.4061 | 301.0 | 1204 | 1.0544 |
| 0.4061 | 302.0 | 1208 | 1.0737 |
| 0.4061 | 303.0 | 1212 | 1.0819 |
| 0.4061 | 304.0 | 1216 | 1.0899 |
| 0.4061 | 305.0 | 1220 | 1.0885 |
| 0.4061 | 306.0 | 1224 | 1.0755 |
| 0.4061 | 307.0 | 1228 | 1.0139 |
| 0.4061 | 308.0 | 1232 | 0.9849 |
| 0.4061 | 309.0 | 1236 | 0.9781 |
| 0.4061 | 310.0 | 1240 | 0.9953 |
| 0.4061 | 311.0 | 1244 | 1.0138 |
| 0.4061 | 312.0 | 1248 | 1.0119 |
| 0.4061 | 313.0 | 1252 | 1.0704 |
| 0.4061 | 314.0 | 1256 | 1.1161 |
| 0.4061 | 315.0 | 1260 | 1.1500 |
| 0.4061 | 316.0 | 1264 | 1.1862 |
| 0.4061 | 317.0 | 1268 | 1.1833 |
| 0.4061 | 318.0 | 1272 | 1.1706 |
| 0.4061 | 319.0 | 1276 | 1.1517 |
| 0.4061 | 320.0 | 1280 | 1.1309 |
| 0.4061 | 321.0 | 1284 | 1.0936 |
| 0.4061 | 322.0 | 1288 | 1.0957 |
| 0.4061 | 323.0 | 1292 | 1.1080 |
| 0.4061 | 324.0 | 1296 | 1.1087 |
| 0.4061 | 325.0 | 1300 | 1.1314 |
| 0.4061 | 326.0 | 1304 | 1.1757 |
| 0.4061 | 327.0 | 1308 | 1.1896 |
| 0.4061 | 328.0 | 1312 | 1.1742 |
| 0.4061 | 329.0 | 1316 | 1.1661 |
| 0.4061 | 330.0 | 1320 | 1.1675 |
| 0.4061 | 331.0 | 1324 | 1.1691 |
| 0.4061 | 332.0 | 1328 | 1.1715 |
| 0.4061 | 333.0 | 1332 | 1.1513 |
| 0.4061 | 334.0 | 1336 | 1.1347 |
| 0.4061 | 335.0 | 1340 | 1.1386 |
| 0.4061 | 336.0 | 1344 | 1.1587 |
| 0.4061 | 337.0 | 1348 | 1.1739 |
| 0.4061 | 338.0 | 1352 | 1.1790 |
| 0.4061 | 339.0 | 1356 | 1.1615 |
| 0.4061 | 340.0 | 1360 | 1.1484 |
| 0.4061 | 341.0 | 1364 | 1.1376 |
| 0.4061 | 342.0 | 1368 | 1.1258 |
| 0.4061 | 343.0 | 1372 | 1.1142 |
| 0.4061 | 344.0 | 1376 | 1.1062 |
| 0.4061 | 345.0 | 1380 | 1.0986 |
| 0.4061 | 346.0 | 1384 | 1.0905 |
| 0.4061 | 347.0 | 1388 | 1.0776 |
| 0.4061 | 348.0 | 1392 | 1.0687 |
| 0.4061 | 349.0 | 1396 | 1.0865 |
| 0.4061 | 350.0 | 1400 | 1.0822 |
| 0.4061 | 351.0 | 1404 | 1.0831 |
| 0.4061 | 352.0 | 1408 | 1.0914 |
| 0.4061 | 353.0 | 1412 | 1.1018 |
| 0.4061 | 354.0 | 1416 | 1.1078 |
| 0.4061 | 355.0 | 1420 | 1.1190 |
| 0.4061 | 356.0 | 1424 | 1.1374 |
| 0.4061 | 357.0 | 1428 | 1.1534 |
| 0.4061 | 358.0 | 1432 | 1.2011 |
| 0.4061 | 359.0 | 1436 | 1.2166 |
| 0.4061 | 360.0 | 1440 | 1.2168 |
| 0.4061 | 361.0 | 1444 | 1.2144 |
| 0.4061 | 362.0 | 1448 | 1.1989 |
| 0.4061 | 363.0 | 1452 | 1.1832 |
| 0.4061 | 364.0 | 1456 | 1.1531 |
| 0.4061 | 365.0 | 1460 | 1.1422 |
| 0.4061 | 366.0 | 1464 | 1.1279 |
| 0.4061 | 367.0 | 1468 | 1.1210 |
| 0.4061 | 368.0 | 1472 | 1.1114 |
| 0.4061 | 369.0 | 1476 | 1.1034 |
| 0.4061 | 370.0 | 1480 | 1.0998 |
| 0.4061 | 371.0 | 1484 | 1.1009 |
| 0.4061 | 372.0 | 1488 | 1.1048 |
| 0.4061 | 373.0 | 1492 | 1.1002 |
| 0.4061 | 374.0 | 1496 | 1.0920 |
| 0.4027 | 375.0 | 1500 | 1.0851 |
| 0.4027 | 376.0 | 1504 | 1.0787 |
| 0.4027 | 377.0 | 1508 | 1.0733 |
| 0.4027 | 378.0 | 1512 | 1.0695 |
| 0.4027 | 379.0 | 1516 | 1.0686 |
| 0.4027 | 380.0 | 1520 | 1.0687 |
| 0.4027 | 381.0 | 1524 | 1.0757 |
| 0.4027 | 382.0 | 1528 | 1.1245 |
| 0.4027 | 383.0 | 1532 | 1.1659 |
| 0.4027 | 384.0 | 1536 | 1.1729 |
| 0.4027 | 385.0 | 1540 | 1.1401 |
| 0.4027 | 386.0 | 1544 | 1.1316 |
| 0.4027 | 387.0 | 1548 | 1.1445 |
| 0.4027 | 388.0 | 1552 | 1.1504 |
| 0.4027 | 389.0 | 1556 | 1.1461 |
| 0.4027 | 390.0 | 1560 | 1.1450 |
| 0.4027 | 391.0 | 1564 | 1.1428 |
| 0.4027 | 392.0 | 1568 | 1.1392 |
| 0.4027 | 393.0 | 1572 | 1.1304 |
| 0.4027 | 394.0 | 1576 | 1.1038 |
| 0.4027 | 395.0 | 1580 | 1.0931 |
| 0.4027 | 396.0 | 1584 | 1.0837 |
| 0.4027 | 397.0 | 1588 | 1.0824 |
| 0.4027 | 398.0 | 1592 | 1.0808 |
| 0.4027 | 399.0 | 1596 | 1.0819 |
| 0.4027 | 400.0 | 1600 | 1.0794 |
| 0.4027 | 401.0 | 1604 | 1.0887 |
| 0.4027 | 402.0 | 1608 | 1.0771 |
| 0.4027 | 403.0 | 1612 | 1.1094 |
| 0.4027 | 404.0 | 1616 | 1.1436 |
| 0.4027 | 405.0 | 1620 | 1.1654 |
| 0.4027 | 406.0 | 1624 | 1.1661 |
| 0.4027 | 407.0 | 1628 | 1.1561 |
| 0.4027 | 408.0 | 1632 | 1.1425 |
| 0.4027 | 409.0 | 1636 | 1.1329 |
| 0.4027 | 410.0 | 1640 | 1.1031 |
| 0.4027 | 411.0 | 1644 | 1.0969 |
| 0.4027 | 412.0 | 1648 | 1.1374 |
| 0.4027 | 413.0 | 1652 | 1.2151 |
| 0.4027 | 414.0 | 1656 | 1.2531 |
| 0.4027 | 415.0 | 1660 | 1.2576 |
| 0.4027 | 416.0 | 1664 | 1.2520 |
| 0.4027 | 417.0 | 1668 | 1.2261 |
| 0.4027 | 418.0 | 1672 | 1.1952 |
| 0.4027 | 419.0 | 1676 | 1.1627 |
| 0.4027 | 420.0 | 1680 | 1.1412 |
| 0.4027 | 421.0 | 1684 | 1.1316 |
| 0.4027 | 422.0 | 1688 | 1.1335 |
| 0.4027 | 423.0 | 1692 | 1.1366 |
| 0.4027 | 424.0 | 1696 | 1.1405 |
| 0.4027 | 425.0 | 1700 | 1.1503 |
| 0.4027 | 426.0 | 1704 | 1.1579 |
| 0.4027 | 427.0 | 1708 | 1.1629 |
| 0.4027 | 428.0 | 1712 | 1.1647 |
| 0.4027 | 429.0 | 1716 | 1.1752 |
| 0.4027 | 430.0 | 1720 | 1.2149 |
| 0.4027 | 431.0 | 1724 | 1.2361 |
| 0.4027 | 432.0 | 1728 | 1.2406 |
| 0.4027 | 433.0 | 1732 | 1.2271 |
| 0.4027 | 434.0 | 1736 | 1.2130 |
| 0.4027 | 435.0 | 1740 | 1.2011 |
| 0.4027 | 436.0 | 1744 | 1.1930 |
| 0.4027 | 437.0 | 1748 | 1.1895 |
| 0.4027 | 438.0 | 1752 | 1.1903 |
| 0.4027 | 439.0 | 1756 | 1.1907 |
| 0.4027 | 440.0 | 1760 | 1.1871 |
| 0.4027 | 441.0 | 1764 | 1.1850 |
| 0.4027 | 442.0 | 1768 | 1.1835 |
| 0.4027 | 443.0 | 1772 | 1.1841 |
| 0.4027 | 444.0 | 1776 | 1.1790 |
| 0.4027 | 445.0 | 1780 | 1.1860 |
| 0.4027 | 446.0 | 1784 | 1.1998 |
| 0.4027 | 447.0 | 1788 | 1.2106 |
| 0.4027 | 448.0 | 1792 | 1.2091 |
| 0.4027 | 449.0 | 1796 | 1.2059 |
| 0.4027 | 450.0 | 1800 | 1.2032 |
| 0.4027 | 451.0 | 1804 | 1.2225 |
| 0.4027 | 452.0 | 1808 | 1.2336 |
| 0.4027 | 453.0 | 1812 | 1.2409 |
| 0.4027 | 454.0 | 1816 | 1.2450 |
| 0.4027 | 455.0 | 1820 | 1.2479 |
| 0.4027 | 456.0 | 1824 | 1.2373 |
| 0.4027 | 457.0 | 1828 | 1.2258 |
| 0.4027 | 458.0 | 1832 | 1.2178 |
| 0.4027 | 459.0 | 1836 | 1.2142 |
| 0.4027 | 460.0 | 1840 | 1.2237 |
| 0.4027 | 461.0 | 1844 | 1.2365 |
| 0.4027 | 462.0 | 1848 | 1.2448 |
| 0.4027 | 463.0 | 1852 | 1.2462 |
| 0.4027 | 464.0 | 1856 | 1.2458 |
| 0.4027 | 465.0 | 1860 | 1.2426 |
| 0.4027 | 466.0 | 1864 | 1.2366 |
| 0.4027 | 467.0 | 1868 | 1.2280 |
| 0.4027 | 468.0 | 1872 | 1.2097 |
| 0.4027 | 469.0 | 1876 | 1.1996 |
| 0.4027 | 470.0 | 1880 | 1.1970 |
| 0.4027 | 471.0 | 1884 | 1.1946 |
| 0.4027 | 472.0 | 1888 | 1.1921 |
| 0.4027 | 473.0 | 1892 | 1.1885 |
| 0.4027 | 474.0 | 1896 | 1.1959 |
| 0.4027 | 475.0 | 1900 | 1.2028 |
| 0.4027 | 476.0 | 1904 | 1.2091 |
| 0.4027 | 477.0 | 1908 | 1.2131 |
| 0.4027 | 478.0 | 1912 | 1.2149 |
| 0.4027 | 479.0 | 1916 | 1.2142 |
| 0.4027 | 480.0 | 1920 | 1.2106 |
| 0.4027 | 481.0 | 1924 | 1.2185 |
| 0.4027 | 482.0 | 1928 | 1.2249 |
| 0.4027 | 483.0 | 1932 | 1.2221 |
| 0.4027 | 484.0 | 1936 | 1.2240 |
| 0.4027 | 485.0 | 1940 | 1.2291 |
| 0.4027 | 486.0 | 1944 | 1.2215 |
| 0.4027 | 487.0 | 1948 | 1.2306 |
| 0.4027 | 488.0 | 1952 | 1.2364 |
| 0.4027 | 489.0 | 1956 | 1.2394 |
| 0.4027 | 490.0 | 1960 | 1.2425 |
| 0.4027 | 491.0 | 1964 | 1.2441 |
| 0.4027 | 492.0 | 1968 | 1.2484 |
| 0.4027 | 493.0 | 1972 | 1.2533 |
| 0.4027 | 494.0 | 1976 | 1.2587 |
| 0.4027 | 495.0 | 1980 | 1.2861 |
| 0.4027 | 496.0 | 1984 | 1.3230 |
| 0.4027 | 497.0 | 1988 | 1.3310 |
| 0.4027 | 498.0 | 1992 | 1.3040 |
| 0.4027 | 499.0 | 1996 | 1.2828 |
| 0.4015 | 500.0 | 2000 | 1.2658 |
| 0.4015 | 501.0 | 2004 | 1.2563 |
| 0.4015 | 502.0 | 2008 | 1.2468 |
| 0.4015 | 503.0 | 2012 | 1.2381 |
| 0.4015 | 504.0 | 2016 | 1.2305 |
| 0.4015 | 505.0 | 2020 | 1.2271 |
| 0.4015 | 506.0 | 2024 | 1.2447 |
| 0.4015 | 507.0 | 2028 | 1.2642 |
| 0.4015 | 508.0 | 2032 | 1.2743 |
| 0.4015 | 509.0 | 2036 | 1.2797 |
| 0.4015 | 510.0 | 2040 | 1.2839 |
| 0.4015 | 511.0 | 2044 | 1.2645 |
| 0.4015 | 512.0 | 2048 | 1.2411 |
| 0.4015 | 513.0 | 2052 | 1.2261 |
| 0.4015 | 514.0 | 2056 | 1.2141 |
| 0.4015 | 515.0 | 2060 | 1.2026 |
| 0.4015 | 516.0 | 2064 | 1.1991 |
| 0.4015 | 517.0 | 2068 | 1.2004 |
| 0.4015 | 518.0 | 2072 | 1.1927 |
| 0.4015 | 519.0 | 2076 | 1.2065 |
| 0.4015 | 520.0 | 2080 | 1.1876 |
| 0.4015 | 521.0 | 2084 | 1.1670 |
| 0.4015 | 522.0 | 2088 | 1.2298 |
| 0.4015 | 523.0 | 2092 | 1.2412 |
| 0.4015 | 524.0 | 2096 | 1.2469 |
| 0.4015 | 525.0 | 2100 | 1.2639 |
| 0.4015 | 526.0 | 2104 | 1.2845 |
| 0.4015 | 527.0 | 2108 | 1.2928 |
| 0.4015 | 528.0 | 2112 | 1.2928 |
| 0.4015 | 529.0 | 2116 | 1.2901 |
| 0.4015 | 530.0 | 2120 | 1.2863 |
| 0.4015 | 531.0 | 2124 | 1.2819 |
| 0.4015 | 532.0 | 2128 | 1.2756 |
| 0.4015 | 533.0 | 2132 | 1.2602 |
| 0.4015 | 534.0 | 2136 | 1.2220 |
| 0.4015 | 535.0 | 2140 | 1.1909 |
| 0.4015 | 536.0 | 2144 | 1.1784 |
| 0.4015 | 537.0 | 2148 | 1.1824 |
| 0.4015 | 538.0 | 2152 | 1.1839 |
| 0.4015 | 539.0 | 2156 | 1.1836 |
| 0.4015 | 540.0 | 2160 | 1.1816 |
| 0.4015 | 541.0 | 2164 | 1.1767 |
| 0.4015 | 542.0 | 2168 | 1.1693 |
| 0.4015 | 543.0 | 2172 | 1.1573 |
| 0.4015 | 544.0 | 2176 | 1.1424 |
| 0.4015 | 545.0 | 2180 | 1.1312 |
| 0.4015 | 546.0 | 2184 | 1.1262 |
| 0.4015 | 547.0 | 2188 | 1.1330 |
| 0.4015 | 548.0 | 2192 | 1.1370 |
| 0.4015 | 549.0 | 2196 | 1.1386 |
| 0.4015 | 550.0 | 2200 | 1.1450 |
| 0.4015 | 551.0 | 2204 | 1.1489 |
| 0.4015 | 552.0 | 2208 | 1.1465 |
| 0.4015 | 553.0 | 2212 | 1.1458 |
| 0.4015 | 554.0 | 2216 | 1.1438 |
| 0.4015 | 555.0 | 2220 | 1.1405 |
| 0.4015 | 556.0 | 2224 | 1.1413 |
| 0.4015 | 557.0 | 2228 | 1.1443 |
| 0.4015 | 558.0 | 2232 | 1.1478 |
| 0.4015 | 559.0 | 2236 | 1.1519 |
| 0.4015 | 560.0 | 2240 | 1.1579 |
| 0.4015 | 561.0 | 2244 | 1.1543 |
| 0.4015 | 562.0 | 2248 | 1.1479 |
| 0.4015 | 563.0 | 2252 | 1.1474 |
| 0.4015 | 564.0 | 2256 | 1.1388 |
| 0.4015 | 565.0 | 2260 | 1.1312 |
| 0.4015 | 566.0 | 2264 | 1.1319 |
| 0.4015 | 567.0 | 2268 | 1.1345 |
| 0.4015 | 568.0 | 2272 | 1.1379 |
| 0.4015 | 569.0 | 2276 | 1.1343 |
| 0.4015 | 570.0 | 2280 | 1.1312 |
| 0.4015 | 571.0 | 2284 | 1.1294 |
| 0.4015 | 572.0 | 2288 | 1.1286 |
| 0.4015 | 573.0 | 2292 | 1.1313 |
| 0.4015 | 574.0 | 2296 | 1.1344 |
| 0.4015 | 575.0 | 2300 | 1.1408 |
| 0.4015 | 576.0 | 2304 | 1.1502 |
| 0.4015 | 577.0 | 2308 | 1.1605 |
| 0.4015 | 578.0 | 2312 | 1.1661 |
| 0.4015 | 579.0 | 2316 | 1.1772 |
| 0.4015 | 580.0 | 2320 | 1.1835 |
| 0.4015 | 581.0 | 2324 | 1.1882 |
| 0.4015 | 582.0 | 2328 | 1.1931 |
| 0.4015 | 583.0 | 2332 | 1.1966 |
| 0.4015 | 584.0 | 2336 | 1.1995 |
| 0.4015 | 585.0 | 2340 | 1.1999 |
| 0.4015 | 586.0 | 2344 | 1.1976 |
| 0.4015 | 587.0 | 2348 | 1.2158 |
| 0.4015 | 588.0 | 2352 | 1.2351 |
| 0.4015 | 589.0 | 2356 | 1.2386 |
| 0.4015 | 590.0 | 2360 | 1.2322 |
| 0.4015 | 591.0 | 2364 | 1.2268 |
| 0.4015 | 592.0 | 2368 | 1.2168 |
| 0.4015 | 593.0 | 2372 | 1.2058 |
| 0.4015 | 594.0 | 2376 | 1.1940 |
| 0.4015 | 595.0 | 2380 | 1.1846 |
| 0.4015 | 596.0 | 2384 | 1.1756 |
| 0.4015 | 597.0 | 2388 | 1.1728 |
| 0.4015 | 598.0 | 2392 | 1.1731 |
| 0.4015 | 599.0 | 2396 | 1.1747 |
| 0.4015 | 600.0 | 2400 | 1.1754 |
| 0.4015 | 601.0 | 2404 | 1.1738 |
| 0.4015 | 602.0 | 2408 | 1.1766 |
| 0.4015 | 603.0 | 2412 | 1.1779 |
| 0.4015 | 604.0 | 2416 | 1.1781 |
| 0.4015 | 605.0 | 2420 | 1.1755 |
| 0.4015 | 606.0 | 2424 | 1.1726 |
| 0.4015 | 607.0 | 2428 | 1.1691 |
| 0.4015 | 608.0 | 2432 | 1.1652 |
| 0.4015 | 609.0 | 2436 | 1.1594 |
| 0.4015 | 610.0 | 2440 | 1.1497 |
| 0.4015 | 611.0 | 2444 | 1.1450 |
| 0.4015 | 612.0 | 2448 | 1.1467 |
| 0.4015 | 613.0 | 2452 | 1.1463 |
| 0.4015 | 614.0 | 2456 | 1.1456 |
| 0.4015 | 615.0 | 2460 | 1.1613 |
| 0.4015 | 616.0 | 2464 | 1.1746 |
| 0.4015 | 617.0 | 2468 | 1.1846 |
| 0.4015 | 618.0 | 2472 | 1.1864 |
| 0.4015 | 619.0 | 2476 | 1.1849 |
| 0.4015 | 620.0 | 2480 | 1.1839 |
| 0.4015 | 621.0 | 2484 | 1.1802 |
| 0.4015 | 622.0 | 2488 | 1.1759 |
| 0.4015 | 623.0 | 2492 | 1.1711 |
| 0.4015 | 624.0 | 2496 | 1.1654 |
| 0.4009 | 625.0 | 2500 | 1.1607 |
| 0.4009 | 626.0 | 2504 | 1.1558 |
| 0.4009 | 627.0 | 2508 | 1.1530 |
| 0.4009 | 628.0 | 2512 | 1.1523 |
| 0.4009 | 629.0 | 2516 | 1.1515 |
| 0.4009 | 630.0 | 2520 | 1.1477 |
| 0.4009 | 631.0 | 2524 | 1.1447 |
| 0.4009 | 632.0 | 2528 | 1.1449 |
| 0.4009 | 633.0 | 2532 | 1.1450 |
| 0.4009 | 634.0 | 2536 | 1.1520 |
| 0.4009 | 635.0 | 2540 | 1.1594 |
| 0.4009 | 636.0 | 2544 | 1.1627 |
| 0.4009 | 637.0 | 2548 | 1.1648 |
| 0.4009 | 638.0 | 2552 | 1.1668 |
| 0.4009 | 639.0 | 2556 | 1.1679 |
| 0.4009 | 640.0 | 2560 | 1.1674 |
| 0.4009 | 641.0 | 2564 | 1.1629 |
| 0.4009 | 642.0 | 2568 | 1.1590 |
| 0.4009 | 643.0 | 2572 | 1.1572 |
| 0.4009 | 644.0 | 2576 | 1.1574 |
| 0.4009 | 645.0 | 2580 | 1.1560 |
| 0.4009 | 646.0 | 2584 | 1.1547 |
| 0.4009 | 647.0 | 2588 | 1.1626 |
| 0.4009 | 648.0 | 2592 | 1.1698 |
| 0.4009 | 649.0 | 2596 | 1.1810 |
| 0.4009 | 650.0 | 2600 | 1.1890 |
| 0.4009 | 651.0 | 2604 | 1.1906 |
| 0.4009 | 652.0 | 2608 | 1.1845 |
| 0.4009 | 653.0 | 2612 | 1.1802 |
| 0.4009 | 654.0 | 2616 | 1.1777 |
| 0.4009 | 655.0 | 2620 | 1.1755 |
| 0.4009 | 656.0 | 2624 | 1.1743 |
| 0.4009 | 657.0 | 2628 | 1.1838 |
| 0.4009 | 658.0 | 2632 | 1.1907 |
| 0.4009 | 659.0 | 2636 | 1.1953 |
| 0.4009 | 660.0 | 2640 | 1.2169 |
| 0.4009 | 661.0 | 2644 | 1.2343 |
| 0.4009 | 662.0 | 2648 | 1.2517 |
| 0.4009 | 663.0 | 2652 | 1.2641 |
| 0.4009 | 664.0 | 2656 | 1.2559 |
| 0.4009 | 665.0 | 2660 | 1.2292 |
| 0.4009 | 666.0 | 2664 | 1.2040 |
| 0.4009 | 667.0 | 2668 | 1.1851 |
| 0.4009 | 668.0 | 2672 | 1.1710 |
| 0.4009 | 669.0 | 2676 | 1.1577 |
| 0.4009 | 670.0 | 2680 | 1.1502 |
| 0.4009 | 671.0 | 2684 | 1.1591 |
| 0.4009 | 672.0 | 2688 | 1.1709 |
| 0.4009 | 673.0 | 2692 | 1.1813 |
| 0.4009 | 674.0 | 2696 | 1.1893 |
| 0.4009 | 675.0 | 2700 | 1.1942 |
| 0.4009 | 676.0 | 2704 | 1.1949 |
| 0.4009 | 677.0 | 2708 | 1.1814 |
| 0.4009 | 678.0 | 2712 | 1.1825 |
| 0.4009 | 679.0 | 2716 | 1.1880 |
| 0.4009 | 680.0 | 2720 | 1.1829 |
| 0.4009 | 681.0 | 2724 | 1.1667 |
| 0.4009 | 682.0 | 2728 | 1.1637 |
| 0.4009 | 683.0 | 2732 | 1.1631 |
| 0.4009 | 684.0 | 2736 | 1.1605 |
| 0.4009 | 685.0 | 2740 | 1.1599 |
| 0.4009 | 686.0 | 2744 | 1.1571 |
| 0.4009 | 687.0 | 2748 | 1.1528 |
| 0.4009 | 688.0 | 2752 | 1.1541 |
| 0.4009 | 689.0 | 2756 | 1.1628 |
| 0.4009 | 690.0 | 2760 | 1.1750 |
| 0.4009 | 691.0 | 2764 | 1.1855 |
| 0.4009 | 692.0 | 2768 | 1.1928 |
| 0.4009 | 693.0 | 2772 | 1.1962 |
| 0.4009 | 694.0 | 2776 | 1.1970 |
| 0.4009 | 695.0 | 2780 | 1.1976 |
| 0.4009 | 696.0 | 2784 | 1.1929 |
| 0.4009 | 697.0 | 2788 | 1.1959 |
| 0.4009 | 698.0 | 2792 | 1.2003 |
| 0.4009 | 699.0 | 2796 | 1.2046 |
| 0.4009 | 700.0 | 2800 | 1.2084 |
| 0.4009 | 701.0 | 2804 | 1.2097 |
| 0.4009 | 702.0 | 2808 | 1.2109 |
| 0.4009 | 703.0 | 2812 | 1.2124 |
| 0.4009 | 704.0 | 2816 | 1.2159 |
| 0.4009 | 705.0 | 2820 | 1.2190 |
| 0.4009 | 706.0 | 2824 | 1.2203 |
| 0.4009 | 707.0 | 2828 | 1.2186 |
| 0.4009 | 708.0 | 2832 | 1.2156 |
| 0.4009 | 709.0 | 2836 | 1.2086 |
| 0.4009 | 710.0 | 2840 | 1.2024 |
| 0.4009 | 711.0 | 2844 | 1.1998 |
| 0.4009 | 712.0 | 2848 | 1.1986 |
| 0.4009 | 713.0 | 2852 | 1.1981 |
| 0.4009 | 714.0 | 2856 | 1.2001 |
| 0.4009 | 715.0 | 2860 | 1.2019 |
| 0.4009 | 716.0 | 2864 | 1.2038 |
| 0.4009 | 717.0 | 2868 | 1.2051 |
| 0.4009 | 718.0 | 2872 | 1.1869 |
| 0.4009 | 719.0 | 2876 | 1.1780 |
| 0.4009 | 720.0 | 2880 | 1.1821 |
| 0.4009 | 721.0 | 2884 | 1.1875 |
| 0.4009 | 722.0 | 2888 | 1.1881 |
| 0.4009 | 723.0 | 2892 | 1.1867 |
| 0.4009 | 724.0 | 2896 | 1.1862 |
| 0.4009 | 725.0 | 2900 | 1.1858 |
| 0.4009 | 726.0 | 2904 | 1.1841 |
| 0.4009 | 727.0 | 2908 | 1.1803 |
| 0.4009 | 728.0 | 2912 | 1.1781 |
| 0.4009 | 729.0 | 2916 | 1.1751 |
| 0.4009 | 730.0 | 2920 | 1.1735 |
| 0.4009 | 731.0 | 2924 | 1.1709 |
| 0.4009 | 732.0 | 2928 | 1.1676 |
| 0.4009 | 733.0 | 2932 | 1.1643 |
| 0.4009 | 734.0 | 2936 | 1.1640 |
| 0.4009 | 735.0 | 2940 | 1.1636 |
| 0.4009 | 736.0 | 2944 | 1.1596 |
| 0.4009 | 737.0 | 2948 | 1.1704 |
| 0.4009 | 738.0 | 2952 | 1.1773 |
| 0.4009 | 739.0 | 2956 | 1.1814 |
| 0.4009 | 740.0 | 2960 | 1.1891 |
| 0.4009 | 741.0 | 2964 | 1.1954 |
| 0.4009 | 742.0 | 2968 | 1.2006 |
| 0.4009 | 743.0 | 2972 | 1.1996 |
| 0.4009 | 744.0 | 2976 | 1.1986 |
| 0.4009 | 745.0 | 2980 | 1.1979 |
| 0.4009 | 746.0 | 2984 | 1.1958 |
| 0.4009 | 747.0 | 2988 | 1.1947 |
| 0.4009 | 748.0 | 2992 | 1.1930 |
| 0.4009 | 749.0 | 2996 | 1.1894 |
| 0.4006 | 750.0 | 3000 | 1.1871 |
| 0.4006 | 751.0 | 3004 | 1.1853 |
| 0.4006 | 752.0 | 3008 | 1.1854 |
| 0.4006 | 753.0 | 3012 | 1.1866 |
| 0.4006 | 754.0 | 3016 | 1.1901 |
| 0.4006 | 755.0 | 3020 | 1.1924 |
| 0.4006 | 756.0 | 3024 | 1.1946 |
| 0.4006 | 757.0 | 3028 | 1.2176 |
| 0.4006 | 758.0 | 3032 | 1.2392 |
| 0.4006 | 759.0 | 3036 | 1.2502 |
| 0.4006 | 760.0 | 3040 | 1.2617 |
| 0.4006 | 761.0 | 3044 | 1.2924 |
| 0.4006 | 762.0 | 3048 | 1.3111 |
| 0.4006 | 763.0 | 3052 | 1.3042 |
| 0.4006 | 764.0 | 3056 | 1.2828 |
| 0.4006 | 765.0 | 3060 | 1.2628 |
| 0.4006 | 766.0 | 3064 | 1.2553 |
| 0.4006 | 767.0 | 3068 | 1.2600 |
| 0.4006 | 768.0 | 3072 | 1.2645 |
| 0.4006 | 769.0 | 3076 | 1.2678 |
| 0.4006 | 770.0 | 3080 | 1.2706 |
| 0.4006 | 771.0 | 3084 | 1.2620 |
| 0.4006 | 772.0 | 3088 | 1.2547 |
| 0.4006 | 773.0 | 3092 | 1.2503 |
| 0.4006 | 774.0 | 3096 | 1.2459 |
| 0.4006 | 775.0 | 3100 | 1.2452 |
| 0.4006 | 776.0 | 3104 | 1.2442 |
| 0.4006 | 777.0 | 3108 | 1.2393 |
| 0.4006 | 778.0 | 3112 | 1.2328 |
| 0.4006 | 779.0 | 3116 | 1.2249 |
| 0.4006 | 780.0 | 3120 | 1.2223 |
| 0.4006 | 781.0 | 3124 | 1.2302 |
| 0.4006 | 782.0 | 3128 | 1.2334 |
| 0.4006 | 783.0 | 3132 | 1.2332 |
| 0.4006 | 784.0 | 3136 | 1.2326 |
| 0.4006 | 785.0 | 3140 | 1.2330 |
| 0.4006 | 786.0 | 3144 | 1.2281 |
| 0.4006 | 787.0 | 3148 | 1.2294 |
| 0.4006 | 788.0 | 3152 | 1.2327 |
| 0.4006 | 789.0 | 3156 | 1.2408 |
| 0.4006 | 790.0 | 3160 | 1.2459 |
| 0.4006 | 791.0 | 3164 | 1.2488 |
| 0.4006 | 792.0 | 3168 | 1.2509 |
| 0.4006 | 793.0 | 3172 | 1.2510 |
| 0.4006 | 794.0 | 3176 | 1.2514 |
| 0.4006 | 795.0 | 3180 | 1.2491 |
| 0.4006 | 796.0 | 3184 | 1.2476 |
| 0.4006 | 797.0 | 3188 | 1.2470 |
| 0.4006 | 798.0 | 3192 | 1.2470 |
| 0.4006 | 799.0 | 3196 | 1.2464 |
| 0.4006 | 800.0 | 3200 | 1.2468 |
| 0.4006 | 801.0 | 3204 | 1.2460 |
| 0.4006 | 802.0 | 3208 | 1.2425 |
| 0.4006 | 803.0 | 3212 | 1.2415 |
| 0.4006 | 804.0 | 3216 | 1.2416 |
| 0.4006 | 805.0 | 3220 | 1.2420 |
| 0.4006 | 806.0 | 3224 | 1.2442 |
| 0.4006 | 807.0 | 3228 | 1.2465 |
| 0.4006 | 808.0 | 3232 | 1.2481 |
| 0.4006 | 809.0 | 3236 | 1.2477 |
| 0.4006 | 810.0 | 3240 | 1.2468 |
| 0.4006 | 811.0 | 3244 | 1.2467 |
| 0.4006 | 812.0 | 3248 | 1.2471 |
| 0.4006 | 813.0 | 3252 | 1.2486 |
| 0.4006 | 814.0 | 3256 | 1.2484 |
| 0.4006 | 815.0 | 3260 | 1.2484 |
| 0.4006 | 816.0 | 3264 | 1.2477 |
| 0.4006 | 817.0 | 3268 | 1.2545 |
| 0.4006 | 818.0 | 3272 | 1.2622 |
| 0.4006 | 819.0 | 3276 | 1.2672 |
| 0.4006 | 820.0 | 3280 | 1.2704 |
| 0.4006 | 821.0 | 3284 | 1.2719 |
| 0.4006 | 822.0 | 3288 | 1.2710 |
| 0.4006 | 823.0 | 3292 | 1.2697 |
| 0.4006 | 824.0 | 3296 | 1.2671 |
| 0.4006 | 825.0 | 3300 | 1.2717 |
| 0.4006 | 826.0 | 3304 | 1.2763 |
| 0.4006 | 827.0 | 3308 | 1.2774 |
| 0.4006 | 828.0 | 3312 | 1.2773 |
| 0.4006 | 829.0 | 3316 | 1.2765 |
| 0.4006 | 830.0 | 3320 | 1.2767 |
| 0.4006 | 831.0 | 3324 | 1.2760 |
| 0.4006 | 832.0 | 3328 | 1.2755 |
| 0.4006 | 833.0 | 3332 | 1.2742 |
| 0.4006 | 834.0 | 3336 | 1.2732 |
| 0.4006 | 835.0 | 3340 | 1.2681 |
| 0.4006 | 836.0 | 3344 | 1.2624 |
| 0.4006 | 837.0 | 3348 | 1.2577 |
| 0.4006 | 838.0 | 3352 | 1.2530 |
| 0.4006 | 839.0 | 3356 | 1.2488 |
| 0.4006 | 840.0 | 3360 | 1.2455 |
| 0.4006 | 841.0 | 3364 | 1.2440 |
| 0.4006 | 842.0 | 3368 | 1.2459 |
| 0.4006 | 843.0 | 3372 | 1.2487 |
| 0.4006 | 844.0 | 3376 | 1.2498 |
| 0.4006 | 845.0 | 3380 | 1.2504 |
| 0.4006 | 846.0 | 3384 | 1.2476 |
| 0.4006 | 847.0 | 3388 | 1.2446 |
| 0.4006 | 848.0 | 3392 | 1.2400 |
| 0.4006 | 849.0 | 3396 | 1.2353 |
| 0.4006 | 850.0 | 3400 | 1.2298 |
| 0.4006 | 851.0 | 3404 | 1.2246 |
| 0.4006 | 852.0 | 3408 | 1.2207 |
| 0.4006 | 853.0 | 3412 | 1.2129 |
| 0.4006 | 854.0 | 3416 | 1.2030 |
| 0.4006 | 855.0 | 3420 | 1.1937 |
| 0.4006 | 856.0 | 3424 | 1.1898 |
| 0.4006 | 857.0 | 3428 | 1.1907 |
| 0.4006 | 858.0 | 3432 | 1.1910 |
| 0.4006 | 859.0 | 3436 | 1.1919 |
| 0.4006 | 860.0 | 3440 | 1.1920 |
| 0.4006 | 861.0 | 3444 | 1.1923 |
| 0.4006 | 862.0 | 3448 | 1.1927 |
| 0.4006 | 863.0 | 3452 | 1.1933 |
| 0.4006 | 864.0 | 3456 | 1.1934 |
| 0.4006 | 865.0 | 3460 | 1.1937 |
| 0.4006 | 866.0 | 3464 | 1.1936 |
| 0.4006 | 867.0 | 3468 | 1.1932 |
| 0.4006 | 868.0 | 3472 | 1.1926 |
| 0.4006 | 869.0 | 3476 | 1.1917 |
| 0.4006 | 870.0 | 3480 | 1.1899 |
| 0.4006 | 871.0 | 3484 | 1.1884 |
| 0.4006 | 872.0 | 3488 | 1.1858 |
| 0.4006 | 873.0 | 3492 | 1.1842 |
| 0.4006 | 874.0 | 3496 | 1.1835 |
| 0.4 | 875.0 | 3500 | 1.1836 |
| 0.4 | 876.0 | 3504 | 1.1845 |
| 0.4 | 877.0 | 3508 | 1.1867 |
| 0.4 | 878.0 | 3512 | 1.1902 |
| 0.4 | 879.0 | 3516 | 1.1945 |
| 0.4 | 880.0 | 3520 | 1.1972 |
| 0.4 | 881.0 | 3524 | 1.1996 |
| 0.4 | 882.0 | 3528 | 1.2025 |
| 0.4 | 883.0 | 3532 | 1.2048 |
| 0.4 | 884.0 | 3536 | 1.2061 |
| 0.4 | 885.0 | 3540 | 1.2076 |
| 0.4 | 886.0 | 3544 | 1.2078 |
| 0.4 | 887.0 | 3548 | 1.2093 |
| 0.4 | 888.0 | 3552 | 1.2160 |
| 0.4 | 889.0 | 3556 | 1.2185 |
| 0.4 | 890.0 | 3560 | 1.2167 |
| 0.4 | 891.0 | 3564 | 1.2196 |
| 0.4 | 892.0 | 3568 | 1.2207 |
| 0.4 | 893.0 | 3572 | 1.2203 |
| 0.4 | 894.0 | 3576 | 1.2191 |
| 0.4 | 895.0 | 3580 | 1.2181 |
| 0.4 | 896.0 | 3584 | 1.2176 |
| 0.4 | 897.0 | 3588 | 1.2169 |
| 0.4 | 898.0 | 3592 | 1.2157 |
| 0.4 | 899.0 | 3596 | 1.2177 |
| 0.4 | 900.0 | 3600 | 1.2208 |
| 0.4 | 901.0 | 3604 | 1.2232 |
| 0.4 | 902.0 | 3608 | 1.2245 |
| 0.4 | 903.0 | 3612 | 1.2242 |
| 0.4 | 904.0 | 3616 | 1.2231 |
| 0.4 | 905.0 | 3620 | 1.2219 |
| 0.4 | 906.0 | 3624 | 1.2211 |
| 0.4 | 907.0 | 3628 | 1.2215 |
| 0.4 | 908.0 | 3632 | 1.2216 |
| 0.4 | 909.0 | 3636 | 1.2204 |
| 0.4 | 910.0 | 3640 | 1.2193 |
| 0.4 | 911.0 | 3644 | 1.2182 |
| 0.4 | 912.0 | 3648 | 1.2165 |
| 0.4 | 913.0 | 3652 | 1.2148 |
| 0.4 | 914.0 | 3656 | 1.2128 |
| 0.4 | 915.0 | 3660 | 1.2120 |
| 0.4 | 916.0 | 3664 | 1.2113 |
| 0.4 | 917.0 | 3668 | 1.2111 |
| 0.4 | 918.0 | 3672 | 1.2114 |
| 0.4 | 919.0 | 3676 | 1.2117 |
| 0.4 | 920.0 | 3680 | 1.2108 |
| 0.4 | 921.0 | 3684 | 1.2107 |
| 0.4 | 922.0 | 3688 | 1.2097 |
| 0.4 | 923.0 | 3692 | 1.2084 |
| 0.4 | 924.0 | 3696 | 1.2072 |
| 0.4 | 925.0 | 3700 | 1.2063 |
| 0.4 | 926.0 | 3704 | 1.2060 |
| 0.4 | 927.0 | 3708 | 1.2055 |
| 0.4 | 928.0 | 3712 | 1.2053 |
| 0.4 | 929.0 | 3716 | 1.2053 |
| 0.4 | 930.0 | 3720 | 1.2055 |
| 0.4 | 931.0 | 3724 | 1.2061 |
| 0.4 | 932.0 | 3728 | 1.2091 |
| 0.4 | 933.0 | 3732 | 1.2121 |
| 0.4 | 934.0 | 3736 | 1.2141 |
| 0.4 | 935.0 | 3740 | 1.2150 |
| 0.4 | 936.0 | 3744 | 1.2152 |
| 0.4 | 937.0 | 3748 | 1.2153 |
| 0.4 | 938.0 | 3752 | 1.2153 |
| 0.4 | 939.0 | 3756 | 1.2150 |
| 0.4 | 940.0 | 3760 | 1.2153 |
| 0.4 | 941.0 | 3764 | 1.2154 |
| 0.4 | 942.0 | 3768 | 1.2156 |
| 0.4 | 943.0 | 3772 | 1.2156 |
| 0.4 | 944.0 | 3776 | 1.2144 |
| 0.4 | 945.0 | 3780 | 1.2107 |
| 0.4 | 946.0 | 3784 | 1.2078 |
| 0.4 | 947.0 | 3788 | 1.2060 |
| 0.4 | 948.0 | 3792 | 1.2047 |
| 0.4 | 949.0 | 3796 | 1.2026 |
| 0.4 | 950.0 | 3800 | 1.2003 |
| 0.4 | 951.0 | 3804 | 1.1986 |
| 0.4 | 952.0 | 3808 | 1.1975 |
| 0.4 | 953.0 | 3812 | 1.1969 |
| 0.4 | 954.0 | 3816 | 1.1958 |
| 0.4 | 955.0 | 3820 | 1.1946 |
| 0.4 | 956.0 | 3824 | 1.1937 |
| 0.4 | 957.0 | 3828 | 1.1928 |
| 0.4 | 958.0 | 3832 | 1.1928 |
| 0.4 | 959.0 | 3836 | 1.1928 |
| 0.4 | 960.0 | 3840 | 1.1933 |
| 0.4 | 961.0 | 3844 | 1.1939 |
| 0.4 | 962.0 | 3848 | 1.1942 |
| 0.4 | 963.0 | 3852 | 1.1947 |
| 0.4 | 964.0 | 3856 | 1.1954 |
| 0.4 | 965.0 | 3860 | 1.1961 |
| 0.4 | 966.0 | 3864 | 1.1966 |
| 0.4 | 967.0 | 3868 | 1.1985 |
| 0.4 | 968.0 | 3872 | 1.2002 |
| 0.4 | 969.0 | 3876 | 1.2015 |
| 0.4 | 970.0 | 3880 | 1.2035 |
| 0.4 | 971.0 | 3884 | 1.2047 |
| 0.4 | 972.0 | 3888 | 1.2050 |
| 0.4 | 973.0 | 3892 | 1.2057 |
| 0.4 | 974.0 | 3896 | 1.2064 |
| 0.4 | 975.0 | 3900 | 1.2068 |
| 0.4 | 976.0 | 3904 | 1.2067 |
| 0.4 | 977.0 | 3908 | 1.2067 |
| 0.4 | 978.0 | 3912 | 1.2065 |
| 0.4 | 979.0 | 3916 | 1.2063 |
| 0.4 | 980.0 | 3920 | 1.2060 |
| 0.4 | 981.0 | 3924 | 1.2059 |
| 0.4 | 982.0 | 3928 | 1.2059 |
| 0.4 | 983.0 | 3932 | 1.2059 |
| 0.4 | 984.0 | 3936 | 1.2060 |
| 0.4 | 985.0 | 3940 | 1.2060 |
| 0.4 | 986.0 | 3944 | 1.2059 |
| 0.4 | 987.0 | 3948 | 1.2059 |
| 0.4 | 988.0 | 3952 | 1.2059 |
| 0.4 | 989.0 | 3956 | 1.2059 |
| 0.4 | 990.0 | 3960 | 1.2059 |
| 0.4 | 991.0 | 3964 | 1.2060 |
| 0.4 | 992.0 | 3968 | 1.2060 |
| 0.4 | 993.0 | 3972 | 1.2060 |
| 0.4 | 994.0 | 3976 | 1.2054 |
| 0.4 | 995.0 | 3980 | 1.2047 |
| 0.4 | 996.0 | 3984 | 1.2043 |
| 0.4 | 997.0 | 3988 | 1.2041 |
| 0.4 | 998.0 | 3992 | 1.2040 |
| 0.4 | 999.0 | 3996 | 1.2039 |
| 0.4009 | 1000.0 | 4000 | 1.2040 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.7
- Tokenizers 0.15.0
|
ferrazzipietro/Qwen1.5-14B-Chat__adapters_en.layer1_4_torch.bfloat16_16_64_0.01_16_0.0002
|
ferrazzipietro
| 2024-03-07T21:24:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T16:26:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Shawt/Shawt
|
Shawt
| 2024-03-07T21:23:17Z | 1 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/sdxl-turbo",
"base_model:finetune:stabilityai/sdxl-turbo",
"region:us"
] |
text-to-image
| 2023-07-11T04:45:40Z |
---
base_model: stabilityai/sdxl-turbo
instance_prompt: <shawt>
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
The text encoder was not trained.
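A minimal usage sketch, assuming the repo hosts DreamBooth LoRA weights on top of the sdxl-turbo base (the typical AutoTrain layout; verify against the repo's file listing):
```python
import torch
from diffusers import AutoPipelineForText2Image

# Assumption: DreamBooth LoRA weights for the stabilityai/sdxl-turbo base
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Shawt/Shawt")
image = pipe(
    "a photo of <shawt>",
    num_inference_steps=1,   # sdxl-turbo convention
    guidance_scale=0.0,
).images[0]
image.save("shawt.png")
```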
|
Mitsubachi/Voices_in_Latin_Spanish
|
Mitsubachi
| 2024-03-07T21:23:00Z | 0 | 0 | null |
[
"audio-to-audio",
"es",
"ja",
"license:openrail",
"region:us"
] |
audio-to-audio
| 2023-07-02T00:55:03Z |
---
license: openrail
language:
- es
- ja
pipeline_tag: audio-to-audio
---
# Native voices of Latin Spanish
Models of Latin American Spanish dubbing and speaking voices, made by me.
---
**Voice models:**
**Trunks Future "Mexican Voice" (Dragon Ball Z)**: RVC v2 5k, 360 epochs, 6 minutes of data
**Optimus Prime "Mexican Voice" (Live Action Movies 2007-2023)**: RVC v2, 200 epochs, 8 minutes of data + crepe hop 64
|
mehdirafiei/SQLCODER16L
|
mehdirafiei
| 2024-03-07T21:18:40Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T21:10:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
KamielK/distilbert-base-uncased-finetuned-cola
|
KamielK
| 2024-03-07T21:16:35Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-07T21:09:37Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9605
- Matthews Correlation: 0.5435
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
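A hedged sketch of the equivalent `TrainingArguments`; the dataset and `Trainer` wiring are omitted:
```python
from transformers import TrainingArguments

# Mirrors the list above; AdamW with betas=(0.9, 0.999), epsilon=1e-08 and a
# linear scheduler are the transformers defaults, so no explicit flags needed
args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=5,
)
```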
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.3362 | 1.0 | 535 | 0.4862 | 0.4884 |
| 0.2249 | 2.0 | 1070 | 0.5900 | 0.5375 |
| 0.1667 | 3.0 | 1605 | 0.7921 | 0.5193 |
| 0.1279 | 4.0 | 2140 | 0.9516 | 0.5393 |
| 0.0837 | 5.0 | 2675 | 0.9605 | 0.5435 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
abrocadabro/dqn-SpaceInvadersNoFrameskip-v4
|
abrocadabro
| 2024-03-07T21:13:37Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-07T21:12:58Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 672.50 +/- 253.60
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga abrocadabro -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga abrocadabro -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga abrocadabro
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
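For reference, a hedged sketch of the same configuration expressed directly in SB3 (the zoo normally builds this from its YAML config):
```python
from stable_baselines3 import DQN

# Mirrors the hyperparameters above; the AtariWrapper and 4-frame stack from
# env_wrapper/frame_stack are normally applied by the RL Zoo and omitted here
model = DQN(
    "CnnPolicy",
    "SpaceInvadersNoFrameskip-v4",
    buffer_size=100_000,
    learning_rate=1e-4,
    batch_size=32,
    learning_starts=100_000,
    target_update_interval=1000,
    train_freq=4,
    gradient_steps=1,
    exploration_fraction=0.1,
    exploration_final_eps=0.01,
)
model.learn(total_timesteps=1_000_000)
```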
|
luna-code/langchain-codegen-350M-mono-prefix
|
luna-code
| 2024-03-07T21:02:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T21:02:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
erikbritto/ppo-LunarLander-v2
|
erikbritto
| 2024-03-07T20:50:59Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-07T20:50:39Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 235.80 +/- 25.60
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch, assuming the checkpoint follows the usual `<algo>-<env>.zip` naming (check the repo's file listing):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption; verify against the repo's files
checkpoint = load_from_hub("erikbritto/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
happybusinessperson/distilroberta-base-finetuned-leftarticles-mlm
|
happybusinessperson
| 2024-03-07T20:48:39Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-03-07T05:15:45Z |
---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-leftarticles-mlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-leftarticles-mlm
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on a dataset of left wing news articles, [happybusinessperson/leftarticles](https://huggingface.co/datasets/happybusinessperson/leftarticles), adapted from https://www.kaggle.com/datasets/mhoali/right-and-left-wing-news-articles-with-nlp.
It achieves the following results on the evaluation set:
- Loss: 1.7207
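For reference, a minimal fill-mask usage sketch (the example sentence is illustrative):
```python
from transformers import pipeline

# RoBERTa-style models use "<mask>" as the mask token
unmasker = pipeline(
    "fill-mask",
    model="happybusinessperson/distilroberta-base-finetuned-leftarticles-mlm",
)
print(unmasker("The senate passed the new <mask> bill."))
```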
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9373 | 1.0 | 2239 | 1.7868 |
| 1.8676 | 2.0 | 4478 | 1.7572 |
| 1.8194 | 3.0 | 6717 | 1.7297 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
happybusinessperson/distilroberta-base-finetuned-rightarticles-mlm
|
happybusinessperson
| 2024-03-07T20:47:30Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-03-07T20:34:57Z |
---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-rightarticles-mlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-rightarticles-mlm
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on a dataset of right wing news articles, [happybusinessperson/rightarticles](https://huggingface.co/datasets/happybusinessperson/rightarticles), adapted from https://www.kaggle.com/datasets/mhoali/right-and-left-wing-news-articles-with-nlp.
It achieves the following results on the evaluation set:
- Loss: 1.7292
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9611 | 1.0 | 806 | 1.7669 |
| 1.8516 | 2.0 | 1612 | 1.7297 |
| 1.798 | 3.0 | 2418 | 1.7239 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
completelyboofyblitzed/neural-chat-7b-v3-3lr_1.25e-05_lora_alpha_8_r_16_wd_0.001_warmup_ratio_0.3
|
completelyboofyblitzed
| 2024-03-07T20:40:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T20:40:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RefalMachine/ruadapt_solar_10.7_part2_v3_as2_rsg_lora
|
RefalMachine
| 2024-03-07T20:40:05Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:msu-rcc-lair/ruadapt_solar_10.7_darulm_unigram_proj_init_twostage_v1",
"base_model:finetune:msu-rcc-lair/ruadapt_solar_10.7_darulm_unigram_proj_init_twostage_v1",
"region:us"
] | null | 2024-03-07T16:42:02Z |
---
base_model: RefalMachine/ruadapt_solar_10.7_darulm_unigram_proj_init_part2_v3_alpha_scale_2
tags:
- generated_from_trainer
model-index:
- name: ruadapt_solar_10.7_part2_v3_as2_rsg_lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ruadapt_solar_10.7_part2_v3_as2_rsg_lora
This model is a fine-tuned version of [RefalMachine/ruadapt_solar_10.7_darulm_unigram_proj_init_part2_v3_alpha_scale_2](https://huggingface.co/RefalMachine/ruadapt_solar_10.7_darulm_unigram_proj_init_part2_v3_alpha_scale_2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0744
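A hedged loading sketch, assuming the repo hosts a PEFT LoRA adapter on top of the base model above (the card does not confirm the adapter format):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Assumption: standard PEFT adapter layout on top of the listed base model
base = AutoModelForCausalLM.from_pretrained(
    "RefalMachine/ruadapt_solar_10.7_darulm_unigram_proj_init_part2_v3_alpha_scale_2"
)
model = PeftModel.from_pretrained(
    base, "RefalMachine/ruadapt_solar_10.7_part2_v3_as2_rsg_lora"
)
```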
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00025
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1 | 0.45 | 100 | 0.0669 |
| 0.0837 | 0.9 | 200 | 0.0682 |
| 0.0511 | 1.35 | 300 | 0.0762 |
| 0.0473 | 1.8 | 400 | 0.0662 |
| 0.0188 | 2.24 | 500 | 0.0841 |
| 0.0268 | 2.69 | 600 | 0.0744 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.2
- Datasets 2.14.4
- Tokenizers 0.14.1
|
tsavage68/mistralit2_200_STEPS_5e7_SFT
|
tsavage68
| 2024-03-07T20:39:57Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T20:31:23Z |
---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: mistralit2_200_STEPS_SFT_SFT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistralit2_200_STEPS_SFT_SFT
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3095
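For reference, a minimal generation sketch using the tokenizer's chat template (the prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tsavage68/mistralit2_200_STEPS_5e7_SFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt is illustrative; the Mistral-Instruct template wraps it in [INST] tags
messages = [{"role": "user", "content": "Explain supervised fine-tuning in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```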
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.498 | 0.1 | 50 | 0.3952 |
| 0.3309 | 0.2 | 100 | 0.3213 |
| 0.3236 | 0.29 | 150 | 0.3108 |
| 0.3 | 0.39 | 200 | 0.3095 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.0.0+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ramixpe/Llama-2-7b-chat-hf-sft-test-push-adapters
|
ramixpe
| 2024-03-07T20:30:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-21T22:47:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
farid1088/GQA_BERT_legal_SQuAD_complete_augmented_100
|
farid1088
| 2024-03-07T20:28:20Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-03-05T19:38:45Z |
---
tags:
- generated_from_trainer
model-index:
- name: GQA_BERT_legal_SQuAD_complete_augmented_100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GQA_BERT_legal_SQuAD_complete_augmented_100
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0964
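For reference, a minimal extractive-QA usage sketch (question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="farid1088/GQA_BERT_legal_SQuAD_complete_augmented_100",
)
# Question and context are illustrative placeholders
result = qa(
    question="Who is liable for the damage?",
    context="Under the contract, the tenant is liable for damage caused by negligence.",
)
print(result["answer"], result["score"])
```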
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 160
- eval_batch_size: 40
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 3 | 5.1190 |
| No log | 2.0 | 6 | 4.5892 |
| No log | 3.0 | 9 | 3.9684 |
| No log | 4.0 | 12 | 3.6427 |
| No log | 5.0 | 15 | 3.2081 |
| No log | 6.0 | 18 | 2.8413 |
| No log | 7.0 | 21 | 2.5487 |
| No log | 8.0 | 24 | 2.2830 |
| No log | 9.0 | 27 | 2.0807 |
| No log | 10.0 | 30 | 1.8644 |
| No log | 11.0 | 33 | 1.7166 |
| No log | 12.0 | 36 | 1.5672 |
| No log | 13.0 | 39 | 1.3949 |
| No log | 14.0 | 42 | 1.3109 |
| No log | 15.0 | 45 | 1.2622 |
| No log | 16.0 | 48 | 1.1875 |
| No log | 17.0 | 51 | 1.1579 |
| No log | 18.0 | 54 | 1.1329 |
| No log | 19.0 | 57 | 1.1090 |
| No log | 20.0 | 60 | 1.0811 |
| No log | 21.0 | 63 | 1.0542 |
| No log | 22.0 | 66 | 1.0481 |
| No log | 23.0 | 69 | 1.0355 |
| No log | 24.0 | 72 | 1.0304 |
| No log | 25.0 | 75 | 1.0276 |
| No log | 26.0 | 78 | 1.0277 |
| No log | 27.0 | 81 | 1.0329 |
| No log | 28.0 | 84 | 1.0356 |
| No log | 29.0 | 87 | 1.0410 |
| No log | 30.0 | 90 | 1.0267 |
| No log | 31.0 | 93 | 1.0280 |
| No log | 32.0 | 96 | 1.0453 |
| No log | 33.0 | 99 | 1.0520 |
| No log | 34.0 | 102 | 1.0430 |
| No log | 35.0 | 105 | 1.0393 |
| No log | 36.0 | 108 | 1.0370 |
| No log | 37.0 | 111 | 1.0284 |
| No log | 38.0 | 114 | 1.0313 |
| No log | 39.0 | 117 | 1.0376 |
| No log | 40.0 | 120 | 1.0312 |
| No log | 41.0 | 123 | 1.0218 |
| No log | 42.0 | 126 | 1.0348 |
| No log | 43.0 | 129 | 1.0426 |
| No log | 44.0 | 132 | 1.0411 |
| No log | 45.0 | 135 | 1.0463 |
| No log | 46.0 | 138 | 1.0661 |
| No log | 47.0 | 141 | 1.0733 |
| No log | 48.0 | 144 | 1.0609 |
| No log | 49.0 | 147 | 1.0578 |
| No log | 50.0 | 150 | 1.0639 |
| No log | 51.0 | 153 | 1.0490 |
| No log | 52.0 | 156 | 1.0507 |
| No log | 53.0 | 159 | 1.0460 |
| No log | 54.0 | 162 | 1.0534 |
| No log | 55.0 | 165 | 1.0530 |
| No log | 56.0 | 168 | 1.0521 |
| No log | 57.0 | 171 | 1.0470 |
| No log | 58.0 | 174 | 1.0462 |
| No log | 59.0 | 177 | 1.0547 |
| No log | 60.0 | 180 | 1.0628 |
| No log | 61.0 | 183 | 1.0550 |
| No log | 62.0 | 186 | 1.0474 |
| No log | 63.0 | 189 | 1.0536 |
| No log | 64.0 | 192 | 1.0711 |
| No log | 65.0 | 195 | 1.0832 |
| No log | 66.0 | 198 | 1.0855 |
| No log | 67.0 | 201 | 1.0901 |
| No log | 68.0 | 204 | 1.0912 |
| No log | 69.0 | 207 | 1.0888 |
| No log | 70.0 | 210 | 1.0882 |
| No log | 71.0 | 213 | 1.0985 |
| No log | 72.0 | 216 | 1.1056 |
| No log | 73.0 | 219 | 1.0876 |
| No log | 74.0 | 222 | 1.0781 |
| No log | 75.0 | 225 | 1.0894 |
| No log | 76.0 | 228 | 1.0906 |
| No log | 77.0 | 231 | 1.0848 |
| No log | 78.0 | 234 | 1.0851 |
| No log | 79.0 | 237 | 1.0949 |
| No log | 80.0 | 240 | 1.0982 |
| No log | 81.0 | 243 | 1.0932 |
| No log | 82.0 | 246 | 1.0825 |
| No log | 83.0 | 249 | 1.0791 |
| No log | 84.0 | 252 | 1.0821 |
| No log | 85.0 | 255 | 1.0819 |
| No log | 86.0 | 258 | 1.0808 |
| No log | 87.0 | 261 | 1.0794 |
| No log | 88.0 | 264 | 1.0815 |
| No log | 89.0 | 267 | 1.0859 |
| No log | 90.0 | 270 | 1.0883 |
| No log | 91.0 | 273 | 1.0890 |
| No log | 92.0 | 276 | 1.0935 |
| No log | 93.0 | 279 | 1.0982 |
| No log | 94.0 | 282 | 1.1007 |
| No log | 95.0 | 285 | 1.0994 |
| No log | 96.0 | 288 | 1.0997 |
| No log | 97.0 | 291 | 1.0998 |
| No log | 98.0 | 294 | 1.0978 |
| No log | 99.0 | 297 | 1.0970 |
| No log | 100.0 | 300 | 1.0964 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.7
- Tokenizers 0.15.0
|
RodMed0709/Modelo_Resumen
|
RodMed0709
| 2024-03-07T20:21:45Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:stevhliu/my_awesome_billsum_model",
"base_model:finetune:stevhliu/my_awesome_billsum_model",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-07T20:01:57Z |
---
license: apache-2.0
base_model: stevhliu/my_awesome_billsum_model
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: Modelo_Resumen
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Modelo_Resumen
This model was created for Dr. Gendry's class. It is a fine-tuned version of [stevhliu/my_awesome_billsum_model](https://huggingface.co/stevhliu/my_awesome_billsum_model) and achieves the following results on the evaluation set (a usage sketch follows the metrics):
- Loss: 2.2007
- Rouge1: 0.1957
- Rouge2: 0.095
- Rougel: 0.1645
- Rougelsum: 0.1645
- Gen Len: 19.0
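A minimal summarization sketch; passing the T5-style `summarize:` prefix explicitly is an assumption carried over from the base model's convention:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="RodMed0709/Modelo_Resumen")
# Illustrative input; the explicit "summarize: " prefix follows the T5 convention
article = "summarize: The committee met on Tuesday to review the proposed budget and agreed to cut administrative costs."
print(summarizer(article, max_length=20)[0]["summary_text"])
```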
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.3594 | 0.1957 | 0.0921 | 0.1645 | 0.1645 | 19.0 |
| No log | 2.0 | 124 | 2.3268 | 0.194 | 0.0898 | 0.1631 | 0.1631 | 19.0 |
| No log | 3.0 | 186 | 2.3013 | 0.194 | 0.0923 | 0.1649 | 0.1648 | 19.0 |
| No log | 4.0 | 248 | 2.2776 | 0.195 | 0.0932 | 0.166 | 0.1659 | 19.0 |
| No log | 5.0 | 310 | 2.2620 | 0.1944 | 0.0925 | 0.165 | 0.1649 | 19.0 |
| No log | 6.0 | 372 | 2.2474 | 0.1935 | 0.0917 | 0.1648 | 0.1646 | 19.0 |
| No log | 7.0 | 434 | 2.2362 | 0.1931 | 0.0929 | 0.1642 | 0.1642 | 19.0 |
| No log | 8.0 | 496 | 2.2276 | 0.1937 | 0.0935 | 0.1642 | 0.1644 | 19.0 |
| 2.4678 | 9.0 | 558 | 2.2203 | 0.1941 | 0.0938 | 0.164 | 0.164 | 19.0 |
| 2.4678 | 10.0 | 620 | 2.2141 | 0.195 | 0.0954 | 0.1648 | 0.1648 | 19.0 |
| 2.4678 | 11.0 | 682 | 2.2095 | 0.1956 | 0.096 | 0.1649 | 0.1649 | 19.0 |
| 2.4678 | 12.0 | 744 | 2.2055 | 0.1952 | 0.0955 | 0.1645 | 0.1645 | 19.0 |
| 2.4678 | 13.0 | 806 | 2.2030 | 0.1945 | 0.0947 | 0.1639 | 0.1638 | 19.0 |
| 2.4678 | 14.0 | 868 | 2.2014 | 0.1956 | 0.095 | 0.1644 | 0.1644 | 19.0 |
| 2.4678 | 15.0 | 930 | 2.2007 | 0.1957 | 0.095 | 0.1645 | 0.1645 | 19.0 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
farid1088/GQA_RoBERTa_legal_SQuAD_complete_augmented_17
|
farid1088
| 2024-03-07T20:20:59Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-03-06T01:58:58Z |
---
tags:
- generated_from_trainer
model-index:
- name: GQA_RoBERTa_legal_SQuAD_complete_augmented_17
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GQA_RoBERTa_legal_SQuAD_complete_augmented_17
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2674
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 17
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 4 | 3.7900 |
| No log | 2.0 | 8 | 3.1774 |
| No log | 3.0 | 12 | 2.8018 |
| No log | 4.0 | 16 | 2.5378 |
| No log | 5.0 | 20 | 2.2274 |
| No log | 6.0 | 24 | 2.0786 |
| No log | 7.0 | 28 | 1.9560 |
| No log | 8.0 | 32 | 1.7749 |
| No log | 9.0 | 36 | 1.6928 |
| No log | 10.0 | 40 | 1.5814 |
| No log | 11.0 | 44 | 1.4800 |
| No log | 12.0 | 48 | 1.4188 |
| No log | 13.0 | 52 | 1.3761 |
| No log | 14.0 | 56 | 1.3139 |
| No log | 15.0 | 60 | 1.2892 |
| No log | 16.0 | 64 | 1.2752 |
| No log | 17.0 | 68 | 1.2674 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.7
- Tokenizers 0.15.0
|
farid1088/GQA_RoBERTa_German_legal_SQuAD_part_augmented_2000
|
farid1088
| 2024-03-07T20:19:30Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-03-07T15:06:57Z |
---
tags:
- generated_from_trainer
model-index:
- name: GQA_RoBERTa_German_legal_SQuAD_part_augmented_2000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GQA_RoBERTa_German_legal_SQuAD_part_augmented_2000
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1761
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 1.0 | 4 | 3.7756 |
| No log | 2.0 | 8 | 3.1205 |
| No log | 3.0 | 12 | 2.7419 |
| No log | 4.0 | 16 | 2.3978 |
| No log | 5.0 | 20 | 2.0572 |
| No log | 6.0 | 24 | 1.9690 |
| No log | 7.0 | 28 | 1.6922 |
| No log | 8.0 | 32 | 1.4999 |
| No log | 9.0 | 36 | 1.4624 |
| No log | 10.0 | 40 | 1.1915 |
| No log | 11.0 | 44 | 1.1501 |
| No log | 12.0 | 48 | 0.9852 |
| No log | 13.0 | 52 | 0.9573 |
| No log | 14.0 | 56 | 0.9131 |
| No log | 15.0 | 60 | 0.8843 |
| No log | 16.0 | 64 | 0.7765 |
| No log | 17.0 | 68 | 0.7787 |
| No log | 18.0 | 72 | 0.7613 |
| No log | 19.0 | 76 | 0.7610 |
| No log | 20.0 | 80 | 0.7447 |
| No log | 21.0 | 84 | 0.7049 |
| No log | 22.0 | 88 | 0.7030 |
| No log | 23.0 | 92 | 0.7066 |
| No log | 24.0 | 96 | 0.7073 |
| No log | 25.0 | 100 | 0.7238 |
| No log | 26.0 | 104 | 0.7560 |
| No log | 27.0 | 108 | 0.7350 |
| No log | 28.0 | 112 | 0.7325 |
| No log | 29.0 | 116 | 0.7513 |
| No log | 30.0 | 120 | 0.7656 |
| No log | 31.0 | 124 | 0.7594 |
| No log | 32.0 | 128 | 0.7744 |
| No log | 33.0 | 132 | 0.7835 |
| No log | 34.0 | 136 | 0.7608 |
| No log | 35.0 | 140 | 0.7423 |
| No log | 36.0 | 144 | 0.7543 |
| No log | 37.0 | 148 | 0.7305 |
| No log | 38.0 | 152 | 0.7398 |
| No log | 39.0 | 156 | 0.7364 |
| No log | 40.0 | 160 | 0.7313 |
| No log | 41.0 | 164 | 0.7163 |
| No log | 42.0 | 168 | 0.7181 |
| No log | 43.0 | 172 | 0.7243 |
| No log | 44.0 | 176 | 0.7259 |
| No log | 45.0 | 180 | 0.7980 |
| No log | 46.0 | 184 | 0.7784 |
| No log | 47.0 | 188 | 0.7271 |
| No log | 48.0 | 192 | 0.7014 |
| No log | 49.0 | 196 | 0.7110 |
| No log | 50.0 | 200 | 0.7621 |
| No log | 51.0 | 204 | 0.7851 |
| No log | 52.0 | 208 | 0.7917 |
| No log | 53.0 | 212 | 0.7877 |
| No log | 54.0 | 216 | 0.8123 |
| No log | 55.0 | 220 | 0.8462 |
| No log | 56.0 | 224 | 0.8405 |
| No log | 57.0 | 228 | 0.8330 |
| No log | 58.0 | 232 | 0.8115 |
| No log | 59.0 | 236 | 0.8067 |
| No log | 60.0 | 240 | 0.8457 |
| No log | 61.0 | 244 | 0.9419 |
| No log | 62.0 | 248 | 0.9387 |
| No log | 63.0 | 252 | 0.9612 |
| No log | 64.0 | 256 | 0.9213 |
| No log | 65.0 | 260 | 0.9035 |
| No log | 66.0 | 264 | 0.8863 |
| No log | 67.0 | 268 | 0.8914 |
| No log | 68.0 | 272 | 0.9060 |
| No log | 69.0 | 276 | 0.9424 |
| No log | 70.0 | 280 | 0.9367 |
| No log | 71.0 | 284 | 0.9201 |
| No log | 72.0 | 288 | 0.9070 |
| No log | 73.0 | 292 | 0.9037 |
| No log | 74.0 | 296 | 0.9116 |
| No log | 75.0 | 300 | 0.9108 |
| No log | 76.0 | 304 | 0.9139 |
| No log | 77.0 | 308 | 0.9506 |
| No log | 78.0 | 312 | 0.9703 |
| No log | 79.0 | 316 | 0.9848 |
| No log | 80.0 | 320 | 0.9586 |
| No log | 81.0 | 324 | 0.9591 |
| No log | 82.0 | 328 | 0.9678 |
| No log | 83.0 | 332 | 0.9951 |
| No log | 84.0 | 336 | 0.9788 |
| No log | 85.0 | 340 | 0.9374 |
| No log | 86.0 | 344 | 0.9085 |
| No log | 87.0 | 348 | 0.8789 |
| No log | 88.0 | 352 | 0.8838 |
| No log | 89.0 | 356 | 0.8711 |
| No log | 90.0 | 360 | 0.8792 |
| No log | 91.0 | 364 | 0.8904 |
| No log | 92.0 | 368 | 0.9014 |
| No log | 93.0 | 372 | 0.9518 |
| No log | 94.0 | 376 | 0.9872 |
| No log | 95.0 | 380 | 0.9193 |
| No log | 96.0 | 384 | 0.8909 |
| No log | 97.0 | 388 | 0.8989 |
| No log | 98.0 | 392 | 0.9064 |
| No log | 99.0 | 396 | 0.9341 |
| No log | 100.0 | 400 | 0.9550 |
| No log | 101.0 | 404 | 0.9706 |
| No log | 102.0 | 408 | 1.0495 |
| No log | 103.0 | 412 | 1.0350 |
| No log | 104.0 | 416 | 0.9688 |
| No log | 105.0 | 420 | 0.9610 |
| No log | 106.0 | 424 | 0.9537 |
| No log | 107.0 | 428 | 0.9579 |
| No log | 108.0 | 432 | 0.9877 |
| No log | 109.0 | 436 | 1.0223 |
| No log | 110.0 | 440 | 1.0488 |
| No log | 111.0 | 444 | 1.0673 |
| No log | 112.0 | 448 | 0.9968 |
| No log | 113.0 | 452 | 1.0307 |
| No log | 114.0 | 456 | 1.0888 |
| No log | 115.0 | 460 | 1.0773 |
| No log | 116.0 | 464 | 1.0990 |
| No log | 117.0 | 468 | 1.1120 |
| No log | 118.0 | 472 | 1.0821 |
| No log | 119.0 | 476 | 1.0407 |
| No log | 120.0 | 480 | 1.0365 |
| No log | 121.0 | 484 | 1.0269 |
| No log | 122.0 | 488 | 0.9804 |
| No log | 123.0 | 492 | 0.9752 |
| No log | 124.0 | 496 | 0.9785 |
| 0.9513 | 125.0 | 500 | 0.9739 |
| 0.9513 | 126.0 | 504 | 0.9894 |
| 0.9513 | 127.0 | 508 | 1.0625 |
| 0.9513 | 128.0 | 512 | 1.0423 |
| 0.9513 | 129.0 | 516 | 1.0479 |
| 0.9513 | 130.0 | 520 | 1.0725 |
| 0.9513 | 131.0 | 524 | 1.1035 |
| 0.9513 | 132.0 | 528 | 1.0921 |
| 0.9513 | 133.0 | 532 | 0.9806 |
| 0.9513 | 134.0 | 536 | 0.9012 |
| 0.9513 | 135.0 | 540 | 0.9527 |
| 0.9513 | 136.0 | 544 | 1.0029 |
| 0.9513 | 137.0 | 548 | 1.0212 |
| 0.9513 | 138.0 | 552 | 1.0392 |
| 0.9513 | 139.0 | 556 | 0.9753 |
| 0.9513 | 140.0 | 560 | 0.9817 |
| 0.9513 | 141.0 | 564 | 0.9755 |
| 0.9513 | 142.0 | 568 | 0.9933 |
| 0.9513 | 143.0 | 572 | 1.0276 |
| 0.9513 | 144.0 | 576 | 1.0285 |
| 0.9513 | 145.0 | 580 | 1.0276 |
| 0.9513 | 146.0 | 584 | 1.0582 |
| 0.9513 | 147.0 | 588 | 1.0810 |
| 0.9513 | 148.0 | 592 | 1.0618 |
| 0.9513 | 149.0 | 596 | 1.0152 |
| 0.9513 | 150.0 | 600 | 1.0553 |
| 0.9513 | 151.0 | 604 | 1.0921 |
| 0.9513 | 152.0 | 608 | 1.0401 |
| 0.9513 | 153.0 | 612 | 0.9760 |
| 0.9513 | 154.0 | 616 | 0.9576 |
| 0.9513 | 155.0 | 620 | 0.9523 |
| 0.9513 | 156.0 | 624 | 0.9901 |
| 0.9513 | 157.0 | 628 | 0.9793 |
| 0.9513 | 158.0 | 632 | 0.9726 |
| 0.9513 | 159.0 | 636 | 0.9676 |
| 0.9513 | 160.0 | 640 | 1.0070 |
| 0.9513 | 161.0 | 644 | 1.0107 |
| 0.9513 | 162.0 | 648 | 1.0067 |
| 0.9513 | 163.0 | 652 | 1.0042 |
| 0.9513 | 164.0 | 656 | 0.9888 |
| 0.9513 | 165.0 | 660 | 0.9758 |
| 0.9513 | 166.0 | 664 | 0.9983 |
| 0.9513 | 167.0 | 668 | 1.0273 |
| 0.9513 | 168.0 | 672 | 1.0220 |
| 0.9513 | 169.0 | 676 | 1.0063 |
| 0.9513 | 170.0 | 680 | 0.9852 |
| 0.9513 | 171.0 | 684 | 1.0590 |
| 0.9513 | 172.0 | 688 | 1.1016 |
| 0.9513 | 173.0 | 692 | 1.0622 |
| 0.9513 | 174.0 | 696 | 1.0408 |
| 0.9513 | 175.0 | 700 | 1.0156 |
| 0.9513 | 176.0 | 704 | 1.0073 |
| 0.9513 | 177.0 | 708 | 1.0284 |
| 0.9513 | 178.0 | 712 | 1.0398 |
| 0.9513 | 179.0 | 716 | 0.9925 |
| 0.9513 | 180.0 | 720 | 1.0192 |
| 0.9513 | 181.0 | 724 | 1.0434 |
| 0.9513 | 182.0 | 728 | 1.0429 |
| 0.9513 | 183.0 | 732 | 1.0614 |
| 0.9513 | 184.0 | 736 | 1.0663 |
| 0.9513 | 185.0 | 740 | 1.0529 |
| 0.9513 | 186.0 | 744 | 1.0479 |
| 0.9513 | 187.0 | 748 | 1.0352 |
| 0.9513 | 188.0 | 752 | 1.0374 |
| 0.9513 | 189.0 | 756 | 1.0061 |
| 0.9513 | 190.0 | 760 | 0.9905 |
| 0.9513 | 191.0 | 764 | 0.9959 |
| 0.9513 | 192.0 | 768 | 1.0204 |
| 0.9513 | 193.0 | 772 | 1.0509 |
| 0.9513 | 194.0 | 776 | 1.0616 |
| 0.9513 | 195.0 | 780 | 1.0709 |
| 0.9513 | 196.0 | 784 | 1.0794 |
| 0.9513 | 197.0 | 788 | 1.0797 |
| 0.9513 | 198.0 | 792 | 1.0722 |
| 0.9513 | 199.0 | 796 | 1.0697 |
| 0.9513 | 200.0 | 800 | 1.0759 |
| 0.9513 | 201.0 | 804 | 1.0787 |
| 0.9513 | 202.0 | 808 | 1.1036 |
| 0.9513 | 203.0 | 812 | 1.1021 |
| 0.9513 | 204.0 | 816 | 1.1088 |
| 0.9513 | 205.0 | 820 | 1.1201 |
| 0.9513 | 206.0 | 824 | 1.1168 |
| 0.9513 | 207.0 | 828 | 1.1030 |
| 0.9513 | 208.0 | 832 | 1.0986 |
| 0.9513 | 209.0 | 836 | 1.0953 |
| 0.9513 | 210.0 | 840 | 1.0708 |
| 0.9513 | 211.0 | 844 | 1.0704 |
| 0.9513 | 212.0 | 848 | 1.0681 |
| 0.9513 | 213.0 | 852 | 1.0676 |
| 0.9513 | 214.0 | 856 | 1.0789 |
| 0.9513 | 215.0 | 860 | 1.1193 |
| 0.9513 | 216.0 | 864 | 1.1378 |
| 0.9513 | 217.0 | 868 | 1.1566 |
| 0.9513 | 218.0 | 872 | 1.1650 |
| 0.9513 | 219.0 | 876 | 1.1268 |
| 0.9513 | 220.0 | 880 | 1.1152 |
| 0.9513 | 221.0 | 884 | 1.0909 |
| 0.9513 | 222.0 | 888 | 1.0778 |
| 0.9513 | 223.0 | 892 | 1.0819 |
| 0.9513 | 224.0 | 896 | 1.1042 |
| 0.9513 | 225.0 | 900 | 1.1532 |
| 0.9513 | 226.0 | 904 | 1.1695 |
| 0.9513 | 227.0 | 908 | 1.1730 |
| 0.9513 | 228.0 | 912 | 1.1549 |
| 0.9513 | 229.0 | 916 | 1.1318 |
| 0.9513 | 230.0 | 920 | 1.1319 |
| 0.9513 | 231.0 | 924 | 1.1306 |
| 0.9513 | 232.0 | 928 | 1.1583 |
| 0.9513 | 233.0 | 932 | 1.1915 |
| 0.9513 | 234.0 | 936 | 1.2038 |
| 0.9513 | 235.0 | 940 | 1.1877 |
| 0.9513 | 236.0 | 944 | 1.1775 |
| 0.9513 | 237.0 | 948 | 1.1820 |
| 0.9513 | 238.0 | 952 | 1.1885 |
| 0.9513 | 239.0 | 956 | 1.2012 |
| 0.9513 | 240.0 | 960 | 1.2013 |
| 0.9513 | 241.0 | 964 | 1.1876 |
| 0.9513 | 242.0 | 968 | 1.1801 |
| 0.9513 | 243.0 | 972 | 1.1799 |
| 0.9513 | 244.0 | 976 | 1.1711 |
| 0.9513 | 245.0 | 980 | 1.1550 |
| 0.9513 | 246.0 | 984 | 1.1499 |
| 0.9513 | 247.0 | 988 | 1.1303 |
| 0.9513 | 248.0 | 992 | 1.1138 |
| 0.9513 | 249.0 | 996 | 1.1351 |
| 0.4059 | 250.0 | 1000 | 1.1635 |
| 0.4059 | 251.0 | 1004 | 1.1975 |
| 0.4059 | 252.0 | 1008 | 1.2352 |
| 0.4059 | 253.0 | 1012 | 1.2442 |
| 0.4059 | 254.0 | 1016 | 1.2108 |
| 0.4059 | 255.0 | 1020 | 1.1813 |
| 0.4059 | 256.0 | 1024 | 1.1469 |
| 0.4059 | 257.0 | 1028 | 1.0936 |
| 0.4059 | 258.0 | 1032 | 1.0322 |
| 0.4059 | 259.0 | 1036 | 1.0076 |
| 0.4059 | 260.0 | 1040 | 1.0304 |
| 0.4059 | 261.0 | 1044 | 1.0946 |
| 0.4059 | 262.0 | 1048 | 1.1132 |
| 0.4059 | 263.0 | 1052 | 1.1231 |
| 0.4059 | 264.0 | 1056 | 1.1268 |
| 0.4059 | 265.0 | 1060 | 1.1290 |
| 0.4059 | 266.0 | 1064 | 1.1261 |
| 0.4059 | 267.0 | 1068 | 1.1095 |
| 0.4059 | 268.0 | 1072 | 1.0643 |
| 0.4059 | 269.0 | 1076 | 1.0283 |
| 0.4059 | 270.0 | 1080 | 1.0181 |
| 0.4059 | 271.0 | 1084 | 1.0670 |
| 0.4059 | 272.0 | 1088 | 1.1049 |
| 0.4059 | 273.0 | 1092 | 1.1309 |
| 0.4059 | 274.0 | 1096 | 1.1533 |
| 0.4059 | 275.0 | 1100 | 1.1767 |
| 0.4059 | 276.0 | 1104 | 1.1846 |
| 0.4059 | 277.0 | 1108 | 1.1899 |
| 0.4059 | 278.0 | 1112 | 1.1834 |
| 0.4059 | 279.0 | 1116 | 1.2054 |
| 0.4059 | 280.0 | 1120 | 1.1807 |
| 0.4059 | 281.0 | 1124 | 1.1238 |
| 0.4059 | 282.0 | 1128 | 1.0955 |
| 0.4059 | 283.0 | 1132 | 1.0557 |
| 0.4059 | 284.0 | 1136 | 1.0615 |
| 0.4059 | 285.0 | 1140 | 1.0758 |
| 0.4059 | 286.0 | 1144 | 1.1007 |
| 0.4059 | 287.0 | 1148 | 1.1431 |
| 0.4059 | 288.0 | 1152 | 1.1335 |
| 0.4059 | 289.0 | 1156 | 1.0713 |
| 0.4059 | 290.0 | 1160 | 1.0302 |
| 0.4059 | 291.0 | 1164 | 1.0070 |
| 0.4059 | 292.0 | 1168 | 1.0587 |
| 0.4059 | 293.0 | 1172 | 1.1093 |
| 0.4059 | 294.0 | 1176 | 1.1549 |
| 0.4059 | 295.0 | 1180 | 1.1744 |
| 0.4059 | 296.0 | 1184 | 1.1590 |
| 0.4059 | 297.0 | 1188 | 1.0999 |
| 0.4059 | 298.0 | 1192 | 1.0508 |
| 0.4059 | 299.0 | 1196 | 1.0082 |
| 0.4059 | 300.0 | 1200 | 1.0266 |
| 0.4059 | 301.0 | 1204 | 1.0897 |
| 0.4059 | 302.0 | 1208 | 1.2008 |
| 0.4059 | 303.0 | 1212 | 1.2833 |
| 0.4059 | 304.0 | 1216 | 1.2775 |
| 0.4059 | 305.0 | 1220 | 1.2754 |
| 0.4059 | 306.0 | 1224 | 1.2059 |
| 0.4059 | 307.0 | 1228 | 1.1187 |
| 0.4059 | 308.0 | 1232 | 1.1612 |
| 0.4059 | 309.0 | 1236 | 1.1794 |
| 0.4059 | 310.0 | 1240 | 1.1969 |
| 0.4059 | 311.0 | 1244 | 1.1991 |
| 0.4059 | 312.0 | 1248 | 1.1921 |
| 0.4059 | 313.0 | 1252 | 1.2148 |
| 0.4059 | 314.0 | 1256 | 1.2524 |
| 0.4059 | 315.0 | 1260 | 1.2606 |
| 0.4059 | 316.0 | 1264 | 1.2423 |
| 0.4059 | 317.0 | 1268 | 1.1989 |
| 0.4059 | 318.0 | 1272 | 1.1552 |
| 0.4059 | 319.0 | 1276 | 1.1222 |
| 0.4059 | 320.0 | 1280 | 1.1219 |
| 0.4059 | 321.0 | 1284 | 1.1678 |
| 0.4059 | 322.0 | 1288 | 1.1853 |
| 0.4059 | 323.0 | 1292 | 1.1274 |
| 0.4059 | 324.0 | 1296 | 1.0615 |
| 0.4059 | 325.0 | 1300 | 1.1044 |
| 0.4059 | 326.0 | 1304 | 1.1874 |
| 0.4059 | 327.0 | 1308 | 1.1911 |
| 0.4059 | 328.0 | 1312 | 1.1513 |
| 0.4059 | 329.0 | 1316 | 1.0682 |
| 0.4059 | 330.0 | 1320 | 1.0366 |
| 0.4059 | 331.0 | 1324 | 1.0736 |
| 0.4059 | 332.0 | 1328 | 1.1319 |
| 0.4059 | 333.0 | 1332 | 1.1256 |
| 0.4059 | 334.0 | 1336 | 1.0977 |
| 0.4059 | 335.0 | 1340 | 1.0509 |
| 0.4059 | 336.0 | 1344 | 1.0081 |
| 0.4059 | 337.0 | 1348 | 1.0239 |
| 0.4059 | 338.0 | 1352 | 1.0681 |
| 0.4059 | 339.0 | 1356 | 1.1298 |
| 0.4059 | 340.0 | 1360 | 1.1369 |
| 0.4059 | 341.0 | 1364 | 1.0729 |
| 0.4059 | 342.0 | 1368 | 0.9855 |
| 0.4059 | 343.0 | 1372 | 0.9409 |
| 0.4059 | 344.0 | 1376 | 0.9527 |
| 0.4059 | 345.0 | 1380 | 1.0270 |
| 0.4059 | 346.0 | 1384 | 1.0781 |
| 0.4059 | 347.0 | 1388 | 1.1151 |
| 0.4059 | 348.0 | 1392 | 1.1403 |
| 0.4059 | 349.0 | 1396 | 1.1603 |
| 0.4059 | 350.0 | 1400 | 1.1856 |
| 0.4059 | 351.0 | 1404 | 1.1898 |
| 0.4059 | 352.0 | 1408 | 1.1933 |
| 0.4059 | 353.0 | 1412 | 1.2285 |
| 0.4059 | 354.0 | 1416 | 1.2589 |
| 0.4059 | 355.0 | 1420 | 1.2458 |
| 0.4059 | 356.0 | 1424 | 1.2131 |
| 0.4059 | 357.0 | 1428 | 1.2127 |
| 0.4059 | 358.0 | 1432 | 1.2372 |
| 0.4059 | 359.0 | 1436 | 1.2434 |
| 0.4059 | 360.0 | 1440 | 1.2399 |
| 0.4059 | 361.0 | 1444 | 1.2213 |
| 0.4059 | 362.0 | 1448 | 1.1881 |
| 0.4059 | 363.0 | 1452 | 1.1636 |
| 0.4059 | 364.0 | 1456 | 1.1456 |
| 0.4059 | 365.0 | 1460 | 1.1520 |
| 0.4059 | 366.0 | 1464 | 1.1635 |
| 0.4059 | 367.0 | 1468 | 1.1836 |
| 0.4059 | 368.0 | 1472 | 1.1956 |
| 0.4059 | 369.0 | 1476 | 1.2053 |
| 0.4059 | 370.0 | 1480 | 1.2042 |
| 0.4059 | 371.0 | 1484 | 1.1728 |
| 0.4059 | 372.0 | 1488 | 1.1536 |
| 0.4059 | 373.0 | 1492 | 1.1376 |
| 0.4059 | 374.0 | 1496 | 1.1239 |
| 0.4026 | 375.0 | 1500 | 1.1201 |
| 0.4026 | 376.0 | 1504 | 1.1128 |
| 0.4026 | 377.0 | 1508 | 1.1067 |
| 0.4026 | 378.0 | 1512 | 1.1073 |
| 0.4026 | 379.0 | 1516 | 1.1112 |
| 0.4026 | 380.0 | 1520 | 1.1212 |
| 0.4026 | 381.0 | 1524 | 1.1387 |
| 0.4026 | 382.0 | 1528 | 1.1460 |
| 0.4026 | 383.0 | 1532 | 1.1238 |
| 0.4026 | 384.0 | 1536 | 1.1028 |
| 0.4026 | 385.0 | 1540 | 1.1051 |
| 0.4026 | 386.0 | 1544 | 1.1086 |
| 0.4026 | 387.0 | 1548 | 1.0921 |
| 0.4026 | 388.0 | 1552 | 1.0765 |
| 0.4026 | 389.0 | 1556 | 1.0831 |
| 0.4026 | 390.0 | 1560 | 1.0897 |
| 0.4026 | 391.0 | 1564 | 1.0915 |
| 0.4026 | 392.0 | 1568 | 1.0901 |
| 0.4026 | 393.0 | 1572 | 1.0891 |
| 0.4026 | 394.0 | 1576 | 1.0918 |
| 0.4026 | 395.0 | 1580 | 1.0979 |
| 0.4026 | 396.0 | 1584 | 1.0970 |
| 0.4026 | 397.0 | 1588 | 1.0804 |
| 0.4026 | 398.0 | 1592 | 1.0838 |
| 0.4026 | 399.0 | 1596 | 1.0858 |
| 0.4026 | 400.0 | 1600 | 1.0962 |
| 0.4026 | 401.0 | 1604 | 1.1256 |
| 0.4026 | 402.0 | 1608 | 1.1424 |
| 0.4026 | 403.0 | 1612 | 1.1586 |
| 0.4026 | 404.0 | 1616 | 1.1724 |
| 0.4026 | 405.0 | 1620 | 1.1751 |
| 0.4026 | 406.0 | 1624 | 1.1961 |
| 0.4026 | 407.0 | 1628 | 1.2155 |
| 0.4026 | 408.0 | 1632 | 1.2273 |
| 0.4026 | 409.0 | 1636 | 1.2307 |
| 0.4026 | 410.0 | 1640 | 1.2315 |
| 0.4026 | 411.0 | 1644 | 1.2128 |
| 0.4026 | 412.0 | 1648 | 1.1893 |
| 0.4026 | 413.0 | 1652 | 1.1579 |
| 0.4026 | 414.0 | 1656 | 1.1366 |
| 0.4026 | 415.0 | 1660 | 1.1357 |
| 0.4026 | 416.0 | 1664 | 1.1407 |
| 0.4026 | 417.0 | 1668 | 1.1430 |
| 0.4026 | 418.0 | 1672 | 1.1448 |
| 0.4026 | 419.0 | 1676 | 1.1484 |
| 0.4026 | 420.0 | 1680 | 1.1536 |
| 0.4026 | 421.0 | 1684 | 1.1489 |
| 0.4026 | 422.0 | 1688 | 1.1727 |
| 0.4026 | 423.0 | 1692 | 1.1906 |
| 0.4026 | 424.0 | 1696 | 1.1960 |
| 0.4026 | 425.0 | 1700 | 1.1939 |
| 0.4026 | 426.0 | 1704 | 1.1789 |
| 0.4026 | 427.0 | 1708 | 1.1635 |
| 0.4026 | 428.0 | 1712 | 1.1499 |
| 0.4026 | 429.0 | 1716 | 1.1432 |
| 0.4026 | 430.0 | 1720 | 1.1382 |
| 0.4026 | 431.0 | 1724 | 1.1275 |
| 0.4026 | 432.0 | 1728 | 1.1173 |
| 0.4026 | 433.0 | 1732 | 1.1088 |
| 0.4026 | 434.0 | 1736 | 1.0911 |
| 0.4026 | 435.0 | 1740 | 1.0853 |
| 0.4026 | 436.0 | 1744 | 1.0861 |
| 0.4026 | 437.0 | 1748 | 1.1100 |
| 0.4026 | 438.0 | 1752 | 1.1545 |
| 0.4026 | 439.0 | 1756 | 1.1714 |
| 0.4026 | 440.0 | 1760 | 1.1520 |
| 0.4026 | 441.0 | 1764 | 1.1242 |
| 0.4026 | 442.0 | 1768 | 1.1029 |
| 0.4026 | 443.0 | 1772 | 1.0844 |
| 0.4026 | 444.0 | 1776 | 1.0676 |
| 0.4026 | 445.0 | 1780 | 1.0830 |
| 0.4026 | 446.0 | 1784 | 1.0936 |
| 0.4026 | 447.0 | 1788 | 1.0992 |
| 0.4026 | 448.0 | 1792 | 1.1024 |
| 0.4026 | 449.0 | 1796 | 1.1005 |
| 0.4026 | 450.0 | 1800 | 1.0968 |
| 0.4026 | 451.0 | 1804 | 1.0915 |
| 0.4026 | 452.0 | 1808 | 1.0914 |
| 0.4026 | 453.0 | 1812 | 1.0897 |
| 0.4026 | 454.0 | 1816 | 1.0799 |
| 0.4026 | 455.0 | 1820 | 1.1148 |
| 0.4026 | 456.0 | 1824 | 1.1440 |
| 0.4026 | 457.0 | 1828 | 1.1571 |
| 0.4026 | 458.0 | 1832 | 1.1594 |
| 0.4026 | 459.0 | 1836 | 1.1520 |
| 0.4026 | 460.0 | 1840 | 1.1392 |
| 0.4026 | 461.0 | 1844 | 1.1145 |
| 0.4026 | 462.0 | 1848 | 1.1045 |
| 0.4026 | 463.0 | 1852 | 1.0923 |
| 0.4026 | 464.0 | 1856 | 1.0772 |
| 0.4026 | 465.0 | 1860 | 1.0652 |
| 0.4026 | 466.0 | 1864 | 1.0405 |
| 0.4026 | 467.0 | 1868 | 1.0121 |
| 0.4026 | 468.0 | 1872 | 1.0254 |
| 0.4026 | 469.0 | 1876 | 1.1054 |
| 0.4026 | 470.0 | 1880 | 1.1700 |
| 0.4026 | 471.0 | 1884 | 1.1976 |
| 0.4026 | 472.0 | 1888 | 1.1985 |
| 0.4026 | 473.0 | 1892 | 1.2013 |
| 0.4026 | 474.0 | 1896 | 1.1945 |
| 0.4026 | 475.0 | 1900 | 1.1819 |
| 0.4026 | 476.0 | 1904 | 1.1745 |
| 0.4026 | 477.0 | 1908 | 1.1637 |
| 0.4026 | 478.0 | 1912 | 1.1613 |
| 0.4026 | 479.0 | 1916 | 1.2205 |
| 0.4026 | 480.0 | 1920 | 1.3217 |
| 0.4026 | 481.0 | 1924 | 1.3495 |
| 0.4026 | 482.0 | 1928 | 1.3611 |
| 0.4026 | 483.0 | 1932 | 1.3540 |
| 0.4026 | 484.0 | 1936 | 1.3446 |
| 0.4026 | 485.0 | 1940 | 1.3276 |
| 0.4026 | 486.0 | 1944 | 1.2940 |
| 0.4026 | 487.0 | 1948 | 1.2593 |
| 0.4026 | 488.0 | 1952 | 1.2319 |
| 0.4026 | 489.0 | 1956 | 1.2247 |
| 0.4026 | 490.0 | 1960 | 1.2264 |
| 0.4026 | 491.0 | 1964 | 1.2378 |
| 0.4026 | 492.0 | 1968 | 1.2434 |
| 0.4026 | 493.0 | 1972 | 1.2530 |
| 0.4026 | 494.0 | 1976 | 1.2621 |
| 0.4026 | 495.0 | 1980 | 1.2628 |
| 0.4026 | 496.0 | 1984 | 1.2380 |
| 0.4026 | 497.0 | 1988 | 1.2284 |
| 0.4026 | 498.0 | 1992 | 1.2583 |
| 0.4026 | 499.0 | 1996 | 1.2241 |
| 0.4132 | 500.0 | 2000 | 1.2637 |
| 0.4132 | 501.0 | 2004 | 1.2356 |
| 0.4132 | 502.0 | 2008 | 1.1919 |
| 0.4132 | 503.0 | 2012 | 1.1615 |
| 0.4132 | 504.0 | 2016 | 1.1739 |
| 0.4132 | 505.0 | 2020 | 1.1578 |
| 0.4132 | 506.0 | 2024 | 1.1376 |
| 0.4132 | 507.0 | 2028 | 1.1027 |
| 0.4132 | 508.0 | 2032 | 1.0491 |
| 0.4132 | 509.0 | 2036 | 1.0300 |
| 0.4132 | 510.0 | 2040 | 1.0555 |
| 0.4132 | 511.0 | 2044 | 1.0936 |
| 0.4132 | 512.0 | 2048 | 1.1107 |
| 0.4132 | 513.0 | 2052 | 1.1290 |
| 0.4132 | 514.0 | 2056 | 1.1403 |
| 0.4132 | 515.0 | 2060 | 1.1134 |
| 0.4132 | 516.0 | 2064 | 1.0623 |
| 0.4132 | 517.0 | 2068 | 1.1057 |
| 0.4132 | 518.0 | 2072 | 1.0797 |
| 0.4132 | 519.0 | 2076 | 1.1629 |
| 0.4132 | 520.0 | 2080 | 1.2167 |
| 0.4132 | 521.0 | 2084 | 1.2047 |
| 0.4132 | 522.0 | 2088 | 1.1083 |
| 0.4132 | 523.0 | 2092 | 1.0418 |
| 0.4132 | 524.0 | 2096 | 1.0102 |
| 0.4132 | 525.0 | 2100 | 1.0244 |
| 0.4132 | 526.0 | 2104 | 1.1072 |
| 0.4132 | 527.0 | 2108 | 1.1927 |
| 0.4132 | 528.0 | 2112 | 1.2431 |
| 0.4132 | 529.0 | 2116 | 1.2620 |
| 0.4132 | 530.0 | 2120 | 1.2626 |
| 0.4132 | 531.0 | 2124 | 1.2374 |
| 0.4132 | 532.0 | 2128 | 1.2128 |
| 0.4132 | 533.0 | 2132 | 1.1929 |
| 0.4132 | 534.0 | 2136 | 1.1825 |
| 0.4132 | 535.0 | 2140 | 1.1820 |
| 0.4132 | 536.0 | 2144 | 1.1747 |
| 0.4132 | 537.0 | 2148 | 1.1500 |
| 0.4132 | 538.0 | 2152 | 1.1300 |
| 0.4132 | 539.0 | 2156 | 1.1154 |
| 0.4132 | 540.0 | 2160 | 1.1131 |
| 0.4132 | 541.0 | 2164 | 1.2039 |
| 0.4132 | 542.0 | 2168 | 1.2969 |
| 0.4132 | 543.0 | 2172 | 1.3467 |
| 0.4132 | 544.0 | 2176 | 1.3269 |
| 0.4132 | 545.0 | 2180 | 1.2708 |
| 0.4132 | 546.0 | 2184 | 1.2328 |
| 0.4132 | 547.0 | 2188 | 1.2018 |
| 0.4132 | 548.0 | 2192 | 1.2414 |
| 0.4132 | 549.0 | 2196 | 1.3077 |
| 0.4132 | 550.0 | 2200 | 1.3456 |
| 0.4132 | 551.0 | 2204 | 1.3697 |
| 0.4132 | 552.0 | 2208 | 1.3549 |
| 0.4132 | 553.0 | 2212 | 1.3114 |
| 0.4132 | 554.0 | 2216 | 1.2546 |
| 0.4132 | 555.0 | 2220 | 1.1885 |
| 0.4132 | 556.0 | 2224 | 1.1551 |
| 0.4132 | 557.0 | 2228 | 1.1560 |
| 0.4132 | 558.0 | 2232 | 1.1636 |
| 0.4132 | 559.0 | 2236 | 1.1683 |
| 0.4132 | 560.0 | 2240 | 1.1802 |
| 0.4132 | 561.0 | 2244 | 1.1915 |
| 0.4132 | 562.0 | 2248 | 1.2013 |
| 0.4132 | 563.0 | 2252 | 1.2959 |
| 0.4132 | 564.0 | 2256 | 1.3462 |
| 0.4132 | 565.0 | 2260 | 1.3304 |
| 0.4132 | 566.0 | 2264 | 1.2797 |
| 0.4132 | 567.0 | 2268 | 1.2271 |
| 0.4132 | 568.0 | 2272 | 1.1545 |
| 0.4132 | 569.0 | 2276 | 1.0932 |
| 0.4132 | 570.0 | 2280 | 1.0846 |
| 0.4132 | 571.0 | 2284 | 1.1062 |
| 0.4132 | 572.0 | 2288 | 1.1248 |
| 0.4132 | 573.0 | 2292 | 1.1334 |
| 0.4132 | 574.0 | 2296 | 1.1361 |
| 0.4132 | 575.0 | 2300 | 1.1488 |
| 0.4132 | 576.0 | 2304 | 1.1842 |
| 0.4132 | 577.0 | 2308 | 1.2073 |
| 0.4132 | 578.0 | 2312 | 1.2114 |
| 0.4132 | 579.0 | 2316 | 1.2072 |
| 0.4132 | 580.0 | 2320 | 1.2062 |
| 0.4132 | 581.0 | 2324 | 1.2102 |
| 0.4132 | 582.0 | 2328 | 1.1919 |
| 0.4132 | 583.0 | 2332 | 1.1725 |
| 0.4132 | 584.0 | 2336 | 1.1534 |
| 0.4132 | 585.0 | 2340 | 1.1383 |
| 0.4132 | 586.0 | 2344 | 1.1390 |
| 0.4132 | 587.0 | 2348 | 1.1535 |
| 0.4132 | 588.0 | 2352 | 1.1533 |
| 0.4132 | 589.0 | 2356 | 1.1464 |
| 0.4132 | 590.0 | 2360 | 1.1425 |
| 0.4132 | 591.0 | 2364 | 1.1457 |
| 0.4132 | 592.0 | 2368 | 1.1446 |
| 0.4132 | 593.0 | 2372 | 1.1400 |
| 0.4132 | 594.0 | 2376 | 1.1323 |
| 0.4132 | 595.0 | 2380 | 1.1214 |
| 0.4132 | 596.0 | 2384 | 1.1196 |
| 0.4132 | 597.0 | 2388 | 1.1202 |
| 0.4132 | 598.0 | 2392 | 1.1111 |
| 0.4132 | 599.0 | 2396 | 1.1033 |
| 0.4132 | 600.0 | 2400 | 1.0880 |
| 0.4132 | 601.0 | 2404 | 1.0803 |
| 0.4132 | 602.0 | 2408 | 1.1013 |
| 0.4132 | 603.0 | 2412 | 1.1340 |
| 0.4132 | 604.0 | 2416 | 1.1478 |
| 0.4132 | 605.0 | 2420 | 1.1489 |
| 0.4132 | 606.0 | 2424 | 1.1421 |
| 0.4132 | 607.0 | 2428 | 1.1339 |
| 0.4132 | 608.0 | 2432 | 1.1218 |
| 0.4132 | 609.0 | 2436 | 1.1091 |
| 0.4132 | 610.0 | 2440 | 1.1061 |
| 0.4132 | 611.0 | 2444 | 1.0998 |
| 0.4132 | 612.0 | 2448 | 1.1126 |
| 0.4132 | 613.0 | 2452 | 1.1213 |
| 0.4132 | 614.0 | 2456 | 1.1272 |
| 0.4132 | 615.0 | 2460 | 1.1455 |
| 0.4132 | 616.0 | 2464 | 1.1578 |
| 0.4132 | 617.0 | 2468 | 1.1805 |
| 0.4132 | 618.0 | 2472 | 1.2011 |
| 0.4132 | 619.0 | 2476 | 1.2163 |
| 0.4132 | 620.0 | 2480 | 1.2338 |
| 0.4132 | 621.0 | 2484 | 1.2324 |
| 0.4132 | 622.0 | 2488 | 1.2222 |
| 0.4132 | 623.0 | 2492 | 1.1981 |
| 0.4132 | 624.0 | 2496 | 1.1771 |
| 0.4061 | 625.0 | 2500 | 1.1522 |
| 0.4061 | 626.0 | 2504 | 1.1489 |
| 0.4061 | 627.0 | 2508 | 1.1523 |
| 0.4061 | 628.0 | 2512 | 1.1616 |
| 0.4061 | 629.0 | 2516 | 1.1826 |
| 0.4061 | 630.0 | 2520 | 1.2340 |
| 0.4061 | 631.0 | 2524 | 1.2748 |
| 0.4061 | 632.0 | 2528 | 1.2921 |
| 0.4061 | 633.0 | 2532 | 1.2943 |
| 0.4061 | 634.0 | 2536 | 1.2903 |
| 0.4061 | 635.0 | 2540 | 1.2727 |
| 0.4061 | 636.0 | 2544 | 1.2437 |
| 0.4061 | 637.0 | 2548 | 1.2215 |
| 0.4061 | 638.0 | 2552 | 1.2745 |
| 0.4061 | 639.0 | 2556 | 1.3062 |
| 0.4061 | 640.0 | 2560 | 1.3212 |
| 0.4061 | 641.0 | 2564 | 1.3231 |
| 0.4061 | 642.0 | 2568 | 1.3165 |
| 0.4061 | 643.0 | 2572 | 1.2992 |
| 0.4061 | 644.0 | 2576 | 1.2758 |
| 0.4061 | 645.0 | 2580 | 1.2506 |
| 0.4061 | 646.0 | 2584 | 1.2508 |
| 0.4061 | 647.0 | 2588 | 1.2453 |
| 0.4061 | 648.0 | 2592 | 1.2296 |
| 0.4061 | 649.0 | 2596 | 1.2141 |
| 0.4061 | 650.0 | 2600 | 1.2024 |
| 0.4061 | 651.0 | 2604 | 1.1930 |
| 0.4061 | 652.0 | 2608 | 1.2219 |
| 0.4061 | 653.0 | 2612 | 1.2306 |
| 0.4061 | 654.0 | 2616 | 1.2269 |
| 0.4061 | 655.0 | 2620 | 1.2037 |
| 0.4061 | 656.0 | 2624 | 1.1795 |
| 0.4061 | 657.0 | 2628 | 1.1435 |
| 0.4061 | 658.0 | 2632 | 1.1146 |
| 0.4061 | 659.0 | 2636 | 1.0946 |
| 0.4061 | 660.0 | 2640 | 1.0931 |
| 0.4061 | 661.0 | 2644 | 1.1798 |
| 0.4061 | 662.0 | 2648 | 1.1944 |
| 0.4061 | 663.0 | 2652 | 1.1942 |
| 0.4061 | 664.0 | 2656 | 1.2285 |
| 0.4061 | 665.0 | 2660 | 1.3122 |
| 0.4061 | 666.0 | 2664 | 1.3508 |
| 0.4061 | 667.0 | 2668 | 1.3625 |
| 0.4061 | 668.0 | 2672 | 1.3328 |
| 0.4061 | 669.0 | 2676 | 1.2849 |
| 0.4061 | 670.0 | 2680 | 1.2284 |
| 0.4061 | 671.0 | 2684 | 1.1931 |
| 0.4061 | 672.0 | 2688 | 1.1913 |
| 0.4061 | 673.0 | 2692 | 1.2059 |
| 0.4061 | 674.0 | 2696 | 1.2328 |
| 0.4061 | 675.0 | 2700 | 1.2668 |
| 0.4061 | 676.0 | 2704 | 1.2732 |
| 0.4061 | 677.0 | 2708 | 1.2647 |
| 0.4061 | 678.0 | 2712 | 1.2574 |
| 0.4061 | 679.0 | 2716 | 1.2319 |
| 0.4061 | 680.0 | 2720 | 1.2031 |
| 0.4061 | 681.0 | 2724 | 1.2425 |
| 0.4061 | 682.0 | 2728 | 1.2883 |
| 0.4061 | 683.0 | 2732 | 1.3076 |
| 0.4061 | 684.0 | 2736 | 1.3102 |
| 0.4061 | 685.0 | 2740 | 1.3046 |
| 0.4061 | 686.0 | 2744 | 1.2982 |
| 0.4061 | 687.0 | 2748 | 1.2846 |
| 0.4061 | 688.0 | 2752 | 1.2751 |
| 0.4061 | 689.0 | 2756 | 1.2671 |
| 0.4061 | 690.0 | 2760 | 1.2551 |
| 0.4061 | 691.0 | 2764 | 1.2444 |
| 0.4061 | 692.0 | 2768 | 1.2144 |
| 0.4061 | 693.0 | 2772 | 1.1945 |
| 0.4061 | 694.0 | 2776 | 1.1846 |
| 0.4061 | 695.0 | 2780 | 1.1939 |
| 0.4061 | 696.0 | 2784 | 1.1949 |
| 0.4061 | 697.0 | 2788 | 1.2070 |
| 0.4061 | 698.0 | 2792 | 1.2194 |
| 0.4061 | 699.0 | 2796 | 1.2330 |
| 0.4061 | 700.0 | 2800 | 1.2461 |
| 0.4061 | 701.0 | 2804 | 1.2499 |
| 0.4061 | 702.0 | 2808 | 1.2419 |
| 0.4061 | 703.0 | 2812 | 1.2619 |
| 0.4061 | 704.0 | 2816 | 1.2295 |
| 0.4061 | 705.0 | 2820 | 1.2170 |
| 0.4061 | 706.0 | 2824 | 1.2960 |
| 0.4061 | 707.0 | 2828 | 1.3246 |
| 0.4061 | 708.0 | 2832 | 1.3304 |
| 0.4061 | 709.0 | 2836 | 1.3395 |
| 0.4061 | 710.0 | 2840 | 1.3449 |
| 0.4061 | 711.0 | 2844 | 1.3399 |
| 0.4061 | 712.0 | 2848 | 1.3301 |
| 0.4061 | 713.0 | 2852 | 1.3168 |
| 0.4061 | 714.0 | 2856 | 1.3108 |
| 0.4061 | 715.0 | 2860 | 1.3146 |
| 0.4061 | 716.0 | 2864 | 1.3229 |
| 0.4061 | 717.0 | 2868 | 1.3482 |
| 0.4061 | 718.0 | 2872 | 1.3742 |
| 0.4061 | 719.0 | 2876 | 1.3829 |
| 0.4061 | 720.0 | 2880 | 1.3847 |
| 0.4061 | 721.0 | 2884 | 1.3867 |
| 0.4061 | 722.0 | 2888 | 1.3857 |
| 0.4061 | 723.0 | 2892 | 1.3810 |
| 0.4061 | 724.0 | 2896 | 1.3730 |
| 0.4061 | 725.0 | 2900 | 1.3631 |
| 0.4061 | 726.0 | 2904 | 1.3527 |
| 0.4061 | 727.0 | 2908 | 1.3418 |
| 0.4061 | 728.0 | 2912 | 1.3186 |
| 0.4061 | 729.0 | 2916 | 1.3084 |
| 0.4061 | 730.0 | 2920 | 1.3000 |
| 0.4061 | 731.0 | 2924 | 1.2873 |
| 0.4061 | 732.0 | 2928 | 1.2775 |
| 0.4061 | 733.0 | 2932 | 1.2699 |
| 0.4061 | 734.0 | 2936 | 1.2703 |
| 0.4061 | 735.0 | 2940 | 1.2799 |
| 0.4061 | 736.0 | 2944 | 1.2905 |
| 0.4061 | 737.0 | 2948 | 1.3006 |
| 0.4061 | 738.0 | 2952 | 1.3002 |
| 0.4061 | 739.0 | 2956 | 1.2978 |
| 0.4061 | 740.0 | 2960 | 1.2848 |
| 0.4061 | 741.0 | 2964 | 1.2631 |
| 0.4061 | 742.0 | 2968 | 1.2506 |
| 0.4061 | 743.0 | 2972 | 1.2557 |
| 0.4061 | 744.0 | 2976 | 1.2643 |
| 0.4061 | 745.0 | 2980 | 1.2719 |
| 0.4061 | 746.0 | 2984 | 1.2731 |
| 0.4061 | 747.0 | 2988 | 1.3278 |
| 0.4061 | 748.0 | 2992 | 1.3545 |
| 0.4061 | 749.0 | 2996 | 1.3598 |
| 0.4016 | 750.0 | 3000 | 1.3552 |
| 0.4016 | 751.0 | 3004 | 1.3679 |
| 0.4016 | 752.0 | 3008 | 1.3758 |
| 0.4016 | 753.0 | 3012 | 1.3602 |
| 0.4016 | 754.0 | 3016 | 1.3482 |
| 0.4016 | 755.0 | 3020 | 1.3237 |
| 0.4016 | 756.0 | 3024 | 1.3004 |
| 0.4016 | 757.0 | 3028 | 1.2859 |
| 0.4016 | 758.0 | 3032 | 1.2923 |
| 0.4016 | 759.0 | 3036 | 1.3164 |
| 0.4016 | 760.0 | 3040 | 1.3224 |
| 0.4016 | 761.0 | 3044 | 1.3039 |
| 0.4016 | 762.0 | 3048 | 1.2589 |
| 0.4016 | 763.0 | 3052 | 1.1517 |
| 0.4016 | 764.0 | 3056 | 1.0966 |
| 0.4016 | 765.0 | 3060 | 1.1509 |
| 0.4016 | 766.0 | 3064 | 1.2219 |
| 0.4016 | 767.0 | 3068 | 1.2252 |
| 0.4016 | 768.0 | 3072 | 1.2120 |
| 0.4016 | 769.0 | 3076 | 1.1997 |
| 0.4016 | 770.0 | 3080 | 1.1788 |
| 0.4016 | 771.0 | 3084 | 1.1522 |
| 0.4016 | 772.0 | 3088 | 1.1402 |
| 0.4016 | 773.0 | 3092 | 1.1456 |
| 0.4016 | 774.0 | 3096 | 1.1622 |
| 0.4016 | 775.0 | 3100 | 1.1761 |
| 0.4016 | 776.0 | 3104 | 1.1781 |
| 0.4016 | 777.0 | 3108 | 1.1733 |
| 0.4016 | 778.0 | 3112 | 1.1608 |
| 0.4016 | 779.0 | 3116 | 1.1462 |
| 0.4016 | 780.0 | 3120 | 1.1350 |
| 0.4016 | 781.0 | 3124 | 1.1381 |
| 0.4016 | 782.0 | 3128 | 1.1442 |
| 0.4016 | 783.0 | 3132 | 1.1534 |
| 0.4016 | 784.0 | 3136 | 1.1221 |
| 0.4016 | 785.0 | 3140 | 1.1822 |
| 0.4016 | 786.0 | 3144 | 1.2308 |
| 0.4016 | 787.0 | 3148 | 1.2633 |
| 0.4016 | 788.0 | 3152 | 1.2659 |
| 0.4016 | 789.0 | 3156 | 1.2471 |
| 0.4016 | 790.0 | 3160 | 1.1818 |
| 0.4016 | 791.0 | 3164 | 1.1384 |
| 0.4016 | 792.0 | 3168 | 1.1248 |
| 0.4016 | 793.0 | 3172 | 1.1100 |
| 0.4016 | 794.0 | 3176 | 1.1004 |
| 0.4016 | 795.0 | 3180 | 1.1016 |
| 0.4016 | 796.0 | 3184 | 1.1277 |
| 0.4016 | 797.0 | 3188 | 1.1689 |
| 0.4016 | 798.0 | 3192 | 1.1946 |
| 0.4016 | 799.0 | 3196 | 1.2127 |
| 0.4016 | 800.0 | 3200 | 1.2245 |
| 0.4016 | 801.0 | 3204 | 1.2228 |
| 0.4016 | 802.0 | 3208 | 1.2164 |
| 0.4016 | 803.0 | 3212 | 1.2172 |
| 0.4016 | 804.0 | 3216 | 1.2180 |
| 0.4016 | 805.0 | 3220 | 1.2165 |
| 0.4016 | 806.0 | 3224 | 1.2123 |
| 0.4016 | 807.0 | 3228 | 1.2098 |
| 0.4016 | 808.0 | 3232 | 1.2090 |
| 0.4016 | 809.0 | 3236 | 1.2058 |
| 0.4016 | 810.0 | 3240 | 1.2009 |
| 0.4016 | 811.0 | 3244 | 1.2007 |
| 0.4016 | 812.0 | 3248 | 1.2076 |
| 0.4016 | 813.0 | 3252 | 1.2389 |
| 0.4016 | 814.0 | 3256 | 1.2485 |
| 0.4016 | 815.0 | 3260 | 1.2495 |
| 0.4016 | 816.0 | 3264 | 1.2480 |
| 0.4016 | 817.0 | 3268 | 1.2444 |
| 0.4016 | 818.0 | 3272 | 1.2378 |
| 0.4016 | 819.0 | 3276 | 1.2285 |
| 0.4016 | 820.0 | 3280 | 1.2135 |
| 0.4016 | 821.0 | 3284 | 1.1896 |
| 0.4016 | 822.0 | 3288 | 1.1637 |
| 0.4016 | 823.0 | 3292 | 1.1443 |
| 0.4016 | 824.0 | 3296 | 1.1267 |
| 0.4016 | 825.0 | 3300 | 1.1119 |
| 0.4016 | 826.0 | 3304 | 1.1052 |
| 0.4016 | 827.0 | 3308 | 1.1026 |
| 0.4016 | 828.0 | 3312 | 1.1021 |
| 0.4016 | 829.0 | 3316 | 1.1042 |
| 0.4016 | 830.0 | 3320 | 1.1077 |
| 0.4016 | 831.0 | 3324 | 1.1123 |
| 0.4016 | 832.0 | 3328 | 1.1195 |
| 0.4016 | 833.0 | 3332 | 1.1204 |
| 0.4016 | 834.0 | 3336 | 1.1215 |
| 0.4016 | 835.0 | 3340 | 1.1350 |
| 0.4016 | 836.0 | 3344 | 1.1476 |
| 0.4016 | 837.0 | 3348 | 1.1558 |
| 0.4016 | 838.0 | 3352 | 1.1687 |
| 0.4016 | 839.0 | 3356 | 1.1715 |
| 0.4016 | 840.0 | 3360 | 1.1797 |
| 0.4016 | 841.0 | 3364 | 1.2209 |
| 0.4016 | 842.0 | 3368 | 1.2569 |
| 0.4016 | 843.0 | 3372 | 1.2802 |
| 0.4016 | 844.0 | 3376 | 1.3029 |
| 0.4016 | 845.0 | 3380 | 1.2870 |
| 0.4016 | 846.0 | 3384 | 1.1964 |
| 0.4016 | 847.0 | 3388 | 1.1334 |
| 0.4016 | 848.0 | 3392 | 1.1218 |
| 0.4016 | 849.0 | 3396 | 1.1278 |
| 0.4016 | 850.0 | 3400 | 1.1315 |
| 0.4016 | 851.0 | 3404 | 1.1784 |
| 0.4016 | 852.0 | 3408 | 1.2120 |
| 0.4016 | 853.0 | 3412 | 1.2280 |
| 0.4016 | 854.0 | 3416 | 1.2320 |
| 0.4016 | 855.0 | 3420 | 1.1869 |
| 0.4016 | 856.0 | 3424 | 1.1227 |
| 0.4016 | 857.0 | 3428 | 1.0755 |
| 0.4016 | 858.0 | 3432 | 1.0452 |
| 0.4016 | 859.0 | 3436 | 1.0299 |
| 0.4016 | 860.0 | 3440 | 1.0241 |
| 0.4016 | 861.0 | 3444 | 1.0236 |
| 0.4016 | 862.0 | 3448 | 1.0262 |
| 0.4016 | 863.0 | 3452 | 1.0287 |
| 0.4016 | 864.0 | 3456 | 1.0308 |
| 0.4016 | 865.0 | 3460 | 1.0330 |
| 0.4016 | 866.0 | 3464 | 1.0352 |
| 0.4016 | 867.0 | 3468 | 1.0370 |
| 0.4016 | 868.0 | 3472 | 1.0386 |
| 0.4016 | 869.0 | 3476 | 1.0386 |
| 0.4016 | 870.0 | 3480 | 1.0296 |
| 0.4016 | 871.0 | 3484 | 1.0207 |
| 0.4016 | 872.0 | 3488 | 1.0171 |
| 0.4016 | 873.0 | 3492 | 1.0158 |
| 0.4016 | 874.0 | 3496 | 1.0149 |
| 0.4014 | 875.0 | 3500 | 1.0150 |
| 0.4014 | 876.0 | 3504 | 1.0162 |
| 0.4014 | 877.0 | 3508 | 1.0176 |
| 0.4014 | 878.0 | 3512 | 1.0295 |
| 0.4014 | 879.0 | 3516 | 1.0410 |
| 0.4014 | 880.0 | 3520 | 1.0489 |
| 0.4014 | 881.0 | 3524 | 1.0540 |
| 0.4014 | 882.0 | 3528 | 1.0578 |
| 0.4014 | 883.0 | 3532 | 1.0607 |
| 0.4014 | 884.0 | 3536 | 1.0630 |
| 0.4014 | 885.0 | 3540 | 1.0675 |
| 0.4014 | 886.0 | 3544 | 1.0700 |
| 0.4014 | 887.0 | 3548 | 1.0726 |
| 0.4014 | 888.0 | 3552 | 1.0851 |
| 0.4014 | 889.0 | 3556 | 1.0946 |
| 0.4014 | 890.0 | 3560 | 1.1003 |
| 0.4014 | 891.0 | 3564 | 1.0967 |
| 0.4014 | 892.0 | 3568 | 1.0899 |
| 0.4014 | 893.0 | 3572 | 1.0831 |
| 0.4014 | 894.0 | 3576 | 1.0767 |
| 0.4014 | 895.0 | 3580 | 1.0696 |
| 0.4014 | 896.0 | 3584 | 1.0664 |
| 0.4014 | 897.0 | 3588 | 1.0691 |
| 0.4014 | 898.0 | 3592 | 1.0772 |
| 0.4014 | 899.0 | 3596 | 1.0807 |
| 0.4014 | 900.0 | 3600 | 1.0831 |
| 0.4014 | 901.0 | 3604 | 1.0822 |
| 0.4014 | 902.0 | 3608 | 1.0792 |
| 0.4014 | 903.0 | 3612 | 1.0659 |
| 0.4014 | 904.0 | 3616 | 1.0539 |
| 0.4014 | 905.0 | 3620 | 1.0426 |
| 0.4014 | 906.0 | 3624 | 1.0392 |
| 0.4014 | 907.0 | 3628 | 1.0473 |
| 0.4014 | 908.0 | 3632 | 1.0532 |
| 0.4014 | 909.0 | 3636 | 1.0545 |
| 0.4014 | 910.0 | 3640 | 1.0536 |
| 0.4014 | 911.0 | 3644 | 1.0540 |
| 0.4014 | 912.0 | 3648 | 1.0546 |
| 0.4014 | 913.0 | 3652 | 1.0587 |
| 0.4014 | 914.0 | 3656 | 1.0701 |
| 0.4014 | 915.0 | 3660 | 1.0807 |
| 0.4014 | 916.0 | 3664 | 1.0884 |
| 0.4014 | 917.0 | 3668 | 1.0956 |
| 0.4014 | 918.0 | 3672 | 1.1019 |
| 0.4014 | 919.0 | 3676 | 1.1053 |
| 0.4014 | 920.0 | 3680 | 1.1067 |
| 0.4014 | 921.0 | 3684 | 1.1044 |
| 0.4014 | 922.0 | 3688 | 1.1030 |
| 0.4014 | 923.0 | 3692 | 1.1033 |
| 0.4014 | 924.0 | 3696 | 1.1041 |
| 0.4014 | 925.0 | 3700 | 1.1068 |
| 0.4014 | 926.0 | 3704 | 1.1116 |
| 0.4014 | 927.0 | 3708 | 1.1157 |
| 0.4014 | 928.0 | 3712 | 1.1195 |
| 0.4014 | 929.0 | 3716 | 1.1245 |
| 0.4014 | 930.0 | 3720 | 1.1271 |
| 0.4014 | 931.0 | 3724 | 1.1289 |
| 0.4014 | 932.0 | 3728 | 1.1316 |
| 0.4014 | 933.0 | 3732 | 1.1340 |
| 0.4014 | 934.0 | 3736 | 1.1367 |
| 0.4014 | 935.0 | 3740 | 1.1425 |
| 0.4014 | 936.0 | 3744 | 1.1488 |
| 0.4014 | 937.0 | 3748 | 1.1515 |
| 0.4014 | 938.0 | 3752 | 1.1503 |
| 0.4014 | 939.0 | 3756 | 1.1478 |
| 0.4014 | 940.0 | 3760 | 1.1487 |
| 0.4014 | 941.0 | 3764 | 1.1488 |
| 0.4014 | 942.0 | 3768 | 1.1488 |
| 0.4014 | 943.0 | 3772 | 1.1493 |
| 0.4014 | 944.0 | 3776 | 1.1358 |
| 0.4014 | 945.0 | 3780 | 1.0983 |
| 0.4014 | 946.0 | 3784 | 1.0740 |
| 0.4014 | 947.0 | 3788 | 1.0641 |
| 0.4014 | 948.0 | 3792 | 1.0617 |
| 0.4014 | 949.0 | 3796 | 1.0639 |
| 0.4014 | 950.0 | 3800 | 1.0667 |
| 0.4014 | 951.0 | 3804 | 1.0778 |
| 0.4014 | 952.0 | 3808 | 1.0883 |
| 0.4014 | 953.0 | 3812 | 1.1023 |
| 0.4014 | 954.0 | 3816 | 1.1139 |
| 0.4014 | 955.0 | 3820 | 1.1205 |
| 0.4014 | 956.0 | 3824 | 1.1238 |
| 0.4014 | 957.0 | 3828 | 1.1264 |
| 0.4014 | 958.0 | 3832 | 1.1328 |
| 0.4014 | 959.0 | 3836 | 1.1374 |
| 0.4014 | 960.0 | 3840 | 1.1400 |
| 0.4014 | 961.0 | 3844 | 1.1397 |
| 0.4014 | 962.0 | 3848 | 1.1388 |
| 0.4014 | 963.0 | 3852 | 1.1385 |
| 0.4014 | 964.0 | 3856 | 1.1390 |
| 0.4014 | 965.0 | 3860 | 1.1397 |
| 0.4014 | 966.0 | 3864 | 1.1413 |
| 0.4014 | 967.0 | 3868 | 1.1471 |
| 0.4014 | 968.0 | 3872 | 1.1519 |
| 0.4014 | 969.0 | 3876 | 1.1541 |
| 0.4014 | 970.0 | 3880 | 1.1526 |
| 0.4014 | 971.0 | 3884 | 1.1506 |
| 0.4014 | 972.0 | 3888 | 1.1494 |
| 0.4014 | 973.0 | 3892 | 1.1484 |
| 0.4014 | 974.0 | 3896 | 1.1436 |
| 0.4014 | 975.0 | 3900 | 1.1406 |
| 0.4014 | 976.0 | 3904 | 1.1369 |
| 0.4014 | 977.0 | 3908 | 1.1329 |
| 0.4014 | 978.0 | 3912 | 1.1309 |
| 0.4014 | 979.0 | 3916 | 1.1291 |
| 0.4014 | 980.0 | 3920 | 1.1285 |
| 0.4014 | 981.0 | 3924 | 1.1298 |
| 0.4014 | 982.0 | 3928 | 1.1328 |
| 0.4014 | 983.0 | 3932 | 1.1266 |
| 0.4014 | 984.0 | 3936 | 1.1233 |
| 0.4014 | 985.0 | 3940 | 1.1279 |
| 0.4014 | 986.0 | 3944 | 1.1331 |
| 0.4014 | 987.0 | 3948 | 1.1367 |
| 0.4014 | 988.0 | 3952 | 1.1336 |
| 0.4014 | 989.0 | 3956 | 1.1305 |
| 0.4014 | 990.0 | 3960 | 1.1284 |
| 0.4014 | 991.0 | 3964 | 1.1270 |
| 0.4014 | 992.0 | 3968 | 1.1256 |
| 0.4014 | 993.0 | 3972 | 1.1231 |
| 0.4014 | 994.0 | 3976 | 1.1220 |
| 0.4014 | 995.0 | 3980 | 1.1229 |
| 0.4014 | 996.0 | 3984 | 1.1074 |
| 0.4014 | 997.0 | 3988 | 1.1741 |
| 0.4014 | 998.0 | 3992 | 1.2255 |
| 0.4014 | 999.0 | 3996 | 1.2600 |
| 0.4025 | 1000.0 | 4000 | 1.2943 |
| 0.4025 | 1001.0 | 4004 | 1.3115 |
| 0.4025 | 1002.0 | 4008 | 1.3149 |
| 0.4025 | 1003.0 | 4012 | 1.2950 |
| 0.4025 | 1004.0 | 4016 | 1.2578 |
| 0.4025 | 1005.0 | 4020 | 1.2230 |
| 0.4025 | 1006.0 | 4024 | 1.1886 |
| 0.4025 | 1007.0 | 4028 | 1.1686 |
| 0.4025 | 1008.0 | 4032 | 1.1784 |
| 0.4025 | 1009.0 | 4036 | 1.1909 |
| 0.4025 | 1010.0 | 4040 | 1.1984 |
| 0.4025 | 1011.0 | 4044 | 1.2013 |
| 0.4025 | 1012.0 | 4048 | 1.2029 |
| 0.4025 | 1013.0 | 4052 | 1.2016 |
| 0.4025 | 1014.0 | 4056 | 1.1755 |
| 0.4025 | 1015.0 | 4060 | 1.0993 |
| 0.4025 | 1016.0 | 4064 | 1.0576 |
| 0.4025 | 1017.0 | 4068 | 1.0620 |
| 0.4025 | 1018.0 | 4072 | 1.0791 |
| 0.4025 | 1019.0 | 4076 | 1.0938 |
| 0.4025 | 1020.0 | 4080 | 1.1000 |
| 0.4025 | 1021.0 | 4084 | 1.1049 |
| 0.4025 | 1022.0 | 4088 | 1.1093 |
| 0.4025 | 1023.0 | 4092 | 1.1115 |
| 0.4025 | 1024.0 | 4096 | 1.1253 |
| 0.4025 | 1025.0 | 4100 | 1.1377 |
| 0.4025 | 1026.0 | 4104 | 1.1378 |
| 0.4025 | 1027.0 | 4108 | 1.1303 |
| 0.4025 | 1028.0 | 4112 | 1.1133 |
| 0.4025 | 1029.0 | 4116 | 1.0965 |
| 0.4025 | 1030.0 | 4120 | 1.0833 |
| 0.4025 | 1031.0 | 4124 | 1.0750 |
| 0.4025 | 1032.0 | 4128 | 1.0715 |
| 0.4025 | 1033.0 | 4132 | 1.0742 |
| 0.4025 | 1034.0 | 4136 | 1.0822 |
| 0.4025 | 1035.0 | 4140 | 1.0887 |
| 0.4025 | 1036.0 | 4144 | 1.0935 |
| 0.4025 | 1037.0 | 4148 | 1.0960 |
| 0.4025 | 1038.0 | 4152 | 1.0993 |
| 0.4025 | 1039.0 | 4156 | 1.1041 |
| 0.4025 | 1040.0 | 4160 | 1.1087 |
| 0.4025 | 1041.0 | 4164 | 1.1171 |
| 0.4025 | 1042.0 | 4168 | 1.1270 |
| 0.4025 | 1043.0 | 4172 | 1.1340 |
| 0.4025 | 1044.0 | 4176 | 1.1404 |
| 0.4025 | 1045.0 | 4180 | 1.1455 |
| 0.4025 | 1046.0 | 4184 | 1.1466 |
| 0.4025 | 1047.0 | 4188 | 1.1479 |
| 0.4025 | 1048.0 | 4192 | 1.1482 |
| 0.4025 | 1049.0 | 4196 | 1.1489 |
| 0.4025 | 1050.0 | 4200 | 1.1486 |
| 0.4025 | 1051.0 | 4204 | 1.1477 |
| 0.4025 | 1052.0 | 4208 | 1.1471 |
| 0.4025 | 1053.0 | 4212 | 1.1478 |
| 0.4025 | 1054.0 | 4216 | 1.1483 |
| 0.4025 | 1055.0 | 4220 | 1.1424 |
| 0.4025 | 1056.0 | 4224 | 1.1357 |
| 0.4025 | 1057.0 | 4228 | 1.1308 |
| 0.4025 | 1058.0 | 4232 | 1.1275 |
| 0.4025 | 1059.0 | 4236 | 1.1346 |
| 0.4025 | 1060.0 | 4240 | 1.1628 |
| 0.4025 | 1061.0 | 4244 | 1.1450 |
| 0.4025 | 1062.0 | 4248 | 1.1331 |
| 0.4025 | 1063.0 | 4252 | 1.1271 |
| 0.4025 | 1064.0 | 4256 | 1.1263 |
| 0.4025 | 1065.0 | 4260 | 1.1266 |
| 0.4025 | 1066.0 | 4264 | 1.1259 |
| 0.4025 | 1067.0 | 4268 | 1.1255 |
| 0.4025 | 1068.0 | 4272 | 1.1248 |
| 0.4025 | 1069.0 | 4276 | 1.1228 |
| 0.4025 | 1070.0 | 4280 | 1.1207 |
| 0.4025 | 1071.0 | 4284 | 1.1215 |
| 0.4025 | 1072.0 | 4288 | 1.1191 |
| 0.4025 | 1073.0 | 4292 | 1.1177 |
| 0.4025 | 1074.0 | 4296 | 1.1179 |
| 0.4025 | 1075.0 | 4300 | 1.1181 |
| 0.4025 | 1076.0 | 4304 | 1.1181 |
| 0.4025 | 1077.0 | 4308 | 1.1172 |
| 0.4025 | 1078.0 | 4312 | 1.1154 |
| 0.4025 | 1079.0 | 4316 | 1.1134 |
| 0.4025 | 1080.0 | 4320 | 1.1121 |
| 0.4025 | 1081.0 | 4324 | 1.1111 |
| 0.4025 | 1082.0 | 4328 | 1.1102 |
| 0.4025 | 1083.0 | 4332 | 1.1102 |
| 0.4025 | 1084.0 | 4336 | 1.1109 |
| 0.4025 | 1085.0 | 4340 | 1.1119 |
| 0.4025 | 1086.0 | 4344 | 1.1126 |
| 0.4025 | 1087.0 | 4348 | 1.1129 |
| 0.4025 | 1088.0 | 4352 | 1.1131 |
| 0.4025 | 1089.0 | 4356 | 1.1131 |
| 0.4025 | 1090.0 | 4360 | 1.1129 |
| 0.4025 | 1091.0 | 4364 | 1.1130 |
| 0.4025 | 1092.0 | 4368 | 1.0967 |
| 0.4025 | 1093.0 | 4372 | 1.0824 |
| 0.4025 | 1094.0 | 4376 | 1.0799 |
| 0.4025 | 1095.0 | 4380 | 1.0830 |
| 0.4025 | 1096.0 | 4384 | 1.0894 |
| 0.4025 | 1097.0 | 4388 | 1.0983 |
| 0.4025 | 1098.0 | 4392 | 1.1050 |
| 0.4025 | 1099.0 | 4396 | 1.1161 |
| 0.4025 | 1100.0 | 4400 | 1.1332 |
| 0.4025 | 1101.0 | 4404 | 1.1434 |
| 0.4025 | 1102.0 | 4408 | 1.1527 |
| 0.4025 | 1103.0 | 4412 | 1.1581 |
| 0.4025 | 1104.0 | 4416 | 1.1606 |
| 0.4025 | 1105.0 | 4420 | 1.1648 |
| 0.4025 | 1106.0 | 4424 | 1.1656 |
| 0.4025 | 1107.0 | 4428 | 1.1644 |
| 0.4025 | 1108.0 | 4432 | 1.1646 |
| 0.4025 | 1109.0 | 4436 | 1.1654 |
| 0.4025 | 1110.0 | 4440 | 1.1610 |
| 0.4025 | 1111.0 | 4444 | 1.1545 |
| 0.4025 | 1112.0 | 4448 | 1.1492 |
| 0.4025 | 1113.0 | 4452 | 1.1442 |
| 0.4025 | 1114.0 | 4456 | 1.1438 |
| 0.4025 | 1115.0 | 4460 | 1.1538 |
| 0.4025 | 1116.0 | 4464 | 1.1623 |
| 0.4025 | 1117.0 | 4468 | 1.1693 |
| 0.4025 | 1118.0 | 4472 | 1.1743 |
| 0.4025 | 1119.0 | 4476 | 1.1749 |
| 0.4025 | 1120.0 | 4480 | 1.1382 |
| 0.4025 | 1121.0 | 4484 | 1.1209 |
| 0.4025 | 1122.0 | 4488 | 1.1680 |
| 0.4025 | 1123.0 | 4492 | 1.2175 |
| 0.4025 | 1124.0 | 4496 | 1.2453 |
| 0.4015 | 1125.0 | 4500 | 1.2393 |
| 0.4015 | 1126.0 | 4504 | 1.2185 |
| 0.4015 | 1127.0 | 4508 | 1.1926 |
| 0.4015 | 1128.0 | 4512 | 1.1660 |
| 0.4015 | 1129.0 | 4516 | 1.1457 |
| 0.4015 | 1130.0 | 4520 | 1.1286 |
| 0.4015 | 1131.0 | 4524 | 1.1176 |
| 0.4015 | 1132.0 | 4528 | 1.1100 |
| 0.4015 | 1133.0 | 4532 | 1.1023 |
| 0.4015 | 1134.0 | 4536 | 1.0997 |
| 0.4015 | 1135.0 | 4540 | 1.0973 |
| 0.4015 | 1136.0 | 4544 | 1.0962 |
| 0.4015 | 1137.0 | 4548 | 1.0984 |
| 0.4015 | 1138.0 | 4552 | 1.1027 |
| 0.4015 | 1139.0 | 4556 | 1.1081 |
| 0.4015 | 1140.0 | 4560 | 1.1123 |
| 0.4015 | 1141.0 | 4564 | 1.1148 |
| 0.4015 | 1142.0 | 4568 | 1.1128 |
| 0.4015 | 1143.0 | 4572 | 1.1084 |
| 0.4015 | 1144.0 | 4576 | 1.1048 |
| 0.4015 | 1145.0 | 4580 | 1.0997 |
| 0.4015 | 1146.0 | 4584 | 1.1051 |
| 0.4015 | 1147.0 | 4588 | 1.1135 |
| 0.4015 | 1148.0 | 4592 | 1.1169 |
| 0.4015 | 1149.0 | 4596 | 1.1196 |
| 0.4015 | 1150.0 | 4600 | 1.1214 |
| 0.4015 | 1151.0 | 4604 | 1.1132 |
| 0.4015 | 1152.0 | 4608 | 1.1172 |
| 0.4015 | 1153.0 | 4612 | 1.1228 |
| 0.4015 | 1154.0 | 4616 | 1.1291 |
| 0.4015 | 1155.0 | 4620 | 1.1335 |
| 0.4015 | 1156.0 | 4624 | 1.1364 |
| 0.4015 | 1157.0 | 4628 | 1.1378 |
| 0.4015 | 1158.0 | 4632 | 1.1378 |
| 0.4015 | 1159.0 | 4636 | 1.1380 |
| 0.4015 | 1160.0 | 4640 | 1.1300 |
| 0.4015 | 1161.0 | 4644 | 1.1238 |
| 0.4015 | 1162.0 | 4648 | 1.1207 |
| 0.4015 | 1163.0 | 4652 | 1.1203 |
| 0.4015 | 1164.0 | 4656 | 1.1198 |
| 0.4015 | 1165.0 | 4660 | 1.1092 |
| 0.4015 | 1166.0 | 4664 | 1.1052 |
| 0.4015 | 1167.0 | 4668 | 1.1309 |
| 0.4015 | 1168.0 | 4672 | 1.1826 |
| 0.4015 | 1169.0 | 4676 | 1.1280 |
| 0.4015 | 1170.0 | 4680 | 1.1234 |
| 0.4015 | 1171.0 | 4684 | 1.1804 |
| 0.4015 | 1172.0 | 4688 | 1.2199 |
| 0.4015 | 1173.0 | 4692 | 1.2259 |
| 0.4015 | 1174.0 | 4696 | 1.2267 |
| 0.4015 | 1175.0 | 4700 | 1.2261 |
| 0.4015 | 1176.0 | 4704 | 1.2248 |
| 0.4015 | 1177.0 | 4708 | 1.2086 |
| 0.4015 | 1178.0 | 4712 | 1.1969 |
| 0.4015 | 1179.0 | 4716 | 1.1937 |
| 0.4015 | 1180.0 | 4720 | 1.1915 |
| 0.4015 | 1181.0 | 4724 | 1.1917 |
| 0.4015 | 1182.0 | 4728 | 1.1925 |
| 0.4015 | 1183.0 | 4732 | 1.2010 |
| 0.4015 | 1184.0 | 4736 | 1.2017 |
| 0.4015 | 1185.0 | 4740 | 1.1974 |
| 0.4015 | 1186.0 | 4744 | 1.1934 |
| 0.4015 | 1187.0 | 4748 | 1.1915 |
| 0.4015 | 1188.0 | 4752 | 1.1902 |
| 0.4015 | 1189.0 | 4756 | 1.1896 |
| 0.4015 | 1190.0 | 4760 | 1.1888 |
| 0.4015 | 1191.0 | 4764 | 1.1806 |
| 0.4015 | 1192.0 | 4768 | 1.1684 |
| 0.4015 | 1193.0 | 4772 | 1.1584 |
| 0.4015 | 1194.0 | 4776 | 1.1505 |
| 0.4015 | 1195.0 | 4780 | 1.1480 |
| 0.4015 | 1196.0 | 4784 | 1.1483 |
| 0.4015 | 1197.0 | 4788 | 1.1506 |
| 0.4015 | 1198.0 | 4792 | 1.1532 |
| 0.4015 | 1199.0 | 4796 | 1.1542 |
| 0.4015 | 1200.0 | 4800 | 1.1539 |
| 0.4015 | 1201.0 | 4804 | 1.1521 |
| 0.4015 | 1202.0 | 4808 | 1.1509 |
| 0.4015 | 1203.0 | 4812 | 1.1495 |
| 0.4015 | 1204.0 | 4816 | 1.1499 |
| 0.4015 | 1205.0 | 4820 | 1.1519 |
| 0.4015 | 1206.0 | 4824 | 1.1538 |
| 0.4015 | 1207.0 | 4828 | 1.1569 |
| 0.4015 | 1208.0 | 4832 | 1.1558 |
| 0.4015 | 1209.0 | 4836 | 1.1562 |
| 0.4015 | 1210.0 | 4840 | 1.1556 |
| 0.4015 | 1211.0 | 4844 | 1.1548 |
| 0.4015 | 1212.0 | 4848 | 1.1574 |
| 0.4015 | 1213.0 | 4852 | 1.1591 |
| 0.4015 | 1214.0 | 4856 | 1.1590 |
| 0.4015 | 1215.0 | 4860 | 1.1575 |
| 0.4015 | 1216.0 | 4864 | 1.1385 |
| 0.4015 | 1217.0 | 4868 | 1.1270 |
| 0.4015 | 1218.0 | 4872 | 1.1209 |
| 0.4015 | 1219.0 | 4876 | 1.1201 |
| 0.4015 | 1220.0 | 4880 | 1.1297 |
| 0.4015 | 1221.0 | 4884 | 1.1371 |
| 0.4015 | 1222.0 | 4888 | 1.1426 |
| 0.4015 | 1223.0 | 4892 | 1.1456 |
| 0.4015 | 1224.0 | 4896 | 1.1458 |
| 0.4015 | 1225.0 | 4900 | 1.1463 |
| 0.4015 | 1226.0 | 4904 | 1.1458 |
| 0.4015 | 1227.0 | 4908 | 1.1445 |
| 0.4015 | 1228.0 | 4912 | 1.1438 |
| 0.4015 | 1229.0 | 4916 | 1.1434 |
| 0.4015 | 1230.0 | 4920 | 1.1434 |
| 0.4015 | 1231.0 | 4924 | 1.1420 |
| 0.4015 | 1232.0 | 4928 | 1.1431 |
| 0.4015 | 1233.0 | 4932 | 1.1469 |
| 0.4015 | 1234.0 | 4936 | 1.1481 |
| 0.4015 | 1235.0 | 4940 | 1.1464 |
| 0.4015 | 1236.0 | 4944 | 1.1433 |
| 0.4015 | 1237.0 | 4948 | 1.1392 |
| 0.4015 | 1238.0 | 4952 | 1.1353 |
| 0.4015 | 1239.0 | 4956 | 1.1318 |
| 0.4015 | 1240.0 | 4960 | 1.1300 |
| 0.4015 | 1241.0 | 4964 | 1.1287 |
| 0.4015 | 1242.0 | 4968 | 1.1837 |
| 0.4015 | 1243.0 | 4972 | 1.2690 |
| 0.4015 | 1244.0 | 4976 | 1.3062 |
| 0.4015 | 1245.0 | 4980 | 1.3034 |
| 0.4015 | 1246.0 | 4984 | 1.2571 |
| 0.4015 | 1247.0 | 4988 | 1.2178 |
| 0.4015 | 1248.0 | 4992 | 1.1835 |
| 0.4015 | 1249.0 | 4996 | 1.1600 |
| 0.4008 | 1250.0 | 5000 | 1.1461 |
| 0.4008 | 1251.0 | 5004 | 1.1375 |
| 0.4008 | 1252.0 | 5008 | 1.1322 |
| 0.4008 | 1253.0 | 5012 | 1.1299 |
| 0.4008 | 1254.0 | 5016 | 1.1389 |
| 0.4008 | 1255.0 | 5020 | 1.1511 |
| 0.4008 | 1256.0 | 5024 | 1.1566 |
| 0.4008 | 1257.0 | 5028 | 1.1594 |
| 0.4008 | 1258.0 | 5032 | 1.1602 |
| 0.4008 | 1259.0 | 5036 | 1.1609 |
| 0.4008 | 1260.0 | 5040 | 1.1610 |
| 0.4008 | 1261.0 | 5044 | 1.1608 |
| 0.4008 | 1262.0 | 5048 | 1.1597 |
| 0.4008 | 1263.0 | 5052 | 1.1590 |
| 0.4008 | 1264.0 | 5056 | 1.1597 |
| 0.4008 | 1265.0 | 5060 | 1.1603 |
| 0.4008 | 1266.0 | 5064 | 1.1604 |
| 0.4008 | 1267.0 | 5068 | 1.1602 |
| 0.4008 | 1268.0 | 5072 | 1.1598 |
| 0.4008 | 1269.0 | 5076 | 1.1579 |
| 0.4008 | 1270.0 | 5080 | 1.1565 |
| 0.4008 | 1271.0 | 5084 | 1.1558 |
| 0.4008 | 1272.0 | 5088 | 1.1548 |
| 0.4008 | 1273.0 | 5092 | 1.1559 |
| 0.4008 | 1274.0 | 5096 | 1.1588 |
| 0.4008 | 1275.0 | 5100 | 1.1622 |
| 0.4008 | 1276.0 | 5104 | 1.1649 |
| 0.4008 | 1277.0 | 5108 | 1.1670 |
| 0.4008 | 1278.0 | 5112 | 1.1698 |
| 0.4008 | 1279.0 | 5116 | 1.1725 |
| 0.4008 | 1280.0 | 5120 | 1.1868 |
| 0.4008 | 1281.0 | 5124 | 1.2203 |
| 0.4008 | 1282.0 | 5128 | 1.2401 |
| 0.4008 | 1283.0 | 5132 | 1.2493 |
| 0.4008 | 1284.0 | 5136 | 1.2511 |
| 0.4008 | 1285.0 | 5140 | 1.2476 |
| 0.4008 | 1286.0 | 5144 | 1.2440 |
| 0.4008 | 1287.0 | 5148 | 1.2408 |
| 0.4008 | 1288.0 | 5152 | 1.2389 |
| 0.4008 | 1289.0 | 5156 | 1.2452 |
| 0.4008 | 1290.0 | 5160 | 1.2512 |
| 0.4008 | 1291.0 | 5164 | 1.2502 |
| 0.4008 | 1292.0 | 5168 | 1.2396 |
| 0.4008 | 1293.0 | 5172 | 1.2263 |
| 0.4008 | 1294.0 | 5176 | 1.2149 |
| 0.4008 | 1295.0 | 5180 | 1.2061 |
| 0.4008 | 1296.0 | 5184 | 1.1999 |
| 0.4008 | 1297.0 | 5188 | 1.1953 |
| 0.4008 | 1298.0 | 5192 | 1.1914 |
| 0.4008 | 1299.0 | 5196 | 1.1855 |
| 0.4008 | 1300.0 | 5200 | 1.1795 |
| 0.4008 | 1301.0 | 5204 | 1.1830 |
| 0.4008 | 1302.0 | 5208 | 1.1923 |
| 0.4008 | 1303.0 | 5212 | 1.2020 |
| 0.4008 | 1304.0 | 5216 | 1.2060 |
| 0.4008 | 1305.0 | 5220 | 1.2277 |
| 0.4008 | 1306.0 | 5224 | 1.2438 |
| 0.4008 | 1307.0 | 5228 | 1.2499 |
| 0.4008 | 1308.0 | 5232 | 1.2500 |
| 0.4008 | 1309.0 | 5236 | 1.2497 |
| 0.4008 | 1310.0 | 5240 | 1.2522 |
| 0.4008 | 1311.0 | 5244 | 1.2541 |
| 0.4008 | 1312.0 | 5248 | 1.2537 |
| 0.4008 | 1313.0 | 5252 | 1.2522 |
| 0.4008 | 1314.0 | 5256 | 1.2485 |
| 0.4008 | 1315.0 | 5260 | 1.2415 |
| 0.4008 | 1316.0 | 5264 | 1.2388 |
| 0.4008 | 1317.0 | 5268 | 1.2365 |
| 0.4008 | 1318.0 | 5272 | 1.2348 |
| 0.4008 | 1319.0 | 5276 | 1.2331 |
| 0.4008 | 1320.0 | 5280 | 1.2321 |
| 0.4008 | 1321.0 | 5284 | 1.2298 |
| 0.4008 | 1322.0 | 5288 | 1.2291 |
| 0.4008 | 1323.0 | 5292 | 1.2288 |
| 0.4008 | 1324.0 | 5296 | 1.2259 |
| 0.4008 | 1325.0 | 5300 | 1.2227 |
| 0.4008 | 1326.0 | 5304 | 1.2183 |
| 0.4008 | 1327.0 | 5308 | 1.2139 |
| 0.4008 | 1328.0 | 5312 | 1.2110 |
| 0.4008 | 1329.0 | 5316 | 1.2143 |
| 0.4008 | 1330.0 | 5320 | 1.2166 |
| 0.4008 | 1331.0 | 5324 | 1.2170 |
| 0.4008 | 1332.0 | 5328 | 1.2170 |
| 0.4008 | 1333.0 | 5332 | 1.2179 |
| 0.4008 | 1334.0 | 5336 | 1.2179 |
| 0.4008 | 1335.0 | 5340 | 1.2162 |
| 0.4008 | 1336.0 | 5344 | 1.2154 |
| 0.4008 | 1337.0 | 5348 | 1.2187 |
| 0.4008 | 1338.0 | 5352 | 1.2213 |
| 0.4008 | 1339.0 | 5356 | 1.2225 |
| 0.4008 | 1340.0 | 5360 | 1.2231 |
| 0.4008 | 1341.0 | 5364 | 1.2304 |
| 0.4008 | 1342.0 | 5368 | 1.2316 |
| 0.4008 | 1343.0 | 5372 | 1.2299 |
| 0.4008 | 1344.0 | 5376 | 1.2254 |
| 0.4008 | 1345.0 | 5380 | 1.2162 |
| 0.4008 | 1346.0 | 5384 | 1.2209 |
| 0.4008 | 1347.0 | 5388 | 1.2183 |
| 0.4008 | 1348.0 | 5392 | 1.2093 |
| 0.4008 | 1349.0 | 5396 | 1.1974 |
| 0.4008 | 1350.0 | 5400 | 1.1941 |
| 0.4008 | 1351.0 | 5404 | 1.1966 |
| 0.4008 | 1352.0 | 5408 | 1.2073 |
| 0.4008 | 1353.0 | 5412 | 1.2096 |
| 0.4008 | 1354.0 | 5416 | 1.2137 |
| 0.4008 | 1355.0 | 5420 | 1.2198 |
| 0.4008 | 1356.0 | 5424 | 1.2200 |
| 0.4008 | 1357.0 | 5428 | 1.2225 |
| 0.4008 | 1358.0 | 5432 | 1.2242 |
| 0.4008 | 1359.0 | 5436 | 1.2235 |
| 0.4008 | 1360.0 | 5440 | 1.2221 |
| 0.4008 | 1361.0 | 5444 | 1.2212 |
| 0.4008 | 1362.0 | 5448 | 1.2151 |
| 0.4008 | 1363.0 | 5452 | 1.2104 |
| 0.4008 | 1364.0 | 5456 | 1.2369 |
| 0.4008 | 1365.0 | 5460 | 1.2581 |
| 0.4008 | 1366.0 | 5464 | 1.2742 |
| 0.4008 | 1367.0 | 5468 | 1.2864 |
| 0.4008 | 1368.0 | 5472 | 1.2911 |
| 0.4008 | 1369.0 | 5476 | 1.2839 |
| 0.4008 | 1370.0 | 5480 | 1.2776 |
| 0.4008 | 1371.0 | 5484 | 1.2769 |
| 0.4008 | 1372.0 | 5488 | 1.2795 |
| 0.4008 | 1373.0 | 5492 | 1.2875 |
| 0.4008 | 1374.0 | 5496 | 1.2917 |
| 0.4015 | 1375.0 | 5500 | 1.2912 |
| 0.4015 | 1376.0 | 5504 | 1.2882 |
| 0.4015 | 1377.0 | 5508 | 1.2835 |
| 0.4015 | 1378.0 | 5512 | 1.2786 |
| 0.4015 | 1379.0 | 5516 | 1.2770 |
| 0.4015 | 1380.0 | 5520 | 1.2903 |
| 0.4015 | 1381.0 | 5524 | 1.2977 |
| 0.4015 | 1382.0 | 5528 | 1.3009 |
| 0.4015 | 1383.0 | 5532 | 1.3018 |
| 0.4015 | 1384.0 | 5536 | 1.3013 |
| 0.4015 | 1385.0 | 5540 | 1.2998 |
| 0.4015 | 1386.0 | 5544 | 1.2951 |
| 0.4015 | 1387.0 | 5548 | 1.2918 |
| 0.4015 | 1388.0 | 5552 | 1.2899 |
| 0.4015 | 1389.0 | 5556 | 1.2895 |
| 0.4015 | 1390.0 | 5560 | 1.2881 |
| 0.4015 | 1391.0 | 5564 | 1.2862 |
| 0.4015 | 1392.0 | 5568 | 1.2841 |
| 0.4015 | 1393.0 | 5572 | 1.2819 |
| 0.4015 | 1394.0 | 5576 | 1.2798 |
| 0.4015 | 1395.0 | 5580 | 1.2772 |
| 0.4015 | 1396.0 | 5584 | 1.2705 |
| 0.4015 | 1397.0 | 5588 | 1.2660 |
| 0.4015 | 1398.0 | 5592 | 1.2614 |
| 0.4015 | 1399.0 | 5596 | 1.2573 |
| 0.4015 | 1400.0 | 5600 | 1.2546 |
| 0.4015 | 1401.0 | 5604 | 1.2531 |
| 0.4015 | 1402.0 | 5608 | 1.2521 |
| 0.4015 | 1403.0 | 5612 | 1.2500 |
| 0.4015 | 1404.0 | 5616 | 1.2508 |
| 0.4015 | 1405.0 | 5620 | 1.2504 |
| 0.4015 | 1406.0 | 5624 | 1.2504 |
| 0.4015 | 1407.0 | 5628 | 1.2498 |
| 0.4015 | 1408.0 | 5632 | 1.2506 |
| 0.4015 | 1409.0 | 5636 | 1.2501 |
| 0.4015 | 1410.0 | 5640 | 1.2494 |
| 0.4015 | 1411.0 | 5644 | 1.2472 |
| 0.4015 | 1412.0 | 5648 | 1.2456 |
| 0.4015 | 1413.0 | 5652 | 1.2446 |
| 0.4015 | 1414.0 | 5656 | 1.2436 |
| 0.4015 | 1415.0 | 5660 | 1.2433 |
| 0.4015 | 1416.0 | 5664 | 1.2426 |
| 0.4015 | 1417.0 | 5668 | 1.2430 |
| 0.4015 | 1418.0 | 5672 | 1.2423 |
| 0.4015 | 1419.0 | 5676 | 1.2421 |
| 0.4015 | 1420.0 | 5680 | 1.2426 |
| 0.4015 | 1421.0 | 5684 | 1.2434 |
| 0.4015 | 1422.0 | 5688 | 1.2442 |
| 0.4015 | 1423.0 | 5692 | 1.2458 |
| 0.4015 | 1424.0 | 5696 | 1.2465 |
| 0.4015 | 1425.0 | 5700 | 1.2464 |
| 0.4015 | 1426.0 | 5704 | 1.2464 |
| 0.4015 | 1427.0 | 5708 | 1.2456 |
| 0.4015 | 1428.0 | 5712 | 1.2452 |
| 0.4015 | 1429.0 | 5716 | 1.2433 |
| 0.4015 | 1430.0 | 5720 | 1.2398 |
| 0.4015 | 1431.0 | 5724 | 1.2345 |
| 0.4015 | 1432.0 | 5728 | 1.2310 |
| 0.4015 | 1433.0 | 5732 | 1.2283 |
| 0.4015 | 1434.0 | 5736 | 1.2254 |
| 0.4015 | 1435.0 | 5740 | 1.2245 |
| 0.4015 | 1436.0 | 5744 | 1.2243 |
| 0.4015 | 1437.0 | 5748 | 1.2281 |
| 0.4015 | 1438.0 | 5752 | 1.2306 |
| 0.4015 | 1439.0 | 5756 | 1.2311 |
| 0.4015 | 1440.0 | 5760 | 1.2309 |
| 0.4015 | 1441.0 | 5764 | 1.2304 |
| 0.4015 | 1442.0 | 5768 | 1.2311 |
| 0.4015 | 1443.0 | 5772 | 1.2319 |
| 0.4015 | 1444.0 | 5776 | 1.2317 |
| 0.4015 | 1445.0 | 5780 | 1.2316 |
| 0.4015 | 1446.0 | 5784 | 1.2310 |
| 0.4015 | 1447.0 | 5788 | 1.2289 |
| 0.4015 | 1448.0 | 5792 | 1.2265 |
| 0.4015 | 1449.0 | 5796 | 1.2239 |
| 0.4015 | 1450.0 | 5800 | 1.2194 |
| 0.4015 | 1451.0 | 5804 | 1.2156 |
| 0.4015 | 1452.0 | 5808 | 1.2129 |
| 0.4015 | 1453.0 | 5812 | 1.2106 |
| 0.4015 | 1454.0 | 5816 | 1.2093 |
| 0.4015 | 1455.0 | 5820 | 1.2084 |
| 0.4015 | 1456.0 | 5824 | 1.2084 |
| 0.4015 | 1457.0 | 5828 | 1.2071 |
| 0.4015 | 1458.0 | 5832 | 1.2051 |
| 0.4015 | 1459.0 | 5836 | 1.2022 |
| 0.4015 | 1460.0 | 5840 | 1.2007 |
| 0.4015 | 1461.0 | 5844 | 1.1995 |
| 0.4015 | 1462.0 | 5848 | 1.2008 |
| 0.4015 | 1463.0 | 5852 | 1.2019 |
| 0.4015 | 1464.0 | 5856 | 1.2022 |
| 0.4015 | 1465.0 | 5860 | 1.2017 |
| 0.4015 | 1466.0 | 5864 | 1.2005 |
| 0.4015 | 1467.0 | 5868 | 1.1990 |
| 0.4015 | 1468.0 | 5872 | 1.1974 |
| 0.4015 | 1469.0 | 5876 | 1.1966 |
| 0.4015 | 1470.0 | 5880 | 1.1973 |
| 0.4015 | 1471.0 | 5884 | 1.1988 |
| 0.4015 | 1472.0 | 5888 | 1.1995 |
| 0.4015 | 1473.0 | 5892 | 1.1972 |
| 0.4015 | 1474.0 | 5896 | 1.1946 |
| 0.4015 | 1475.0 | 5900 | 1.1937 |
| 0.4015 | 1476.0 | 5904 | 1.1935 |
| 0.4015 | 1477.0 | 5908 | 1.1945 |
| 0.4015 | 1478.0 | 5912 | 1.1963 |
| 0.4015 | 1479.0 | 5916 | 1.1971 |
| 0.4015 | 1480.0 | 5920 | 1.1973 |
| 0.4015 | 1481.0 | 5924 | 1.1968 |
| 0.4015 | 1482.0 | 5928 | 1.1970 |
| 0.4015 | 1483.0 | 5932 | 1.1981 |
| 0.4015 | 1484.0 | 5936 | 1.2011 |
| 0.4015 | 1485.0 | 5940 | 1.2031 |
| 0.4015 | 1486.0 | 5944 | 1.2038 |
| 0.4015 | 1487.0 | 5948 | 1.2041 |
| 0.4015 | 1488.0 | 5952 | 1.2046 |
| 0.4015 | 1489.0 | 5956 | 1.2054 |
| 0.4015 | 1490.0 | 5960 | 1.2053 |
| 0.4015 | 1491.0 | 5964 | 1.2047 |
| 0.4015 | 1492.0 | 5968 | 1.2043 |
| 0.4015 | 1493.0 | 5972 | 1.2037 |
| 0.4015 | 1494.0 | 5976 | 1.2039 |
| 0.4015 | 1495.0 | 5980 | 1.2042 |
| 0.4015 | 1496.0 | 5984 | 1.2033 |
| 0.4015 | 1497.0 | 5988 | 1.2028 |
| 0.4015 | 1498.0 | 5992 | 1.2025 |
| 0.4015 | 1499.0 | 5996 | 1.2027 |
| 0.4005 | 1500.0 | 6000 | 1.2024 |
| 0.4005 | 1501.0 | 6004 | 1.2017 |
| 0.4005 | 1502.0 | 6008 | 1.2016 |
| 0.4005 | 1503.0 | 6012 | 1.2028 |
| 0.4005 | 1504.0 | 6016 | 1.2034 |
| 0.4005 | 1505.0 | 6020 | 1.2017 |
| 0.4005 | 1506.0 | 6024 | 1.2009 |
| 0.4005 | 1507.0 | 6028 | 1.2023 |
| 0.4005 | 1508.0 | 6032 | 1.2039 |
| 0.4005 | 1509.0 | 6036 | 1.2052 |
| 0.4005 | 1510.0 | 6040 | 1.2066 |
| 0.4005 | 1511.0 | 6044 | 1.2072 |
| 0.4005 | 1512.0 | 6048 | 1.2076 |
| 0.4005 | 1513.0 | 6052 | 1.2075 |
| 0.4005 | 1514.0 | 6056 | 1.2071 |
| 0.4005 | 1515.0 | 6060 | 1.2070 |
| 0.4005 | 1516.0 | 6064 | 1.2072 |
| 0.4005 | 1517.0 | 6068 | 1.2076 |
| 0.4005 | 1518.0 | 6072 | 1.2063 |
| 0.4005 | 1519.0 | 6076 | 1.2048 |
| 0.4005 | 1520.0 | 6080 | 1.2035 |
| 0.4005 | 1521.0 | 6084 | 1.2034 |
| 0.4005 | 1522.0 | 6088 | 1.2024 |
| 0.4005 | 1523.0 | 6092 | 1.2014 |
| 0.4005 | 1524.0 | 6096 | 1.2002 |
| 0.4005 | 1525.0 | 6100 | 1.2007 |
| 0.4005 | 1526.0 | 6104 | 1.2013 |
| 0.4005 | 1527.0 | 6108 | 1.2028 |
| 0.4005 | 1528.0 | 6112 | 1.2047 |
| 0.4005 | 1529.0 | 6116 | 1.2052 |
| 0.4005 | 1530.0 | 6120 | 1.2029 |
| 0.4005 | 1531.0 | 6124 | 1.1988 |
| 0.4005 | 1532.0 | 6128 | 1.1963 |
| 0.4005 | 1533.0 | 6132 | 1.1948 |
| 0.4005 | 1534.0 | 6136 | 1.2572 |
| 0.4005 | 1535.0 | 6140 | 1.3083 |
| 0.4005 | 1536.0 | 6144 | 1.3353 |
| 0.4005 | 1537.0 | 6148 | 1.3495 |
| 0.4005 | 1538.0 | 6152 | 1.3553 |
| 0.4005 | 1539.0 | 6156 | 1.3575 |
| 0.4005 | 1540.0 | 6160 | 1.3562 |
| 0.4005 | 1541.0 | 6164 | 1.3531 |
| 0.4005 | 1542.0 | 6168 | 1.3512 |
| 0.4005 | 1543.0 | 6172 | 1.3500 |
| 0.4005 | 1544.0 | 6176 | 1.3490 |
| 0.4005 | 1545.0 | 6180 | 1.3482 |
| 0.4005 | 1546.0 | 6184 | 1.3469 |
| 0.4005 | 1547.0 | 6188 | 1.3453 |
| 0.4005 | 1548.0 | 6192 | 1.3416 |
| 0.4005 | 1549.0 | 6196 | 1.3357 |
| 0.4005 | 1550.0 | 6200 | 1.3297 |
| 0.4005 | 1551.0 | 6204 | 1.3243 |
| 0.4005 | 1552.0 | 6208 | 1.3198 |
| 0.4005 | 1553.0 | 6212 | 1.3167 |
| 0.4005 | 1554.0 | 6216 | 1.3153 |
| 0.4005 | 1555.0 | 6220 | 1.3178 |
| 0.4005 | 1556.0 | 6224 | 1.3195 |
| 0.4005 | 1557.0 | 6228 | 1.3196 |
| 0.4005 | 1558.0 | 6232 | 1.3191 |
| 0.4005 | 1559.0 | 6236 | 1.3161 |
| 0.4005 | 1560.0 | 6240 | 1.3133 |
| 0.4005 | 1561.0 | 6244 | 1.3188 |
| 0.4005 | 1562.0 | 6248 | 1.3219 |
| 0.4005 | 1563.0 | 6252 | 1.3229 |
| 0.4005 | 1564.0 | 6256 | 1.3212 |
| 0.4005 | 1565.0 | 6260 | 1.3197 |
| 0.4005 | 1566.0 | 6264 | 1.3178 |
| 0.4005 | 1567.0 | 6268 | 1.3158 |
| 0.4005 | 1568.0 | 6272 | 1.3133 |
| 0.4005 | 1569.0 | 6276 | 1.2699 |
| 0.4005 | 1570.0 | 6280 | 1.2334 |
| 0.4005 | 1571.0 | 6284 | 1.2064 |
| 0.4005 | 1572.0 | 6288 | 1.1874 |
| 0.4005 | 1573.0 | 6292 | 1.1745 |
| 0.4005 | 1574.0 | 6296 | 1.1676 |
| 0.4005 | 1575.0 | 6300 | 1.1638 |
| 0.4005 | 1576.0 | 6304 | 1.1626 |
| 0.4005 | 1577.0 | 6308 | 1.1644 |
| 0.4005 | 1578.0 | 6312 | 1.1544 |
| 0.4005 | 1579.0 | 6316 | 1.1388 |
| 0.4005 | 1580.0 | 6320 | 1.1285 |
| 0.4005 | 1581.0 | 6324 | 1.1222 |
| 0.4005 | 1582.0 | 6328 | 1.1200 |
| 0.4005 | 1583.0 | 6332 | 1.1229 |
| 0.4005 | 1584.0 | 6336 | 1.1250 |
| 0.4005 | 1585.0 | 6340 | 1.1318 |
| 0.4005 | 1586.0 | 6344 | 1.1341 |
| 0.4005 | 1587.0 | 6348 | 1.1354 |
| 0.4005 | 1588.0 | 6352 | 1.1353 |
| 0.4005 | 1589.0 | 6356 | 1.1354 |
| 0.4005 | 1590.0 | 6360 | 1.1357 |
| 0.4005 | 1591.0 | 6364 | 1.1355 |
| 0.4005 | 1592.0 | 6368 | 1.1338 |
| 0.4005 | 1593.0 | 6372 | 1.1318 |
| 0.4005 | 1594.0 | 6376 | 1.1298 |
| 0.4005 | 1595.0 | 6380 | 1.1265 |
| 0.4005 | 1596.0 | 6384 | 1.1231 |
| 0.4005 | 1597.0 | 6388 | 1.1209 |
| 0.4005 | 1598.0 | 6392 | 1.1193 |
| 0.4005 | 1599.0 | 6396 | 1.1188 |
| 0.4005 | 1600.0 | 6400 | 1.1357 |
| 0.4005 | 1601.0 | 6404 | 1.1445 |
| 0.4005 | 1602.0 | 6408 | 1.1491 |
| 0.4005 | 1603.0 | 6412 | 1.1495 |
| 0.4005 | 1604.0 | 6416 | 1.1489 |
| 0.4005 | 1605.0 | 6420 | 1.1499 |
| 0.4005 | 1606.0 | 6424 | 1.1537 |
| 0.4005 | 1607.0 | 6428 | 1.1544 |
| 0.4005 | 1608.0 | 6432 | 1.1567 |
| 0.4005 | 1609.0 | 6436 | 1.1581 |
| 0.4005 | 1610.0 | 6440 | 1.1583 |
| 0.4005 | 1611.0 | 6444 | 1.1580 |
| 0.4005 | 1612.0 | 6448 | 1.1578 |
| 0.4005 | 1613.0 | 6452 | 1.1684 |
| 0.4005 | 1614.0 | 6456 | 1.1755 |
| 0.4005 | 1615.0 | 6460 | 1.1773 |
| 0.4005 | 1616.0 | 6464 | 1.1752 |
| 0.4005 | 1617.0 | 6468 | 1.1739 |
| 0.4005 | 1618.0 | 6472 | 1.1721 |
| 0.4005 | 1619.0 | 6476 | 1.1710 |
| 0.4005 | 1620.0 | 6480 | 1.1708 |
| 0.4005 | 1621.0 | 6484 | 1.1690 |
| 0.4005 | 1622.0 | 6488 | 1.1667 |
| 0.4005 | 1623.0 | 6492 | 1.1625 |
| 0.4005 | 1624.0 | 6496 | 1.1594 |
| 0.4004 | 1625.0 | 6500 | 1.1572 |
| 0.4004 | 1626.0 | 6504 | 1.1549 |
| 0.4004 | 1627.0 | 6508 | 1.1524 |
| 0.4004 | 1628.0 | 6512 | 1.1513 |
| 0.4004 | 1629.0 | 6516 | 1.1508 |
| 0.4004 | 1630.0 | 6520 | 1.1507 |
| 0.4004 | 1631.0 | 6524 | 1.1514 |
| 0.4004 | 1632.0 | 6528 | 1.1496 |
| 0.4004 | 1633.0 | 6532 | 1.1472 |
| 0.4004 | 1634.0 | 6536 | 1.1463 |
| 0.4004 | 1635.0 | 6540 | 1.1457 |
| 0.4004 | 1636.0 | 6544 | 1.1459 |
| 0.4004 | 1637.0 | 6548 | 1.1460 |
| 0.4004 | 1638.0 | 6552 | 1.1470 |
| 0.4004 | 1639.0 | 6556 | 1.1465 |
| 0.4004 | 1640.0 | 6560 | 1.1463 |
| 0.4004 | 1641.0 | 6564 | 1.1468 |
| 0.4004 | 1642.0 | 6568 | 1.1471 |
| 0.4004 | 1643.0 | 6572 | 1.1464 |
| 0.4004 | 1644.0 | 6576 | 1.1461 |
| 0.4004 | 1645.0 | 6580 | 1.1466 |
| 0.4004 | 1646.0 | 6584 | 1.1476 |
| 0.4004 | 1647.0 | 6588 | 1.1477 |
| 0.4004 | 1648.0 | 6592 | 1.1476 |
| 0.4004 | 1649.0 | 6596 | 1.1481 |
| 0.4004 | 1650.0 | 6600 | 1.1645 |
| 0.4004 | 1651.0 | 6604 | 1.1910 |
| 0.4004 | 1652.0 | 6608 | 1.2079 |
| 0.4004 | 1653.0 | 6612 | 1.2180 |
| 0.4004 | 1654.0 | 6616 | 1.2234 |
| 0.4004 | 1655.0 | 6620 | 1.2256 |
| 0.4004 | 1656.0 | 6624 | 1.2252 |
| 0.4004 | 1657.0 | 6628 | 1.2233 |
| 0.4004 | 1658.0 | 6632 | 1.2203 |
| 0.4004 | 1659.0 | 6636 | 1.2179 |
| 0.4004 | 1660.0 | 6640 | 1.2146 |
| 0.4004 | 1661.0 | 6644 | 1.2111 |
| 0.4004 | 1662.0 | 6648 | 1.2098 |
| 0.4004 | 1663.0 | 6652 | 1.2081 |
| 0.4004 | 1664.0 | 6656 | 1.2055 |
| 0.4004 | 1665.0 | 6660 | 1.1987 |
| 0.4004 | 1666.0 | 6664 | 1.1908 |
| 0.4004 | 1667.0 | 6668 | 1.1863 |
| 0.4004 | 1668.0 | 6672 | 1.1831 |
| 0.4004 | 1669.0 | 6676 | 1.1824 |
| 0.4004 | 1670.0 | 6680 | 1.1804 |
| 0.4004 | 1671.0 | 6684 | 1.1798 |
| 0.4004 | 1672.0 | 6688 | 1.1807 |
| 0.4004 | 1673.0 | 6692 | 1.1830 |
| 0.4004 | 1674.0 | 6696 | 1.1838 |
| 0.4004 | 1675.0 | 6700 | 1.1842 |
| 0.4004 | 1676.0 | 6704 | 1.1839 |
| 0.4004 | 1677.0 | 6708 | 1.1832 |
| 0.4004 | 1678.0 | 6712 | 1.1821 |
| 0.4004 | 1679.0 | 6716 | 1.1809 |
| 0.4004 | 1680.0 | 6720 | 1.1799 |
| 0.4004 | 1681.0 | 6724 | 1.1793 |
| 0.4004 | 1682.0 | 6728 | 1.1780 |
| 0.4004 | 1683.0 | 6732 | 1.1765 |
| 0.4004 | 1684.0 | 6736 | 1.1746 |
| 0.4004 | 1685.0 | 6740 | 1.1736 |
| 0.4004 | 1686.0 | 6744 | 1.1737 |
| 0.4004 | 1687.0 | 6748 | 1.1750 |
| 0.4004 | 1688.0 | 6752 | 1.1762 |
| 0.4004 | 1689.0 | 6756 | 1.1767 |
| 0.4004 | 1690.0 | 6760 | 1.1776 |
| 0.4004 | 1691.0 | 6764 | 1.1783 |
| 0.4004 | 1692.0 | 6768 | 1.1797 |
| 0.4004 | 1693.0 | 6772 | 1.1809 |
| 0.4004 | 1694.0 | 6776 | 1.1814 |
| 0.4004 | 1695.0 | 6780 | 1.1826 |
| 0.4004 | 1696.0 | 6784 | 1.1843 |
| 0.4004 | 1697.0 | 6788 | 1.1839 |
| 0.4004 | 1698.0 | 6792 | 1.1827 |
| 0.4004 | 1699.0 | 6796 | 1.1809 |
| 0.4004 | 1700.0 | 6800 | 1.1802 |
| 0.4004 | 1701.0 | 6804 | 1.1792 |
| 0.4004 | 1702.0 | 6808 | 1.1789 |
| 0.4004 | 1703.0 | 6812 | 1.1785 |
| 0.4004 | 1704.0 | 6816 | 1.1786 |
| 0.4004 | 1705.0 | 6820 | 1.1774 |
| 0.4004 | 1706.0 | 6824 | 1.1759 |
| 0.4004 | 1707.0 | 6828 | 1.1745 |
| 0.4004 | 1708.0 | 6832 | 1.1737 |
| 0.4004 | 1709.0 | 6836 | 1.1730 |
| 0.4004 | 1710.0 | 6840 | 1.1725 |
| 0.4004 | 1711.0 | 6844 | 1.1828 |
| 0.4004 | 1712.0 | 6848 | 1.1921 |
| 0.4004 | 1713.0 | 6852 | 1.1985 |
| 0.4004 | 1714.0 | 6856 | 1.2017 |
| 0.4004 | 1715.0 | 6860 | 1.2036 |
| 0.4004 | 1716.0 | 6864 | 1.2047 |
| 0.4004 | 1717.0 | 6868 | 1.2047 |
| 0.4004 | 1718.0 | 6872 | 1.2048 |
| 0.4004 | 1719.0 | 6876 | 1.2044 |
| 0.4004 | 1720.0 | 6880 | 1.2031 |
| 0.4004 | 1721.0 | 6884 | 1.2019 |
| 0.4004 | 1722.0 | 6888 | 1.2012 |
| 0.4004 | 1723.0 | 6892 | 1.2003 |
| 0.4004 | 1724.0 | 6896 | 1.1991 |
| 0.4004 | 1725.0 | 6900 | 1.1993 |
| 0.4004 | 1726.0 | 6904 | 1.1991 |
| 0.4004 | 1727.0 | 6908 | 1.1984 |
| 0.4004 | 1728.0 | 6912 | 1.1980 |
| 0.4004 | 1729.0 | 6916 | 1.1972 |
| 0.4004 | 1730.0 | 6920 | 1.1966 |
| 0.4004 | 1731.0 | 6924 | 1.1963 |
| 0.4004 | 1732.0 | 6928 | 1.1960 |
| 0.4004 | 1733.0 | 6932 | 1.1964 |
| 0.4004 | 1734.0 | 6936 | 1.1965 |
| 0.4004 | 1735.0 | 6940 | 1.1961 |
| 0.4004 | 1736.0 | 6944 | 1.1961 |
| 0.4004 | 1737.0 | 6948 | 1.1961 |
| 0.4004 | 1738.0 | 6952 | 1.1952 |
| 0.4004 | 1739.0 | 6956 | 1.1941 |
| 0.4004 | 1740.0 | 6960 | 1.1927 |
| 0.4004 | 1741.0 | 6964 | 1.1918 |
| 0.4004 | 1742.0 | 6968 | 1.1915 |
| 0.4004 | 1743.0 | 6972 | 1.1917 |
| 0.4004 | 1744.0 | 6976 | 1.1916 |
| 0.4004 | 1745.0 | 6980 | 1.1904 |
| 0.4004 | 1746.0 | 6984 | 1.1885 |
| 0.4004 | 1747.0 | 6988 | 1.1858 |
| 0.4004 | 1748.0 | 6992 | 1.1834 |
| 0.4004 | 1749.0 | 6996 | 1.1813 |
| 0.401 | 1750.0 | 7000 | 1.1793 |
| 0.401 | 1751.0 | 7004 | 1.1773 |
| 0.401 | 1752.0 | 7008 | 1.1912 |
| 0.401 | 1753.0 | 7012 | 1.1996 |
| 0.401 | 1754.0 | 7016 | 1.2069 |
| 0.401 | 1755.0 | 7020 | 1.2124 |
| 0.401 | 1756.0 | 7024 | 1.2148 |
| 0.401 | 1757.0 | 7028 | 1.2169 |
| 0.401 | 1758.0 | 7032 | 1.2179 |
| 0.401 | 1759.0 | 7036 | 1.2280 |
| 0.401 | 1760.0 | 7040 | 1.2425 |
| 0.401 | 1761.0 | 7044 | 1.2519 |
| 0.401 | 1762.0 | 7048 | 1.2579 |
| 0.401 | 1763.0 | 7052 | 1.2617 |
| 0.401 | 1764.0 | 7056 | 1.2642 |
| 0.401 | 1765.0 | 7060 | 1.2660 |
| 0.401 | 1766.0 | 7064 | 1.2669 |
| 0.401 | 1767.0 | 7068 | 1.2672 |
| 0.401 | 1768.0 | 7072 | 1.2671 |
| 0.401 | 1769.0 | 7076 | 1.2670 |
| 0.401 | 1770.0 | 7080 | 1.2663 |
| 0.401 | 1771.0 | 7084 | 1.2653 |
| 0.401 | 1772.0 | 7088 | 1.2647 |
| 0.401 | 1773.0 | 7092 | 1.2646 |
| 0.401 | 1774.0 | 7096 | 1.2632 |
| 0.401 | 1775.0 | 7100 | 1.2631 |
| 0.401 | 1776.0 | 7104 | 1.2633 |
| 0.401 | 1777.0 | 7108 | 1.2632 |
| 0.401 | 1778.0 | 7112 | 1.2627 |
| 0.401 | 1779.0 | 7116 | 1.2621 |
| 0.401 | 1780.0 | 7120 | 1.2621 |
| 0.401 | 1781.0 | 7124 | 1.2613 |
| 0.401 | 1782.0 | 7128 | 1.2605 |
| 0.401 | 1783.0 | 7132 | 1.2607 |
| 0.401 | 1784.0 | 7136 | 1.2611 |
| 0.401 | 1785.0 | 7140 | 1.2613 |
| 0.401 | 1786.0 | 7144 | 1.2615 |
| 0.401 | 1787.0 | 7148 | 1.2603 |
| 0.401 | 1788.0 | 7152 | 1.2549 |
| 0.401 | 1789.0 | 7156 | 1.2472 |
| 0.401 | 1790.0 | 7160 | 1.2418 |
| 0.401 | 1791.0 | 7164 | 1.2381 |
| 0.401 | 1792.0 | 7168 | 1.2356 |
| 0.401 | 1793.0 | 7172 | 1.2338 |
| 0.401 | 1794.0 | 7176 | 1.2328 |
| 0.401 | 1795.0 | 7180 | 1.2314 |
| 0.401 | 1796.0 | 7184 | 1.2304 |
| 0.401 | 1797.0 | 7188 | 1.2291 |
| 0.401 | 1798.0 | 7192 | 1.2275 |
| 0.401 | 1799.0 | 7196 | 1.2232 |
| 0.401 | 1800.0 | 7200 | 1.2205 |
| 0.401 | 1801.0 | 7204 | 1.2190 |
| 0.401 | 1802.0 | 7208 | 1.2192 |
| 0.401 | 1803.0 | 7212 | 1.2199 |
| 0.401 | 1804.0 | 7216 | 1.2199 |
| 0.401 | 1805.0 | 7220 | 1.2201 |
| 0.401 | 1806.0 | 7224 | 1.2204 |
| 0.401 | 1807.0 | 7228 | 1.2204 |
| 0.401 | 1808.0 | 7232 | 1.2202 |
| 0.401 | 1809.0 | 7236 | 1.2199 |
| 0.401 | 1810.0 | 7240 | 1.2195 |
| 0.401 | 1811.0 | 7244 | 1.2194 |
| 0.401 | 1812.0 | 7248 | 1.2195 |
| 0.401 | 1813.0 | 7252 | 1.2191 |
| 0.401 | 1814.0 | 7256 | 1.2185 |
| 0.401 | 1815.0 | 7260 | 1.2183 |
| 0.401 | 1816.0 | 7264 | 1.2184 |
| 0.401 | 1817.0 | 7268 | 1.2186 |
| 0.401 | 1818.0 | 7272 | 1.2190 |
| 0.401 | 1819.0 | 7276 | 1.2189 |
| 0.401 | 1820.0 | 7280 | 1.2186 |
| 0.401 | 1821.0 | 7284 | 1.2183 |
| 0.401 | 1822.0 | 7288 | 1.2191 |
| 0.401 | 1823.0 | 7292 | 1.2202 |
| 0.401 | 1824.0 | 7296 | 1.2214 |
| 0.401 | 1825.0 | 7300 | 1.2223 |
| 0.401 | 1826.0 | 7304 | 1.2224 |
| 0.401 | 1827.0 | 7308 | 1.2203 |
| 0.401 | 1828.0 | 7312 | 1.2192 |
| 0.401 | 1829.0 | 7316 | 1.2193 |
| 0.401 | 1830.0 | 7320 | 1.2190 |
| 0.401 | 1831.0 | 7324 | 1.2184 |
| 0.401 | 1832.0 | 7328 | 1.2176 |
| 0.401 | 1833.0 | 7332 | 1.2078 |
| 0.401 | 1834.0 | 7336 | 1.2013 |
| 0.401 | 1835.0 | 7340 | 1.1970 |
| 0.401 | 1836.0 | 7344 | 1.1946 |
| 0.401 | 1837.0 | 7348 | 1.1931 |
| 0.401 | 1838.0 | 7352 | 1.1918 |
| 0.401 | 1839.0 | 7356 | 1.1913 |
| 0.401 | 1840.0 | 7360 | 1.1914 |
| 0.401 | 1841.0 | 7364 | 1.1920 |
| 0.401 | 1842.0 | 7368 | 1.1927 |
| 0.401 | 1843.0 | 7372 | 1.1929 |
| 0.401 | 1844.0 | 7376 | 1.1928 |
| 0.401 | 1845.0 | 7380 | 1.1923 |
| 0.401 | 1846.0 | 7384 | 1.1920 |
| 0.401 | 1847.0 | 7388 | 1.1924 |
| 0.401 | 1848.0 | 7392 | 1.1927 |
| 0.401 | 1849.0 | 7396 | 1.1930 |
| 0.401 | 1850.0 | 7400 | 1.1929 |
| 0.401 | 1851.0 | 7404 | 1.1927 |
| 0.401 | 1852.0 | 7408 | 1.1921 |
| 0.401 | 1853.0 | 7412 | 1.1916 |
| 0.401 | 1854.0 | 7416 | 1.1914 |
| 0.401 | 1855.0 | 7420 | 1.1913 |
| 0.401 | 1856.0 | 7424 | 1.1914 |
| 0.401 | 1857.0 | 7428 | 1.1913 |
| 0.401 | 1858.0 | 7432 | 1.1909 |
| 0.401 | 1859.0 | 7436 | 1.1907 |
| 0.401 | 1860.0 | 7440 | 1.1907 |
| 0.401 | 1861.0 | 7444 | 1.1906 |
| 0.401 | 1862.0 | 7448 | 1.1903 |
| 0.401 | 1863.0 | 7452 | 1.1902 |
| 0.401 | 1864.0 | 7456 | 1.1926 |
| 0.401 | 1865.0 | 7460 | 1.1959 |
| 0.401 | 1866.0 | 7464 | 1.1985 |
| 0.401 | 1867.0 | 7468 | 1.2005 |
| 0.401 | 1868.0 | 7472 | 1.2018 |
| 0.401 | 1869.0 | 7476 | 1.2014 |
| 0.401 | 1870.0 | 7480 | 1.2009 |
| 0.401 | 1871.0 | 7484 | 1.2010 |
| 0.401 | 1872.0 | 7488 | 1.2009 |
| 0.401 | 1873.0 | 7492 | 1.2003 |
| 0.401 | 1874.0 | 7496 | 1.1998 |
| 0.4005 | 1875.0 | 7500 | 1.1991 |
| 0.4005 | 1876.0 | 7504 | 1.1985 |
| 0.4005 | 1877.0 | 7508 | 1.1982 |
| 0.4005 | 1878.0 | 7512 | 1.1978 |
| 0.4005 | 1879.0 | 7516 | 1.1976 |
| 0.4005 | 1880.0 | 7520 | 1.1963 |
| 0.4005 | 1881.0 | 7524 | 1.1952 |
| 0.4005 | 1882.0 | 7528 | 1.1948 |
| 0.4005 | 1883.0 | 7532 | 1.1940 |
| 0.4005 | 1884.0 | 7536 | 1.1932 |
| 0.4005 | 1885.0 | 7540 | 1.1927 |
| 0.4005 | 1886.0 | 7544 | 1.1924 |
| 0.4005 | 1887.0 | 7548 | 1.1916 |
| 0.4005 | 1888.0 | 7552 | 1.1905 |
| 0.4005 | 1889.0 | 7556 | 1.1893 |
| 0.4005 | 1890.0 | 7560 | 1.1883 |
| 0.4005 | 1891.0 | 7564 | 1.1873 |
| 0.4005 | 1892.0 | 7568 | 1.1865 |
| 0.4005 | 1893.0 | 7572 | 1.1862 |
| 0.4005 | 1894.0 | 7576 | 1.1853 |
| 0.4005 | 1895.0 | 7580 | 1.1847 |
| 0.4005 | 1896.0 | 7584 | 1.1843 |
| 0.4005 | 1897.0 | 7588 | 1.1842 |
| 0.4005 | 1898.0 | 7592 | 1.1848 |
| 0.4005 | 1899.0 | 7596 | 1.1855 |
| 0.4005 | 1900.0 | 7600 | 1.1866 |
| 0.4005 | 1901.0 | 7604 | 1.1875 |
| 0.4005 | 1902.0 | 7608 | 1.1883 |
| 0.4005 | 1903.0 | 7612 | 1.1892 |
| 0.4005 | 1904.0 | 7616 | 1.1896 |
| 0.4005 | 1905.0 | 7620 | 1.1896 |
| 0.4005 | 1906.0 | 7624 | 1.1895 |
| 0.4005 | 1907.0 | 7628 | 1.1892 |
| 0.4005 | 1908.0 | 7632 | 1.1890 |
| 0.4005 | 1909.0 | 7636 | 1.1892 |
| 0.4005 | 1910.0 | 7640 | 1.1892 |
| 0.4005 | 1911.0 | 7644 | 1.1888 |
| 0.4005 | 1912.0 | 7648 | 1.1884 |
| 0.4005 | 1913.0 | 7652 | 1.1881 |
| 0.4005 | 1914.0 | 7656 | 1.1876 |
| 0.4005 | 1915.0 | 7660 | 1.1870 |
| 0.4005 | 1916.0 | 7664 | 1.1866 |
| 0.4005 | 1917.0 | 7668 | 1.1865 |
| 0.4005 | 1918.0 | 7672 | 1.1863 |
| 0.4005 | 1919.0 | 7676 | 1.1863 |
| 0.4005 | 1920.0 | 7680 | 1.1848 |
| 0.4005 | 1921.0 | 7684 | 1.1799 |
| 0.4005 | 1922.0 | 7688 | 1.1758 |
| 0.4005 | 1923.0 | 7692 | 1.1711 |
| 0.4005 | 1924.0 | 7696 | 1.1681 |
| 0.4005 | 1925.0 | 7700 | 1.1661 |
| 0.4005 | 1926.0 | 7704 | 1.1651 |
| 0.4005 | 1927.0 | 7708 | 1.1649 |
| 0.4005 | 1928.0 | 7712 | 1.1646 |
| 0.4005 | 1929.0 | 7716 | 1.1639 |
| 0.4005 | 1930.0 | 7720 | 1.1634 |
| 0.4005 | 1931.0 | 7724 | 1.1628 |
| 0.4005 | 1932.0 | 7728 | 1.1627 |
| 0.4005 | 1933.0 | 7732 | 1.1624 |
| 0.4005 | 1934.0 | 7736 | 1.1620 |
| 0.4005 | 1935.0 | 7740 | 1.1619 |
| 0.4005 | 1936.0 | 7744 | 1.1618 |
| 0.4005 | 1937.0 | 7748 | 1.1618 |
| 0.4005 | 1938.0 | 7752 | 1.1618 |
| 0.4005 | 1939.0 | 7756 | 1.1632 |
| 0.4005 | 1940.0 | 7760 | 1.1642 |
| 0.4005 | 1941.0 | 7764 | 1.1649 |
| 0.4005 | 1942.0 | 7768 | 1.1653 |
| 0.4005 | 1943.0 | 7772 | 1.1657 |
| 0.4005 | 1944.0 | 7776 | 1.1660 |
| 0.4005 | 1945.0 | 7780 | 1.1657 |
| 0.4005 | 1946.0 | 7784 | 1.1653 |
| 0.4005 | 1947.0 | 7788 | 1.1650 |
| 0.4005 | 1948.0 | 7792 | 1.1648 |
| 0.4005 | 1949.0 | 7796 | 1.1646 |
| 0.4005 | 1950.0 | 7800 | 1.1644 |
| 0.4005 | 1951.0 | 7804 | 1.1642 |
| 0.4005 | 1952.0 | 7808 | 1.1637 |
| 0.4005 | 1953.0 | 7812 | 1.1635 |
| 0.4005 | 1954.0 | 7816 | 1.1633 |
| 0.4005 | 1955.0 | 7820 | 1.1631 |
| 0.4005 | 1956.0 | 7824 | 1.1629 |
| 0.4005 | 1957.0 | 7828 | 1.1628 |
| 0.4005 | 1958.0 | 7832 | 1.1628 |
| 0.4005 | 1959.0 | 7836 | 1.1628 |
| 0.4005 | 1960.0 | 7840 | 1.1629 |
| 0.4005 | 1961.0 | 7844 | 1.1631 |
| 0.4005 | 1962.0 | 7848 | 1.1633 |
| 0.4005 | 1963.0 | 7852 | 1.1634 |
| 0.4005 | 1964.0 | 7856 | 1.1634 |
| 0.4005 | 1965.0 | 7860 | 1.1666 |
| 0.4005 | 1966.0 | 7864 | 1.1694 |
| 0.4005 | 1967.0 | 7868 | 1.1712 |
| 0.4005 | 1968.0 | 7872 | 1.1723 |
| 0.4005 | 1969.0 | 7876 | 1.1733 |
| 0.4005 | 1970.0 | 7880 | 1.1740 |
| 0.4005 | 1971.0 | 7884 | 1.1742 |
| 0.4005 | 1972.0 | 7888 | 1.1745 |
| 0.4005 | 1973.0 | 7892 | 1.1747 |
| 0.4005 | 1974.0 | 7896 | 1.1752 |
| 0.4005 | 1975.0 | 7900 | 1.1760 |
| 0.4005 | 1976.0 | 7904 | 1.1766 |
| 0.4005 | 1977.0 | 7908 | 1.1769 |
| 0.4005 | 1978.0 | 7912 | 1.1771 |
| 0.4005 | 1979.0 | 7916 | 1.1773 |
| 0.4005 | 1980.0 | 7920 | 1.1774 |
| 0.4005 | 1981.0 | 7924 | 1.1773 |
| 0.4005 | 1982.0 | 7928 | 1.1773 |
| 0.4005 | 1983.0 | 7932 | 1.1771 |
| 0.4005 | 1984.0 | 7936 | 1.1768 |
| 0.4005 | 1985.0 | 7940 | 1.1762 |
| 0.4005 | 1986.0 | 7944 | 1.1758 |
| 0.4005 | 1987.0 | 7948 | 1.1756 |
| 0.4005 | 1988.0 | 7952 | 1.1754 |
| 0.4005 | 1989.0 | 7956 | 1.1753 |
| 0.4005 | 1990.0 | 7960 | 1.1754 |
| 0.4005 | 1991.0 | 7964 | 1.1757 |
| 0.4005 | 1992.0 | 7968 | 1.1759 |
| 0.4005 | 1993.0 | 7972 | 1.1760 |
| 0.4005 | 1994.0 | 7976 | 1.1761 |
| 0.4005 | 1995.0 | 7980 | 1.1761 |
| 0.4005 | 1996.0 | 7984 | 1.1761 |
| 0.4005 | 1997.0 | 7988 | 1.1761 |
| 0.4005 | 1998.0 | 7992 | 1.1761 |
| 0.4005 | 1999.0 | 7996 | 1.1761 |
| 0.4011 | 2000.0 | 8000 | 1.1761 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.7
- Tokenizers 0.15.0
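
### Loading sketch

The card does not document intended usage, so the following is a minimal, non-authoritative sketch. It assumes (from the repository name) that these weights are PEFT LoRA adapters trained on top of `Qwen/Qwen1.5-14B-Chat` in `bfloat16`, and that `peft` and `accelerate` are installed alongside the framework versions listed above.

```python
# Minimal sketch (assumptions: the repository holds PEFT LoRA adapters for
# Qwen/Qwen1.5-14B-Chat, as the repo name suggests; requires peft and
# accelerate in addition to the framework versions above).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen1.5-14B-Chat"
adapter_id = "ferrazzipietro/Qwen1.5-14B-Chat__adapters_en.layer1_4_torch.bfloat16_64_64_0.05_16_0.0002"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # matches the dtype hinted at in the repo name
    device_map="auto",
)
# Attach the fine-tuned adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```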