Each record in this dump carries nine metadata columns plus the full text of its `card` field. Below, every record is rendered as a bold `modelId` line with the remaining metadata, a `tags` line, and then the card itself. Dataset schema:

| Column | Type | Range / cardinality |
|:---|:---|:---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-08-04 06:29:44 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 550 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-08-04 06:26:08 |
| card | string | length 11 to 1.01M |

**om-ashish-soni/output** · author: om-ashish-soni · last modified: 2024-03-08T04:27:13Z · downloads: 10 · likes: 0 · library: transformers · pipeline: text-generation · created: 2024-03-07T15:34:24Z
tags: transformers, tensorboard, safetensors, gpt2, text-generation, generated_from_trainer, base_model:om-ashish-soni/shiv-mahapuran-ai, base_model:finetune:om-ashish-soni/shiv-mahapuran-ai, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us

---
base_model: om-ashish-soni/shiv-mahapuran-ai
tags:
- generated_from_trainer
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [om-ashish-soni/shiv-mahapuran-ai](https://huggingface.co/om-ashish-soni/shiv-mahapuran-ai) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
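As a reading aid, here is roughly how the list above maps onto `transformers` `TrainingArguments` (a sketch reconstructed from the card; `output_dir` and anything not listed are assumptions):

```python
from transformers import TrainingArguments

# hypothetical reconstruction of the run's arguments from the card
args = TrainingArguments(
    output_dir="output",                # placeholder
    learning_rate=5e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,      # 32 x 4 = total train batch of 128
    lr_scheduler_type="linear",
    num_train_epochs=10,
    adam_beta1=0.9,                     # Adam betas/epsilon as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
```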
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2

**om-ashish-soni/shripad-charitramrutam-lm-v2** · author: om-ashish-soni · last modified: 2024-03-08T04:27:07Z · downloads: 5 · likes: 0 · library: transformers · pipeline: text-generation · created: 2024-03-08T04:26:42Z
tags: transformers, safetensors, gpt2, text-generation, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us

---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
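The card leaves this blank. Going by the `gpt2`/`text-generation` tags, a minimal sketch (assuming the repo holds a complete checkpoint):

```python
from transformers import pipeline

# load the checkpoint by its Hub id and generate a short continuation
generator = pipeline("text-generation", model="om-ashish-soni/shripad-charitramrutam-lm-v2")
print(generator("Shripad Charitramrutam is", max_new_tokens=50)[0]["generated_text"])
```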
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]

**okeanos/uptimeai-8273** · author: okeanos · last modified: 2024-03-08T04:26:46Z · downloads: 4 · likes: 0 · library: transformers · pipeline: text-generation · created: 2024-03-07T18:56:55Z
tags: transformers, safetensors, llama, text-generation, mergekit, merge, conversational, arxiv:2311.03099, arxiv:2306.01708, base_model:Phind/Phind-CodeLlama-34B-v2, base_model:merge:Phind/Phind-CodeLlama-34B-v2, base_model:codellama/CodeLlama-34b-Instruct-hf, base_model:merge:codellama/CodeLlama-34b-Instruct-hf, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us

---
base_model:
- codellama/CodeLlama-34b-Instruct-hf
- Phind/Phind-CodeLlama-34B-v2
library_name: transformers
tags:
- mergekit
- merge
---
# merge-legacy
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [codellama/CodeLlama-34b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf) as a base.
### Models Merged
The following models were included in the merge:
* [Phind/Phind-CodeLlama-34B-v2](https://huggingface.co/Phind/Phind-CodeLlama-34B-v2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: codellama/CodeLlama-34b-Instruct-hf
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: Phind/Phind-CodeLlama-34B-v2
parameters:
density: 0.5
weight: [0, 0.3, 0.7, 1] # weight gradient
merge_method: dare_ties
base_model: codellama/CodeLlama-34b-Instruct-hf
parameters:
normalize: true
int8_mask: true
dtype: float16
```
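The list-valued `density`/`weight` entries define per-layer gradients that mergekit interpolates across the model depth, as the inline comments note. The card gives no usage snippet; a minimal inference sketch for the merged model (hypothetical usage; a 34B model in fp16 needs roughly 70 GB of GPU memory or offloading):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "okeanos/uptimeai-8273"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the merge's dtype
    device_map="auto",
)
inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```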

**ahmedgongi10/mistral_instruct_devops11** · author: ahmedgongi10 · last modified: 2024-03-08T04:23:40Z · downloads: 0 · likes: 0 · library: transformers · pipeline: null · created: 2024-03-08T04:23:26Z
tags: transformers, safetensors, arxiv:1910.09700, endpoints_compatible, region:us

---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]

**saqidr/pegasus-samsum** · author: saqidr · last modified: 2024-03-08T04:18:13Z · downloads: 71 · likes: 0 · library: transformers · pipeline: text2text-generation · created: 2024-03-08T03:54:21Z
tags: transformers, safetensors, pegasus, text2text-generation, generated_from_trainer, base_model:google/pegasus-cnn_dailymail, base_model:finetune:google/pegasus-cnn_dailymail, autotrain_compatible, endpoints_compatible, region:us

---
base_model: google/pegasus-cnn_dailymail
tags:
- generated_from_trainer
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4849
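The repository name points at the SAMSum dialogue-summarization dataset, although the card itself leaves the dataset unknown; on that assumption, a minimal usage sketch:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="saqidr/pegasus-samsum")

dialogue = """Amanda: I baked cookies. Do you want some?
Jerry: Sure!
Amanda: I'll bring you some tomorrow :-)"""
print(summarizer(dialogue, max_length=40)[0]["summary_text"])
```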
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6267 | 0.54 | 500 | 1.4849 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2

**frankenmerger/delta-4b-notso-base** · author: frankenmerger · last modified: 2024-03-08T04:04:06Z · downloads: 66 · likes: 0 · library: transformers · pipeline: text-generation · created: 2024-03-06T18:41:10Z
tags: transformers, safetensors, phi, text-generation, conversational, custom_code, en, license:apache-2.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us

---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- conversational
---
## 💻 Usage
```python
# pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch

# repo id as given in the original card; note the repo is tagged
# custom_code, so trust_remote_code=True may also be required
model = "gmonsoon/Delta-4B-notso-base"
messages = [{"role": "user", "content": "What is a large language model?"}]

# render the conversation with the model's own chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# fp16 generation pipeline, device placement handled by accelerate
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```

**Lemunite/vietinbank-vistral-7b-chat_merged** · author: Lemunite · last modified: 2024-03-08T03:56:29Z · downloads: 3 · likes: 0 · library: transformers · pipeline: text-generation · created: 2024-03-08T03:52:34Z
tags: transformers, safetensors, mistral, text-generation, conversational, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us

---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
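The card leaves this blank. The `mistral`/`conversational` tags and the name suggest a Vistral-7B-Chat derivative; a hedged chat sketch under that assumption:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Lemunite/vietinbank-vistral-7b-chat_merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# format a single-turn conversation with the model's own chat template
messages = [{"role": "user", "content": "Xin chào!"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0]))
```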
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]

**Sumail/Silver_Waves04_2b** · author: Sumail · last modified: 2024-03-08T03:56:12Z · downloads: 3 · likes: 0 · library: transformers · pipeline: text-generation · created: 2024-03-08T03:47:28Z
tags: transformers, safetensors, gemma, text-generation, mergekit, merge, conversational, arxiv:2306.01708, base_model:deepnetguy/gemma-55, base_model:merge:deepnetguy/gemma-55, base_model:tomaszki/gemma-31, base_model:merge:tomaszki/gemma-31, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us

---
base_model:
- deepnetguy/gemma-55
- 0x0dad0/nous_nb20_plus
- tomaszki/gemma-31
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [tomaszki/gemma-31](https://huggingface.co/tomaszki/gemma-31) as a base.
### Models Merged
The following models were included in the merge:
* [deepnetguy/gemma-55](https://huggingface.co/deepnetguy/gemma-55)
* [0x0dad0/nous_nb20_plus](https://huggingface.co/0x0dad0/nous_nb20_plus)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: tomaszki/gemma-31
# no parameters necessary for base model
- model: deepnetguy/gemma-55
parameters:
density: 0.5
weight: 0.4
- model: 0x0dad0/nous_nb20_plus
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: tomaszki/gemma-31
parameters:
normalize: true
dtype: bfloat16
```
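The card stops at the merge config; a minimal generation sketch for the merged gemma model (hypothetical usage, nothing about intended behavior is documented):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sumail/Silver_Waves04_2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")  # bfloat16 matches the merge dtype

inputs = tokenizer("The tide was rising when", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=48)[0]))
```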

**Neela/layoutlm-funsd** · author: Neela · last modified: 2024-03-08T03:53:17Z · downloads: 4 · likes: 0 · library: transformers · pipeline: token-classification · created: 2024-03-07T17:07:05Z
tags: transformers, tensorboard, safetensors, layoutlm, token-classification, generated_from_trainer, dataset:funsd, base_model:microsoft/layoutlm-base-uncased, base_model:finetune:microsoft/layoutlm-base-uncased, license:mit, autotrain_compatible, endpoints_compatible, region:us

---
license: mit
base_model: microsoft/layoutlm-base-uncased
tags:
- generated_from_trainer
datasets:
- funsd
model-index:
- name: layoutlm-funsd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlm-funsd
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the funsd dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1243
Per-entity metrics:

| Entity | Precision | Recall | F1 | Support |
|:---|:---:|:---:|:---:|:---:|
| Answer | 0.4008 | 0.5192 | 0.4523 | 809 |
| Header | 0.2842 | 0.2269 | 0.2523 | 119 |
| Question | 0.5280 | 0.6019 | 0.5625 | 1065 |
- Overall Precision: 0.4616
- Overall Recall: 0.5459
- Overall F1: 0.5002
- Overall Accuracy: 0.6215
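The card gives no usage snippet. A minimal token-classification sketch (illustrative words and boxes; LayoutLM expects one 0-1000-normalized bounding box per token):

```python
import torch
from transformers import AutoTokenizer, LayoutLMForTokenClassification

model_id = "Neela/layoutlm-funsd"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = LayoutLMForTokenClassification.from_pretrained(model_id)

# toy input: two words from a form plus their 0-1000-normalized boxes
words = ["Date:", "2024-03-08"]
boxes = [[57, 49, 127, 67], [135, 49, 260, 67]]

encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# repeat each word's box for its sub-word tokens; special tokens get a zero box
token_boxes = [boxes[i] if i is not None else [0, 0, 0, 0] for i in encoding.word_ids()]
encoding["bbox"] = torch.tensor([token_boxes])

predictions = model(**encoding).logits.argmax(-1)
```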
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.7728 | 1.0 | 10 | 1.5441 | {'precision': 0.04580152671755725, 'recall': 0.059332509270704575, 'f1': 0.05169628432956382, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.20335429769392033, 'recall': 0.18215962441314554, 'f1': 0.19217434373452202, 'number': 1065} | 0.1209 | 0.1214 | 0.1212 | 0.3719 |
| 1.4551 | 2.0 | 20 | 1.3517 | {'precision': 0.20478234212139793, 'recall': 0.41285537700865266, 'f1': 0.27377049180327867, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.26090225563909775, 'recall': 0.32582159624413143, 'f1': 0.28977035490605424, 'number': 1065} | 0.2297 | 0.3417 | 0.2747 | 0.4263 |
| 1.295 | 3.0 | 30 | 1.2465 | {'precision': 0.26224426534407935, 'recall': 0.522867737948084, 'f1': 0.34929810074318746, 'number': 809} | {'precision': 0.058823529411764705, 'recall': 0.01680672268907563, 'f1': 0.026143790849673203, 'number': 119} | {'precision': 0.3458528951486698, 'recall': 0.41502347417840374, 'f1': 0.37729406743491256, 'number': 1065} | 0.2964 | 0.4350 | 0.3526 | 0.4803 |
| 1.1635 | 4.0 | 40 | 1.1449 | {'precision': 0.28778467908902694, 'recall': 0.515451174289246, 'f1': 0.3693534100974314, 'number': 809} | {'precision': 0.2638888888888889, 'recall': 0.15966386554621848, 'f1': 0.19895287958115182, 'number': 119} | {'precision': 0.412396694214876, 'recall': 0.46854460093896716, 'f1': 0.4386813186813187, 'number': 1065} | 0.3424 | 0.4691 | 0.3959 | 0.5521 |
| 1.0456 | 5.0 | 50 | 1.0703 | {'precision': 0.3060240963855422, 'recall': 0.47095179233621753, 'f1': 0.37098344693281404, 'number': 809} | {'precision': 0.3472222222222222, 'recall': 0.21008403361344538, 'f1': 0.2617801047120419, 'number': 119} | {'precision': 0.40298507462686567, 'recall': 0.5830985915492958, 'f1': 0.476592478894858, 'number': 1065} | 0.3593 | 0.5153 | 0.4234 | 0.5797 |
| 0.9601 | 6.0 | 60 | 1.2304 | {'precision': 0.30907920154539603, 'recall': 0.5933250927070457, 'f1': 0.40643522438611346, 'number': 809} | {'precision': 0.3333333333333333, 'recall': 0.16806722689075632, 'f1': 0.223463687150838, 'number': 119} | {'precision': 0.4642857142857143, 'recall': 0.4394366197183099, 'f1': 0.4515195369030391, 'number': 1065} | 0.3693 | 0.4857 | 0.4196 | 0.5479 |
| 0.9153 | 7.0 | 70 | 1.1091 | {'precision': 0.35518157661647476, 'recall': 0.4956736711990111, 'f1': 0.41382868937048506, 'number': 809} | {'precision': 0.3125, 'recall': 0.21008403361344538, 'f1': 0.25125628140703515, 'number': 119} | {'precision': 0.5262645914396887, 'recall': 0.507981220657277, 'f1': 0.5169612995699953, 'number': 1065} | 0.4323 | 0.4852 | 0.4572 | 0.6011 |
| 0.8346 | 8.0 | 80 | 1.0632 | {'precision': 0.35597826086956524, 'recall': 0.4857849196538937, 'f1': 0.4108729743857816, 'number': 809} | {'precision': 0.28421052631578947, 'recall': 0.226890756302521, 'f1': 0.25233644859813087, 'number': 119} | {'precision': 0.46401799100449775, 'recall': 0.5812206572769953, 'f1': 0.516048353480617, 'number': 1065} | 0.4102 | 0.5213 | 0.4591 | 0.6103 |
| 0.7789 | 9.0 | 90 | 1.0955 | {'precision': 0.3817062445030783, 'recall': 0.5364647713226205, 'f1': 0.44604316546762585, 'number': 809} | {'precision': 0.26, 'recall': 0.2184873949579832, 'f1': 0.23744292237442924, 'number': 119} | {'precision': 0.5137693631669535, 'recall': 0.5605633802816902, 'f1': 0.5361472833408173, 'number': 1065} | 0.4406 | 0.5304 | 0.4813 | 0.6082 |
| 0.7751 | 10.0 | 100 | 1.1232 | {'precision': 0.38474434199497065, 'recall': 0.5673671199011124, 'f1': 0.45854145854145856, 'number': 809} | {'precision': 0.3010752688172043, 'recall': 0.23529411764705882, 'f1': 0.2641509433962264, 'number': 119} | {'precision': 0.5040358744394619, 'recall': 0.5276995305164319, 'f1': 0.5155963302752293, 'number': 1065} | 0.4369 | 0.5263 | 0.4775 | 0.6032 |
| 0.6875 | 11.0 | 110 | 1.1092 | {'precision': 0.39342723004694835, 'recall': 0.5179233621755254, 'f1': 0.44717182497331914, 'number': 809} | {'precision': 0.34146341463414637, 'recall': 0.23529411764705882, 'f1': 0.27860696517412936, 'number': 119} | {'precision': 0.5076305220883535, 'recall': 0.5934272300469483, 'f1': 0.5471861471861472, 'number': 1065} | 0.4511 | 0.5414 | 0.4921 | 0.6233 |
| 0.6808 | 12.0 | 120 | 1.1286 | {'precision': 0.40641158221303, 'recall': 0.4857849196538937, 'f1': 0.44256756756756754, 'number': 809} | {'precision': 0.24561403508771928, 'recall': 0.23529411764705882, 'f1': 0.24034334763948498, 'number': 119} | {'precision': 0.49772036474164133, 'recall': 0.6150234741784038, 'f1': 0.5501889962200757, 'number': 1065} | 0.4489 | 0.5399 | 0.4902 | 0.6159 |
| 0.656 | 13.0 | 130 | 1.1237 | {'precision': 0.39822134387351776, 'recall': 0.49814585908529047, 'f1': 0.442613948380011, 'number': 809} | {'precision': 0.2967032967032967, 'recall': 0.226890756302521, 'f1': 0.2571428571428572, 'number': 119} | {'precision': 0.5141732283464567, 'recall': 0.6131455399061033, 'f1': 0.5593147751605996, 'number': 1065} | 0.4564 | 0.5434 | 0.4961 | 0.6179 |
| 0.6359 | 14.0 | 140 | 1.1296 | {'precision': 0.3996399639963996, 'recall': 0.5488257107540173, 'f1': 0.46249999999999997, 'number': 809} | {'precision': 0.32926829268292684, 'recall': 0.226890756302521, 'f1': 0.26865671641791045, 'number': 119} | {'precision': 0.5376712328767124, 'recall': 0.5896713615023474, 'f1': 0.5624720107478729, 'number': 1065} | 0.4655 | 0.5514 | 0.5048 | 0.6173 |
| 0.6117 | 15.0 | 150 | 1.1243 | {'precision': 0.40076335877862596, 'recall': 0.519159456118665, 'f1': 0.4523424878836834, 'number': 809} | {'precision': 0.28421052631578947, 'recall': 0.226890756302521, 'f1': 0.25233644859813087, 'number': 119} | {'precision': 0.5280065897858319, 'recall': 0.6018779342723005, 'f1': 0.5625274243089073, 'number': 1065} | 0.4616 | 0.5459 | 0.5002 | 0.6215 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2

**OwOOwO/eacc_contTrain_m6_2** · author: OwOOwO · last modified: 2024-03-08T03:52:45Z · downloads: 89 · likes: 0 · library: transformers · pipeline: text-generation · created: 2024-03-08T03:50:24Z
tags: transformers, safetensors, gemma, text-generation, conversational, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us

---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]

**debasishdas/llama2-7b-chat-finetuned-legal** · author: debasishdas · last modified: 2024-03-08T03:36:52Z · downloads: 0 · likes: 0 · library: null · pipeline: null · created: 2024-03-07T08:55:42Z
tags: safetensors, generated_from_trainer, region:us

---
tags:
- generated_from_trainer
model-index:
- name: llama2-7b-chat-finetuned-legal
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-7b-chat-finetuned-legal
This model was trained from scratch on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2

**Sumail/Silver_Waves03_2b** · author: Sumail · last modified: 2024-03-08T03:31:47Z · downloads: 5 · likes: 0 · library: transformers · pipeline: text-generation · created: 2024-03-08T03:19:41Z
tags: transformers, safetensors, gemma, text-generation, mergewss], mergekit, lazymergekit, tomaszki/gemma-31, 0x0dad0/nous_nb20_plus, conversational, license:apache-2.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us

---
license: apache-2.0
tags:
- mergewss]
- mergekit
- lazymergekit
- tomaszki/gemma-31
- 0x0dad0/nous_nb20_plus
---
# Silver_Waves03_2b
Silver_Waves03_2b is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [tomaszki/gemma-31](https://huggingface.co/tomaszki/gemma-31)
* [0x0dad0/nous_nb20_plus](https://huggingface.co/0x0dad0/nous_nb20_plus)
## 🧩 Configuration
```yaml
models:
- model: deepnetguy/gemma-55
# no parameters necessary for base model
- model: tomaszki/gemma-31
parameters:
density: 0.5
weight: 0.3
- model: 0x0dad0/nous_nb20_plus
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: deepnetguy/gemma-55
parameters:
normalize: true
dtype: bfloat16
```

**ferrazzipietro/Qwen1.5-14B-Chat__adapters_en.layer1_4_torch.bfloat16_64_64_0.05_4_0.0002** · author: ferrazzipietro · last modified: 2024-03-08T03:30:52Z · downloads: 0 · likes: 0 · library: transformers · pipeline: null · created: 2024-03-08T03:29:45Z
tags: transformers, safetensors, arxiv:1910.09700, endpoints_compatible, region:us

---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
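The card leaves this blank, and the repo name suggests LoRA-style adapters for Qwen1.5-14B-Chat rather than full weights. If so, a loading sketch with `peft` (the base model is an assumption inferred from the name):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen1.5-14B-Chat"  # assumed base, per the adapter repo's name
adapter_id = "ferrazzipietro/Qwen1.5-14B-Chat__adapters_en.layer1_4_torch.bfloat16_64_64_0.05_4_0.0002"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
```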
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]

**Aayush232730/Unit1** · author: Aayush232730 · last modified: 2024-03-08T03:30:45Z · downloads: 0 · likes: 0 · library: stable-baselines3 · pipeline: reinforcement-learning · created: 2024-03-07T23:25:26Z
tags: stable-baselines3, LunarLander-v2, deep-reinforcement-learning, reinforcement-learning, model-index, region:us

---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.74 +/- 21.90
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is assumed to follow the usual `ppo-LunarLander-v2.zip` naming; check the repository's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# download the checkpoint from the Hub and load it
# (the filename is an assumption, not confirmed by the card)
checkpoint = load_from_hub("Aayush232730/Unit1", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
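To reproduce the reported mean reward (263.74 +/- 21.90), an evaluation sketch (assumes `gymnasium` with the box2d extra installed):

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```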

**KhangSimple/output** · author: KhangSimple · last modified: 2024-03-08T03:27:55Z · downloads: 7 · likes: 0 · library: transformers · pipeline: text-classification · created: 2024-03-08T02:29:51Z
tags: transformers, tensorboard, safetensors, bert, text-classification, generated_from_trainer, base_model:sentence-transformers/all-MiniLM-L6-v2, base_model:finetune:sentence-transformers/all-MiniLM-L6-v2, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us

---
license: apache-2.0
base_model: sentence-transformers/all-MiniLM-L6-v2
tags:
- generated_from_trainer
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on an unspecified dataset.
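The `bert`/`text-classification` tags point at a sequence-classification head on MiniLM; a minimal inference sketch (the label set is undocumented, so outputs carry generic `LABEL_n` names):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="KhangSimple/output")
print(classifier("I really enjoyed this!"))  # e.g. [{'label': 'LABEL_1', 'score': 0.98}]
```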
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2

**dbvenkat/code-search-net-tokenizer** · author: dbvenkat · last modified: 2024-03-08T03:17:33Z · downloads: 0 · likes: 0 · library: transformers · pipeline: null · created: 2024-03-08T03:17:33Z
tags: transformers, arxiv:1910.09700, endpoints_compatible, region:us

---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
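The card leaves this blank, and the repo appears to hold only a tokenizer (no model architecture is tagged), presumably trained on CodeSearchNet-style code; a minimal sketch:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbvenkat/code-search-net-tokenizer")
print(tokenizer.tokenize("def add(a, b):\n    return a + b"))
```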
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]

**Jayfeather1024/alpaca_struq** · author: Jayfeather1024 · last modified: 2024-03-08T03:11:52Z · downloads: 20 · likes: 1 · library: transformers · pipeline: text-generation · created: 2024-02-23T16:04:40Z
tags: transformers, safetensors, llama, text-generation, arxiv:2402.06363, license:unknown, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us

---
license: unknown
---
Unofficial checkpoint for the StruQ defense method against prompt injection attacks. The base model is https://huggingface.co/chavinlo/alpaca-native.
StruQ: Defending Against Prompt Injection with Structured Queries (https://arxiv.org/abs/2402.06363)
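No usage snippet is provided. A plain smoke test below; note that StruQ's actual defense depends on its structured prompt format with dedicated delimiters, which this sketch does not reproduce, and the Alpaca-style prompt is an assumption based on the alpaca-native base:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Jayfeather1024/alpaca_struq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "### Instruction:\nSummarize what StruQ does.\n\n### Response:\n"  # Alpaca-style prompt (assumed)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```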

**Kudod/hoa-1b4_model_kaggle_format** · author: Kudod · last modified: 2024-03-08T03:09:38Z · downloads: 0 · likes: 0 · library: peft · pipeline: null · created: 2024-03-08T02:55:15Z
tags: peft, tensorboard, safetensors, generated_from_trainer, base_model:vlsp-2023-vllm/hoa-1b4, base_model:adapter:vlsp-2023-vllm/hoa-1b4, license:bigscience-bloom-rail-1.0, region:us

---
license: bigscience-bloom-rail-1.0
library_name: peft
tags:
- generated_from_trainer
base_model: vlsp-2023-vllm/hoa-1b4
model-index:
- name: hoa-1b4_model_kaggle_format
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hoa-1b4_model_kaggle_format
This model is a fine-tuned version of [vlsp-2023-vllm/hoa-1b4](https://huggingface.co/vlsp-2023-vllm/hoa-1b4) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5927
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 65 | 2.6363 |
| No log | 2.0 | 130 | 1.8356 |
| No log | 3.0 | 195 | 1.3984 |
| No log | 4.0 | 260 | 1.1658 |
| No log | 5.0 | 325 | 0.9857 |
| No log | 6.0 | 390 | 0.8724 |
| No log | 7.0 | 455 | 0.8085 |
| 1.4171 | 8.0 | 520 | 0.7400 |
| 1.4171 | 9.0 | 585 | 0.6925 |
| 1.4171 | 10.0 | 650 | 0.6654 |
| 1.4171 | 11.0 | 715 | 0.6383 |
| 1.4171 | 12.0 | 780 | 0.6341 |
| 1.4171 | 13.0 | 845 | 0.6148 |
| 1.4171 | 14.0 | 910 | 0.5979 |
| 1.4171 | 15.0 | 975 | 0.6061 |
| 0.2596 | 16.0 | 1040 | 0.5960 |
| 0.2596 | 17.0 | 1105 | 0.5810 |
| 0.2596 | 18.0 | 1170 | 0.5812 |
| 0.2596 | 19.0 | 1235 | 0.5761 |
| 0.2596 | 20.0 | 1300 | 0.5724 |
| 0.2596 | 21.0 | 1365 | 0.5600 |
| 0.2596 | 22.0 | 1430 | 0.5927 |
| 0.2596 | 23.0 | 1495 | 0.5627 |
| 0.1245 | 24.0 | 1560 | 0.5500 |
| 0.1245 | 25.0 | 1625 | 0.5706 |
| 0.1245 | 26.0 | 1690 | 0.5551 |
| 0.1245 | 27.0 | 1755 | 0.5548 |
| 0.1245 | 28.0 | 1820 | 0.5573 |
| 0.1245 | 29.0 | 1885 | 0.5642 |
| 0.1245 | 30.0 | 1950 | 0.5712 |
| 0.0896 | 31.0 | 2015 | 0.5524 |
| 0.0896 | 32.0 | 2080 | 0.5644 |
| 0.0896 | 33.0 | 2145 | 0.5511 |
| 0.0896 | 34.0 | 2210 | 0.5648 |
| 0.0896 | 35.0 | 2275 | 0.5722 |
| 0.0896 | 36.0 | 2340 | 0.5619 |
| 0.0896 | 37.0 | 2405 | 0.5632 |
| 0.0896 | 38.0 | 2470 | 0.5628 |
| 0.0746 | 39.0 | 2535 | 0.5593 |
| 0.0746 | 40.0 | 2600 | 0.5624 |
| 0.0746 | 41.0 | 2665 | 0.5744 |
| 0.0746 | 42.0 | 2730 | 0.5525 |
| 0.0746 | 43.0 | 2795 | 0.5858 |
| 0.0746 | 44.0 | 2860 | 0.5615 |
| 0.0746 | 45.0 | 2925 | 0.5614 |
| 0.0746 | 46.0 | 2990 | 0.5678 |
| 0.0696 | 47.0 | 3055 | 0.5735 |
| 0.0696 | 48.0 | 3120 | 0.5674 |
| 0.0696 | 49.0 | 3185 | 0.5637 |
| 0.0696 | 50.0 | 3250 | 0.5623 |
| 0.0696 | 51.0 | 3315 | 0.5668 |
| 0.0696 | 52.0 | 3380 | 0.5625 |
| 0.0696 | 53.0 | 3445 | 0.5630 |
| 0.0636 | 54.0 | 3510 | 0.5675 |
| 0.0636 | 55.0 | 3575 | 0.5646 |
| 0.0636 | 56.0 | 3640 | 0.5702 |
| 0.0636 | 57.0 | 3705 | 0.5729 |
| 0.0636 | 58.0 | 3770 | 0.5745 |
| 0.0636 | 59.0 | 3835 | 0.5737 |
| 0.0636 | 60.0 | 3900 | 0.5724 |
| 0.0636 | 61.0 | 3965 | 0.5658 |
| 0.0579 | 62.0 | 4030 | 0.5759 |
| 0.0579 | 63.0 | 4095 | 0.5777 |
| 0.0579 | 64.0 | 4160 | 0.5722 |
| 0.0579 | 65.0 | 4225 | 0.5721 |
| 0.0579 | 66.0 | 4290 | 0.5772 |
| 0.0579 | 67.0 | 4355 | 0.5747 |
| 0.0579 | 68.0 | 4420 | 0.5800 |
| 0.0579 | 69.0 | 4485 | 0.5814 |
| 0.0557 | 70.0 | 4550 | 0.5777 |
| 0.0557 | 71.0 | 4615 | 0.5765 |
| 0.0557 | 72.0 | 4680 | 0.5790 |
| 0.0557 | 73.0 | 4745 | 0.5845 |
| 0.0557 | 74.0 | 4810 | 0.5788 |
| 0.0557 | 75.0 | 4875 | 0.5836 |
| 0.0557 | 76.0 | 4940 | 0.5911 |
| 0.052 | 77.0 | 5005 | 0.5841 |
| 0.052 | 78.0 | 5070 | 0.5822 |
| 0.052 | 79.0 | 5135 | 0.5828 |
| 0.052 | 80.0 | 5200 | 0.5868 |
| 0.052 | 81.0 | 5265 | 0.5858 |
| 0.052 | 82.0 | 5330 | 0.5899 |
| 0.052 | 83.0 | 5395 | 0.5888 |
| 0.052 | 84.0 | 5460 | 0.5871 |
| 0.0478 | 85.0 | 5525 | 0.5867 |
| 0.0478 | 86.0 | 5590 | 0.5894 |
| 0.0478 | 87.0 | 5655 | 0.5899 |
| 0.0478 | 88.0 | 5720 | 0.5899 |
| 0.0478 | 89.0 | 5785 | 0.5915 |
| 0.0478 | 90.0 | 5850 | 0.5901 |
| 0.0478 | 91.0 | 5915 | 0.5919 |
| 0.0478 | 92.0 | 5980 | 0.5919 |
| 0.0458 | 93.0 | 6045 | 0.5916 |
| 0.0458 | 94.0 | 6110 | 0.5914 |
| 0.0458 | 95.0 | 6175 | 0.5929 |
| 0.0458 | 96.0 | 6240 | 0.5920 |
| 0.0458 | 97.0 | 6305 | 0.5922 |
| 0.0458 | 98.0 | 6370 | 0.5922 |
| 0.0458 | 99.0 | 6435 | 0.5924 |
| 0.0425 | 100.0 | 6500 | 0.5927 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
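## How to use
Since this repository contains PEFT adapters rather than full model weights, inference presumably requires attaching the adapters to the base model first. A minimal sketch (not verified by the author):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# load the base model, then wrap it with the adapter weights from this repo
base = AutoModelForCausalLM.from_pretrained("vlsp-2023-vllm/hoa-1b4")
model = PeftModel.from_pretrained(base, "Kudod/hoa-1b4_model_kaggle_format")
tokenizer = AutoTokenizer.from_pretrained("vlsp-2023-vllm/hoa-1b4")
```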
|
core-3/kuno-royale-v3-7b
|
core-3
| 2024-03-08T03:01:36Z | 57 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"SanjiWatsuki/Kunoichi-DPO-v2-7B",
"eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v3",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:merge:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v3",
"base_model:merge:eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v3",
"license:cc-by-nc-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T14:55:59Z |
---
license: cc-by-nc-2.0
tags:
- merge
- mergekit
- lazymergekit
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v3
base_model:
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v3
model-index:
- name: kuno-royale-v3-7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.76
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-v3-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.23
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-v3-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.06
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-v3-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.13
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-v3-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.32
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-v3-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.81
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-v3-7b
name: Open LLM Leaderboard
---
# kuno-royale-v3-7b
Another experimental combination of eren23's ogno-monarch-jaskier merges and Kunoichi-DPO-v2-7B. Untested.
kuno-royale-v3-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v3](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v3)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: SanjiWatsuki/Kunoichi-DPO-v2-7B
layer_range: [0, 32]
- model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v3
layer_range: [0, 32]
merge_method: slerp
base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "core-3/kuno-royale-v3-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_core-3__kuno-royale-v3-7b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.88|
|AI2 Reasoning Challenge (25-Shot)|71.76|
|HellaSwag (10-Shot) |88.23|
|MMLU (5-Shot) |65.06|
|TruthfulQA (0-shot) |71.13|
|Winogrande (5-shot) |82.32|
|GSM8k (5-shot) |70.81|
|
madroid/qwen1.5-0.5B-4bit-new
|
madroid
| 2024-03-08T02:57:06Z | 5 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen2",
"chat",
"text-generation",
"conversational",
"en",
"license:other",
"region:us"
] |
text-generation
| 2024-03-08T02:56:16Z |
---
language:
- en
license: other
tags:
- chat
- mlx
license_name: tongyi-qianwen-research
license_link: https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat/blob/main/LICENSE
pipeline_tag: text-generation
---
# madroid/Qwen1.5-0.5B-4bit-new
This model was converted to MLX format from [`mlx-community/Qwen1.5-0.5B-Chat-4bit`](https://huggingface.co/mlx-community/Qwen1.5-0.5B-Chat-4bit).
Refer to the [original model card](https://huggingface.co/mlx-community/Qwen1.5-0.5B-Chat-4bit) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("madroid/Qwen1.5-0.5B-4bit-new")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
ferrazzipietro/Qwen1.5-14B-Chat__adapters_en.layer1_4_torch.bfloat16_64_32_0.01_8_0.0002
|
ferrazzipietro
| 2024-03-08T02:51:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-08T02:50:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
denyslinkov/sentiment-lora-dpo
|
denyslinkov
| 2024-03-08T02:47:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-08T02:40:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
OwOOwO/eacc_bm_sl3
|
OwOOwO
| 2024-03-08T02:35:23Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-03-07T01:29:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ferrazzipietro/Qwen1.5-14B-Chat__adapters_en.layer1_4_torch.bfloat16_64_32_0.05_16_0.0002
|
ferrazzipietro
| 2024-03-08T02:11:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-08T02:10:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
EricValen/ppo-LunarLander-v2-CleanRL
|
EricValen
| 2024-03-08T02:11:33Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-08T02:10:51Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -10.07 +/- 84.91
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 500000,
 'learning_rate': 0.0001,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'EricValen/ppo-LunarLander-v2-CleanRL',
 'batch_size': 512,
 'minibatch_size': 128}
```
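To reproduce a run with these settings, something like the following CleanRL invocation should work (a sketch; the flag names are assumed to mirror the hyperparameter keys above):
```bash
# train PPO on LunarLander-v2 with the hyperparameters listed above
python ppo.py --env-id LunarLander-v2 --total-timesteps 500000 \
  --learning-rate 0.0001 --num-envs 4 --num-steps 128 --seed 1
```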
|
OwOOwO/eacc_contTrain_m2_55_ori2
|
OwOOwO
| 2024-03-08T02:09:24Z | 88 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-08T02:06:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ferrazzipietro/Qwen1.5-14B-Chat__adapters_en.layer1_4_torch.bfloat16_64_32_0.05_8_0.0002
|
ferrazzipietro
| 2024-03-08T01:52:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-08T01:51:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
automerger/Experiment27Inex12-7B
|
automerger
| 2024-03-08T01:45:29Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"base_model:MSL7/INEX12-7b",
"base_model:merge:MSL7/INEX12-7b",
"base_model:yam-peleg/Experiment27-7B",
"base_model:merge:yam-peleg/Experiment27-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-08T01:44:27Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
base_model:
- yam-peleg/Experiment27-7B
- MSL7/INEX12-7b
---
# Experiment27Inex12-7B
Experiment27Inex12-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [yam-peleg/Experiment27-7B](https://huggingface.co/yam-peleg/Experiment27-7B)
* [MSL7/INEX12-7b](https://huggingface.co/MSL7/INEX12-7b)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: yam-peleg/Experiment27-7B
layer_range: [0, 32]
- model: MSL7/INEX12-7b
layer_range: [0, 32]
merge_method: slerp
base_model: yam-peleg/Experiment27-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/Experiment27Inex12-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
AyeSee/roberta-large-lora-token-classification_v1
|
AyeSee
| 2024-03-08T01:35:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T13:48:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
abhishekchoudhary0509/distilbert-base-uncased-lora-text-classification
|
abhishekchoudhary0509
| 2024-03-08T01:32:52Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:adapter:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2024-03-08T01:32:46Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: distilbert-base-uncased
model-index:
- name: distilbert-base-uncased-lora-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-lora-text-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2642
- Accuracy: 0.902
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|
| No log        | 1.0   | 63   | 0.2566          | 0.902               |
| No log        | 2.0   | 126  | 0.2642          | 0.902               |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
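## How to use
As with other PEFT checkpoints, the LoRA weights presumably have to be attached to the base classifier at load time. A minimal sketch (`num_labels=2` is an assumption, not stated in this card):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# label count assumed; adjust to match the classification head this adapter was trained with
base = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)
model = PeftModel.from_pretrained(base, "abhishekchoudhary0509/distilbert-base-uncased-lora-text-classification")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

inputs = tokenizer("A surprisingly touching film.", return_tensors="pt")
with torch.no_grad():
    print(model(**inputs).logits.argmax(-1))
```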
|
ferrazzipietro/Qwen1.5-14B-Chat__adapters_en.layer1_4_torch.bfloat16_64_32_0.05_4_0.0002
|
ferrazzipietro
| 2024-03-08T01:32:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-08T01:31:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
daze-unlv/axolotl-medmcqa-2-epoch
|
daze-unlv
| 2024-03-08T01:28:30Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"mistral",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-03-07T22:54:28Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: lora-out/medmcqa-2-epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: mistralai/Mistral-7B-v0.1
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: daze-unlv/medmcqa_axolotl
type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0
output_dir: ./lora-out/medmcqa-2-epoch
eval_sample_packing: false
adapter: lora
lora_model_dir:
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
- gate_proj
- down_proj
- up_proj
- q_proj
- v_proj
- k_proj
- o_proj
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16: false
tf32: true
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: false
sdp_attention: true
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
# lora-out/medmcqa-2-epoch
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the daze-unlv/medmcqa_axolotl dataset (per the axolotl config above).
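The adapter can be loaded on top of the base model with PEFT. The sketch below is illustrative rather than an official recipe: it assumes the adapter files in this repo are PEFT-compatible (the `peft` library tag suggests they are), and the prompt string is a hypothetical alpaca-style example matching the `type: alpaca` dataset setting in the config above.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "daze-unlv/axolotl-medmcqa-2-epoch"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights

# Alpaca-style prompt (illustrative; the exact template is not given in this card).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nAnswer the multiple-choice question.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```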
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.9.1.dev0
- Transformers 4.39.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.0
|
farid1088/RoBERTa-legal-de-cased_German_legal_SQuAD_1000
|
farid1088
| 2024-03-08T01:27:10Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-03-05T20:54:11Z |
---
tags:
- generated_from_trainer
model-index:
- name: RoBERTa-legal-de-cased_German_legal_SQuAD_1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RoBERTa-legal-de-cased_German_legal_SQuAD_1000
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3859
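A minimal usage sketch, assuming the checkpoint works with the standard `question-answering` pipeline (the repo tags indicate a RoBERTa QA head); the question and context strings are illustrative only:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="farid1088/RoBERTa-legal-de-cased_German_legal_SQuAD_1000",
)
result = qa(
    question="Wer trägt die Kosten des Verfahrens?",      # illustrative question
    context="Die Kosten des Verfahrens trägt der Antragsteller.",
)
print(result["answer"], result["score"])
```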
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 160
- eval_batch_size: 40
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 1.0 | 2 | 6.2392 |
| No log | 2.0 | 4 | 6.3606 |
| No log | 3.0 | 6 | 6.3302 |
| No log | 4.0 | 8 | 6.2556 |
| No log | 5.0 | 10 | 5.9838 |
| No log | 6.0 | 12 | 5.6434 |
| No log | 7.0 | 14 | 5.4625 |
| No log | 8.0 | 16 | 5.3216 |
| No log | 9.0 | 18 | 5.1803 |
| No log | 10.0 | 20 | 5.0230 |
| No log | 11.0 | 22 | 4.8791 |
| No log | 12.0 | 24 | 4.8112 |
| No log | 13.0 | 26 | 4.6359 |
| No log | 14.0 | 28 | 4.4133 |
| No log | 15.0 | 30 | 4.2477 |
| No log | 16.0 | 32 | 4.0479 |
| No log | 17.0 | 34 | 3.8281 |
| No log | 18.0 | 36 | 3.6850 |
| No log | 19.0 | 38 | 3.5521 |
| No log | 20.0 | 40 | 3.3836 |
| No log | 21.0 | 42 | 3.2738 |
| No log | 22.0 | 44 | 3.1723 |
| No log | 23.0 | 46 | 3.1062 |
| No log | 24.0 | 48 | 3.0506 |
| No log | 25.0 | 50 | 2.9974 |
| No log | 26.0 | 52 | 2.8952 |
| No log | 27.0 | 54 | 2.8692 |
| No log | 28.0 | 56 | 2.8122 |
| No log | 29.0 | 58 | 2.7477 |
| No log | 30.0 | 60 | 2.7818 |
| No log | 31.0 | 62 | 2.7222 |
| No log | 32.0 | 64 | 2.6513 |
| No log | 33.0 | 66 | 2.5553 |
| No log | 34.0 | 68 | 2.4697 |
| No log | 35.0 | 70 | 2.5147 |
| No log | 36.0 | 72 | 2.4701 |
| No log | 37.0 | 74 | 2.3817 |
| No log | 38.0 | 76 | 2.3397 |
| No log | 39.0 | 78 | 2.3285 |
| No log | 40.0 | 80 | 2.3427 |
| No log | 41.0 | 82 | 2.1274 |
| No log | 42.0 | 84 | 2.0858 |
| No log | 43.0 | 86 | 2.0831 |
| No log | 44.0 | 88 | 1.9282 |
| No log | 45.0 | 90 | 1.9103 |
| No log | 46.0 | 92 | 1.8713 |
| No log | 47.0 | 94 | 1.7713 |
| No log | 48.0 | 96 | 1.7105 |
| No log | 49.0 | 98 | 1.6483 |
| No log | 50.0 | 100 | 1.6115 |
| No log | 51.0 | 102 | 1.5694 |
| No log | 52.0 | 104 | 1.5768 |
| No log | 53.0 | 106 | 1.4820 |
| No log | 54.0 | 108 | 1.4422 |
| No log | 55.0 | 110 | 1.4515 |
| No log | 56.0 | 112 | 1.3668 |
| No log | 57.0 | 114 | 1.4229 |
| No log | 58.0 | 116 | 1.3764 |
| No log | 59.0 | 118 | 1.3159 |
| No log | 60.0 | 120 | 1.3684 |
| No log | 61.0 | 122 | 1.4024 |
| No log | 62.0 | 124 | 1.4022 |
| No log | 63.0 | 126 | 1.4163 |
| No log | 64.0 | 128 | 1.3030 |
| No log | 65.0 | 130 | 1.3511 |
| No log | 66.0 | 132 | 1.4307 |
| No log | 67.0 | 134 | 1.3482 |
| No log | 68.0 | 136 | 1.2050 |
| No log | 69.0 | 138 | 1.2197 |
| No log | 70.0 | 140 | 1.2955 |
| No log | 71.0 | 142 | 1.2603 |
| No log | 72.0 | 144 | 1.2188 |
| No log | 73.0 | 146 | 1.2209 |
| No log | 74.0 | 148 | 1.3233 |
| No log | 75.0 | 150 | 1.3907 |
| No log | 76.0 | 152 | 1.2892 |
| No log | 77.0 | 154 | 1.2385 |
| No log | 78.0 | 156 | 1.2649 |
| No log | 79.0 | 158 | 1.2912 |
| No log | 80.0 | 160 | 1.2787 |
| No log | 81.0 | 162 | 1.2894 |
| No log | 82.0 | 164 | 1.2219 |
| No log | 83.0 | 166 | 1.2526 |
| No log | 84.0 | 168 | 1.3134 |
| No log | 85.0 | 170 | 1.2738 |
| No log | 86.0 | 172 | 1.1862 |
| No log | 87.0 | 174 | 1.1754 |
| No log | 88.0 | 176 | 1.1856 |
| No log | 89.0 | 178 | 1.1411 |
| No log | 90.0 | 180 | 1.1468 |
| No log | 91.0 | 182 | 1.2219 |
| No log | 92.0 | 184 | 1.2348 |
| No log | 93.0 | 186 | 1.2539 |
| No log | 94.0 | 188 | 1.3142 |
| No log | 95.0 | 190 | 1.3426 |
| No log | 96.0 | 192 | 1.2950 |
| No log | 97.0 | 194 | 1.1578 |
| No log | 98.0 | 196 | 1.1482 |
| No log | 99.0 | 198 | 1.2214 |
| No log | 100.0 | 200 | 1.2621 |
| No log | 101.0 | 202 | 1.2568 |
| No log | 102.0 | 204 | 1.2313 |
| No log | 103.0 | 206 | 1.1385 |
| No log | 104.0 | 208 | 1.1336 |
| No log | 105.0 | 210 | 1.1972 |
| No log | 106.0 | 212 | 1.2738 |
| No log | 107.0 | 214 | 1.2061 |
| No log | 108.0 | 216 | 1.1259 |
| No log | 109.0 | 218 | 1.1295 |
| No log | 110.0 | 220 | 1.1641 |
| No log | 111.0 | 222 | 1.1672 |
| No log | 112.0 | 224 | 1.2211 |
| No log | 113.0 | 226 | 1.2340 |
| No log | 114.0 | 228 | 1.2608 |
| No log | 115.0 | 230 | 1.2590 |
| No log | 116.0 | 232 | 1.2412 |
| No log | 117.0 | 234 | 1.2275 |
| No log | 118.0 | 236 | 1.2798 |
| No log | 119.0 | 238 | 1.3240 |
| No log | 120.0 | 240 | 1.2910 |
| No log | 121.0 | 242 | 1.2228 |
| No log | 122.0 | 244 | 1.1676 |
| No log | 123.0 | 246 | 1.2019 |
| No log | 124.0 | 248 | 1.2762 |
| No log | 125.0 | 250 | 1.3170 |
| No log | 126.0 | 252 | 1.2557 |
| No log | 127.0 | 254 | 1.2017 |
| No log | 128.0 | 256 | 1.2145 |
| No log | 129.0 | 258 | 1.3000 |
| No log | 130.0 | 260 | 1.3371 |
| No log | 131.0 | 262 | 1.3282 |
| No log | 132.0 | 264 | 1.2549 |
| No log | 133.0 | 266 | 1.2636 |
| No log | 134.0 | 268 | 1.3543 |
| No log | 135.0 | 270 | 1.3776 |
| No log | 136.0 | 272 | 1.3820 |
| No log | 137.0 | 274 | 1.3624 |
| No log | 138.0 | 276 | 1.3286 |
| No log | 139.0 | 278 | 1.3389 |
| No log | 140.0 | 280 | 1.3843 |
| No log | 141.0 | 282 | 1.4119 |
| No log | 142.0 | 284 | 1.3404 |
| No log | 143.0 | 286 | 1.2233 |
| No log | 144.0 | 288 | 1.1634 |
| No log | 145.0 | 290 | 1.1743 |
| No log | 146.0 | 292 | 1.2216 |
| No log | 147.0 | 294 | 1.2615 |
| No log | 148.0 | 296 | 1.2698 |
| No log | 149.0 | 298 | 1.2574 |
| No log | 150.0 | 300 | 1.2013 |
| No log | 151.0 | 302 | 1.1782 |
| No log | 152.0 | 304 | 1.1868 |
| No log | 153.0 | 306 | 1.2209 |
| No log | 154.0 | 308 | 1.2650 |
| No log | 155.0 | 310 | 1.2678 |
| No log | 156.0 | 312 | 1.2483 |
| No log | 157.0 | 314 | 1.2249 |
| No log | 158.0 | 316 | 1.2192 |
| No log | 159.0 | 318 | 1.2685 |
| No log | 160.0 | 320 | 1.3042 |
| No log | 161.0 | 322 | 1.3329 |
| No log | 162.0 | 324 | 1.3820 |
| No log | 163.0 | 326 | 1.3776 |
| No log | 164.0 | 328 | 1.3062 |
| No log | 165.0 | 330 | 1.2287 |
| No log | 166.0 | 332 | 1.1804 |
| No log | 167.0 | 334 | 1.1878 |
| No log | 168.0 | 336 | 1.2288 |
| No log | 169.0 | 338 | 1.2620 |
| No log | 170.0 | 340 | 1.2738 |
| No log | 171.0 | 342 | 1.2856 |
| No log | 172.0 | 344 | 1.3189 |
| No log | 173.0 | 346 | 1.2971 |
| No log | 174.0 | 348 | 1.2227 |
| No log | 175.0 | 350 | 1.2113 |
| No log | 176.0 | 352 | 1.2372 |
| No log | 177.0 | 354 | 1.2345 |
| No log | 178.0 | 356 | 1.2357 |
| No log | 179.0 | 358 | 1.2578 |
| No log | 180.0 | 360 | 1.2575 |
| No log | 181.0 | 362 | 1.2438 |
| No log | 182.0 | 364 | 1.2362 |
| No log | 183.0 | 366 | 1.2906 |
| No log | 184.0 | 368 | 1.3564 |
| No log | 185.0 | 370 | 1.3361 |
| No log | 186.0 | 372 | 1.3235 |
| No log | 187.0 | 374 | 1.3131 |
| No log | 188.0 | 376 | 1.3451 |
| No log | 189.0 | 378 | 1.3708 |
| No log | 190.0 | 380 | 1.3735 |
| No log | 191.0 | 382 | 1.3659 |
| No log | 192.0 | 384 | 1.3499 |
| No log | 193.0 | 386 | 1.3248 |
| No log | 194.0 | 388 | 1.2972 |
| No log | 195.0 | 390 | 1.3089 |
| No log | 196.0 | 392 | 1.3088 |
| No log | 197.0 | 394 | 1.3057 |
| No log | 198.0 | 396 | 1.2836 |
| No log | 199.0 | 398 | 1.2748 |
| No log | 200.0 | 400 | 1.2783 |
| No log | 201.0 | 402 | 1.3234 |
| No log | 202.0 | 404 | 1.3851 |
| No log | 203.0 | 406 | 1.4287 |
| No log | 204.0 | 408 | 1.3798 |
| No log | 205.0 | 410 | 1.2660 |
| No log | 206.0 | 412 | 1.2068 |
| No log | 207.0 | 414 | 1.2213 |
| No log | 208.0 | 416 | 1.2811 |
| No log | 209.0 | 418 | 1.3142 |
| No log | 210.0 | 420 | 1.3317 |
| No log | 211.0 | 422 | 1.3334 |
| No log | 212.0 | 424 | 1.3037 |
| No log | 213.0 | 426 | 1.2620 |
| No log | 214.0 | 428 | 1.2192 |
| No log | 215.0 | 430 | 1.2268 |
| No log | 216.0 | 432 | 1.2740 |
| No log | 217.0 | 434 | 1.3298 |
| No log | 218.0 | 436 | 1.3930 |
| No log | 219.0 | 438 | 1.4287 |
| No log | 220.0 | 440 | 1.4227 |
| No log | 221.0 | 442 | 1.3803 |
| No log | 222.0 | 444 | 1.3389 |
| No log | 223.0 | 446 | 1.3402 |
| No log | 224.0 | 448 | 1.3458 |
| No log | 225.0 | 450 | 1.3779 |
| No log | 226.0 | 452 | 1.4241 |
| No log | 227.0 | 454 | 1.4453 |
| No log | 228.0 | 456 | 1.4269 |
| No log | 229.0 | 458 | 1.3875 |
| No log | 230.0 | 460 | 1.3527 |
| No log | 231.0 | 462 | 1.3338 |
| No log | 232.0 | 464 | 1.3420 |
| No log | 233.0 | 466 | 1.3536 |
| No log | 234.0 | 468 | 1.3931 |
| No log | 235.0 | 470 | 1.4257 |
| No log | 236.0 | 472 | 1.4281 |
| No log | 237.0 | 474 | 1.4027 |
| No log | 238.0 | 476 | 1.3635 |
| No log | 239.0 | 478 | 1.3048 |
| No log | 240.0 | 480 | 1.2874 |
| No log | 241.0 | 482 | 1.3135 |
| No log | 242.0 | 484 | 1.3534 |
| No log | 243.0 | 486 | 1.3877 |
| No log | 244.0 | 488 | 1.4125 |
| No log | 245.0 | 490 | 1.4280 |
| No log | 246.0 | 492 | 1.4330 |
| No log | 247.0 | 494 | 1.4254 |
| No log | 248.0 | 496 | 1.4343 |
| No log | 249.0 | 498 | 1.3983 |
| 0.4984 | 250.0 | 500 | 1.3501 |
| 0.4984 | 251.0 | 502 | 1.3319 |
| 0.4984 | 252.0 | 504 | 1.3261 |
| 0.4984 | 253.0 | 506 | 1.3543 |
| 0.4984 | 254.0 | 508 | 1.3817 |
| 0.4984 | 255.0 | 510 | 1.4107 |
| 0.4984 | 256.0 | 512 | 1.4216 |
| 0.4984 | 257.0 | 514 | 1.3670 |
| 0.4984 | 258.0 | 516 | 1.3489 |
| 0.4984 | 259.0 | 518 | 1.3245 |
| 0.4984 | 260.0 | 520 | 1.3046 |
| 0.4984 | 261.0 | 522 | 1.3024 |
| 0.4984 | 262.0 | 524 | 1.2989 |
| 0.4984 | 263.0 | 526 | 1.3072 |
| 0.4984 | 264.0 | 528 | 1.3100 |
| 0.4984 | 265.0 | 530 | 1.3296 |
| 0.4984 | 266.0 | 532 | 1.3444 |
| 0.4984 | 267.0 | 534 | 1.3580 |
| 0.4984 | 268.0 | 536 | 1.3623 |
| 0.4984 | 269.0 | 538 | 1.3863 |
| 0.4984 | 270.0 | 540 | 1.4010 |
| 0.4984 | 271.0 | 542 | 1.4060 |
| 0.4984 | 272.0 | 544 | 1.4048 |
| 0.4984 | 273.0 | 546 | 1.4001 |
| 0.4984 | 274.0 | 548 | 1.3804 |
| 0.4984 | 275.0 | 550 | 1.3607 |
| 0.4984 | 276.0 | 552 | 1.3414 |
| 0.4984 | 277.0 | 554 | 1.3338 |
| 0.4984 | 278.0 | 556 | 1.3401 |
| 0.4984 | 279.0 | 558 | 1.3512 |
| 0.4984 | 280.0 | 560 | 1.3606 |
| 0.4984 | 281.0 | 562 | 1.3636 |
| 0.4984 | 282.0 | 564 | 1.3589 |
| 0.4984 | 283.0 | 566 | 1.3478 |
| 0.4984 | 284.0 | 568 | 1.3387 |
| 0.4984 | 285.0 | 570 | 1.3533 |
| 0.4984 | 286.0 | 572 | 1.3818 |
| 0.4984 | 287.0 | 574 | 1.4216 |
| 0.4984 | 288.0 | 576 | 1.4690 |
| 0.4984 | 289.0 | 578 | 1.4980 |
| 0.4984 | 290.0 | 580 | 1.5126 |
| 0.4984 | 291.0 | 582 | 1.5328 |
| 0.4984 | 292.0 | 584 | 1.5507 |
| 0.4984 | 293.0 | 586 | 1.5507 |
| 0.4984 | 294.0 | 588 | 1.5699 |
| 0.4984 | 295.0 | 590 | 1.5493 |
| 0.4984 | 296.0 | 592 | 1.5112 |
| 0.4984 | 297.0 | 594 | 1.4635 |
| 0.4984 | 298.0 | 596 | 1.4157 |
| 0.4984 | 299.0 | 598 | 1.3829 |
| 0.4984 | 300.0 | 600 | 1.3594 |
| 0.4984 | 301.0 | 602 | 1.3757 |
| 0.4984 | 302.0 | 604 | 1.4016 |
| 0.4984 | 303.0 | 606 | 1.4373 |
| 0.4984 | 304.0 | 608 | 1.4400 |
| 0.4984 | 305.0 | 610 | 1.4478 |
| 0.4984 | 306.0 | 612 | 1.4511 |
| 0.4984 | 307.0 | 614 | 1.4484 |
| 0.4984 | 308.0 | 616 | 1.4229 |
| 0.4984 | 309.0 | 618 | 1.3912 |
| 0.4984 | 310.0 | 620 | 1.3733 |
| 0.4984 | 311.0 | 622 | 1.3450 |
| 0.4984 | 312.0 | 624 | 1.3264 |
| 0.4984 | 313.0 | 626 | 1.3251 |
| 0.4984 | 314.0 | 628 | 1.3312 |
| 0.4984 | 315.0 | 630 | 1.3335 |
| 0.4984 | 316.0 | 632 | 1.3298 |
| 0.4984 | 317.0 | 634 | 1.3226 |
| 0.4984 | 318.0 | 636 | 1.3150 |
| 0.4984 | 319.0 | 638 | 1.3055 |
| 0.4984 | 320.0 | 640 | 1.2983 |
| 0.4984 | 321.0 | 642 | 1.2899 |
| 0.4984 | 322.0 | 644 | 1.2646 |
| 0.4984 | 323.0 | 646 | 1.2413 |
| 0.4984 | 324.0 | 648 | 1.2316 |
| 0.4984 | 325.0 | 650 | 1.2295 |
| 0.4984 | 326.0 | 652 | 1.2300 |
| 0.4984 | 327.0 | 654 | 1.2594 |
| 0.4984 | 328.0 | 656 | 1.2869 |
| 0.4984 | 329.0 | 658 | 1.2923 |
| 0.4984 | 330.0 | 660 | 1.3231 |
| 0.4984 | 331.0 | 662 | 1.3421 |
| 0.4984 | 332.0 | 664 | 1.3503 |
| 0.4984 | 333.0 | 666 | 1.3452 |
| 0.4984 | 334.0 | 668 | 1.3347 |
| 0.4984 | 335.0 | 670 | 1.3203 |
| 0.4984 | 336.0 | 672 | 1.3098 |
| 0.4984 | 337.0 | 674 | 1.3021 |
| 0.4984 | 338.0 | 676 | 1.3016 |
| 0.4984 | 339.0 | 678 | 1.3007 |
| 0.4984 | 340.0 | 680 | 1.3001 |
| 0.4984 | 341.0 | 682 | 1.3070 |
| 0.4984 | 342.0 | 684 | 1.3475 |
| 0.4984 | 343.0 | 686 | 1.3788 |
| 0.4984 | 344.0 | 688 | 1.3991 |
| 0.4984 | 345.0 | 690 | 1.4028 |
| 0.4984 | 346.0 | 692 | 1.4028 |
| 0.4984 | 347.0 | 694 | 1.3971 |
| 0.4984 | 348.0 | 696 | 1.3793 |
| 0.4984 | 349.0 | 698 | 1.3543 |
| 0.4984 | 350.0 | 700 | 1.3296 |
| 0.4984 | 351.0 | 702 | 1.3322 |
| 0.4984 | 352.0 | 704 | 1.3556 |
| 0.4984 | 353.0 | 706 | 1.3936 |
| 0.4984 | 354.0 | 708 | 1.4202 |
| 0.4984 | 355.0 | 710 | 1.4235 |
| 0.4984 | 356.0 | 712 | 1.3934 |
| 0.4984 | 357.0 | 714 | 1.3511 |
| 0.4984 | 358.0 | 716 | 1.2957 |
| 0.4984 | 359.0 | 718 | 1.2690 |
| 0.4984 | 360.0 | 720 | 1.2670 |
| 0.4984 | 361.0 | 722 | 1.2906 |
| 0.4984 | 362.0 | 724 | 1.3083 |
| 0.4984 | 363.0 | 726 | 1.3239 |
| 0.4984 | 364.0 | 728 | 1.3353 |
| 0.4984 | 365.0 | 730 | 1.3442 |
| 0.4984 | 366.0 | 732 | 1.3308 |
| 0.4984 | 367.0 | 734 | 1.3172 |
| 0.4984 | 368.0 | 736 | 1.3009 |
| 0.4984 | 369.0 | 738 | 1.2826 |
| 0.4984 | 370.0 | 740 | 1.2781 |
| 0.4984 | 371.0 | 742 | 1.2796 |
| 0.4984 | 372.0 | 744 | 1.2815 |
| 0.4984 | 373.0 | 746 | 1.3100 |
| 0.4984 | 374.0 | 748 | 1.3447 |
| 0.4984 | 375.0 | 750 | 1.3591 |
| 0.4984 | 376.0 | 752 | 1.3892 |
| 0.4984 | 377.0 | 754 | 1.4185 |
| 0.4984 | 378.0 | 756 | 1.4329 |
| 0.4984 | 379.0 | 758 | 1.4273 |
| 0.4984 | 380.0 | 760 | 1.4074 |
| 0.4984 | 381.0 | 762 | 1.3999 |
| 0.4984 | 382.0 | 764 | 1.3906 |
| 0.4984 | 383.0 | 766 | 1.3857 |
| 0.4984 | 384.0 | 768 | 1.3740 |
| 0.4984 | 385.0 | 770 | 1.3637 |
| 0.4984 | 386.0 | 772 | 1.3600 |
| 0.4984 | 387.0 | 774 | 1.3614 |
| 0.4984 | 388.0 | 776 | 1.3720 |
| 0.4984 | 389.0 | 778 | 1.3822 |
| 0.4984 | 390.0 | 780 | 1.3862 |
| 0.4984 | 391.0 | 782 | 1.3850 |
| 0.4984 | 392.0 | 784 | 1.3857 |
| 0.4984 | 393.0 | 786 | 1.3859 |
| 0.4984 | 394.0 | 788 | 1.3968 |
| 0.4984 | 395.0 | 790 | 1.4054 |
| 0.4984 | 396.0 | 792 | 1.4105 |
| 0.4984 | 397.0 | 794 | 1.4135 |
| 0.4984 | 398.0 | 796 | 1.4122 |
| 0.4984 | 399.0 | 798 | 1.3965 |
| 0.4984 | 400.0 | 800 | 1.3806 |
| 0.4984 | 401.0 | 802 | 1.3833 |
| 0.4984 | 402.0 | 804 | 1.3848 |
| 0.4984 | 403.0 | 806 | 1.3755 |
| 0.4984 | 404.0 | 808 | 1.3663 |
| 0.4984 | 405.0 | 810 | 1.3541 |
| 0.4984 | 406.0 | 812 | 1.3481 |
| 0.4984 | 407.0 | 814 | 1.3484 |
| 0.4984 | 408.0 | 816 | 1.3506 |
| 0.4984 | 409.0 | 818 | 1.3486 |
| 0.4984 | 410.0 | 820 | 1.3474 |
| 0.4984 | 411.0 | 822 | 1.3512 |
| 0.4984 | 412.0 | 824 | 1.3562 |
| 0.4984 | 413.0 | 826 | 1.3683 |
| 0.4984 | 414.0 | 828 | 1.3778 |
| 0.4984 | 415.0 | 830 | 1.3839 |
| 0.4984 | 416.0 | 832 | 1.3879 |
| 0.4984 | 417.0 | 834 | 1.3888 |
| 0.4984 | 418.0 | 836 | 1.3952 |
| 0.4984 | 419.0 | 838 | 1.4006 |
| 0.4984 | 420.0 | 840 | 1.3990 |
| 0.4984 | 421.0 | 842 | 1.3698 |
| 0.4984 | 422.0 | 844 | 1.3452 |
| 0.4984 | 423.0 | 846 | 1.3087 |
| 0.4984 | 424.0 | 848 | 1.2798 |
| 0.4984 | 425.0 | 850 | 1.2656 |
| 0.4984 | 426.0 | 852 | 1.2812 |
| 0.4984 | 427.0 | 854 | 1.2965 |
| 0.4984 | 428.0 | 856 | 1.3184 |
| 0.4984 | 429.0 | 858 | 1.3456 |
| 0.4984 | 430.0 | 860 | 1.3730 |
| 0.4984 | 431.0 | 862 | 1.3882 |
| 0.4984 | 432.0 | 864 | 1.3960 |
| 0.4984 | 433.0 | 866 | 1.3961 |
| 0.4984 | 434.0 | 868 | 1.3904 |
| 0.4984 | 435.0 | 870 | 1.3826 |
| 0.4984 | 436.0 | 872 | 1.3876 |
| 0.4984 | 437.0 | 874 | 1.3942 |
| 0.4984 | 438.0 | 876 | 1.3903 |
| 0.4984 | 439.0 | 878 | 1.4131 |
| 0.4984 | 440.0 | 880 | 1.4386 |
| 0.4984 | 441.0 | 882 | 1.4533 |
| 0.4984 | 442.0 | 884 | 1.4633 |
| 0.4984 | 443.0 | 886 | 1.4364 |
| 0.4984 | 444.0 | 888 | 1.3961 |
| 0.4984 | 445.0 | 890 | 1.3603 |
| 0.4984 | 446.0 | 892 | 1.3205 |
| 0.4984 | 447.0 | 894 | 1.2876 |
| 0.4984 | 448.0 | 896 | 1.2629 |
| 0.4984 | 449.0 | 898 | 1.2929 |
| 0.4984 | 450.0 | 900 | 1.3158 |
| 0.4984 | 451.0 | 902 | 1.3561 |
| 0.4984 | 452.0 | 904 | 1.4016 |
| 0.4984 | 453.0 | 906 | 1.4331 |
| 0.4984 | 454.0 | 908 | 1.4514 |
| 0.4984 | 455.0 | 910 | 1.4568 |
| 0.4984 | 456.0 | 912 | 1.4481 |
| 0.4984 | 457.0 | 914 | 1.4331 |
| 0.4984 | 458.0 | 916 | 1.4101 |
| 0.4984 | 459.0 | 918 | 1.4124 |
| 0.4984 | 460.0 | 920 | 1.4035 |
| 0.4984 | 461.0 | 922 | 1.3846 |
| 0.4984 | 462.0 | 924 | 1.3591 |
| 0.4984 | 463.0 | 926 | 1.3337 |
| 0.4984 | 464.0 | 928 | 1.3211 |
| 0.4984 | 465.0 | 930 | 1.3289 |
| 0.4984 | 466.0 | 932 | 1.3686 |
| 0.4984 | 467.0 | 934 | 1.4247 |
| 0.4984 | 468.0 | 936 | 1.4679 |
| 0.4984 | 469.0 | 938 | 1.4892 |
| 0.4984 | 470.0 | 940 | 1.5036 |
| 0.4984 | 471.0 | 942 | 1.5144 |
| 0.4984 | 472.0 | 944 | 1.5118 |
| 0.4984 | 473.0 | 946 | 1.4974 |
| 0.4984 | 474.0 | 948 | 1.4768 |
| 0.4984 | 475.0 | 950 | 1.4562 |
| 0.4984 | 476.0 | 952 | 1.4385 |
| 0.4984 | 477.0 | 954 | 1.4229 |
| 0.4984 | 478.0 | 956 | 1.4084 |
| 0.4984 | 479.0 | 958 | 1.4004 |
| 0.4984 | 480.0 | 960 | 1.4004 |
| 0.4984 | 481.0 | 962 | 1.3982 |
| 0.4984 | 482.0 | 964 | 1.3999 |
| 0.4984 | 483.0 | 966 | 1.4041 |
| 0.4984 | 484.0 | 968 | 1.4065 |
| 0.4984 | 485.0 | 970 | 1.4074 |
| 0.4984 | 486.0 | 972 | 1.3975 |
| 0.4984 | 487.0 | 974 | 1.4100 |
| 0.4984 | 488.0 | 976 | 1.4375 |
| 0.4984 | 489.0 | 978 | 1.4597 |
| 0.4984 | 490.0 | 980 | 1.4732 |
| 0.4984 | 491.0 | 982 | 1.4704 |
| 0.4984 | 492.0 | 984 | 1.4610 |
| 0.4984 | 493.0 | 986 | 1.4437 |
| 0.4984 | 494.0 | 988 | 1.4284 |
| 0.4984 | 495.0 | 990 | 1.4139 |
| 0.4984 | 496.0 | 992 | 1.4026 |
| 0.4984 | 497.0 | 994 | 1.3938 |
| 0.4984 | 498.0 | 996 | 1.4228 |
| 0.4984 | 499.0 | 998 | 1.4441 |
| 0.0013 | 500.0 | 1000 | 1.4600 |
| 0.0013 | 501.0 | 1002 | 1.4651 |
| 0.0013 | 502.0 | 1004 | 1.4571 |
| 0.0013 | 503.0 | 1006 | 1.4481 |
| 0.0013 | 504.0 | 1008 | 1.4398 |
| 0.0013 | 505.0 | 1010 | 1.4303 |
| 0.0013 | 506.0 | 1012 | 1.4208 |
| 0.0013 | 507.0 | 1014 | 1.4074 |
| 0.0013 | 508.0 | 1016 | 1.3926 |
| 0.0013 | 509.0 | 1018 | 1.3814 |
| 0.0013 | 510.0 | 1020 | 1.3729 |
| 0.0013 | 511.0 | 1022 | 1.3687 |
| 0.0013 | 512.0 | 1024 | 1.3629 |
| 0.0013 | 513.0 | 1026 | 1.3900 |
| 0.0013 | 514.0 | 1028 | 1.4067 |
| 0.0013 | 515.0 | 1030 | 1.3830 |
| 0.0013 | 516.0 | 1032 | 1.3642 |
| 0.0013 | 517.0 | 1034 | 1.3945 |
| 0.0013 | 518.0 | 1036 | 1.4173 |
| 0.0013 | 519.0 | 1038 | 1.4311 |
| 0.0013 | 520.0 | 1040 | 1.4405 |
| 0.0013 | 521.0 | 1042 | 1.4485 |
| 0.0013 | 522.0 | 1044 | 1.4568 |
| 0.0013 | 523.0 | 1046 | 1.4552 |
| 0.0013 | 524.0 | 1048 | 1.4257 |
| 0.0013 | 525.0 | 1050 | 1.3988 |
| 0.0013 | 526.0 | 1052 | 1.3722 |
| 0.0013 | 527.0 | 1054 | 1.3477 |
| 0.0013 | 528.0 | 1056 | 1.3285 |
| 0.0013 | 529.0 | 1058 | 1.3126 |
| 0.0013 | 530.0 | 1060 | 1.2998 |
| 0.0013 | 531.0 | 1062 | 1.2948 |
| 0.0013 | 532.0 | 1064 | 1.2972 |
| 0.0013 | 533.0 | 1066 | 1.2976 |
| 0.0013 | 534.0 | 1068 | 1.2979 |
| 0.0013 | 535.0 | 1070 | 1.3181 |
| 0.0013 | 536.0 | 1072 | 1.3510 |
| 0.0013 | 537.0 | 1074 | 1.3788 |
| 0.0013 | 538.0 | 1076 | 1.3992 |
| 0.0013 | 539.0 | 1078 | 1.4265 |
| 0.0013 | 540.0 | 1080 | 1.4463 |
| 0.0013 | 541.0 | 1082 | 1.4578 |
| 0.0013 | 542.0 | 1084 | 1.4586 |
| 0.0013 | 543.0 | 1086 | 1.4551 |
| 0.0013 | 544.0 | 1088 | 1.4510 |
| 0.0013 | 545.0 | 1090 | 1.4462 |
| 0.0013 | 546.0 | 1092 | 1.4394 |
| 0.0013 | 547.0 | 1094 | 1.4334 |
| 0.0013 | 548.0 | 1096 | 1.4384 |
| 0.0013 | 549.0 | 1098 | 1.4397 |
| 0.0013 | 550.0 | 1100 | 1.4445 |
| 0.0013 | 551.0 | 1102 | 1.4514 |
| 0.0013 | 552.0 | 1104 | 1.4554 |
| 0.0013 | 553.0 | 1106 | 1.4576 |
| 0.0013 | 554.0 | 1108 | 1.4583 |
| 0.0013 | 555.0 | 1110 | 1.4601 |
| 0.0013 | 556.0 | 1112 | 1.4597 |
| 0.0013 | 557.0 | 1114 | 1.4596 |
| 0.0013 | 558.0 | 1116 | 1.4577 |
| 0.0013 | 559.0 | 1118 | 1.4520 |
| 0.0013 | 560.0 | 1120 | 1.4491 |
| 0.0013 | 561.0 | 1122 | 1.4455 |
| 0.0013 | 562.0 | 1124 | 1.4424 |
| 0.0013 | 563.0 | 1126 | 1.4388 |
| 0.0013 | 564.0 | 1128 | 1.4303 |
| 0.0013 | 565.0 | 1130 | 1.4266 |
| 0.0013 | 566.0 | 1132 | 1.4235 |
| 0.0013 | 567.0 | 1134 | 1.4207 |
| 0.0013 | 568.0 | 1136 | 1.4185 |
| 0.0013 | 569.0 | 1138 | 1.4172 |
| 0.0013 | 570.0 | 1140 | 1.4145 |
| 0.0013 | 571.0 | 1142 | 1.4177 |
| 0.0013 | 572.0 | 1144 | 1.4230 |
| 0.0013 | 573.0 | 1146 | 1.4247 |
| 0.0013 | 574.0 | 1148 | 1.4152 |
| 0.0013 | 575.0 | 1150 | 1.4082 |
| 0.0013 | 576.0 | 1152 | 1.4027 |
| 0.0013 | 577.0 | 1154 | 1.4000 |
| 0.0013 | 578.0 | 1156 | 1.3985 |
| 0.0013 | 579.0 | 1158 | 1.4005 |
| 0.0013 | 580.0 | 1160 | 1.4054 |
| 0.0013 | 581.0 | 1162 | 1.4075 |
| 0.0013 | 582.0 | 1164 | 1.4120 |
| 0.0013 | 583.0 | 1166 | 1.4161 |
| 0.0013 | 584.0 | 1168 | 1.4199 |
| 0.0013 | 585.0 | 1170 | 1.4222 |
| 0.0013 | 586.0 | 1172 | 1.4239 |
| 0.0013 | 587.0 | 1174 | 1.4254 |
| 0.0013 | 588.0 | 1176 | 1.4162 |
| 0.0013 | 589.0 | 1178 | 1.4203 |
| 0.0013 | 590.0 | 1180 | 1.4341 |
| 0.0013 | 591.0 | 1182 | 1.4659 |
| 0.0013 | 592.0 | 1184 | 1.4891 |
| 0.0013 | 593.0 | 1186 | 1.5046 |
| 0.0013 | 594.0 | 1188 | 1.5110 |
| 0.0013 | 595.0 | 1190 | 1.5053 |
| 0.0013 | 596.0 | 1192 | 1.5001 |
| 0.0013 | 597.0 | 1194 | 1.4795 |
| 0.0013 | 598.0 | 1196 | 1.4530 |
| 0.0013 | 599.0 | 1198 | 1.4300 |
| 0.0013 | 600.0 | 1200 | 1.4101 |
| 0.0013 | 601.0 | 1202 | 1.3887 |
| 0.0013 | 602.0 | 1204 | 1.3722 |
| 0.0013 | 603.0 | 1206 | 1.3588 |
| 0.0013 | 604.0 | 1208 | 1.3521 |
| 0.0013 | 605.0 | 1210 | 1.3470 |
| 0.0013 | 606.0 | 1212 | 1.3519 |
| 0.0013 | 607.0 | 1214 | 1.3647 |
| 0.0013 | 608.0 | 1216 | 1.3756 |
| 0.0013 | 609.0 | 1218 | 1.3838 |
| 0.0013 | 610.0 | 1220 | 1.3876 |
| 0.0013 | 611.0 | 1222 | 1.3876 |
| 0.0013 | 612.0 | 1224 | 1.3871 |
| 0.0013 | 613.0 | 1226 | 1.3861 |
| 0.0013 | 614.0 | 1228 | 1.3932 |
| 0.0013 | 615.0 | 1230 | 1.4157 |
| 0.0013 | 616.0 | 1232 | 1.4386 |
| 0.0013 | 617.0 | 1234 | 1.4567 |
| 0.0013 | 618.0 | 1236 | 1.4693 |
| 0.0013 | 619.0 | 1238 | 1.4772 |
| 0.0013 | 620.0 | 1240 | 1.4793 |
| 0.0013 | 621.0 | 1242 | 1.4671 |
| 0.0013 | 622.0 | 1244 | 1.4450 |
| 0.0013 | 623.0 | 1246 | 1.4167 |
| 0.0013 | 624.0 | 1248 | 1.3841 |
| 0.0013 | 625.0 | 1250 | 1.3548 |
| 0.0013 | 626.0 | 1252 | 1.3333 |
| 0.0013 | 627.0 | 1254 | 1.3233 |
| 0.0013 | 628.0 | 1256 | 1.3179 |
| 0.0013 | 629.0 | 1258 | 1.3158 |
| 0.0013 | 630.0 | 1260 | 1.3153 |
| 0.0013 | 631.0 | 1262 | 1.3201 |
| 0.0013 | 632.0 | 1264 | 1.3260 |
| 0.0013 | 633.0 | 1266 | 1.3341 |
| 0.0013 | 634.0 | 1268 | 1.3430 |
| 0.0013 | 635.0 | 1270 | 1.3519 |
| 0.0013 | 636.0 | 1272 | 1.3612 |
| 0.0013 | 637.0 | 1274 | 1.3718 |
| 0.0013 | 638.0 | 1276 | 1.3815 |
| 0.0013 | 639.0 | 1278 | 1.3941 |
| 0.0013 | 640.0 | 1280 | 1.4047 |
| 0.0013 | 641.0 | 1282 | 1.4108 |
| 0.0013 | 642.0 | 1284 | 1.4149 |
| 0.0013 | 643.0 | 1286 | 1.4114 |
| 0.0013 | 644.0 | 1288 | 1.4072 |
| 0.0013 | 645.0 | 1290 | 1.4023 |
| 0.0013 | 646.0 | 1292 | 1.3963 |
| 0.0013 | 647.0 | 1294 | 1.3909 |
| 0.0013 | 648.0 | 1296 | 1.3862 |
| 0.0013 | 649.0 | 1298 | 1.3821 |
| 0.0013 | 650.0 | 1300 | 1.3786 |
| 0.0013 | 651.0 | 1302 | 1.3785 |
| 0.0013 | 652.0 | 1304 | 1.3798 |
| 0.0013 | 653.0 | 1306 | 1.3825 |
| 0.0013 | 654.0 | 1308 | 1.3856 |
| 0.0013 | 655.0 | 1310 | 1.3837 |
| 0.0013 | 656.0 | 1312 | 1.3796 |
| 0.0013 | 657.0 | 1314 | 1.3739 |
| 0.0013 | 658.0 | 1316 | 1.3675 |
| 0.0013 | 659.0 | 1318 | 1.3617 |
| 0.0013 | 660.0 | 1320 | 1.3569 |
| 0.0013 | 661.0 | 1322 | 1.3516 |
| 0.0013 | 662.0 | 1324 | 1.3562 |
| 0.0013 | 663.0 | 1326 | 1.3711 |
| 0.0013 | 664.0 | 1328 | 1.3824 |
| 0.0013 | 665.0 | 1330 | 1.3873 |
| 0.0013 | 666.0 | 1332 | 1.3901 |
| 0.0013 | 667.0 | 1334 | 1.3912 |
| 0.0013 | 668.0 | 1336 | 1.3906 |
| 0.0013 | 669.0 | 1338 | 1.3892 |
| 0.0013 | 670.0 | 1340 | 1.3863 |
| 0.0013 | 671.0 | 1342 | 1.3834 |
| 0.0013 | 672.0 | 1344 | 1.3811 |
| 0.0013 | 673.0 | 1346 | 1.3789 |
| 0.0013 | 674.0 | 1348 | 1.3783 |
| 0.0013 | 675.0 | 1350 | 1.3775 |
| 0.0013 | 676.0 | 1352 | 1.3765 |
| 0.0013 | 677.0 | 1354 | 1.3750 |
| 0.0013 | 678.0 | 1356 | 1.3732 |
| 0.0013 | 679.0 | 1358 | 1.3714 |
| 0.0013 | 680.0 | 1360 | 1.3701 |
| 0.0013 | 681.0 | 1362 | 1.3690 |
| 0.0013 | 682.0 | 1364 | 1.3669 |
| 0.0013 | 683.0 | 1366 | 1.3650 |
| 0.0013 | 684.0 | 1368 | 1.3652 |
| 0.0013 | 685.0 | 1370 | 1.3661 |
| 0.0013 | 686.0 | 1372 | 1.3711 |
| 0.0013 | 687.0 | 1374 | 1.3762 |
| 0.0013 | 688.0 | 1376 | 1.3815 |
| 0.0013 | 689.0 | 1378 | 1.3849 |
| 0.0013 | 690.0 | 1380 | 1.3866 |
| 0.0013 | 691.0 | 1382 | 1.3856 |
| 0.0013 | 692.0 | 1384 | 1.3827 |
| 0.0013 | 693.0 | 1386 | 1.3785 |
| 0.0013 | 694.0 | 1388 | 1.3752 |
| 0.0013 | 695.0 | 1390 | 1.3722 |
| 0.0013 | 696.0 | 1392 | 1.3719 |
| 0.0013 | 697.0 | 1394 | 1.3713 |
| 0.0013 | 698.0 | 1396 | 1.3706 |
| 0.0013 | 699.0 | 1398 | 1.3682 |
| 0.0013 | 700.0 | 1400 | 1.3655 |
| 0.0013 | 701.0 | 1402 | 1.3735 |
| 0.0013 | 702.0 | 1404 | 1.3824 |
| 0.0013 | 703.0 | 1406 | 1.3917 |
| 0.0013 | 704.0 | 1408 | 1.3977 |
| 0.0013 | 705.0 | 1410 | 1.4018 |
| 0.0013 | 706.0 | 1412 | 1.4048 |
| 0.0013 | 707.0 | 1414 | 1.4069 |
| 0.0013 | 708.0 | 1416 | 1.4071 |
| 0.0013 | 709.0 | 1418 | 1.4056 |
| 0.0013 | 710.0 | 1420 | 1.4038 |
| 0.0013 | 711.0 | 1422 | 1.4027 |
| 0.0013 | 712.0 | 1424 | 1.3999 |
| 0.0013 | 713.0 | 1426 | 1.3940 |
| 0.0013 | 714.0 | 1428 | 1.3880 |
| 0.0013 | 715.0 | 1430 | 1.3814 |
| 0.0013 | 716.0 | 1432 | 1.3756 |
| 0.0013 | 717.0 | 1434 | 1.3708 |
| 0.0013 | 718.0 | 1436 | 1.3658 |
| 0.0013 | 719.0 | 1438 | 1.3619 |
| 0.0013 | 720.0 | 1440 | 1.3605 |
| 0.0013 | 721.0 | 1442 | 1.3587 |
| 0.0013 | 722.0 | 1444 | 1.3685 |
| 0.0013 | 723.0 | 1446 | 1.3823 |
| 0.0013 | 724.0 | 1448 | 1.3939 |
| 0.0013 | 725.0 | 1450 | 1.4022 |
| 0.0013 | 726.0 | 1452 | 1.4089 |
| 0.0013 | 727.0 | 1454 | 1.4147 |
| 0.0013 | 728.0 | 1456 | 1.4190 |
| 0.0013 | 729.0 | 1458 | 1.4273 |
| 0.0013 | 730.0 | 1460 | 1.4373 |
| 0.0013 | 731.0 | 1462 | 1.4448 |
| 0.0013 | 732.0 | 1464 | 1.4494 |
| 0.0013 | 733.0 | 1466 | 1.4507 |
| 0.0013 | 734.0 | 1468 | 1.4513 |
| 0.0013 | 735.0 | 1470 | 1.4585 |
| 0.0013 | 736.0 | 1472 | 1.4685 |
| 0.0013 | 737.0 | 1474 | 1.4767 |
| 0.0013 | 738.0 | 1476 | 1.4740 |
| 0.0013 | 739.0 | 1478 | 1.4713 |
| 0.0013 | 740.0 | 1480 | 1.4689 |
| 0.0013 | 741.0 | 1482 | 1.4668 |
| 0.0013 | 742.0 | 1484 | 1.4648 |
| 0.0013 | 743.0 | 1486 | 1.4631 |
| 0.0013 | 744.0 | 1488 | 1.4613 |
| 0.0013 | 745.0 | 1490 | 1.4588 |
| 0.0013 | 746.0 | 1492 | 1.4550 |
| 0.0013 | 747.0 | 1494 | 1.4507 |
| 0.0013 | 748.0 | 1496 | 1.4456 |
| 0.0013 | 749.0 | 1498 | 1.4401 |
| 0.0003 | 750.0 | 1500 | 1.4360 |
| 0.0003 | 751.0 | 1502 | 1.4327 |
| 0.0003 | 752.0 | 1504 | 1.4302 |
| 0.0003 | 753.0 | 1506 | 1.4290 |
| 0.0003 | 754.0 | 1508 | 1.4285 |
| 0.0003 | 755.0 | 1510 | 1.4290 |
| 0.0003 | 756.0 | 1512 | 1.4267 |
| 0.0003 | 757.0 | 1514 | 1.4248 |
| 0.0003 | 758.0 | 1516 | 1.4228 |
| 0.0003 | 759.0 | 1518 | 1.4206 |
| 0.0003 | 760.0 | 1520 | 1.4184 |
| 0.0003 | 761.0 | 1522 | 1.4166 |
| 0.0003 | 762.0 | 1524 | 1.4148 |
| 0.0003 | 763.0 | 1526 | 1.4132 |
| 0.0003 | 764.0 | 1528 | 1.4126 |
| 0.0003 | 765.0 | 1530 | 1.4171 |
| 0.0003 | 766.0 | 1532 | 1.4209 |
| 0.0003 | 767.0 | 1534 | 1.4356 |
| 0.0003 | 768.0 | 1536 | 1.4466 |
| 0.0003 | 769.0 | 1538 | 1.4545 |
| 0.0003 | 770.0 | 1540 | 1.4605 |
| 0.0003 | 771.0 | 1542 | 1.4648 |
| 0.0003 | 772.0 | 1544 | 1.4678 |
| 0.0003 | 773.0 | 1546 | 1.4697 |
| 0.0003 | 774.0 | 1548 | 1.4707 |
| 0.0003 | 775.0 | 1550 | 1.4709 |
| 0.0003 | 776.0 | 1552 | 1.4680 |
| 0.0003 | 777.0 | 1554 | 1.4634 |
| 0.0003 | 778.0 | 1556 | 1.4592 |
| 0.0003 | 779.0 | 1558 | 1.4550 |
| 0.0003 | 780.0 | 1560 | 1.4512 |
| 0.0003 | 781.0 | 1562 | 1.4479 |
| 0.0003 | 782.0 | 1564 | 1.4652 |
| 0.0003 | 783.0 | 1566 | 1.4978 |
| 0.0003 | 784.0 | 1568 | 1.5235 |
| 0.0003 | 785.0 | 1570 | 1.5399 |
| 0.0003 | 786.0 | 1572 | 1.5518 |
| 0.0003 | 787.0 | 1574 | 1.5597 |
| 0.0003 | 788.0 | 1576 | 1.5629 |
| 0.0003 | 789.0 | 1578 | 1.5628 |
| 0.0003 | 790.0 | 1580 | 1.5599 |
| 0.0003 | 791.0 | 1582 | 1.5538 |
| 0.0003 | 792.0 | 1584 | 1.5479 |
| 0.0003 | 793.0 | 1586 | 1.5405 |
| 0.0003 | 794.0 | 1588 | 1.5318 |
| 0.0003 | 795.0 | 1590 | 1.5236 |
| 0.0003 | 796.0 | 1592 | 1.5222 |
| 0.0003 | 797.0 | 1594 | 1.5259 |
| 0.0003 | 798.0 | 1596 | 1.5279 |
| 0.0003 | 799.0 | 1598 | 1.5291 |
| 0.0003 | 800.0 | 1600 | 1.5242 |
| 0.0003 | 801.0 | 1602 | 1.5197 |
| 0.0003 | 802.0 | 1604 | 1.5153 |
| 0.0003 | 803.0 | 1606 | 1.5091 |
| 0.0003 | 804.0 | 1608 | 1.5018 |
| 0.0003 | 805.0 | 1610 | 1.4950 |
| 0.0003 | 806.0 | 1612 | 1.4887 |
| 0.0003 | 807.0 | 1614 | 1.4833 |
| 0.0003 | 808.0 | 1616 | 1.4786 |
| 0.0003 | 809.0 | 1618 | 1.4726 |
| 0.0003 | 810.0 | 1620 | 1.4676 |
| 0.0003 | 811.0 | 1622 | 1.4762 |
| 0.0003 | 812.0 | 1624 | 1.4831 |
| 0.0003 | 813.0 | 1626 | 1.4911 |
| 0.0003 | 814.0 | 1628 | 1.5145 |
| 0.0003 | 815.0 | 1630 | 1.5310 |
| 0.0003 | 816.0 | 1632 | 1.5441 |
| 0.0003 | 817.0 | 1634 | 1.5537 |
| 0.0003 | 818.0 | 1636 | 1.5606 |
| 0.0003 | 819.0 | 1638 | 1.5644 |
| 0.0003 | 820.0 | 1640 | 1.5652 |
| 0.0003 | 821.0 | 1642 | 1.5639 |
| 0.0003 | 822.0 | 1644 | 1.5595 |
| 0.0003 | 823.0 | 1646 | 1.5473 |
| 0.0003 | 824.0 | 1648 | 1.5360 |
| 0.0003 | 825.0 | 1650 | 1.5237 |
| 0.0003 | 826.0 | 1652 | 1.5143 |
| 0.0003 | 827.0 | 1654 | 1.5092 |
| 0.0003 | 828.0 | 1656 | 1.4986 |
| 0.0003 | 829.0 | 1658 | 1.4837 |
| 0.0003 | 830.0 | 1660 | 1.4722 |
| 0.0003 | 831.0 | 1662 | 1.4626 |
| 0.0003 | 832.0 | 1664 | 1.4545 |
| 0.0003 | 833.0 | 1666 | 1.4480 |
| 0.0003 | 834.0 | 1668 | 1.4345 |
| 0.0003 | 835.0 | 1670 | 1.4235 |
| 0.0003 | 836.0 | 1672 | 1.4138 |
| 0.0003 | 837.0 | 1674 | 1.4071 |
| 0.0003 | 838.0 | 1676 | 1.4051 |
| 0.0003 | 839.0 | 1678 | 1.4036 |
| 0.0003 | 840.0 | 1680 | 1.4020 |
| 0.0003 | 841.0 | 1682 | 1.3985 |
| 0.0003 | 842.0 | 1684 | 1.3947 |
| 0.0003 | 843.0 | 1686 | 1.3917 |
| 0.0003 | 844.0 | 1688 | 1.3896 |
| 0.0003 | 845.0 | 1690 | 1.3882 |
| 0.0003 | 846.0 | 1692 | 1.3870 |
| 0.0003 | 847.0 | 1694 | 1.4005 |
| 0.0003 | 848.0 | 1696 | 1.4152 |
| 0.0003 | 849.0 | 1698 | 1.4301 |
| 0.0003 | 850.0 | 1700 | 1.4422 |
| 0.0003 | 851.0 | 1702 | 1.4517 |
| 0.0003 | 852.0 | 1704 | 1.4587 |
| 0.0003 | 853.0 | 1706 | 1.4637 |
| 0.0003 | 854.0 | 1708 | 1.4669 |
| 0.0003 | 855.0 | 1710 | 1.4685 |
| 0.0003 | 856.0 | 1712 | 1.4689 |
| 0.0003 | 857.0 | 1714 | 1.4679 |
| 0.0003 | 858.0 | 1716 | 1.4595 |
| 0.0003 | 859.0 | 1718 | 1.4518 |
| 0.0003 | 860.0 | 1720 | 1.4440 |
| 0.0003 | 861.0 | 1722 | 1.4372 |
| 0.0003 | 862.0 | 1724 | 1.4310 |
| 0.0003 | 863.0 | 1726 | 1.4251 |
| 0.0003 | 864.0 | 1728 | 1.4212 |
| 0.0003 | 865.0 | 1730 | 1.4181 |
| 0.0003 | 866.0 | 1732 | 1.4154 |
| 0.0003 | 867.0 | 1734 | 1.4129 |
| 0.0003 | 868.0 | 1736 | 1.4109 |
| 0.0003 | 869.0 | 1738 | 1.4092 |
| 0.0003 | 870.0 | 1740 | 1.4077 |
| 0.0003 | 871.0 | 1742 | 1.4063 |
| 0.0003 | 872.0 | 1744 | 1.4045 |
| 0.0003 | 873.0 | 1746 | 1.4027 |
| 0.0003 | 874.0 | 1748 | 1.4011 |
| 0.0003 | 875.0 | 1750 | 1.3993 |
| 0.0003 | 876.0 | 1752 | 1.4034 |
| 0.0003 | 877.0 | 1754 | 1.4118 |
| 0.0003 | 878.0 | 1756 | 1.4173 |
| 0.0003 | 879.0 | 1758 | 1.4212 |
| 0.0003 | 880.0 | 1760 | 1.4245 |
| 0.0003 | 881.0 | 1762 | 1.4271 |
| 0.0003 | 882.0 | 1764 | 1.4292 |
| 0.0003 | 883.0 | 1766 | 1.4308 |
| 0.0003 | 884.0 | 1768 | 1.4316 |
| 0.0003 | 885.0 | 1770 | 1.4318 |
| 0.0003 | 886.0 | 1772 | 1.4317 |
| 0.0003 | 887.0 | 1774 | 1.4315 |
| 0.0003 | 888.0 | 1776 | 1.4311 |
| 0.0003 | 889.0 | 1778 | 1.4301 |
| 0.0003 | 890.0 | 1780 | 1.4281 |
| 0.0003 | 891.0 | 1782 | 1.4265 |
| 0.0003 | 892.0 | 1784 | 1.4248 |
| 0.0003 | 893.0 | 1786 | 1.4226 |
| 0.0003 | 894.0 | 1788 | 1.4189 |
| 0.0003 | 895.0 | 1790 | 1.4158 |
| 0.0003 | 896.0 | 1792 | 1.4134 |
| 0.0003 | 897.0 | 1794 | 1.4114 |
| 0.0003 | 898.0 | 1796 | 1.4095 |
| 0.0003 | 899.0 | 1798 | 1.4070 |
| 0.0003 | 900.0 | 1800 | 1.4048 |
| 0.0003 | 901.0 | 1802 | 1.4032 |
| 0.0003 | 902.0 | 1804 | 1.4020 |
| 0.0003 | 903.0 | 1806 | 1.4013 |
| 0.0003 | 904.0 | 1808 | 1.4006 |
| 0.0003 | 905.0 | 1810 | 1.4000 |
| 0.0003 | 906.0 | 1812 | 1.3997 |
| 0.0003 | 907.0 | 1814 | 1.3994 |
| 0.0003 | 908.0 | 1816 | 1.3990 |
| 0.0003 | 909.0 | 1818 | 1.3983 |
| 0.0003 | 910.0 | 1820 | 1.3979 |
| 0.0003 | 911.0 | 1822 | 1.3978 |
| 0.0003 | 912.0 | 1824 | 1.3986 |
| 0.0003 | 913.0 | 1826 | 1.3978 |
| 0.0003 | 914.0 | 1828 | 1.3970 |
| 0.0003 | 915.0 | 1830 | 1.3964 |
| 0.0003 | 916.0 | 1832 | 1.3958 |
| 0.0003 | 917.0 | 1834 | 1.3953 |
| 0.0003 | 918.0 | 1836 | 1.3945 |
| 0.0003 | 919.0 | 1838 | 1.3944 |
| 0.0003 | 920.0 | 1840 | 1.3942 |
| 0.0003 | 921.0 | 1842 | 1.3940 |
| 0.0003 | 922.0 | 1844 | 1.3935 |
| 0.0003 | 923.0 | 1846 | 1.3932 |
| 0.0003 | 924.0 | 1848 | 1.3927 |
| 0.0003 | 925.0 | 1850 | 1.3925 |
| 0.0003 | 926.0 | 1852 | 1.3925 |
| 0.0003 | 927.0 | 1854 | 1.3926 |
| 0.0003 | 928.0 | 1856 | 1.3928 |
| 0.0003 | 929.0 | 1858 | 1.3928 |
| 0.0003 | 930.0 | 1860 | 1.3903 |
| 0.0003 | 931.0 | 1862 | 1.3883 |
| 0.0003 | 932.0 | 1864 | 1.3866 |
| 0.0003 | 933.0 | 1866 | 1.3853 |
| 0.0003 | 934.0 | 1868 | 1.3842 |
| 0.0003 | 935.0 | 1870 | 1.3834 |
| 0.0003 | 936.0 | 1872 | 1.3826 |
| 0.0003 | 937.0 | 1874 | 1.3818 |
| 0.0003 | 938.0 | 1876 | 1.3803 |
| 0.0003 | 939.0 | 1878 | 1.3791 |
| 0.0003 | 940.0 | 1880 | 1.3782 |
| 0.0003 | 941.0 | 1882 | 1.3776 |
| 0.0003 | 942.0 | 1884 | 1.3770 |
| 0.0003 | 943.0 | 1886 | 1.3764 |
| 0.0003 | 944.0 | 1888 | 1.3758 |
| 0.0003 | 945.0 | 1890 | 1.3760 |
| 0.0003 | 946.0 | 1892 | 1.3763 |
| 0.0003 | 947.0 | 1894 | 1.3766 |
| 0.0003 | 948.0 | 1896 | 1.3770 |
| 0.0003 | 949.0 | 1898 | 1.3773 |
| 0.0003 | 950.0 | 1900 | 1.3776 |
| 0.0003 | 951.0 | 1902 | 1.3778 |
| 0.0003 | 952.0 | 1904 | 1.3780 |
| 0.0003 | 953.0 | 1906 | 1.3796 |
| 0.0003 | 954.0 | 1908 | 1.3821 |
| 0.0003 | 955.0 | 1910 | 1.3841 |
| 0.0003 | 956.0 | 1912 | 1.3858 |
| 0.0003 | 957.0 | 1914 | 1.3859 |
| 0.0003 | 958.0 | 1916 | 1.3858 |
| 0.0003 | 959.0 | 1918 | 1.3859 |
| 0.0003 | 960.0 | 1920 | 1.3857 |
| 0.0003 | 961.0 | 1922 | 1.3853 |
| 0.0003 | 962.0 | 1924 | 1.3850 |
| 0.0003 | 963.0 | 1926 | 1.3848 |
| 0.0003 | 964.0 | 1928 | 1.3847 |
| 0.0003 | 965.0 | 1930 | 1.3845 |
| 0.0003 | 966.0 | 1932 | 1.3843 |
| 0.0003 | 967.0 | 1934 | 1.3841 |
| 0.0003 | 968.0 | 1936 | 1.3839 |
| 0.0003 | 969.0 | 1938 | 1.3837 |
| 0.0003 | 970.0 | 1940 | 1.3836 |
| 0.0003 | 971.0 | 1942 | 1.3836 |
| 0.0003 | 972.0 | 1944 | 1.3836 |
| 0.0003 | 973.0 | 1946 | 1.3835 |
| 0.0003 | 974.0 | 1948 | 1.3838 |
| 0.0003 | 975.0 | 1950 | 1.3843 |
| 0.0003 | 976.0 | 1952 | 1.3847 |
| 0.0003 | 977.0 | 1954 | 1.3850 |
| 0.0003 | 978.0 | 1956 | 1.3852 |
| 0.0003 | 979.0 | 1958 | 1.3853 |
| 0.0003 | 980.0 | 1960 | 1.3854 |
| 0.0003 | 981.0 | 1962 | 1.3855 |
| 0.0003 | 982.0 | 1964 | 1.3855 |
| 0.0003 | 983.0 | 1966 | 1.3854 |
| 0.0003 | 984.0 | 1968 | 1.3854 |
| 0.0003 | 985.0 | 1970 | 1.3855 |
| 0.0003 | 986.0 | 1972 | 1.3857 |
| 0.0003 | 987.0 | 1974 | 1.3858 |
| 0.0003 | 988.0 | 1976 | 1.3859 |
| 0.0003 | 989.0 | 1978 | 1.3860 |
| 0.0003 | 990.0 | 1980 | 1.3860 |
| 0.0003 | 991.0 | 1982 | 1.3861 |
| 0.0003 | 992.0 | 1984 | 1.3860 |
| 0.0003 | 993.0 | 1986 | 1.3860 |
| 0.0003 | 994.0 | 1988 | 1.3860 |
| 0.0003 | 995.0 | 1990 | 1.3860 |
| 0.0003 | 996.0 | 1992 | 1.3860 |
| 0.0003 | 997.0 | 1994 | 1.3859 |
| 0.0003 | 998.0 | 1996 | 1.3859 |
| 0.0003 | 999.0 | 1998 | 1.3859 |
| 0.0002 | 1000.0 | 2000 | 1.3859 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.7
- Tokenizers 0.15.0
|
daze-unlv/axolotl-medmcqa-4-epoch
|
daze-unlv
| 2024-03-08T01:23:40Z | 4 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"mistral",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-03-07T23:04:09Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: lora-out/medmcqa-4-epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: mistralai/Mistral-7B-v0.1
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: daze-unlv/medmcqa_axolotl
type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0
output_dir: ./lora-out/medmcqa-4-epoch
eval_sample_packing: false
adapter: lora
lora_model_dir:
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
- gate_proj
- down_proj
- up_proj
- q_proj
- v_proj
- k_proj
- o_proj
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 4
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16: false
tf32: true
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: false
sdp_attention: true
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
# lora-out/medmcqa-4-epoch
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the daze-unlv/medmcqa_axolotl dataset (per the axolotl config above).
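For standalone deployment, the LoRA weights can be folded back into the base checkpoint. A minimal sketch, assuming the adapter files in this repo are PEFT-compatible:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "daze-unlv/axolotl-medmcqa-4-epoch")
merged = model.merge_and_unload()  # fold the LoRA deltas into the base weights
merged.save_pretrained("mistral-7b-medmcqa-merged")  # plain Mistral checkpoint
```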
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
### Training results
### Framework versions
- PEFT 0.9.1.dev0
- Transformers 4.39.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.0
|
Maqqq/OpenHermes-2.5-Mistral-7B-16
|
Maqqq
| 2024-03-08T01:17:04Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-08T00:56:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
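As a placeholder until the authors add an official snippet, here is a hedged sketch of basic chat usage. It assumes the repo ships a causal LM with a chat template (the tags include `llama`, `text-generation`, and `conversational`); the exact prompt format is not confirmed by this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Maqqq/OpenHermes-2.5-Mistral-7B-16"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Briefly explain what a LoRA adapter is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```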
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ferrazzipietro/Qwen1.5-14B-Chat__adapters_en.layer1_4_torch.bfloat16_32_64_0.01_8_0.0002
|
ferrazzipietro
| 2024-03-08T00:53:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-08T00:52:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
keanurefresh/73981
|
keanurefresh
| 2024-03-08T00:44:46Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-03-08T00:00:03Z |
Include in your prompt `<lora:facialized:1>`, cum, facial. You might want to include in your negative prompt cum on breasts, cum on body.
The model works best with low steps and a low CFG; I get good results with 10 steps and a CFG of 3 or 4.
|
not-lain/BaseModelWithConfigAndNamedParameter
|
not-lain
| 2024-03-08T00:41:29Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"endpoints_compatible",
"region:us"
] | null | 2024-03-08T00:29:16Z |
---
tags:
- pytorch_model_hub_mixin
- model_hub_mixin
---
This model has been pushed to the Hub using the **PyTorchModelHubMixin** (per the repo tags):
- Repo: [More Information Needed]
- Docs: [More Information Needed]
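A minimal sketch of the `PyTorchModelHubMixin` workflow these tags point to (requires a recent `huggingface_hub`); the class below is illustrative, not the author's actual architecture:

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class TinyModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 16):  # init kwargs become the config
        super().__init__()
        self.linear = nn.Linear(hidden_size, 1)

    def forward(self, x):
        return self.linear(x)

model = TinyModel(hidden_size=16)
model.save_pretrained("tiny-model")              # writes config + weights locally
# model.push_to_hub("your-username/tiny-model")  # optional: upload to the Hub
reloaded = TinyModel.from_pretrained("tiny-model")
```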
|
ferrazzipietro/Qwen1.5-14B-Chat__adapters_en.layer1_4_torch.bfloat16_32_64_0.01_4_0.0002
|
ferrazzipietro
| 2024-03-08T00:34:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-08T00:33:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
not-lain/BaseModelWithJustConfig
|
not-lain
| 2024-03-08T00:25:52Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"endpoints_compatible",
"region:us"
] | null | 2024-03-08T00:24:23Z |
---
tags:
- pytorch_model_hub_mixin
- model_hub_mixin
---
This model has been pushed to the Hub using ****:
- Repo: [More Information Needed]
- Docs: [More Information Needed]
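The `pytorch_model_hub_mixin` tag suggests the model was shared through `huggingface_hub`'s `PyTorchModelHubMixin`; since the card names neither repo nor docs, the following is only a hedged sketch of that pattern, with a hypothetical class and repo id.

```python
# Hedged sketch of the PyTorchModelHubMixin pattern; the class name, sizes,
# and repo id are hypothetical, not taken from this card.
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class TinyModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 8):
        super().__init__()
        self.linear = nn.Linear(hidden_size, hidden_size)

    def forward(self, x):
        return self.linear(x)

model = TinyModel(hidden_size=8)
# push_to_hub serializes the weights plus a config.json built from __init__ kwargs
model.push_to_hub("your-username/BaseModelWithJustConfig")  # hypothetical repo id
# and from_pretrained restores both
reloaded = TinyModel.from_pretrained("your-username/BaseModelWithJustConfig")
```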
|
farid1088/GQA_BERT_legal_SQuAD_complete_augmented_2000
|
farid1088
| 2024-03-08T00:22:50Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-03-07T21:47:56Z |
---
tags:
- generated_from_trainer
model-index:
- name: GQA_BERT_legal_SQuAD_complete_augmented_2000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GQA_BERT_legal_SQuAD_complete_augmented_2000
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2562
## Model description
More information needed
## Intended uses & limitations
More information needed
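The row metadata tags this checkpoint as `question-answering`; as a hedged usage illustration only (the card itself gives no snippet), it can be queried like any extractive QA model. The question and context below are made up.

```python
# Hedged sketch: query an extractive QA checkpoint via the transformers
# pipeline API. Inputs are illustrative, not taken from the card.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="farid1088/GQA_BERT_legal_SQuAD_complete_augmented_2000",
)
result = qa(
    question="Who bears the burden of proof?",
    context="Under the statute, the claimant bears the burden of proof.",
)
print(result["answer"], result["score"])
```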
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (mirrored in the hedged sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 160
- eval_batch_size: 40
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2000
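Purely as a hedged sketch, the listed hyperparameters map onto `transformers.TrainingArguments` roughly as follows; `output_dir`, and anything else not in the list above, is an assumption.

```python
# Hedged sketch: TrainingArguments mirroring the hyperparameters listed above.
# output_dir is hypothetical; the card does not specify it.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="gqa-bert-legal-squad",  # hypothetical
    learning_rate=2e-5,
    per_device_train_batch_size=160,
    per_device_eval_batch_size=40,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2000,
    adam_beta1=0.9,    # matches the card's optimizer line
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```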
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 1.0 | 3 | 5.1193 |
| No log | 2.0 | 6 | 4.5794 |
| No log | 3.0 | 9 | 3.9562 |
| No log | 4.0 | 12 | 3.6226 |
| No log | 5.0 | 15 | 3.1767 |
| No log | 6.0 | 18 | 2.8026 |
| No log | 7.0 | 21 | 2.5106 |
| No log | 8.0 | 24 | 2.2343 |
| No log | 9.0 | 27 | 2.0290 |
| No log | 10.0 | 30 | 1.8059 |
| No log | 11.0 | 33 | 1.6448 |
| No log | 12.0 | 36 | 1.4814 |
| No log | 13.0 | 39 | 1.3270 |
| No log | 14.0 | 42 | 1.2522 |
| No log | 15.0 | 45 | 1.1957 |
| No log | 16.0 | 48 | 1.1489 |
| No log | 17.0 | 51 | 1.1251 |
| No log | 18.0 | 54 | 1.1000 |
| No log | 19.0 | 57 | 1.0762 |
| No log | 20.0 | 60 | 1.0465 |
| No log | 21.0 | 63 | 1.0398 |
| No log | 22.0 | 66 | 1.0363 |
| No log | 23.0 | 69 | 1.0388 |
| No log | 24.0 | 72 | 1.0330 |
| No log | 25.0 | 75 | 1.0242 |
| No log | 26.0 | 78 | 1.0188 |
| No log | 27.0 | 81 | 1.0227 |
| No log | 28.0 | 84 | 1.0281 |
| No log | 29.0 | 87 | 1.0362 |
| No log | 30.0 | 90 | 1.0278 |
| No log | 31.0 | 93 | 1.0463 |
| No log | 32.0 | 96 | 1.0733 |
| No log | 33.0 | 99 | 1.0895 |
| No log | 34.0 | 102 | 1.0818 |
| No log | 35.0 | 105 | 1.0836 |
| No log | 36.0 | 108 | 1.0664 |
| No log | 37.0 | 111 | 1.0578 |
| No log | 38.0 | 114 | 1.0792 |
| No log | 39.0 | 117 | 1.0465 |
| No log | 40.0 | 120 | 1.0288 |
| No log | 41.0 | 123 | 1.0609 |
| No log | 42.0 | 126 | 1.0676 |
| No log | 43.0 | 129 | 1.0343 |
| No log | 44.0 | 132 | 1.0653 |
| No log | 45.0 | 135 | 1.1017 |
| No log | 46.0 | 138 | 1.0780 |
| No log | 47.0 | 141 | 1.0841 |
| No log | 48.0 | 144 | 1.0921 |
| No log | 49.0 | 147 | 1.0919 |
| No log | 50.0 | 150 | 1.1088 |
| No log | 51.0 | 153 | 1.0983 |
| No log | 52.0 | 156 | 1.0897 |
| No log | 53.0 | 159 | 1.0991 |
| No log | 54.0 | 162 | 1.1124 |
| No log | 55.0 | 165 | 1.0800 |
| No log | 56.0 | 168 | 1.1173 |
| No log | 57.0 | 171 | 1.1244 |
| No log | 58.0 | 174 | 1.1127 |
| No log | 59.0 | 177 | 1.1290 |
| No log | 60.0 | 180 | 1.1127 |
| No log | 61.0 | 183 | 1.1141 |
| No log | 62.0 | 186 | 1.1494 |
| No log | 63.0 | 189 | 1.1185 |
| No log | 64.0 | 192 | 1.1394 |
| No log | 65.0 | 195 | 1.1624 |
| No log | 66.0 | 198 | 1.1620 |
| No log | 67.0 | 201 | 1.1518 |
| No log | 68.0 | 204 | 1.1353 |
| No log | 69.0 | 207 | 1.2165 |
| No log | 70.0 | 210 | 1.1765 |
| No log | 71.0 | 213 | 1.1964 |
| No log | 72.0 | 216 | 1.2078 |
| No log | 73.0 | 219 | 1.1245 |
| No log | 74.0 | 222 | 1.1631 |
| No log | 75.0 | 225 | 1.1314 |
| No log | 76.0 | 228 | 1.0521 |
| No log | 77.0 | 231 | 1.1047 |
| No log | 78.0 | 234 | 1.1412 |
| No log | 79.0 | 237 | 1.1133 |
| No log | 80.0 | 240 | 1.1257 |
| No log | 81.0 | 243 | 1.1375 |
| No log | 82.0 | 246 | 1.0486 |
| No log | 83.0 | 249 | 1.1223 |
| No log | 84.0 | 252 | 1.1664 |
| No log | 85.0 | 255 | 1.0748 |
| No log | 86.0 | 258 | 1.1151 |
| No log | 87.0 | 261 | 1.1358 |
| No log | 88.0 | 264 | 1.0981 |
| No log | 89.0 | 267 | 1.2120 |
| No log | 90.0 | 270 | 1.1805 |
| No log | 91.0 | 273 | 1.1296 |
| No log | 92.0 | 276 | 1.3029 |
| No log | 93.0 | 279 | 1.2570 |
| No log | 94.0 | 282 | 1.1256 |
| No log | 95.0 | 285 | 1.1910 |
| No log | 96.0 | 288 | 1.2814 |
| No log | 97.0 | 291 | 1.1195 |
| No log | 98.0 | 294 | 1.0572 |
| No log | 99.0 | 297 | 1.1948 |
| No log | 100.0 | 300 | 1.1649 |
| No log | 101.0 | 303 | 1.0716 |
| No log | 102.0 | 306 | 1.1648 |
| No log | 103.0 | 309 | 1.1558 |
| No log | 104.0 | 312 | 1.1381 |
| No log | 105.0 | 315 | 1.2201 |
| No log | 106.0 | 318 | 1.2335 |
| No log | 107.0 | 321 | 1.0798 |
| No log | 108.0 | 324 | 1.1202 |
| No log | 109.0 | 327 | 1.2209 |
| No log | 110.0 | 330 | 1.2331 |
| No log | 111.0 | 333 | 1.1878 |
| No log | 112.0 | 336 | 1.2108 |
| No log | 113.0 | 339 | 1.2244 |
| No log | 114.0 | 342 | 1.1712 |
| No log | 115.0 | 345 | 1.1699 |
| No log | 116.0 | 348 | 1.2039 |
| No log | 117.0 | 351 | 1.0968 |
| No log | 118.0 | 354 | 1.1880 |
| No log | 119.0 | 357 | 1.1514 |
| No log | 120.0 | 360 | 1.0878 |
| No log | 121.0 | 363 | 1.1416 |
| No log | 122.0 | 366 | 1.1696 |
| No log | 123.0 | 369 | 1.1387 |
| No log | 124.0 | 372 | 1.1488 |
| No log | 125.0 | 375 | 1.1840 |
| No log | 126.0 | 378 | 1.1501 |
| No log | 127.0 | 381 | 1.1900 |
| No log | 128.0 | 384 | 1.1478 |
| No log | 129.0 | 387 | 1.2309 |
| No log | 130.0 | 390 | 1.3350 |
| No log | 131.0 | 393 | 1.2147 |
| No log | 132.0 | 396 | 1.1993 |
| No log | 133.0 | 399 | 1.2747 |
| No log | 134.0 | 402 | 1.2372 |
| No log | 135.0 | 405 | 1.2479 |
| No log | 136.0 | 408 | 1.2942 |
| No log | 137.0 | 411 | 1.2322 |
| No log | 138.0 | 414 | 1.2148 |
| No log | 139.0 | 417 | 1.2922 |
| No log | 140.0 | 420 | 1.3430 |
| No log | 141.0 | 423 | 1.3824 |
| No log | 142.0 | 426 | 1.2082 |
| No log | 143.0 | 429 | 1.1967 |
| No log | 144.0 | 432 | 1.2483 |
| No log | 145.0 | 435 | 1.1599 |
| No log | 146.0 | 438 | 1.0864 |
| No log | 147.0 | 441 | 1.1238 |
| No log | 148.0 | 444 | 1.2074 |
| No log | 149.0 | 447 | 1.1902 |
| No log | 150.0 | 450 | 1.1397 |
| No log | 151.0 | 453 | 1.1546 |
| No log | 152.0 | 456 | 1.2126 |
| No log | 153.0 | 459 | 1.2443 |
| No log | 154.0 | 462 | 1.2378 |
| No log | 155.0 | 465 | 1.2335 |
| No log | 156.0 | 468 | 1.1798 |
| No log | 157.0 | 471 | 1.1297 |
| No log | 158.0 | 474 | 1.1737 |
| No log | 159.0 | 477 | 1.0970 |
| No log | 160.0 | 480 | 1.1708 |
| No log | 161.0 | 483 | 1.1551 |
| No log | 162.0 | 486 | 1.1848 |
| No log | 163.0 | 489 | 1.1971 |
| No log | 164.0 | 492 | 1.1720 |
| No log | 165.0 | 495 | 1.1960 |
| No log | 166.0 | 498 | 1.2754 |
| 1.0047 | 167.0 | 501 | 1.2083 |
| 1.0047 | 168.0 | 504 | 1.0888 |
| 1.0047 | 169.0 | 507 | 1.2684 |
| 1.0047 | 170.0 | 510 | 1.3395 |
| 1.0047 | 171.0 | 513 | 1.2508 |
| 1.0047 | 172.0 | 516 | 1.1460 |
| 1.0047 | 173.0 | 519 | 1.2464 |
| 1.0047 | 174.0 | 522 | 1.2131 |
| 1.0047 | 175.0 | 525 | 1.1181 |
| 1.0047 | 176.0 | 528 | 1.2012 |
| 1.0047 | 177.0 | 531 | 1.2957 |
| 1.0047 | 178.0 | 534 | 1.1890 |
| 1.0047 | 179.0 | 537 | 1.1628 |
| 1.0047 | 180.0 | 540 | 1.1929 |
| 1.0047 | 181.0 | 543 | 1.2900 |
| 1.0047 | 182.0 | 546 | 1.3240 |
| 1.0047 | 183.0 | 549 | 1.2145 |
| 1.0047 | 184.0 | 552 | 1.2942 |
| 1.0047 | 185.0 | 555 | 1.3425 |
| 1.0047 | 186.0 | 558 | 1.1772 |
| 1.0047 | 187.0 | 561 | 1.2255 |
| 1.0047 | 188.0 | 564 | 1.4528 |
| 1.0047 | 189.0 | 567 | 1.3898 |
| 1.0047 | 190.0 | 570 | 1.1862 |
| 1.0047 | 191.0 | 573 | 1.1700 |
| 1.0047 | 192.0 | 576 | 1.2801 |
| 1.0047 | 193.0 | 579 | 1.2571 |
| 1.0047 | 194.0 | 582 | 1.1962 |
| 1.0047 | 195.0 | 585 | 1.2228 |
| 1.0047 | 196.0 | 588 | 1.2153 |
| 1.0047 | 197.0 | 591 | 1.1498 |
| 1.0047 | 198.0 | 594 | 1.1130 |
| 1.0047 | 199.0 | 597 | 1.1537 |
| 1.0047 | 200.0 | 600 | 1.2239 |
| 1.0047 | 201.0 | 603 | 1.1742 |
| 1.0047 | 202.0 | 606 | 1.1292 |
| 1.0047 | 203.0 | 609 | 1.1688 |
| 1.0047 | 204.0 | 612 | 1.1844 |
| 1.0047 | 205.0 | 615 | 1.1928 |
| 1.0047 | 206.0 | 618 | 1.2253 |
| 1.0047 | 207.0 | 621 | 1.2585 |
| 1.0047 | 208.0 | 624 | 1.3174 |
| 1.0047 | 209.0 | 627 | 1.3660 |
| 1.0047 | 210.0 | 630 | 1.2523 |
| 1.0047 | 211.0 | 633 | 1.2249 |
| 1.0047 | 212.0 | 636 | 1.4178 |
| 1.0047 | 213.0 | 639 | 1.3895 |
| 1.0047 | 214.0 | 642 | 1.2523 |
| 1.0047 | 215.0 | 645 | 1.1921 |
| 1.0047 | 216.0 | 648 | 1.2245 |
| 1.0047 | 217.0 | 651 | 1.3426 |
| 1.0047 | 218.0 | 654 | 1.3673 |
| 1.0047 | 219.0 | 657 | 1.1933 |
| 1.0047 | 220.0 | 660 | 1.1469 |
| 1.0047 | 221.0 | 663 | 1.2684 |
| 1.0047 | 222.0 | 666 | 1.4222 |
| 1.0047 | 223.0 | 669 | 1.4067 |
| 1.0047 | 224.0 | 672 | 1.3425 |
| 1.0047 | 225.0 | 675 | 1.3358 |
| 1.0047 | 226.0 | 678 | 1.4246 |
| 1.0047 | 227.0 | 681 | 1.3301 |
| 1.0047 | 228.0 | 684 | 1.1915 |
| 1.0047 | 229.0 | 687 | 1.2654 |
| 1.0047 | 230.0 | 690 | 1.4043 |
| 1.0047 | 231.0 | 693 | 1.3357 |
| 1.0047 | 232.0 | 696 | 1.2512 |
| 1.0047 | 233.0 | 699 | 1.2383 |
| 1.0047 | 234.0 | 702 | 1.1516 |
| 1.0047 | 235.0 | 705 | 1.1382 |
| 1.0047 | 236.0 | 708 | 1.2749 |
| 1.0047 | 237.0 | 711 | 1.3747 |
| 1.0047 | 238.0 | 714 | 1.1791 |
| 1.0047 | 239.0 | 717 | 1.1527 |
| 1.0047 | 240.0 | 720 | 1.2194 |
| 1.0047 | 241.0 | 723 | 1.2754 |
| 1.0047 | 242.0 | 726 | 1.3448 |
| 1.0047 | 243.0 | 729 | 1.3382 |
| 1.0047 | 244.0 | 732 | 1.2932 |
| 1.0047 | 245.0 | 735 | 1.3135 |
| 1.0047 | 246.0 | 738 | 1.3671 |
| 1.0047 | 247.0 | 741 | 1.3735 |
| 1.0047 | 248.0 | 744 | 1.4142 |
| 1.0047 | 249.0 | 747 | 1.4000 |
| 1.0047 | 250.0 | 750 | 1.2954 |
| 1.0047 | 251.0 | 753 | 1.2629 |
| 1.0047 | 252.0 | 756 | 1.2982 |
| 1.0047 | 253.0 | 759 | 1.2750 |
| 1.0047 | 254.0 | 762 | 1.2273 |
| 1.0047 | 255.0 | 765 | 1.2209 |
| 1.0047 | 256.0 | 768 | 1.2359 |
| 1.0047 | 257.0 | 771 | 1.2626 |
| 1.0047 | 258.0 | 774 | 1.1799 |
| 1.0047 | 259.0 | 777 | 1.1506 |
| 1.0047 | 260.0 | 780 | 1.1846 |
| 1.0047 | 261.0 | 783 | 1.2278 |
| 1.0047 | 262.0 | 786 | 1.2040 |
| 1.0047 | 263.0 | 789 | 1.1920 |
| 1.0047 | 264.0 | 792 | 1.1921 |
| 1.0047 | 265.0 | 795 | 1.2421 |
| 1.0047 | 266.0 | 798 | 1.2557 |
| 1.0047 | 267.0 | 801 | 1.2245 |
| 1.0047 | 268.0 | 804 | 1.2240 |
| 1.0047 | 269.0 | 807 | 1.3193 |
| 1.0047 | 270.0 | 810 | 1.3523 |
| 1.0047 | 271.0 | 813 | 1.3143 |
| 1.0047 | 272.0 | 816 | 1.2657 |
| 1.0047 | 273.0 | 819 | 1.3099 |
| 1.0047 | 274.0 | 822 | 1.2485 |
| 1.0047 | 275.0 | 825 | 1.1617 |
| 1.0047 | 276.0 | 828 | 1.2186 |
| 1.0047 | 277.0 | 831 | 1.2683 |
| 1.0047 | 278.0 | 834 | 1.2432 |
| 1.0047 | 279.0 | 837 | 1.3252 |
| 1.0047 | 280.0 | 840 | 1.4173 |
| 1.0047 | 281.0 | 843 | 1.3807 |
| 1.0047 | 282.0 | 846 | 1.3895 |
| 1.0047 | 283.0 | 849 | 1.3531 |
| 1.0047 | 284.0 | 852 | 1.2847 |
| 1.0047 | 285.0 | 855 | 1.2734 |
| 1.0047 | 286.0 | 858 | 1.2917 |
| 1.0047 | 287.0 | 861 | 1.3048 |
| 1.0047 | 288.0 | 864 | 1.3169 |
| 1.0047 | 289.0 | 867 | 1.3620 |
| 1.0047 | 290.0 | 870 | 1.4486 |
| 1.0047 | 291.0 | 873 | 1.3860 |
| 1.0047 | 292.0 | 876 | 1.3026 |
| 1.0047 | 293.0 | 879 | 1.2993 |
| 1.0047 | 294.0 | 882 | 1.2825 |
| 1.0047 | 295.0 | 885 | 1.2764 |
| 1.0047 | 296.0 | 888 | 1.3134 |
| 1.0047 | 297.0 | 891 | 1.3452 |
| 1.0047 | 298.0 | 894 | 1.3714 |
| 1.0047 | 299.0 | 897 | 1.3125 |
| 1.0047 | 300.0 | 900 | 1.2099 |
| 1.0047 | 301.0 | 903 | 1.2298 |
| 1.0047 | 302.0 | 906 | 1.3122 |
| 1.0047 | 303.0 | 909 | 1.3047 |
| 1.0047 | 304.0 | 912 | 1.2591 |
| 1.0047 | 305.0 | 915 | 1.2820 |
| 1.0047 | 306.0 | 918 | 1.2770 |
| 1.0047 | 307.0 | 921 | 1.2783 |
| 1.0047 | 308.0 | 924 | 1.3475 |
| 1.0047 | 309.0 | 927 | 1.3819 |
| 1.0047 | 310.0 | 930 | 1.2759 |
| 1.0047 | 311.0 | 933 | 1.1658 |
| 1.0047 | 312.0 | 936 | 1.1919 |
| 1.0047 | 313.0 | 939 | 1.3712 |
| 1.0047 | 314.0 | 942 | 1.4586 |
| 1.0047 | 315.0 | 945 | 1.4405 |
| 1.0047 | 316.0 | 948 | 1.2275 |
| 1.0047 | 317.0 | 951 | 1.2043 |
| 1.0047 | 318.0 | 954 | 1.3147 |
| 1.0047 | 319.0 | 957 | 1.4305 |
| 1.0047 | 320.0 | 960 | 1.3858 |
| 1.0047 | 321.0 | 963 | 1.2997 |
| 1.0047 | 322.0 | 966 | 1.2348 |
| 1.0047 | 323.0 | 969 | 1.2264 |
| 1.0047 | 324.0 | 972 | 1.2819 |
| 1.0047 | 325.0 | 975 | 1.3146 |
| 1.0047 | 326.0 | 978 | 1.3341 |
| 1.0047 | 327.0 | 981 | 1.3511 |
| 1.0047 | 328.0 | 984 | 1.3223 |
| 1.0047 | 329.0 | 987 | 1.3236 |
| 1.0047 | 330.0 | 990 | 1.3429 |
| 1.0047 | 331.0 | 993 | 1.2715 |
| 1.0047 | 332.0 | 996 | 1.2452 |
| 1.0047 | 333.0 | 999 | 1.2350 |
| 0.5933 | 334.0 | 1002 | 1.1789 |
| 0.5933 | 335.0 | 1005 | 1.2327 |
| 0.5933 | 336.0 | 1008 | 1.2986 |
| 0.5933 | 337.0 | 1011 | 1.2372 |
| 0.5933 | 338.0 | 1014 | 1.1142 |
| 0.5933 | 339.0 | 1017 | 1.1219 |
| 0.5933 | 340.0 | 1020 | 1.2149 |
| 0.5933 | 341.0 | 1023 | 1.3215 |
| 0.5933 | 342.0 | 1026 | 1.3930 |
| 0.5933 | 343.0 | 1029 | 1.3952 |
| 0.5933 | 344.0 | 1032 | 1.3798 |
| 0.5933 | 345.0 | 1035 | 1.3870 |
| 0.5933 | 346.0 | 1038 | 1.3835 |
| 0.5933 | 347.0 | 1041 | 1.2778 |
| 0.5933 | 348.0 | 1044 | 1.2079 |
| 0.5933 | 349.0 | 1047 | 1.2545 |
| 0.5933 | 350.0 | 1050 | 1.3546 |
| 0.5933 | 351.0 | 1053 | 1.3485 |
| 0.5933 | 352.0 | 1056 | 1.2388 |
| 0.5933 | 353.0 | 1059 | 1.1877 |
| 0.5933 | 354.0 | 1062 | 1.1707 |
| 0.5933 | 355.0 | 1065 | 1.3036 |
| 0.5933 | 356.0 | 1068 | 1.4033 |
| 0.5933 | 357.0 | 1071 | 1.3046 |
| 0.5933 | 358.0 | 1074 | 1.1871 |
| 0.5933 | 359.0 | 1077 | 1.2303 |
| 0.5933 | 360.0 | 1080 | 1.4086 |
| 0.5933 | 361.0 | 1083 | 1.3546 |
| 0.5933 | 362.0 | 1086 | 1.1697 |
| 0.5933 | 363.0 | 1089 | 1.1320 |
| 0.5933 | 364.0 | 1092 | 1.1799 |
| 0.5933 | 365.0 | 1095 | 1.2172 |
| 0.5933 | 366.0 | 1098 | 1.3199 |
| 0.5933 | 367.0 | 1101 | 1.3302 |
| 0.5933 | 368.0 | 1104 | 1.3020 |
| 0.5933 | 369.0 | 1107 | 1.2652 |
| 0.5933 | 370.0 | 1110 | 1.3420 |
| 0.5933 | 371.0 | 1113 | 1.3486 |
| 0.5933 | 372.0 | 1116 | 1.2853 |
| 0.5933 | 373.0 | 1119 | 1.2203 |
| 0.5933 | 374.0 | 1122 | 1.1671 |
| 0.5933 | 375.0 | 1125 | 1.3050 |
| 0.5933 | 376.0 | 1128 | 1.4090 |
| 0.5933 | 377.0 | 1131 | 1.3682 |
| 0.5933 | 378.0 | 1134 | 1.2919 |
| 0.5933 | 379.0 | 1137 | 1.2611 |
| 0.5933 | 380.0 | 1140 | 1.2714 |
| 0.5933 | 381.0 | 1143 | 1.3204 |
| 0.5933 | 382.0 | 1146 | 1.3206 |
| 0.5933 | 383.0 | 1149 | 1.2592 |
| 0.5933 | 384.0 | 1152 | 1.1575 |
| 0.5933 | 385.0 | 1155 | 1.1801 |
| 0.5933 | 386.0 | 1158 | 1.2966 |
| 0.5933 | 387.0 | 1161 | 1.3092 |
| 0.5933 | 388.0 | 1164 | 1.3284 |
| 0.5933 | 389.0 | 1167 | 1.3397 |
| 0.5933 | 390.0 | 1170 | 1.3137 |
| 0.5933 | 391.0 | 1173 | 1.2775 |
| 0.5933 | 392.0 | 1176 | 1.1970 |
| 0.5933 | 393.0 | 1179 | 1.1671 |
| 0.5933 | 394.0 | 1182 | 1.3037 |
| 0.5933 | 395.0 | 1185 | 1.3400 |
| 0.5933 | 396.0 | 1188 | 1.2243 |
| 0.5933 | 397.0 | 1191 | 1.2322 |
| 0.5933 | 398.0 | 1194 | 1.3279 |
| 0.5933 | 399.0 | 1197 | 1.3577 |
| 0.5933 | 400.0 | 1200 | 1.3690 |
| 0.5933 | 401.0 | 1203 | 1.3068 |
| 0.5933 | 402.0 | 1206 | 1.2011 |
| 0.5933 | 403.0 | 1209 | 1.2389 |
| 0.5933 | 404.0 | 1212 | 1.3540 |
| 0.5933 | 405.0 | 1215 | 1.3858 |
| 0.5933 | 406.0 | 1218 | 1.3326 |
| 0.5933 | 407.0 | 1221 | 1.2234 |
| 0.5933 | 408.0 | 1224 | 1.1657 |
| 0.5933 | 409.0 | 1227 | 1.1664 |
| 0.5933 | 410.0 | 1230 | 1.2766 |
| 0.5933 | 411.0 | 1233 | 1.3610 |
| 0.5933 | 412.0 | 1236 | 1.3622 |
| 0.5933 | 413.0 | 1239 | 1.3024 |
| 0.5933 | 414.0 | 1242 | 1.2516 |
| 0.5933 | 415.0 | 1245 | 1.2160 |
| 0.5933 | 416.0 | 1248 | 1.1839 |
| 0.5933 | 417.0 | 1251 | 1.1225 |
| 0.5933 | 418.0 | 1254 | 1.1113 |
| 0.5933 | 419.0 | 1257 | 1.1720 |
| 0.5933 | 420.0 | 1260 | 1.3755 |
| 0.5933 | 421.0 | 1263 | 1.3626 |
| 0.5933 | 422.0 | 1266 | 1.2200 |
| 0.5933 | 423.0 | 1269 | 1.2175 |
| 0.5933 | 424.0 | 1272 | 1.3046 |
| 0.5933 | 425.0 | 1275 | 1.3120 |
| 0.5933 | 426.0 | 1278 | 1.3499 |
| 0.5933 | 427.0 | 1281 | 1.3850 |
| 0.5933 | 428.0 | 1284 | 1.3673 |
| 0.5933 | 429.0 | 1287 | 1.3124 |
| 0.5933 | 430.0 | 1290 | 1.2314 |
| 0.5933 | 431.0 | 1293 | 1.1724 |
| 0.5933 | 432.0 | 1296 | 1.2057 |
| 0.5933 | 433.0 | 1299 | 1.3040 |
| 0.5933 | 434.0 | 1302 | 1.3551 |
| 0.5933 | 435.0 | 1305 | 1.3777 |
| 0.5933 | 436.0 | 1308 | 1.3375 |
| 0.5933 | 437.0 | 1311 | 1.2963 |
| 0.5933 | 438.0 | 1314 | 1.3388 |
| 0.5933 | 439.0 | 1317 | 1.3685 |
| 0.5933 | 440.0 | 1320 | 1.3634 |
| 0.5933 | 441.0 | 1323 | 1.3484 |
| 0.5933 | 442.0 | 1326 | 1.3536 |
| 0.5933 | 443.0 | 1329 | 1.3584 |
| 0.5933 | 444.0 | 1332 | 1.3452 |
| 0.5933 | 445.0 | 1335 | 1.3379 |
| 0.5933 | 446.0 | 1338 | 1.3434 |
| 0.5933 | 447.0 | 1341 | 1.3378 |
| 0.5933 | 448.0 | 1344 | 1.3451 |
| 0.5933 | 449.0 | 1347 | 1.3583 |
| 0.5933 | 450.0 | 1350 | 1.3498 |
| 0.5933 | 451.0 | 1353 | 1.3202 |
| 0.5933 | 452.0 | 1356 | 1.3219 |
| 0.5933 | 453.0 | 1359 | 1.3534 |
| 0.5933 | 454.0 | 1362 | 1.3738 |
| 0.5933 | 455.0 | 1365 | 1.3947 |
| 0.5933 | 456.0 | 1368 | 1.3863 |
| 0.5933 | 457.0 | 1371 | 1.3747 |
| 0.5933 | 458.0 | 1374 | 1.3685 |
| 0.5933 | 459.0 | 1377 | 1.3519 |
| 0.5933 | 460.0 | 1380 | 1.3706 |
| 0.5933 | 461.0 | 1383 | 1.3956 |
| 0.5933 | 462.0 | 1386 | 1.3628 |
| 0.5933 | 463.0 | 1389 | 1.3669 |
| 0.5933 | 464.0 | 1392 | 1.3338 |
| 0.5933 | 465.0 | 1395 | 1.3316 |
| 0.5933 | 466.0 | 1398 | 1.3641 |
| 0.5933 | 467.0 | 1401 | 1.3980 |
| 0.5933 | 468.0 | 1404 | 1.4046 |
| 0.5933 | 469.0 | 1407 | 1.3757 |
| 0.5933 | 470.0 | 1410 | 1.3437 |
| 0.5933 | 471.0 | 1413 | 1.3552 |
| 0.5933 | 472.0 | 1416 | 1.3930 |
| 0.5933 | 473.0 | 1419 | 1.3926 |
| 0.5933 | 474.0 | 1422 | 1.3316 |
| 0.5933 | 475.0 | 1425 | 1.2435 |
| 0.5933 | 476.0 | 1428 | 1.2005 |
| 0.5933 | 477.0 | 1431 | 1.2154 |
| 0.5933 | 478.0 | 1434 | 1.2495 |
| 0.5933 | 479.0 | 1437 | 1.2615 |
| 0.5933 | 480.0 | 1440 | 1.2665 |
| 0.5933 | 481.0 | 1443 | 1.2593 |
| 0.5933 | 482.0 | 1446 | 1.2442 |
| 0.5933 | 483.0 | 1449 | 1.2603 |
| 0.5933 | 484.0 | 1452 | 1.2821 |
| 0.5933 | 485.0 | 1455 | 1.2940 |
| 0.5933 | 486.0 | 1458 | 1.2904 |
| 0.5933 | 487.0 | 1461 | 1.2815 |
| 0.5933 | 488.0 | 1464 | 1.2719 |
| 0.5933 | 489.0 | 1467 | 1.2950 |
| 0.5933 | 490.0 | 1470 | 1.3589 |
| 0.5933 | 491.0 | 1473 | 1.4231 |
| 0.5933 | 492.0 | 1476 | 1.4325 |
| 0.5933 | 493.0 | 1479 | 1.3372 |
| 0.5933 | 494.0 | 1482 | 1.2722 |
| 0.5933 | 495.0 | 1485 | 1.3250 |
| 0.5933 | 496.0 | 1488 | 1.4279 |
| 0.5933 | 497.0 | 1491 | 1.4185 |
| 0.5933 | 498.0 | 1494 | 1.3254 |
| 0.5933 | 499.0 | 1497 | 1.2996 |
| 0.5698 | 500.0 | 1500 | 1.2436 |
| 0.5698 | 501.0 | 1503 | 1.2112 |
| 0.5698 | 502.0 | 1506 | 1.2390 |
| 0.5698 | 503.0 | 1509 | 1.2883 |
| 0.5698 | 504.0 | 1512 | 1.3407 |
| 0.5698 | 505.0 | 1515 | 1.3793 |
| 0.5698 | 506.0 | 1518 | 1.4309 |
| 0.5698 | 507.0 | 1521 | 1.4088 |
| 0.5698 | 508.0 | 1524 | 1.3966 |
| 0.5698 | 509.0 | 1527 | 1.4082 |
| 0.5698 | 510.0 | 1530 | 1.3814 |
| 0.5698 | 511.0 | 1533 | 1.3396 |
| 0.5698 | 512.0 | 1536 | 1.3387 |
| 0.5698 | 513.0 | 1539 | 1.3057 |
| 0.5698 | 514.0 | 1542 | 1.2687 |
| 0.5698 | 515.0 | 1545 | 1.2707 |
| 0.5698 | 516.0 | 1548 | 1.4157 |
| 0.5698 | 517.0 | 1551 | 1.4618 |
| 0.5698 | 518.0 | 1554 | 1.4597 |
| 0.5698 | 519.0 | 1557 | 1.4605 |
| 0.5698 | 520.0 | 1560 | 1.4481 |
| 0.5698 | 521.0 | 1563 | 1.4423 |
| 0.5698 | 522.0 | 1566 | 1.4312 |
| 0.5698 | 523.0 | 1569 | 1.4020 |
| 0.5698 | 524.0 | 1572 | 1.3645 |
| 0.5698 | 525.0 | 1575 | 1.3438 |
| 0.5698 | 526.0 | 1578 | 1.3205 |
| 0.5698 | 527.0 | 1581 | 1.3053 |
| 0.5698 | 528.0 | 1584 | 1.2944 |
| 0.5698 | 529.0 | 1587 | 1.3649 |
| 0.5698 | 530.0 | 1590 | 1.4252 |
| 0.5698 | 531.0 | 1593 | 1.4653 |
| 0.5698 | 532.0 | 1596 | 1.4664 |
| 0.5698 | 533.0 | 1599 | 1.4386 |
| 0.5698 | 534.0 | 1602 | 1.3703 |
| 0.5698 | 535.0 | 1605 | 1.3156 |
| 0.5698 | 536.0 | 1608 | 1.3263 |
| 0.5698 | 537.0 | 1611 | 1.3055 |
| 0.5698 | 538.0 | 1614 | 1.3066 |
| 0.5698 | 539.0 | 1617 | 1.3549 |
| 0.5698 | 540.0 | 1620 | 1.4445 |
| 0.5698 | 541.0 | 1623 | 1.4701 |
| 0.5698 | 542.0 | 1626 | 1.4265 |
| 0.5698 | 543.0 | 1629 | 1.3599 |
| 0.5698 | 544.0 | 1632 | 1.3451 |
| 0.5698 | 545.0 | 1635 | 1.3428 |
| 0.5698 | 546.0 | 1638 | 1.3231 |
| 0.5698 | 547.0 | 1641 | 1.3266 |
| 0.5698 | 548.0 | 1644 | 1.3216 |
| 0.5698 | 549.0 | 1647 | 1.2599 |
| 0.5698 | 550.0 | 1650 | 1.2338 |
| 0.5698 | 551.0 | 1653 | 1.2140 |
| 0.5698 | 552.0 | 1656 | 1.2297 |
| 0.5698 | 553.0 | 1659 | 1.2842 |
| 0.5698 | 554.0 | 1662 | 1.3357 |
| 0.5698 | 555.0 | 1665 | 1.3797 |
| 0.5698 | 556.0 | 1668 | 1.3690 |
| 0.5698 | 557.0 | 1671 | 1.3163 |
| 0.5698 | 558.0 | 1674 | 1.2510 |
| 0.5698 | 559.0 | 1677 | 1.2714 |
| 0.5698 | 560.0 | 1680 | 1.3403 |
| 0.5698 | 561.0 | 1683 | 1.4387 |
| 0.5698 | 562.0 | 1686 | 1.4697 |
| 0.5698 | 563.0 | 1689 | 1.4641 |
| 0.5698 | 564.0 | 1692 | 1.4123 |
| 0.5698 | 565.0 | 1695 | 1.3808 |
| 0.5698 | 566.0 | 1698 | 1.3325 |
| 0.5698 | 567.0 | 1701 | 1.3470 |
| 0.5698 | 568.0 | 1704 | 1.3301 |
| 0.5698 | 569.0 | 1707 | 1.3255 |
| 0.5698 | 570.0 | 1710 | 1.3614 |
| 0.5698 | 571.0 | 1713 | 1.4034 |
| 0.5698 | 572.0 | 1716 | 1.4201 |
| 0.5698 | 573.0 | 1719 | 1.4221 |
| 0.5698 | 574.0 | 1722 | 1.4100 |
| 0.5698 | 575.0 | 1725 | 1.3791 |
| 0.5698 | 576.0 | 1728 | 1.3478 |
| 0.5698 | 577.0 | 1731 | 1.3398 |
| 0.5698 | 578.0 | 1734 | 1.3408 |
| 0.5698 | 579.0 | 1737 | 1.3577 |
| 0.5698 | 580.0 | 1740 | 1.3780 |
| 0.5698 | 581.0 | 1743 | 1.3871 |
| 0.5698 | 582.0 | 1746 | 1.3754 |
| 0.5698 | 583.0 | 1749 | 1.3487 |
| 0.5698 | 584.0 | 1752 | 1.3299 |
| 0.5698 | 585.0 | 1755 | 1.3215 |
| 0.5698 | 586.0 | 1758 | 1.3004 |
| 0.5698 | 587.0 | 1761 | 1.2819 |
| 0.5698 | 588.0 | 1764 | 1.2804 |
| 0.5698 | 589.0 | 1767 | 1.2724 |
| 0.5698 | 590.0 | 1770 | 1.2975 |
| 0.5698 | 591.0 | 1773 | 1.3615 |
| 0.5698 | 592.0 | 1776 | 1.4006 |
| 0.5698 | 593.0 | 1779 | 1.4037 |
| 0.5698 | 594.0 | 1782 | 1.3882 |
| 0.5698 | 595.0 | 1785 | 1.3919 |
| 0.5698 | 596.0 | 1788 | 1.3759 |
| 0.5698 | 597.0 | 1791 | 1.3215 |
| 0.5698 | 598.0 | 1794 | 1.3130 |
| 0.5698 | 599.0 | 1797 | 1.3547 |
| 0.5698 | 600.0 | 1800 | 1.3832 |
| 0.5698 | 601.0 | 1803 | 1.3755 |
| 0.5698 | 602.0 | 1806 | 1.3555 |
| 0.5698 | 603.0 | 1809 | 1.3085 |
| 0.5698 | 604.0 | 1812 | 1.3235 |
| 0.5698 | 605.0 | 1815 | 1.3616 |
| 0.5698 | 606.0 | 1818 | 1.4128 |
| 0.5698 | 607.0 | 1821 | 1.4333 |
| 0.5698 | 608.0 | 1824 | 1.4124 |
| 0.5698 | 609.0 | 1827 | 1.3622 |
| 0.5698 | 610.0 | 1830 | 1.2583 |
| 0.5698 | 611.0 | 1833 | 1.2334 |
| 0.5698 | 612.0 | 1836 | 1.2316 |
| 0.5698 | 613.0 | 1839 | 1.2430 |
| 0.5698 | 614.0 | 1842 | 1.2659 |
| 0.5698 | 615.0 | 1845 | 1.2801 |
| 0.5698 | 616.0 | 1848 | 1.3092 |
| 0.5698 | 617.0 | 1851 | 1.3340 |
| 0.5698 | 618.0 | 1854 | 1.3543 |
| 0.5698 | 619.0 | 1857 | 1.3771 |
| 0.5698 | 620.0 | 1860 | 1.3764 |
| 0.5698 | 621.0 | 1863 | 1.3577 |
| 0.5698 | 622.0 | 1866 | 1.3255 |
| 0.5698 | 623.0 | 1869 | 1.2972 |
| 0.5698 | 624.0 | 1872 | 1.2877 |
| 0.5698 | 625.0 | 1875 | 1.3092 |
| 0.5698 | 626.0 | 1878 | 1.3348 |
| 0.5698 | 627.0 | 1881 | 1.3486 |
| 0.5698 | 628.0 | 1884 | 1.3543 |
| 0.5698 | 629.0 | 1887 | 1.3504 |
| 0.5698 | 630.0 | 1890 | 1.3544 |
| 0.5698 | 631.0 | 1893 | 1.3419 |
| 0.5698 | 632.0 | 1896 | 1.3093 |
| 0.5698 | 633.0 | 1899 | 1.2775 |
| 0.5698 | 634.0 | 1902 | 1.2783 |
| 0.5698 | 635.0 | 1905 | 1.2753 |
| 0.5698 | 636.0 | 1908 | 1.2506 |
| 0.5698 | 637.0 | 1911 | 1.2332 |
| 0.5698 | 638.0 | 1914 | 1.2763 |
| 0.5698 | 639.0 | 1917 | 1.3084 |
| 0.5698 | 640.0 | 1920 | 1.3237 |
| 0.5698 | 641.0 | 1923 | 1.3340 |
| 0.5698 | 642.0 | 1926 | 1.3339 |
| 0.5698 | 643.0 | 1929 | 1.3103 |
| 0.5698 | 644.0 | 1932 | 1.2959 |
| 0.5698 | 645.0 | 1935 | 1.2915 |
| 0.5698 | 646.0 | 1938 | 1.3321 |
| 0.5698 | 647.0 | 1941 | 1.3656 |
| 0.5698 | 648.0 | 1944 | 1.3728 |
| 0.5698 | 649.0 | 1947 | 1.3629 |
| 0.5698 | 650.0 | 1950 | 1.3502 |
| 0.5698 | 651.0 | 1953 | 1.3297 |
| 0.5698 | 652.0 | 1956 | 1.3057 |
| 0.5698 | 653.0 | 1959 | 1.3008 |
| 0.5698 | 654.0 | 1962 | 1.2932 |
| 0.5698 | 655.0 | 1965 | 1.2945 |
| 0.5698 | 656.0 | 1968 | 1.2929 |
| 0.5698 | 657.0 | 1971 | 1.3073 |
| 0.5698 | 658.0 | 1974 | 1.3311 |
| 0.5698 | 659.0 | 1977 | 1.3472 |
| 0.5698 | 660.0 | 1980 | 1.3409 |
| 0.5698 | 661.0 | 1983 | 1.3315 |
| 0.5698 | 662.0 | 1986 | 1.3154 |
| 0.5698 | 663.0 | 1989 | 1.3030 |
| 0.5698 | 664.0 | 1992 | 1.3006 |
| 0.5698 | 665.0 | 1995 | 1.2968 |
| 0.5698 | 666.0 | 1998 | 1.3045 |
| 0.5609 | 667.0 | 2001 | 1.3166 |
| 0.5609 | 668.0 | 2004 | 1.3430 |
| 0.5609 | 669.0 | 2007 | 1.3718 |
| 0.5609 | 670.0 | 2010 | 1.3945 |
| 0.5609 | 671.0 | 2013 | 1.3919 |
| 0.5609 | 672.0 | 2016 | 1.3895 |
| 0.5609 | 673.0 | 2019 | 1.3659 |
| 0.5609 | 674.0 | 2022 | 1.3276 |
| 0.5609 | 675.0 | 2025 | 1.3060 |
| 0.5609 | 676.0 | 2028 | 1.2941 |
| 0.5609 | 677.0 | 2031 | 1.2893 |
| 0.5609 | 678.0 | 2034 | 1.2937 |
| 0.5609 | 679.0 | 2037 | 1.3019 |
| 0.5609 | 680.0 | 2040 | 1.3119 |
| 0.5609 | 681.0 | 2043 | 1.3222 |
| 0.5609 | 682.0 | 2046 | 1.3238 |
| 0.5609 | 683.0 | 2049 | 1.3280 |
| 0.5609 | 684.0 | 2052 | 1.3324 |
| 0.5609 | 685.0 | 2055 | 1.3401 |
| 0.5609 | 686.0 | 2058 | 1.3452 |
| 0.5609 | 687.0 | 2061 | 1.3752 |
| 0.5609 | 688.0 | 2064 | 1.3987 |
| 0.5609 | 689.0 | 2067 | 1.4118 |
| 0.5609 | 690.0 | 2070 | 1.4179 |
| 0.5609 | 691.0 | 2073 | 1.4122 |
| 0.5609 | 692.0 | 2076 | 1.3909 |
| 0.5609 | 693.0 | 2079 | 1.3439 |
| 0.5609 | 694.0 | 2082 | 1.3072 |
| 0.5609 | 695.0 | 2085 | 1.2981 |
| 0.5609 | 696.0 | 2088 | 1.3195 |
| 0.5609 | 697.0 | 2091 | 1.3502 |
| 0.5609 | 698.0 | 2094 | 1.3783 |
| 0.5609 | 699.0 | 2097 | 1.3925 |
| 0.5609 | 700.0 | 2100 | 1.4000 |
| 0.5609 | 701.0 | 2103 | 1.3797 |
| 0.5609 | 702.0 | 2106 | 1.3620 |
| 0.5609 | 703.0 | 2109 | 1.3533 |
| 0.5609 | 704.0 | 2112 | 1.3492 |
| 0.5609 | 705.0 | 2115 | 1.3400 |
| 0.5609 | 706.0 | 2118 | 1.3346 |
| 0.5609 | 707.0 | 2121 | 1.3254 |
| 0.5609 | 708.0 | 2124 | 1.3290 |
| 0.5609 | 709.0 | 2127 | 1.3406 |
| 0.5609 | 710.0 | 2130 | 1.3619 |
| 0.5609 | 711.0 | 2133 | 1.3898 |
| 0.5609 | 712.0 | 2136 | 1.3945 |
| 0.5609 | 713.0 | 2139 | 1.3817 |
| 0.5609 | 714.0 | 2142 | 1.3686 |
| 0.5609 | 715.0 | 2145 | 1.3627 |
| 0.5609 | 716.0 | 2148 | 1.3617 |
| 0.5609 | 717.0 | 2151 | 1.3548 |
| 0.5609 | 718.0 | 2154 | 1.3464 |
| 0.5609 | 719.0 | 2157 | 1.3368 |
| 0.5609 | 720.0 | 2160 | 1.3138 |
| 0.5609 | 721.0 | 2163 | 1.3073 |
| 0.5609 | 722.0 | 2166 | 1.3203 |
| 0.5609 | 723.0 | 2169 | 1.3342 |
| 0.5609 | 724.0 | 2172 | 1.3562 |
| 0.5609 | 725.0 | 2175 | 1.3725 |
| 0.5609 | 726.0 | 2178 | 1.3748 |
| 0.5609 | 727.0 | 2181 | 1.3711 |
| 0.5609 | 728.0 | 2184 | 1.3717 |
| 0.5609 | 729.0 | 2187 | 1.3627 |
| 0.5609 | 730.0 | 2190 | 1.3515 |
| 0.5609 | 731.0 | 2193 | 1.3373 |
| 0.5609 | 732.0 | 2196 | 1.3160 |
| 0.5609 | 733.0 | 2199 | 1.3125 |
| 0.5609 | 734.0 | 2202 | 1.3301 |
| 0.5609 | 735.0 | 2205 | 1.3197 |
| 0.5609 | 736.0 | 2208 | 1.3125 |
| 0.5609 | 737.0 | 2211 | 1.3072 |
| 0.5609 | 738.0 | 2214 | 1.2798 |
| 0.5609 | 739.0 | 2217 | 1.2672 |
| 0.5609 | 740.0 | 2220 | 1.2533 |
| 0.5609 | 741.0 | 2223 | 1.2383 |
| 0.5609 | 742.0 | 2226 | 1.2450 |
| 0.5609 | 743.0 | 2229 | 1.2557 |
| 0.5609 | 744.0 | 2232 | 1.2751 |
| 0.5609 | 745.0 | 2235 | 1.3235 |
| 0.5609 | 746.0 | 2238 | 1.3708 |
| 0.5609 | 747.0 | 2241 | 1.3867 |
| 0.5609 | 748.0 | 2244 | 1.3686 |
| 0.5609 | 749.0 | 2247 | 1.3309 |
| 0.5609 | 750.0 | 2250 | 1.2811 |
| 0.5609 | 751.0 | 2253 | 1.2294 |
| 0.5609 | 752.0 | 2256 | 1.1340 |
| 0.5609 | 753.0 | 2259 | 1.1346 |
| 0.5609 | 754.0 | 2262 | 1.2078 |
| 0.5609 | 755.0 | 2265 | 1.2462 |
| 0.5609 | 756.0 | 2268 | 1.2557 |
| 0.5609 | 757.0 | 2271 | 1.2358 |
| 0.5609 | 758.0 | 2274 | 1.2225 |
| 0.5609 | 759.0 | 2277 | 1.2298 |
| 0.5609 | 760.0 | 2280 | 1.2561 |
| 0.5609 | 761.0 | 2283 | 1.2861 |
| 0.5609 | 762.0 | 2286 | 1.3017 |
| 0.5609 | 763.0 | 2289 | 1.3228 |
| 0.5609 | 764.0 | 2292 | 1.3235 |
| 0.5609 | 765.0 | 2295 | 1.3232 |
| 0.5609 | 766.0 | 2298 | 1.3236 |
| 0.5609 | 767.0 | 2301 | 1.3289 |
| 0.5609 | 768.0 | 2304 | 1.3324 |
| 0.5609 | 769.0 | 2307 | 1.3325 |
| 0.5609 | 770.0 | 2310 | 1.3282 |
| 0.5609 | 771.0 | 2313 | 1.3176 |
| 0.5609 | 772.0 | 2316 | 1.2927 |
| 0.5609 | 773.0 | 2319 | 1.2773 |
| 0.5609 | 774.0 | 2322 | 1.2617 |
| 0.5609 | 775.0 | 2325 | 1.2578 |
| 0.5609 | 776.0 | 2328 | 1.2454 |
| 0.5609 | 777.0 | 2331 | 1.2212 |
| 0.5609 | 778.0 | 2334 | 1.2459 |
| 0.5609 | 779.0 | 2337 | 1.3040 |
| 0.5609 | 780.0 | 2340 | 1.3453 |
| 0.5609 | 781.0 | 2343 | 1.3773 |
| 0.5609 | 782.0 | 2346 | 1.3942 |
| 0.5609 | 783.0 | 2349 | 1.3854 |
| 0.5609 | 784.0 | 2352 | 1.3637 |
| 0.5609 | 785.0 | 2355 | 1.3213 |
| 0.5609 | 786.0 | 2358 | 1.2795 |
| 0.5609 | 787.0 | 2361 | 1.2844 |
| 0.5609 | 788.0 | 2364 | 1.3058 |
| 0.5609 | 789.0 | 2367 | 1.3198 |
| 0.5609 | 790.0 | 2370 | 1.3251 |
| 0.5609 | 791.0 | 2373 | 1.3193 |
| 0.5609 | 792.0 | 2376 | 1.3021 |
| 0.5609 | 793.0 | 2379 | 1.3105 |
| 0.5609 | 794.0 | 2382 | 1.3310 |
| 0.5609 | 795.0 | 2385 | 1.3574 |
| 0.5609 | 796.0 | 2388 | 1.3642 |
| 0.5609 | 797.0 | 2391 | 1.3580 |
| 0.5609 | 798.0 | 2394 | 1.3255 |
| 0.5609 | 799.0 | 2397 | 1.2785 |
| 0.5609 | 800.0 | 2400 | 1.2199 |
| 0.5609 | 801.0 | 2403 | 1.1221 |
| 0.5609 | 802.0 | 2406 | 1.1233 |
| 0.5609 | 803.0 | 2409 | 1.1873 |
| 0.5609 | 804.0 | 2412 | 1.3435 |
| 0.5609 | 805.0 | 2415 | 1.3522 |
| 0.5609 | 806.0 | 2418 | 1.3800 |
| 0.5609 | 807.0 | 2421 | 1.3976 |
| 0.5609 | 808.0 | 2424 | 1.3899 |
| 0.5609 | 809.0 | 2427 | 1.3480 |
| 0.5609 | 810.0 | 2430 | 1.1934 |
| 0.5609 | 811.0 | 2433 | 1.1259 |
| 0.5609 | 812.0 | 2436 | 1.1836 |
| 0.5609 | 813.0 | 2439 | 1.2207 |
| 0.5609 | 814.0 | 2442 | 1.3393 |
| 0.5609 | 815.0 | 2445 | 1.4465 |
| 0.5609 | 816.0 | 2448 | 1.4166 |
| 0.5609 | 817.0 | 2451 | 1.3814 |
| 0.5609 | 818.0 | 2454 | 1.3636 |
| 0.5609 | 819.0 | 2457 | 1.3334 |
| 0.5609 | 820.0 | 2460 | 1.2854 |
| 0.5609 | 821.0 | 2463 | 1.2674 |
| 0.5609 | 822.0 | 2466 | 1.2533 |
| 0.5609 | 823.0 | 2469 | 1.2967 |
| 0.5609 | 824.0 | 2472 | 1.3504 |
| 0.5609 | 825.0 | 2475 | 1.3052 |
| 0.5609 | 826.0 | 2478 | 1.2894 |
| 0.5609 | 827.0 | 2481 | 1.3342 |
| 0.5609 | 828.0 | 2484 | 1.4139 |
| 0.5609 | 829.0 | 2487 | 1.4048 |
| 0.5609 | 830.0 | 2490 | 1.3678 |
| 0.5609 | 831.0 | 2493 | 1.3604 |
| 0.5609 | 832.0 | 2496 | 1.3533 |
| 0.5609 | 833.0 | 2499 | 1.3609 |
| 0.5608 | 834.0 | 2502 | 1.3909 |
| 0.5608 | 835.0 | 2505 | 1.4105 |
| 0.5608 | 836.0 | 2508 | 1.4294 |
| 0.5608 | 837.0 | 2511 | 1.4313 |
| 0.5608 | 838.0 | 2514 | 1.4112 |
| 0.5608 | 839.0 | 2517 | 1.3844 |
| 0.5608 | 840.0 | 2520 | 1.3769 |
| 0.5608 | 841.0 | 2523 | 1.3679 |
| 0.5608 | 842.0 | 2526 | 1.3449 |
| 0.5608 | 843.0 | 2529 | 1.3389 |
| 0.5608 | 844.0 | 2532 | 1.3366 |
| 0.5608 | 845.0 | 2535 | 1.3453 |
| 0.5608 | 846.0 | 2538 | 1.3726 |
| 0.5608 | 847.0 | 2541 | 1.3670 |
| 0.5608 | 848.0 | 2544 | 1.3503 |
| 0.5608 | 849.0 | 2547 | 1.3262 |
| 0.5608 | 850.0 | 2550 | 1.3017 |
| 0.5608 | 851.0 | 2553 | 1.2902 |
| 0.5608 | 852.0 | 2556 | 1.2662 |
| 0.5608 | 853.0 | 2559 | 1.2408 |
| 0.5608 | 854.0 | 2562 | 1.2208 |
| 0.5608 | 855.0 | 2565 | 1.2003 |
| 0.5608 | 856.0 | 2568 | 1.2038 |
| 0.5608 | 857.0 | 2571 | 1.2344 |
| 0.5608 | 858.0 | 2574 | 1.2968 |
| 0.5608 | 859.0 | 2577 | 1.3401 |
| 0.5608 | 860.0 | 2580 | 1.3674 |
| 0.5608 | 861.0 | 2583 | 1.3837 |
| 0.5608 | 862.0 | 2586 | 1.3753 |
| 0.5608 | 863.0 | 2589 | 1.3121 |
| 0.5608 | 864.0 | 2592 | 1.2480 |
| 0.5608 | 865.0 | 2595 | 1.2293 |
| 0.5608 | 866.0 | 2598 | 1.2000 |
| 0.5608 | 867.0 | 2601 | 1.2027 |
| 0.5608 | 868.0 | 2604 | 1.2281 |
| 0.5608 | 869.0 | 2607 | 1.2710 |
| 0.5608 | 870.0 | 2610 | 1.3535 |
| 0.5608 | 871.0 | 2613 | 1.3937 |
| 0.5608 | 872.0 | 2616 | 1.4003 |
| 0.5608 | 873.0 | 2619 | 1.3758 |
| 0.5608 | 874.0 | 2622 | 1.3253 |
| 0.5608 | 875.0 | 2625 | 1.2449 |
| 0.5608 | 876.0 | 2628 | 1.1745 |
| 0.5608 | 877.0 | 2631 | 1.1366 |
| 0.5608 | 878.0 | 2634 | 1.1655 |
| 0.5608 | 879.0 | 2637 | 1.2965 |
| 0.5608 | 880.0 | 2640 | 1.3166 |
| 0.5608 | 881.0 | 2643 | 1.3225 |
| 0.5608 | 882.0 | 2646 | 1.3141 |
| 0.5608 | 883.0 | 2649 | 1.2992 |
| 0.5608 | 884.0 | 2652 | 1.2834 |
| 0.5608 | 885.0 | 2655 | 1.2698 |
| 0.5608 | 886.0 | 2658 | 1.2829 |
| 0.5608 | 887.0 | 2661 | 1.3100 |
| 0.5608 | 888.0 | 2664 | 1.3314 |
| 0.5608 | 889.0 | 2667 | 1.3393 |
| 0.5608 | 890.0 | 2670 | 1.3354 |
| 0.5608 | 891.0 | 2673 | 1.3278 |
| 0.5608 | 892.0 | 2676 | 1.3333 |
| 0.5608 | 893.0 | 2679 | 1.3443 |
| 0.5608 | 894.0 | 2682 | 1.3343 |
| 0.5608 | 895.0 | 2685 | 1.3148 |
| 0.5608 | 896.0 | 2688 | 1.2858 |
| 0.5608 | 897.0 | 2691 | 1.2698 |
| 0.5608 | 898.0 | 2694 | 1.2777 |
| 0.5608 | 899.0 | 2697 | 1.2901 |
| 0.5608 | 900.0 | 2700 | 1.3008 |
| 0.5608 | 901.0 | 2703 | 1.3260 |
| 0.5608 | 902.0 | 2706 | 1.3440 |
| 0.5608 | 903.0 | 2709 | 1.3438 |
| 0.5608 | 904.0 | 2712 | 1.3380 |
| 0.5608 | 905.0 | 2715 | 1.3237 |
| 0.5608 | 906.0 | 2718 | 1.3145 |
| 0.5608 | 907.0 | 2721 | 1.3022 |
| 0.5608 | 908.0 | 2724 | 1.2902 |
| 0.5608 | 909.0 | 2727 | 1.2793 |
| 0.5608 | 910.0 | 2730 | 1.2909 |
| 0.5608 | 911.0 | 2733 | 1.3084 |
| 0.5608 | 912.0 | 2736 | 1.3185 |
| 0.5608 | 913.0 | 2739 | 1.3250 |
| 0.5608 | 914.0 | 2742 | 1.3412 |
| 0.5608 | 915.0 | 2745 | 1.3491 |
| 0.5608 | 916.0 | 2748 | 1.3561 |
| 0.5608 | 917.0 | 2751 | 1.3675 |
| 0.5608 | 918.0 | 2754 | 1.3759 |
| 0.5608 | 919.0 | 2757 | 1.3829 |
| 0.5608 | 920.0 | 2760 | 1.3805 |
| 0.5608 | 921.0 | 2763 | 1.3669 |
| 0.5608 | 922.0 | 2766 | 1.3605 |
| 0.5608 | 923.0 | 2769 | 1.3455 |
| 0.5608 | 924.0 | 2772 | 1.3373 |
| 0.5608 | 925.0 | 2775 | 1.3440 |
| 0.5608 | 926.0 | 2778 | 1.3408 |
| 0.5608 | 927.0 | 2781 | 1.3424 |
| 0.5608 | 928.0 | 2784 | 1.3414 |
| 0.5608 | 929.0 | 2787 | 1.3383 |
| 0.5608 | 930.0 | 2790 | 1.3371 |
| 0.5608 | 931.0 | 2793 | 1.3406 |
| 0.5608 | 932.0 | 2796 | 1.3432 |
| 0.5608 | 933.0 | 2799 | 1.3564 |
| 0.5608 | 934.0 | 2802 | 1.3773 |
| 0.5608 | 935.0 | 2805 | 1.3931 |
| 0.5608 | 936.0 | 2808 | 1.4030 |
| 0.5608 | 937.0 | 2811 | 1.3998 |
| 0.5608 | 938.0 | 2814 | 1.3955 |
| 0.5608 | 939.0 | 2817 | 1.3937 |
| 0.5608 | 940.0 | 2820 | 1.3801 |
| 0.5608 | 941.0 | 2823 | 1.3729 |
| 0.5608 | 942.0 | 2826 | 1.3679 |
| 0.5608 | 943.0 | 2829 | 1.3550 |
| 0.5608 | 944.0 | 2832 | 1.3437 |
| 0.5608 | 945.0 | 2835 | 1.3347 |
| 0.5608 | 946.0 | 2838 | 1.3220 |
| 0.5608 | 947.0 | 2841 | 1.2968 |
| 0.5608 | 948.0 | 2844 | 1.2799 |
| 0.5608 | 949.0 | 2847 | 1.2549 |
| 0.5608 | 950.0 | 2850 | 1.2459 |
| 0.5608 | 951.0 | 2853 | 1.2461 |
| 0.5608 | 952.0 | 2856 | 1.2299 |
| 0.5608 | 953.0 | 2859 | 1.2177 |
| 0.5608 | 954.0 | 2862 | 1.2640 |
| 0.5608 | 955.0 | 2865 | 1.2997 |
| 0.5608 | 956.0 | 2868 | 1.2971 |
| 0.5608 | 957.0 | 2871 | 1.2788 |
| 0.5608 | 958.0 | 2874 | 1.2858 |
| 0.5608 | 959.0 | 2877 | 1.2694 |
| 0.5608 | 960.0 | 2880 | 1.2542 |
| 0.5608 | 961.0 | 2883 | 1.2733 |
| 0.5608 | 962.0 | 2886 | 1.3086 |
| 0.5608 | 963.0 | 2889 | 1.3123 |
| 0.5608 | 964.0 | 2892 | 1.3039 |
| 0.5608 | 965.0 | 2895 | 1.2834 |
| 0.5608 | 966.0 | 2898 | 1.2809 |
| 0.5608 | 967.0 | 2901 | 1.2696 |
| 0.5608 | 968.0 | 2904 | 1.2567 |
| 0.5608 | 969.0 | 2907 | 1.2497 |
| 0.5608 | 970.0 | 2910 | 1.2639 |
| 0.5608 | 971.0 | 2913 | 1.2809 |
| 0.5608 | 972.0 | 2916 | 1.2881 |
| 0.5608 | 973.0 | 2919 | 1.3082 |
| 0.5608 | 974.0 | 2922 | 1.3283 |
| 0.5608 | 975.0 | 2925 | 1.3331 |
| 0.5608 | 976.0 | 2928 | 1.3384 |
| 0.5608 | 977.0 | 2931 | 1.3405 |
| 0.5608 | 978.0 | 2934 | 1.3515 |
| 0.5608 | 979.0 | 2937 | 1.3734 |
| 0.5608 | 980.0 | 2940 | 1.3875 |
| 0.5608 | 981.0 | 2943 | 1.3766 |
| 0.5608 | 982.0 | 2946 | 1.3530 |
| 0.5608 | 983.0 | 2949 | 1.3309 |
| 0.5608 | 984.0 | 2952 | 1.3178 |
| 0.5608 | 985.0 | 2955 | 1.2963 |
| 0.5608 | 986.0 | 2958 | 1.2672 |
| 0.5608 | 987.0 | 2961 | 1.2697 |
| 0.5608 | 988.0 | 2964 | 1.2620 |
| 0.5608 | 989.0 | 2967 | 1.2438 |
| 0.5608 | 990.0 | 2970 | 1.2488 |
| 0.5608 | 991.0 | 2973 | 1.2630 |
| 0.5608 | 992.0 | 2976 | 1.2496 |
| 0.5608 | 993.0 | 2979 | 1.2646 |
| 0.5608 | 994.0 | 2982 | 1.3051 |
| 0.5608 | 995.0 | 2985 | 1.3445 |
| 0.5608 | 996.0 | 2988 | 1.3551 |
| 0.5608 | 997.0 | 2991 | 1.3600 |
| 0.5608 | 998.0 | 2994 | 1.3566 |
| 0.5608 | 999.0 | 2997 | 1.3485 |
| 0.5596 | 1000.0 | 3000 | 1.3403 |
| 0.5596 | 1001.0 | 3003 | 1.3328 |
| 0.5596 | 1002.0 | 3006 | 1.3367 |
| 0.5596 | 1003.0 | 3009 | 1.3306 |
| 0.5596 | 1004.0 | 3012 | 1.3026 |
| 0.5596 | 1005.0 | 3015 | 1.2606 |
| 0.5596 | 1006.0 | 3018 | 1.2459 |
| 0.5596 | 1007.0 | 3021 | 1.2332 |
| 0.5596 | 1008.0 | 3024 | 1.2062 |
| 0.5596 | 1009.0 | 3027 | 1.1985 |
| 0.5596 | 1010.0 | 3030 | 1.1937 |
| 0.5596 | 1011.0 | 3033 | 1.1920 |
| 0.5596 | 1012.0 | 3036 | 1.1953 |
| 0.5596 | 1013.0 | 3039 | 1.1919 |
| 0.5596 | 1014.0 | 3042 | 1.1809 |
| 0.5596 | 1015.0 | 3045 | 1.1649 |
| 0.5596 | 1016.0 | 3048 | 1.1612 |
| 0.5596 | 1017.0 | 3051 | 1.1667 |
| 0.5596 | 1018.0 | 3054 | 1.1732 |
| 0.5596 | 1019.0 | 3057 | 1.1847 |
| 0.5596 | 1020.0 | 3060 | 1.1990 |
| 0.5596 | 1021.0 | 3063 | 1.2160 |
| 0.5596 | 1022.0 | 3066 | 1.2672 |
| 0.5596 | 1023.0 | 3069 | 1.3042 |
| 0.5596 | 1024.0 | 3072 | 1.3417 |
| 0.5596 | 1025.0 | 3075 | 1.3652 |
| 0.5596 | 1026.0 | 3078 | 1.3665 |
| 0.5596 | 1027.0 | 3081 | 1.3571 |
| 0.5596 | 1028.0 | 3084 | 1.3403 |
| 0.5596 | 1029.0 | 3087 | 1.3310 |
| 0.5596 | 1030.0 | 3090 | 1.3274 |
| 0.5596 | 1031.0 | 3093 | 1.3228 |
| 0.5596 | 1032.0 | 3096 | 1.2960 |
| 0.5596 | 1033.0 | 3099 | 1.2831 |
| 0.5596 | 1034.0 | 3102 | 1.2817 |
| 0.5596 | 1035.0 | 3105 | 1.2808 |
| 0.5596 | 1036.0 | 3108 | 1.2747 |
| 0.5596 | 1037.0 | 3111 | 1.2732 |
| 0.5596 | 1038.0 | 3114 | 1.2738 |
| 0.5596 | 1039.0 | 3117 | 1.2797 |
| 0.5596 | 1040.0 | 3120 | 1.2912 |
| 0.5596 | 1041.0 | 3123 | 1.3257 |
| 0.5596 | 1042.0 | 3126 | 1.3495 |
| 0.5596 | 1043.0 | 3129 | 1.3620 |
| 0.5596 | 1044.0 | 3132 | 1.3673 |
| 0.5596 | 1045.0 | 3135 | 1.3723 |
| 0.5596 | 1046.0 | 3138 | 1.3709 |
| 0.5596 | 1047.0 | 3141 | 1.3701 |
| 0.5596 | 1048.0 | 3144 | 1.3690 |
| 0.5596 | 1049.0 | 3147 | 1.3811 |
| 0.5596 | 1050.0 | 3150 | 1.3936 |
| 0.5596 | 1051.0 | 3153 | 1.3898 |
| 0.5596 | 1052.0 | 3156 | 1.3976 |
| 0.5596 | 1053.0 | 3159 | 1.3920 |
| 0.5596 | 1054.0 | 3162 | 1.3665 |
| 0.5596 | 1055.0 | 3165 | 1.3330 |
| 0.5596 | 1056.0 | 3168 | 1.3195 |
| 0.5596 | 1057.0 | 3171 | 1.3350 |
| 0.5596 | 1058.0 | 3174 | 1.3444 |
| 0.5596 | 1059.0 | 3177 | 1.3567 |
| 0.5596 | 1060.0 | 3180 | 1.3821 |
| 0.5596 | 1061.0 | 3183 | 1.3965 |
| 0.5596 | 1062.0 | 3186 | 1.4039 |
| 0.5596 | 1063.0 | 3189 | 1.4126 |
| 0.5596 | 1064.0 | 3192 | 1.4127 |
| 0.5596 | 1065.0 | 3195 | 1.4188 |
| 0.5596 | 1066.0 | 3198 | 1.4220 |
| 0.5596 | 1067.0 | 3201 | 1.4240 |
| 0.5596 | 1068.0 | 3204 | 1.4197 |
| 0.5596 | 1069.0 | 3207 | 1.4138 |
| 0.5596 | 1070.0 | 3210 | 1.4155 |
| 0.5596 | 1071.0 | 3213 | 1.4155 |
| 0.5596 | 1072.0 | 3216 | 1.4227 |
| 0.5596 | 1073.0 | 3219 | 1.4209 |
| 0.5596 | 1074.0 | 3222 | 1.4186 |
| 0.5596 | 1075.0 | 3225 | 1.4118 |
| 0.5596 | 1076.0 | 3228 | 1.3992 |
| 0.5596 | 1077.0 | 3231 | 1.3924 |
| 0.5596 | 1078.0 | 3234 | 1.3884 |
| 0.5596 | 1079.0 | 3237 | 1.3913 |
| 0.5596 | 1080.0 | 3240 | 1.3882 |
| 0.5596 | 1081.0 | 3243 | 1.3765 |
| 0.5596 | 1082.0 | 3246 | 1.3725 |
| 0.5596 | 1083.0 | 3249 | 1.3893 |
| 0.5596 | 1084.0 | 3252 | 1.3933 |
| 0.5596 | 1085.0 | 3255 | 1.4005 |
| 0.5596 | 1086.0 | 3258 | 1.4017 |
| 0.5596 | 1087.0 | 3261 | 1.4086 |
| 0.5596 | 1088.0 | 3264 | 1.4195 |
| 0.5596 | 1089.0 | 3267 | 1.4274 |
| 0.5596 | 1090.0 | 3270 | 1.4258 |
| 0.5596 | 1091.0 | 3273 | 1.4179 |
| 0.5596 | 1092.0 | 3276 | 1.4090 |
| 0.5596 | 1093.0 | 3279 | 1.3901 |
| 0.5596 | 1094.0 | 3282 | 1.3714 |
| 0.5596 | 1095.0 | 3285 | 1.3512 |
| 0.5596 | 1096.0 | 3288 | 1.3355 |
| 0.5596 | 1097.0 | 3291 | 1.3368 |
| 0.5596 | 1098.0 | 3294 | 1.3421 |
| 0.5596 | 1099.0 | 3297 | 1.3195 |
| 0.5596 | 1100.0 | 3300 | 1.2919 |
| 0.5596 | 1101.0 | 3303 | 1.2551 |
| 0.5596 | 1102.0 | 3306 | 1.2370 |
| 0.5596 | 1103.0 | 3309 | 1.2445 |
| 0.5596 | 1104.0 | 3312 | 1.2213 |
| 0.5596 | 1105.0 | 3315 | 1.2361 |
| 0.5596 | 1106.0 | 3318 | 1.3104 |
| 0.5596 | 1107.0 | 3321 | 1.3632 |
| 0.5596 | 1108.0 | 3324 | 1.3822 |
| 0.5596 | 1109.0 | 3327 | 1.3887 |
| 0.5596 | 1110.0 | 3330 | 1.3920 |
| 0.5596 | 1111.0 | 3333 | 1.3876 |
| 0.5596 | 1112.0 | 3336 | 1.3874 |
| 0.5596 | 1113.0 | 3339 | 1.3850 |
| 0.5596 | 1114.0 | 3342 | 1.3685 |
| 0.5596 | 1115.0 | 3345 | 1.3439 |
| 0.5596 | 1116.0 | 3348 | 1.3327 |
| 0.5596 | 1117.0 | 3351 | 1.3158 |
| 0.5596 | 1118.0 | 3354 | 1.3046 |
| 0.5596 | 1119.0 | 3357 | 1.2996 |
| 0.5596 | 1120.0 | 3360 | 1.2958 |
| 0.5596 | 1121.0 | 3363 | 1.2871 |
| 0.5596 | 1122.0 | 3366 | 1.2576 |
| 0.5596 | 1123.0 | 3369 | 1.2534 |
| 0.5596 | 1124.0 | 3372 | 1.2344 |
| 0.5596 | 1125.0 | 3375 | 1.2290 |
| 0.5596 | 1126.0 | 3378 | 1.2363 |
| 0.5596 | 1127.0 | 3381 | 1.2271 |
| 0.5596 | 1128.0 | 3384 | 1.2219 |
| 0.5596 | 1129.0 | 3387 | 1.2365 |
| 0.5596 | 1130.0 | 3390 | 1.2537 |
| 0.5596 | 1131.0 | 3393 | 1.2754 |
| 0.5596 | 1132.0 | 3396 | 1.2962 |
| 0.5596 | 1133.0 | 3399 | 1.3161 |
| 0.5596 | 1134.0 | 3402 | 1.3244 |
| 0.5596 | 1135.0 | 3405 | 1.3309 |
| 0.5596 | 1136.0 | 3408 | 1.3317 |
| 0.5596 | 1137.0 | 3411 | 1.3369 |
| 0.5596 | 1138.0 | 3414 | 1.3336 |
| 0.5596 | 1139.0 | 3417 | 1.3099 |
| 0.5596 | 1140.0 | 3420 | 1.2747 |
| 0.5596 | 1141.0 | 3423 | 1.2515 |
| 0.5596 | 1142.0 | 3426 | 1.2653 |
| 0.5596 | 1143.0 | 3429 | 1.2975 |
| 0.5596 | 1144.0 | 3432 | 1.3184 |
| 0.5596 | 1145.0 | 3435 | 1.3373 |
| 0.5596 | 1146.0 | 3438 | 1.3265 |
| 0.5596 | 1147.0 | 3441 | 1.3195 |
| 0.5596 | 1148.0 | 3444 | 1.3177 |
| 0.5596 | 1149.0 | 3447 | 1.3045 |
| 0.5596 | 1150.0 | 3450 | 1.3045 |
| 0.5596 | 1151.0 | 3453 | 1.3020 |
| 0.5596 | 1152.0 | 3456 | 1.3021 |
| 0.5596 | 1153.0 | 3459 | 1.3238 |
| 0.5596 | 1154.0 | 3462 | 1.3351 |
| 0.5596 | 1155.0 | 3465 | 1.3334 |
| 0.5596 | 1156.0 | 3468 | 1.3274 |
| 0.5596 | 1157.0 | 3471 | 1.3276 |
| 0.5596 | 1158.0 | 3474 | 1.3119 |
| 0.5596 | 1159.0 | 3477 | 1.2913 |
| 0.5596 | 1160.0 | 3480 | 1.2919 |
| 0.5596 | 1161.0 | 3483 | 1.2927 |
| 0.5596 | 1162.0 | 3486 | 1.3079 |
| 0.5596 | 1163.0 | 3489 | 1.3195 |
| 0.5596 | 1164.0 | 3492 | 1.3286 |
| 0.5596 | 1165.0 | 3495 | 1.3375 |
| 0.5596 | 1166.0 | 3498 | 1.3493 |
| 0.5594 | 1167.0 | 3501 | 1.3599 |
| 0.5594 | 1168.0 | 3504 | 1.3644 |
| 0.5594 | 1169.0 | 3507 | 1.3595 |
| 0.5594 | 1170.0 | 3510 | 1.3476 |
| 0.5594 | 1171.0 | 3513 | 1.3464 |
| 0.5594 | 1172.0 | 3516 | 1.3592 |
| 0.5594 | 1173.0 | 3519 | 1.3673 |
| 0.5594 | 1174.0 | 3522 | 1.3682 |
| 0.5594 | 1175.0 | 3525 | 1.3569 |
| 0.5594 | 1176.0 | 3528 | 1.3434 |
| 0.5594 | 1177.0 | 3531 | 1.3439 |
| 0.5594 | 1178.0 | 3534 | 1.3386 |
| 0.5594 | 1179.0 | 3537 | 1.3180 |
| 0.5594 | 1180.0 | 3540 | 1.2994 |
| 0.5594 | 1181.0 | 3543 | 1.2888 |
| 0.5594 | 1182.0 | 3546 | 1.2911 |
| 0.5594 | 1183.0 | 3549 | 1.2966 |
| 0.5594 | 1184.0 | 3552 | 1.2888 |
| 0.5594 | 1185.0 | 3555 | 1.2784 |
| 0.5594 | 1186.0 | 3558 | 1.2811 |
| 0.5594 | 1187.0 | 3561 | 1.2813 |
| 0.5594 | 1188.0 | 3564 | 1.2797 |
| 0.5594 | 1189.0 | 3567 | 1.2683 |
| 0.5594 | 1190.0 | 3570 | 1.2736 |
| 0.5594 | 1191.0 | 3573 | 1.2614 |
| 0.5594 | 1192.0 | 3576 | 1.2485 |
| 0.5594 | 1193.0 | 3579 | 1.2446 |
| 0.5594 | 1194.0 | 3582 | 1.2077 |
| 0.5594 | 1195.0 | 3585 | 1.1880 |
| 0.5594 | 1196.0 | 3588 | 1.1797 |
| 0.5594 | 1197.0 | 3591 | 1.1750 |
| 0.5594 | 1198.0 | 3594 | 1.1964 |
| 0.5594 | 1199.0 | 3597 | 1.2570 |
| 0.5594 | 1200.0 | 3600 | 1.3173 |
| 0.5594 | 1201.0 | 3603 | 1.3393 |
| 0.5594 | 1202.0 | 3606 | 1.3465 |
| 0.5594 | 1203.0 | 3609 | 1.3254 |
| 0.5594 | 1204.0 | 3612 | 1.3003 |
| 0.5594 | 1205.0 | 3615 | 1.2560 |
| 0.5594 | 1206.0 | 3618 | 1.2008 |
| 0.5594 | 1207.0 | 3621 | 1.1804 |
| 0.5594 | 1208.0 | 3624 | 1.1725 |
| 0.5594 | 1209.0 | 3627 | 1.1634 |
| 0.5594 | 1210.0 | 3630 | 1.1744 |
| 0.5594 | 1211.0 | 3633 | 1.1912 |
| 0.5594 | 1212.0 | 3636 | 1.2141 |
| 0.5594 | 1213.0 | 3639 | 1.2444 |
| 0.5594 | 1214.0 | 3642 | 1.2703 |
| 0.5594 | 1215.0 | 3645 | 1.2812 |
| 0.5594 | 1216.0 | 3648 | 1.2849 |
| 0.5594 | 1217.0 | 3651 | 1.2871 |
| 0.5594 | 1218.0 | 3654 | 1.2800 |
| 0.5594 | 1219.0 | 3657 | 1.2755 |
| 0.5594 | 1220.0 | 3660 | 1.2668 |
| 0.5594 | 1221.0 | 3663 | 1.2512 |
| 0.5594 | 1222.0 | 3666 | 1.2390 |
| 0.5594 | 1223.0 | 3669 | 1.2268 |
| 0.5594 | 1224.0 | 3672 | 1.2071 |
| 0.5594 | 1225.0 | 3675 | 1.1804 |
| 0.5594 | 1226.0 | 3678 | 1.1572 |
| 0.5594 | 1227.0 | 3681 | 1.1618 |
| 0.5594 | 1228.0 | 3684 | 1.1741 |
| 0.5594 | 1229.0 | 3687 | 1.1867 |
| 0.5594 | 1230.0 | 3690 | 1.1978 |
| 0.5594 | 1231.0 | 3693 | 1.2180 |
| 0.5594 | 1232.0 | 3696 | 1.2379 |
| 0.5594 | 1233.0 | 3699 | 1.2486 |
| 0.5594 | 1234.0 | 3702 | 1.2526 |
| 0.5594 | 1235.0 | 3705 | 1.2632 |
| 0.5594 | 1236.0 | 3708 | 1.2866 |
| 0.5594 | 1237.0 | 3711 | 1.2903 |
| 0.5594 | 1238.0 | 3714 | 1.2655 |
| 0.5594 | 1239.0 | 3717 | 1.2452 |
| 0.5594 | 1240.0 | 3720 | 1.2348 |
| 0.5594 | 1241.0 | 3723 | 1.1997 |
| 0.5594 | 1242.0 | 3726 | 1.1615 |
| 0.5594 | 1243.0 | 3729 | 1.1294 |
| 0.5594 | 1244.0 | 3732 | 1.1171 |
| 0.5594 | 1245.0 | 3735 | 1.1613 |
| 0.5594 | 1246.0 | 3738 | 1.2428 |
| 0.5594 | 1247.0 | 3741 | 1.2627 |
| 0.5594 | 1248.0 | 3744 | 1.2525 |
| 0.5594 | 1249.0 | 3747 | 1.2029 |
| 0.5594 | 1250.0 | 3750 | 1.1155 |
| 0.5594 | 1251.0 | 3753 | 1.0784 |
| 0.5594 | 1252.0 | 3756 | 1.0683 |
| 0.5594 | 1253.0 | 3759 | 1.0901 |
| 0.5594 | 1254.0 | 3762 | 1.1788 |
| 0.5594 | 1255.0 | 3765 | 1.2079 |
| 0.5594 | 1256.0 | 3768 | 1.2129 |
| 0.5594 | 1257.0 | 3771 | 1.2088 |
| 0.5594 | 1258.0 | 3774 | 1.1948 |
| 0.5594 | 1259.0 | 3777 | 1.1811 |
| 0.5594 | 1260.0 | 3780 | 1.1757 |
| 0.5594 | 1261.0 | 3783 | 1.1764 |
| 0.5594 | 1262.0 | 3786 | 1.1673 |
| 0.5594 | 1263.0 | 3789 | 1.1421 |
| 0.5594 | 1264.0 | 3792 | 1.1351 |
| 0.5594 | 1265.0 | 3795 | 1.1570 |
| 0.5594 | 1266.0 | 3798 | 1.1854 |
| 0.5594 | 1267.0 | 3801 | 1.1974 |
| 0.5594 | 1268.0 | 3804 | 1.2039 |
| 0.5594 | 1269.0 | 3807 | 1.1966 |
| 0.5594 | 1270.0 | 3810 | 1.2079 |
| 0.5594 | 1271.0 | 3813 | 1.2104 |
| 0.5594 | 1272.0 | 3816 | 1.2171 |
| 0.5594 | 1273.0 | 3819 | 1.2335 |
| 0.5594 | 1274.0 | 3822 | 1.2483 |
| 0.5594 | 1275.0 | 3825 | 1.2607 |
| 0.5594 | 1276.0 | 3828 | 1.2586 |
| 0.5594 | 1277.0 | 3831 | 1.2527 |
| 0.5594 | 1278.0 | 3834 | 1.2457 |
| 0.5594 | 1279.0 | 3837 | 1.2451 |
| 0.5594 | 1280.0 | 3840 | 1.2669 |
| 0.5594 | 1281.0 | 3843 | 1.2651 |
| 0.5594 | 1282.0 | 3846 | 1.2585 |
| 0.5594 | 1283.0 | 3849 | 1.2459 |
| 0.5594 | 1284.0 | 3852 | 1.2272 |
| 0.5594 | 1285.0 | 3855 | 1.2195 |
| 0.5594 | 1286.0 | 3858 | 1.2154 |
| 0.5594 | 1287.0 | 3861 | 1.2234 |
| 0.5594 | 1288.0 | 3864 | 1.2386 |
| 0.5594 | 1289.0 | 3867 | 1.2574 |
| 0.5594 | 1290.0 | 3870 | 1.2844 |
| 0.5594 | 1291.0 | 3873 | 1.3160 |
| 0.5594 | 1292.0 | 3876 | 1.3283 |
| 0.5594 | 1293.0 | 3879 | 1.3256 |
| 0.5594 | 1294.0 | 3882 | 1.3101 |
| 0.5594 | 1295.0 | 3885 | 1.2981 |
| 0.5594 | 1296.0 | 3888 | 1.2863 |
| 0.5594 | 1297.0 | 3891 | 1.2822 |
| 0.5594 | 1298.0 | 3894 | 1.2751 |
| 0.5594 | 1299.0 | 3897 | 1.2609 |
| 0.5594 | 1300.0 | 3900 | 1.2539 |
| 0.5594 | 1301.0 | 3903 | 1.2455 |
| 0.5594 | 1302.0 | 3906 | 1.2458 |
| 0.5594 | 1303.0 | 3909 | 1.2390 |
| 0.5594 | 1304.0 | 3912 | 1.2530 |
| 0.5594 | 1305.0 | 3915 | 1.2605 |
| 0.5594 | 1306.0 | 3918 | 1.2669 |
| 0.5594 | 1307.0 | 3921 | 1.2699 |
| 0.5594 | 1308.0 | 3924 | 1.2581 |
| 0.5594 | 1309.0 | 3927 | 1.2481 |
| 0.5594 | 1310.0 | 3930 | 1.2469 |
| 0.5594 | 1311.0 | 3933 | 1.2540 |
| 0.5594 | 1312.0 | 3936 | 1.2708 |
| 0.5594 | 1313.0 | 3939 | 1.2828 |
| 0.5594 | 1314.0 | 3942 | 1.2897 |
| 0.5594 | 1315.0 | 3945 | 1.2939 |
| 0.5594 | 1316.0 | 3948 | 1.2995 |
| 0.5594 | 1317.0 | 3951 | 1.3066 |
| 0.5594 | 1318.0 | 3954 | 1.3168 |
| 0.5594 | 1319.0 | 3957 | 1.3175 |
| 0.5594 | 1320.0 | 3960 | 1.3122 |
| 0.5594 | 1321.0 | 3963 | 1.3059 |
| 0.5594 | 1322.0 | 3966 | 1.2981 |
| 0.5594 | 1323.0 | 3969 | 1.2889 |
| 0.5594 | 1324.0 | 3972 | 1.2831 |
| 0.5594 | 1325.0 | 3975 | 1.2885 |
| 0.5594 | 1326.0 | 3978 | 1.2866 |
| 0.5594 | 1327.0 | 3981 | 1.2813 |
| 0.5594 | 1328.0 | 3984 | 1.2779 |
| 0.5594 | 1329.0 | 3987 | 1.2776 |
| 0.5594 | 1330.0 | 3990 | 1.2799 |
| 0.5594 | 1331.0 | 3993 | 1.2826 |
| 0.5594 | 1332.0 | 3996 | 1.2839 |
| 0.5594 | 1333.0 | 3999 | 1.2864 |
| 0.5596 | 1334.0 | 4002 | 1.2831 |
| 0.5596 | 1335.0 | 4005 | 1.2768 |
| 0.5596 | 1336.0 | 4008 | 1.2694 |
| 0.5596 | 1337.0 | 4011 | 1.2594 |
| 0.5596 | 1338.0 | 4014 | 1.2453 |
| 0.5596 | 1339.0 | 4017 | 1.2447 |
| 0.5596 | 1340.0 | 4020 | 1.2359 |
| 0.5596 | 1341.0 | 4023 | 1.2253 |
| 0.5596 | 1342.0 | 4026 | 1.2114 |
| 0.5596 | 1343.0 | 4029 | 1.2037 |
| 0.5596 | 1344.0 | 4032 | 1.1957 |
| 0.5596 | 1345.0 | 4035 | 1.2045 |
| 0.5596 | 1346.0 | 4038 | 1.2123 |
| 0.5596 | 1347.0 | 4041 | 1.2362 |
| 0.5596 | 1348.0 | 4044 | 1.2613 |
| 0.5596 | 1349.0 | 4047 | 1.2745 |
| 0.5596 | 1350.0 | 4050 | 1.2848 |
| 0.5596 | 1351.0 | 4053 | 1.2939 |
| 0.5596 | 1352.0 | 4056 | 1.2986 |
| 0.5596 | 1353.0 | 4059 | 1.2994 |
| 0.5596 | 1354.0 | 4062 | 1.3032 |
| 0.5596 | 1355.0 | 4065 | 1.3034 |
| 0.5596 | 1356.0 | 4068 | 1.3160 |
| 0.5596 | 1357.0 | 4071 | 1.3207 |
| 0.5596 | 1358.0 | 4074 | 1.3250 |
| 0.5596 | 1359.0 | 4077 | 1.3295 |
| 0.5596 | 1360.0 | 4080 | 1.3291 |
| 0.5596 | 1361.0 | 4083 | 1.3191 |
| 0.5596 | 1362.0 | 4086 | 1.3077 |
| 0.5596 | 1363.0 | 4089 | 1.3023 |
| 0.5596 | 1364.0 | 4092 | 1.2966 |
| 0.5596 | 1365.0 | 4095 | 1.2871 |
| 0.5596 | 1366.0 | 4098 | 1.2758 |
| 0.5596 | 1367.0 | 4101 | 1.2703 |
| 0.5596 | 1368.0 | 4104 | 1.2790 |
| 0.5596 | 1369.0 | 4107 | 1.2936 |
| 0.5596 | 1370.0 | 4110 | 1.3103 |
| 0.5596 | 1371.0 | 4113 | 1.3330 |
| 0.5596 | 1372.0 | 4116 | 1.3600 |
| 0.5596 | 1373.0 | 4119 | 1.3767 |
| 0.5596 | 1374.0 | 4122 | 1.3858 |
| 0.5596 | 1375.0 | 4125 | 1.3881 |
| 0.5596 | 1376.0 | 4128 | 1.4005 |
| 0.5596 | 1377.0 | 4131 | 1.4086 |
| 0.5596 | 1378.0 | 4134 | 1.4082 |
| 0.5596 | 1379.0 | 4137 | 1.4018 |
| 0.5596 | 1380.0 | 4140 | 1.3900 |
| 0.5596 | 1381.0 | 4143 | 1.3746 |
| 0.5596 | 1382.0 | 4146 | 1.3608 |
| 0.5596 | 1383.0 | 4149 | 1.3483 |
| 0.5596 | 1384.0 | 4152 | 1.3343 |
| 0.5596 | 1385.0 | 4155 | 1.3260 |
| 0.5596 | 1386.0 | 4158 | 1.3144 |
| 0.5596 | 1387.0 | 4161 | 1.3131 |
| 0.5596 | 1388.0 | 4164 | 1.3051 |
| 0.5596 | 1389.0 | 4167 | 1.2853 |
| 0.5596 | 1390.0 | 4170 | 1.2701 |
| 0.5596 | 1391.0 | 4173 | 1.2635 |
| 0.5596 | 1392.0 | 4176 | 1.2494 |
| 0.5596 | 1393.0 | 4179 | 1.2337 |
| 0.5596 | 1394.0 | 4182 | 1.2267 |
| 0.5596 | 1395.0 | 4185 | 1.2422 |
| 0.5596 | 1396.0 | 4188 | 1.2575 |
| 0.5596 | 1397.0 | 4191 | 1.2733 |
| 0.5596 | 1398.0 | 4194 | 1.2838 |
| 0.5596 | 1399.0 | 4197 | 1.2898 |
| 0.5596 | 1400.0 | 4200 | 1.2937 |
| 0.5596 | 1401.0 | 4203 | 1.2934 |
| 0.5596 | 1402.0 | 4206 | 1.2967 |
| 0.5596 | 1403.0 | 4209 | 1.2893 |
| 0.5596 | 1404.0 | 4212 | 1.2796 |
| 0.5596 | 1405.0 | 4215 | 1.2877 |
| 0.5596 | 1406.0 | 4218 | 1.3098 |
| 0.5596 | 1407.0 | 4221 | 1.3252 |
| 0.5596 | 1408.0 | 4224 | 1.3205 |
| 0.5596 | 1409.0 | 4227 | 1.3168 |
| 0.5596 | 1410.0 | 4230 | 1.3169 |
| 0.5596 | 1411.0 | 4233 | 1.3142 |
| 0.5596 | 1412.0 | 4236 | 1.2923 |
| 0.5596 | 1413.0 | 4239 | 1.2575 |
| 0.5596 | 1414.0 | 4242 | 1.2282 |
| 0.5596 | 1415.0 | 4245 | 1.2126 |
| 0.5596 | 1416.0 | 4248 | 1.2228 |
| 0.5596 | 1417.0 | 4251 | 1.2357 |
| 0.5596 | 1418.0 | 4254 | 1.2567 |
| 0.5596 | 1419.0 | 4257 | 1.2732 |
| 0.5596 | 1420.0 | 4260 | 1.2618 |
| 0.5596 | 1421.0 | 4263 | 1.2471 |
| 0.5596 | 1422.0 | 4266 | 1.2476 |
| 0.5596 | 1423.0 | 4269 | 1.2638 |
| 0.5596 | 1424.0 | 4272 | 1.3039 |
| 0.5596 | 1425.0 | 4275 | 1.3291 |
| 0.5596 | 1426.0 | 4278 | 1.3451 |
| 0.5596 | 1427.0 | 4281 | 1.3500 |
| 0.5596 | 1428.0 | 4284 | 1.3546 |
| 0.5596 | 1429.0 | 4287 | 1.3582 |
| 0.5596 | 1430.0 | 4290 | 1.3553 |
| 0.5596 | 1431.0 | 4293 | 1.3562 |
| 0.5596 | 1432.0 | 4296 | 1.3554 |
| 0.5596 | 1433.0 | 4299 | 1.3519 |
| 0.5596 | 1434.0 | 4302 | 1.3437 |
| 0.5596 | 1435.0 | 4305 | 1.3434 |
| 0.5596 | 1436.0 | 4308 | 1.3346 |
| 0.5596 | 1437.0 | 4311 | 1.3225 |
| 0.5596 | 1438.0 | 4314 | 1.3157 |
| 0.5596 | 1439.0 | 4317 | 1.3004 |
| 0.5596 | 1440.0 | 4320 | 1.2806 |
| 0.5596 | 1441.0 | 4323 | 1.2519 |
| 0.5596 | 1442.0 | 4326 | 1.2243 |
| 0.5596 | 1443.0 | 4329 | 1.2038 |
| 0.5596 | 1444.0 | 4332 | 1.1953 |
| 0.5596 | 1445.0 | 4335 | 1.1985 |
| 0.5596 | 1446.0 | 4338 | 1.2112 |
| 0.5596 | 1447.0 | 4341 | 1.2292 |
| 0.5596 | 1448.0 | 4344 | 1.2461 |
| 0.5596 | 1449.0 | 4347 | 1.2468 |
| 0.5596 | 1450.0 | 4350 | 1.2530 |
| 0.5596 | 1451.0 | 4353 | 1.2572 |
| 0.5596 | 1452.0 | 4356 | 1.2665 |
| 0.5596 | 1453.0 | 4359 | 1.2700 |
| 0.5596 | 1454.0 | 4362 | 1.2696 |
| 0.5596 | 1455.0 | 4365 | 1.2611 |
| 0.5596 | 1456.0 | 4368 | 1.2537 |
| 0.5596 | 1457.0 | 4371 | 1.2517 |
| 0.5596 | 1458.0 | 4374 | 1.2511 |
| 0.5596 | 1459.0 | 4377 | 1.2543 |
| 0.5596 | 1460.0 | 4380 | 1.2578 |
| 0.5596 | 1461.0 | 4383 | 1.2540 |
| 0.5596 | 1462.0 | 4386 | 1.2508 |
| 0.5596 | 1463.0 | 4389 | 1.2523 |
| 0.5596 | 1464.0 | 4392 | 1.2553 |
| 0.5596 | 1465.0 | 4395 | 1.2546 |
| 0.5596 | 1466.0 | 4398 | 1.2581 |
| 0.5596 | 1467.0 | 4401 | 1.2649 |
| 0.5596 | 1468.0 | 4404 | 1.2735 |
| 0.5596 | 1469.0 | 4407 | 1.2883 |
| 0.5596 | 1470.0 | 4410 | 1.3074 |
| 0.5596 | 1471.0 | 4413 | 1.3192 |
| 0.5596 | 1472.0 | 4416 | 1.3282 |
| 0.5596 | 1473.0 | 4419 | 1.3325 |
| 0.5596 | 1474.0 | 4422 | 1.3314 |
| 0.5596 | 1475.0 | 4425 | 1.3250 |
| 0.5596 | 1476.0 | 4428 | 1.3163 |
| 0.5596 | 1477.0 | 4431 | 1.3089 |
| 0.5596 | 1478.0 | 4434 | 1.3000 |
| 0.5596 | 1479.0 | 4437 | 1.3028 |
| 0.5596 | 1480.0 | 4440 | 1.3035 |
| 0.5596 | 1481.0 | 4443 | 1.3072 |
| 0.5596 | 1482.0 | 4446 | 1.3023 |
| 0.5596 | 1483.0 | 4449 | 1.3073 |
| 0.5596 | 1484.0 | 4452 | 1.3085 |
| 0.5596 | 1485.0 | 4455 | 1.3051 |
| 0.5596 | 1486.0 | 4458 | 1.3017 |
| 0.5596 | 1487.0 | 4461 | 1.2962 |
| 0.5596 | 1488.0 | 4464 | 1.2828 |
| 0.5596 | 1489.0 | 4467 | 1.2675 |
| 0.5596 | 1490.0 | 4470 | 1.2643 |
| 0.5596 | 1491.0 | 4473 | 1.2747 |
| 0.5596 | 1492.0 | 4476 | 1.2961 |
| 0.5596 | 1493.0 | 4479 | 1.3016 |
| 0.5596 | 1494.0 | 4482 | 1.2982 |
| 0.5596 | 1495.0 | 4485 | 1.2902 |
| 0.5596 | 1496.0 | 4488 | 1.2810 |
| 0.5596 | 1497.0 | 4491 | 1.2799 |
| 0.5596 | 1498.0 | 4494 | 1.2838 |
| 0.5596 | 1499.0 | 4497 | 1.2849 |
| 0.5585 | 1500.0 | 4500 | 1.2817 |
| 0.5585 | 1501.0 | 4503 | 1.2623 |
| 0.5585 | 1502.0 | 4506 | 1.2476 |
| 0.5585 | 1503.0 | 4509 | 1.2396 |
| 0.5585 | 1504.0 | 4512 | 1.2270 |
| 0.5585 | 1505.0 | 4515 | 1.2198 |
| 0.5585 | 1506.0 | 4518 | 1.2175 |
| 0.5585 | 1507.0 | 4521 | 1.2237 |
| 0.5585 | 1508.0 | 4524 | 1.2332 |
| 0.5585 | 1509.0 | 4527 | 1.2437 |
| 0.5585 | 1510.0 | 4530 | 1.2509 |
| 0.5585 | 1511.0 | 4533 | 1.2516 |
| 0.5585 | 1512.0 | 4536 | 1.2541 |
| 0.5585 | 1513.0 | 4539 | 1.2481 |
| 0.5585 | 1514.0 | 4542 | 1.2460 |
| 0.5585 | 1515.0 | 4545 | 1.2456 |
| 0.5585 | 1516.0 | 4548 | 1.2450 |
| 0.5585 | 1517.0 | 4551 | 1.2441 |
| 0.5585 | 1518.0 | 4554 | 1.2437 |
| 0.5585 | 1519.0 | 4557 | 1.2446 |
| 0.5585 | 1520.0 | 4560 | 1.2490 |
| 0.5585 | 1521.0 | 4563 | 1.2540 |
| 0.5585 | 1522.0 | 4566 | 1.2620 |
| 0.5585 | 1523.0 | 4569 | 1.2615 |
| 0.5585 | 1524.0 | 4572 | 1.2570 |
| 0.5585 | 1525.0 | 4575 | 1.2569 |
| 0.5585 | 1526.0 | 4578 | 1.2570 |
| 0.5585 | 1527.0 | 4581 | 1.2681 |
| 0.5585 | 1528.0 | 4584 | 1.2824 |
| 0.5585 | 1529.0 | 4587 | 1.2947 |
| 0.5585 | 1530.0 | 4590 | 1.2917 |
| 0.5585 | 1531.0 | 4593 | 1.2866 |
| 0.5585 | 1532.0 | 4596 | 1.2758 |
| 0.5585 | 1533.0 | 4599 | 1.2622 |
| 0.5585 | 1534.0 | 4602 | 1.2540 |
| 0.5585 | 1535.0 | 4605 | 1.2411 |
| 0.5585 | 1536.0 | 4608 | 1.2433 |
| 0.5585 | 1537.0 | 4611 | 1.2553 |
| 0.5585 | 1538.0 | 4614 | 1.2590 |
| 0.5585 | 1539.0 | 4617 | 1.2535 |
| 0.5585 | 1540.0 | 4620 | 1.2439 |
| 0.5585 | 1541.0 | 4623 | 1.2461 |
| 0.5585 | 1542.0 | 4626 | 1.2506 |
| 0.5585 | 1543.0 | 4629 | 1.2483 |
| 0.5585 | 1544.0 | 4632 | 1.2488 |
| 0.5585 | 1545.0 | 4635 | 1.2463 |
| 0.5585 | 1546.0 | 4638 | 1.2497 |
| 0.5585 | 1547.0 | 4641 | 1.2608 |
| 0.5585 | 1548.0 | 4644 | 1.2711 |
| 0.5585 | 1549.0 | 4647 | 1.2785 |
| 0.5585 | 1550.0 | 4650 | 1.2751 |
| 0.5585 | 1551.0 | 4653 | 1.2641 |
| 0.5585 | 1552.0 | 4656 | 1.2510 |
| 0.5585 | 1553.0 | 4659 | 1.2358 |
| 0.5585 | 1554.0 | 4662 | 1.2287 |
| 0.5585 | 1555.0 | 4665 | 1.2247 |
| 0.5585 | 1556.0 | 4668 | 1.2228 |
| 0.5585 | 1557.0 | 4671 | 1.2226 |
| 0.5585 | 1558.0 | 4674 | 1.2310 |
| 0.5585 | 1559.0 | 4677 | 1.2332 |
| 0.5585 | 1560.0 | 4680 | 1.2375 |
| 0.5585 | 1561.0 | 4683 | 1.2369 |
| 0.5585 | 1562.0 | 4686 | 1.2275 |
| 0.5585 | 1563.0 | 4689 | 1.2133 |
| 0.5585 | 1564.0 | 4692 | 1.1939 |
| 0.5585 | 1565.0 | 4695 | 1.1805 |
| 0.5585 | 1566.0 | 4698 | 1.1668 |
| 0.5585 | 1567.0 | 4701 | 1.1570 |
| 0.5585 | 1568.0 | 4704 | 1.1510 |
| 0.5585 | 1569.0 | 4707 | 1.1499 |
| 0.5585 | 1570.0 | 4710 | 1.1548 |
| 0.5585 | 1571.0 | 4713 | 1.1644 |
| 0.5585 | 1572.0 | 4716 | 1.1659 |
| 0.5585 | 1573.0 | 4719 | 1.1751 |
| 0.5585 | 1574.0 | 4722 | 1.1975 |
| 0.5585 | 1575.0 | 4725 | 1.2115 |
| 0.5585 | 1576.0 | 4728 | 1.2144 |
| 0.5585 | 1577.0 | 4731 | 1.2082 |
| 0.5585 | 1578.0 | 4734 | 1.1975 |
| 0.5585 | 1579.0 | 4737 | 1.1939 |
| 0.5585 | 1580.0 | 4740 | 1.1906 |
| 0.5585 | 1581.0 | 4743 | 1.1783 |
| 0.5585 | 1582.0 | 4746 | 1.1757 |
| 0.5585 | 1583.0 | 4749 | 1.1792 |
| 0.5585 | 1584.0 | 4752 | 1.1950 |
| 0.5585 | 1585.0 | 4755 | 1.2039 |
| 0.5585 | 1586.0 | 4758 | 1.2107 |
| 0.5585 | 1587.0 | 4761 | 1.2178 |
| 0.5585 | 1588.0 | 4764 | 1.2261 |
| 0.5585 | 1589.0 | 4767 | 1.2340 |
| 0.5585 | 1590.0 | 4770 | 1.2420 |
| 0.5585 | 1591.0 | 4773 | 1.2525 |
| 0.5585 | 1592.0 | 4776 | 1.2740 |
| 0.5585 | 1593.0 | 4779 | 1.2903 |
| 0.5585 | 1594.0 | 4782 | 1.2987 |
| 0.5585 | 1595.0 | 4785 | 1.2991 |
| 0.5585 | 1596.0 | 4788 | 1.2934 |
| 0.5585 | 1597.0 | 4791 | 1.2862 |
| 0.5585 | 1598.0 | 4794 | 1.2868 |
| 0.5585 | 1599.0 | 4797 | 1.2803 |
| 0.5585 | 1600.0 | 4800 | 1.2826 |
| 0.5585 | 1601.0 | 4803 | 1.2763 |
| 0.5585 | 1602.0 | 4806 | 1.2718 |
| 0.5585 | 1603.0 | 4809 | 1.2646 |
| 0.5585 | 1604.0 | 4812 | 1.2668 |
| 0.5585 | 1605.0 | 4815 | 1.2755 |
| 0.5585 | 1606.0 | 4818 | 1.2812 |
| 0.5585 | 1607.0 | 4821 | 1.2905 |
| 0.5585 | 1608.0 | 4824 | 1.2896 |
| 0.5585 | 1609.0 | 4827 | 1.2850 |
| 0.5585 | 1610.0 | 4830 | 1.2822 |
| 0.5585 | 1611.0 | 4833 | 1.2768 |
| 0.5585 | 1612.0 | 4836 | 1.2710 |
| 0.5585 | 1613.0 | 4839 | 1.2660 |
| 0.5585 | 1614.0 | 4842 | 1.2627 |
| 0.5585 | 1615.0 | 4845 | 1.2584 |
| 0.5585 | 1616.0 | 4848 | 1.2485 |
| 0.5585 | 1617.0 | 4851 | 1.2344 |
| 0.5585 | 1618.0 | 4854 | 1.2201 |
| 0.5585 | 1619.0 | 4857 | 1.2069 |
| 0.5585 | 1620.0 | 4860 | 1.1927 |
| 0.5585 | 1621.0 | 4863 | 1.1971 |
| 0.5585 | 1622.0 | 4866 | 1.2042 |
| 0.5585 | 1623.0 | 4869 | 1.2124 |
| 0.5585 | 1624.0 | 4872 | 1.2249 |
| 0.5585 | 1625.0 | 4875 | 1.2413 |
| 0.5585 | 1626.0 | 4878 | 1.2477 |
| 0.5585 | 1627.0 | 4881 | 1.2600 |
| 0.5585 | 1628.0 | 4884 | 1.2676 |
| 0.5585 | 1629.0 | 4887 | 1.2724 |
| 0.5585 | 1630.0 | 4890 | 1.2755 |
| 0.5585 | 1631.0 | 4893 | 1.2782 |
| 0.5585 | 1632.0 | 4896 | 1.2968 |
| 0.5585 | 1633.0 | 4899 | 1.3072 |
| 0.5585 | 1634.0 | 4902 | 1.3119 |
| 0.5585 | 1635.0 | 4905 | 1.3116 |
| 0.5585 | 1636.0 | 4908 | 1.3104 |
| 0.5585 | 1637.0 | 4911 | 1.3071 |
| 0.5585 | 1638.0 | 4914 | 1.3022 |
| 0.5585 | 1639.0 | 4917 | 1.2993 |
| 0.5585 | 1640.0 | 4920 | 1.2960 |
| 0.5585 | 1641.0 | 4923 | 1.2829 |
| 0.5585 | 1642.0 | 4926 | 1.2700 |
| 0.5585 | 1643.0 | 4929 | 1.2669 |
| 0.5585 | 1644.0 | 4932 | 1.2658 |
| 0.5585 | 1645.0 | 4935 | 1.2583 |
| 0.5585 | 1646.0 | 4938 | 1.2580 |
| 0.5585 | 1647.0 | 4941 | 1.2485 |
| 0.5585 | 1648.0 | 4944 | 1.2374 |
| 0.5585 | 1649.0 | 4947 | 1.2234 |
| 0.5585 | 1650.0 | 4950 | 1.2172 |
| 0.5585 | 1651.0 | 4953 | 1.2044 |
| 0.5585 | 1652.0 | 4956 | 1.1955 |
| 0.5585 | 1653.0 | 4959 | 1.1854 |
| 0.5585 | 1654.0 | 4962 | 1.1917 |
| 0.5585 | 1655.0 | 4965 | 1.1924 |
| 0.5585 | 1656.0 | 4968 | 1.1886 |
| 0.5585 | 1657.0 | 4971 | 1.1910 |
| 0.5585 | 1658.0 | 4974 | 1.1913 |
| 0.5585 | 1659.0 | 4977 | 1.1960 |
| 0.5585 | 1660.0 | 4980 | 1.2030 |
| 0.5585 | 1661.0 | 4983 | 1.2132 |
| 0.5585 | 1662.0 | 4986 | 1.2263 |
| 0.5585 | 1663.0 | 4989 | 1.2411 |
| 0.5585 | 1664.0 | 4992 | 1.2572 |
| 0.5585 | 1665.0 | 4995 | 1.2714 |
| 0.5585 | 1666.0 | 4998 | 1.2824 |
| 0.5584 | 1667.0 | 5001 | 1.2862 |
| 0.5584 | 1668.0 | 5004 | 1.2866 |
| 0.5584 | 1669.0 | 5007 | 1.2883 |
| 0.5584 | 1670.0 | 5010 | 1.2868 |
| 0.5584 | 1671.0 | 5013 | 1.2821 |
| 0.5584 | 1672.0 | 5016 | 1.2769 |
| 0.5584 | 1673.0 | 5019 | 1.2708 |
| 0.5584 | 1674.0 | 5022 | 1.2631 |
| 0.5584 | 1675.0 | 5025 | 1.2573 |
| 0.5584 | 1676.0 | 5028 | 1.2570 |
| 0.5584 | 1677.0 | 5031 | 1.2558 |
| 0.5584 | 1678.0 | 5034 | 1.2561 |
| 0.5584 | 1679.0 | 5037 | 1.2551 |
| 0.5584 | 1680.0 | 5040 | 1.2521 |
| 0.5584 | 1681.0 | 5043 | 1.2414 |
| 0.5584 | 1682.0 | 5046 | 1.2274 |
| 0.5584 | 1683.0 | 5049 | 1.2122 |
| 0.5584 | 1684.0 | 5052 | 1.1951 |
| 0.5584 | 1685.0 | 5055 | 1.1893 |
| 0.5584 | 1686.0 | 5058 | 1.1823 |
| 0.5584 | 1687.0 | 5061 | 1.1763 |
| 0.5584 | 1688.0 | 5064 | 1.1725 |
| 0.5584 | 1689.0 | 5067 | 1.1744 |
| 0.5584 | 1690.0 | 5070 | 1.1875 |
| 0.5584 | 1691.0 | 5073 | 1.1946 |
| 0.5584 | 1692.0 | 5076 | 1.2012 |
| 0.5584 | 1693.0 | 5079 | 1.2053 |
| 0.5584 | 1694.0 | 5082 | 1.2083 |
| 0.5584 | 1695.0 | 5085 | 1.2196 |
| 0.5584 | 1696.0 | 5088 | 1.2435 |
| 0.5584 | 1697.0 | 5091 | 1.2554 |
| 0.5584 | 1698.0 | 5094 | 1.2650 |
| 0.5584 | 1699.0 | 5097 | 1.2680 |
| 0.5584 | 1700.0 | 5100 | 1.2642 |
| 0.5584 | 1701.0 | 5103 | 1.2682 |
| 0.5584 | 1702.0 | 5106 | 1.2741 |
| 0.5584 | 1703.0 | 5109 | 1.2736 |
| 0.5584 | 1704.0 | 5112 | 1.2641 |
| 0.5584 | 1705.0 | 5115 | 1.2590 |
| 0.5584 | 1706.0 | 5118 | 1.2602 |
| 0.5584 | 1707.0 | 5121 | 1.2610 |
| 0.5584 | 1708.0 | 5124 | 1.2628 |
| 0.5584 | 1709.0 | 5127 | 1.2661 |
| 0.5584 | 1710.0 | 5130 | 1.2716 |
| 0.5584 | 1711.0 | 5133 | 1.2769 |
| 0.5584 | 1712.0 | 5136 | 1.2820 |
| 0.5584 | 1713.0 | 5139 | 1.2837 |
| 0.5584 | 1714.0 | 5142 | 1.2823 |
| 0.5584 | 1715.0 | 5145 | 1.2832 |
| 0.5584 | 1716.0 | 5148 | 1.2814 |
| 0.5584 | 1717.0 | 5151 | 1.2819 |
| 0.5584 | 1718.0 | 5154 | 1.2820 |
| 0.5584 | 1719.0 | 5157 | 1.2816 |
| 0.5584 | 1720.0 | 5160 | 1.2814 |
| 0.5584 | 1721.0 | 5163 | 1.2813 |
| 0.5584 | 1722.0 | 5166 | 1.2787 |
| 0.5584 | 1723.0 | 5169 | 1.2741 |
| 0.5584 | 1724.0 | 5172 | 1.2706 |
| 0.5584 | 1725.0 | 5175 | 1.2711 |
| 0.5584 | 1726.0 | 5178 | 1.2760 |
| 0.5584 | 1727.0 | 5181 | 1.2812 |
| 0.5584 | 1728.0 | 5184 | 1.2847 |
| 0.5584 | 1729.0 | 5187 | 1.2863 |
| 0.5584 | 1730.0 | 5190 | 1.2881 |
| 0.5584 | 1731.0 | 5193 | 1.2861 |
| 0.5584 | 1732.0 | 5196 | 1.2846 |
| 0.5584 | 1733.0 | 5199 | 1.2825 |
| 0.5584 | 1734.0 | 5202 | 1.2793 |
| 0.5584 | 1735.0 | 5205 | 1.2799 |
| 0.5584 | 1736.0 | 5208 | 1.2794 |
| 0.5584 | 1737.0 | 5211 | 1.2769 |
| 0.5584 | 1738.0 | 5214 | 1.2734 |
| 0.5584 | 1739.0 | 5217 | 1.2713 |
| 0.5584 | 1740.0 | 5220 | 1.2720 |
| 0.5584 | 1741.0 | 5223 | 1.2751 |
| 0.5584 | 1742.0 | 5226 | 1.2776 |
| 0.5584 | 1743.0 | 5229 | 1.2792 |
| 0.5584 | 1744.0 | 5232 | 1.2830 |
| 0.5584 | 1745.0 | 5235 | 1.2845 |
| 0.5584 | 1746.0 | 5238 | 1.2858 |
| 0.5584 | 1747.0 | 5241 | 1.2844 |
| 0.5584 | 1748.0 | 5244 | 1.2823 |
| 0.5584 | 1749.0 | 5247 | 1.2819 |
| 0.5584 | 1750.0 | 5250 | 1.2809 |
| 0.5584 | 1751.0 | 5253 | 1.2805 |
| 0.5584 | 1752.0 | 5256 | 1.2779 |
| 0.5584 | 1753.0 | 5259 | 1.2749 |
| 0.5584 | 1754.0 | 5262 | 1.2768 |
| 0.5584 | 1755.0 | 5265 | 1.2799 |
| 0.5584 | 1756.0 | 5268 | 1.2808 |
| 0.5584 | 1757.0 | 5271 | 1.2788 |
| 0.5584 | 1758.0 | 5274 | 1.2726 |
| 0.5584 | 1759.0 | 5277 | 1.2663 |
| 0.5584 | 1760.0 | 5280 | 1.2611 |
| 0.5584 | 1761.0 | 5283 | 1.2576 |
| 0.5584 | 1762.0 | 5286 | 1.2551 |
| 0.5584 | 1763.0 | 5289 | 1.2647 |
| 0.5584 | 1764.0 | 5292 | 1.2732 |
| 0.5584 | 1765.0 | 5295 | 1.2749 |
| 0.5584 | 1766.0 | 5298 | 1.2798 |
| 0.5584 | 1767.0 | 5301 | 1.2798 |
| 0.5584 | 1768.0 | 5304 | 1.2799 |
| 0.5584 | 1769.0 | 5307 | 1.2805 |
| 0.5584 | 1770.0 | 5310 | 1.2787 |
| 0.5584 | 1771.0 | 5313 | 1.2751 |
| 0.5584 | 1772.0 | 5316 | 1.2724 |
| 0.5584 | 1773.0 | 5319 | 1.2702 |
| 0.5584 | 1774.0 | 5322 | 1.2681 |
| 0.5584 | 1775.0 | 5325 | 1.2680 |
| 0.5584 | 1776.0 | 5328 | 1.2762 |
| 0.5584 | 1777.0 | 5331 | 1.2824 |
| 0.5584 | 1778.0 | 5334 | 1.2878 |
| 0.5584 | 1779.0 | 5337 | 1.2896 |
| 0.5584 | 1780.0 | 5340 | 1.2924 |
| 0.5584 | 1781.0 | 5343 | 1.2972 |
| 0.5584 | 1782.0 | 5346 | 1.2993 |
| 0.5584 | 1783.0 | 5349 | 1.2992 |
| 0.5584 | 1784.0 | 5352 | 1.2982 |
| 0.5584 | 1785.0 | 5355 | 1.2968 |
| 0.5584 | 1786.0 | 5358 | 1.2951 |
| 0.5584 | 1787.0 | 5361 | 1.2933 |
| 0.5584 | 1788.0 | 5364 | 1.2933 |
| 0.5584 | 1789.0 | 5367 | 1.2916 |
| 0.5584 | 1790.0 | 5370 | 1.2882 |
| 0.5584 | 1791.0 | 5373 | 1.2879 |
| 0.5584 | 1792.0 | 5376 | 1.2876 |
| 0.5584 | 1793.0 | 5379 | 1.2848 |
| 0.5584 | 1794.0 | 5382 | 1.2832 |
| 0.5584 | 1795.0 | 5385 | 1.2809 |
| 0.5584 | 1796.0 | 5388 | 1.2803 |
| 0.5584 | 1797.0 | 5391 | 1.2786 |
| 0.5584 | 1798.0 | 5394 | 1.2740 |
| 0.5584 | 1799.0 | 5397 | 1.2691 |
| 0.5584 | 1800.0 | 5400 | 1.2653 |
| 0.5584 | 1801.0 | 5403 | 1.2605 |
| 0.5584 | 1802.0 | 5406 | 1.2591 |
| 0.5584 | 1803.0 | 5409 | 1.2564 |
| 0.5584 | 1804.0 | 5412 | 1.2520 |
| 0.5584 | 1805.0 | 5415 | 1.2478 |
| 0.5584 | 1806.0 | 5418 | 1.2489 |
| 0.5584 | 1807.0 | 5421 | 1.2499 |
| 0.5584 | 1808.0 | 5424 | 1.2530 |
| 0.5584 | 1809.0 | 5427 | 1.2525 |
| 0.5584 | 1810.0 | 5430 | 1.2523 |
| 0.5584 | 1811.0 | 5433 | 1.2526 |
| 0.5584 | 1812.0 | 5436 | 1.2536 |
| 0.5584 | 1813.0 | 5439 | 1.2507 |
| 0.5584 | 1814.0 | 5442 | 1.2481 |
| 0.5584 | 1815.0 | 5445 | 1.2451 |
| 0.5584 | 1816.0 | 5448 | 1.2370 |
| 0.5584 | 1817.0 | 5451 | 1.2326 |
| 0.5584 | 1818.0 | 5454 | 1.2316 |
| 0.5584 | 1819.0 | 5457 | 1.2329 |
| 0.5584 | 1820.0 | 5460 | 1.2352 |
| 0.5584 | 1821.0 | 5463 | 1.2331 |
| 0.5584 | 1822.0 | 5466 | 1.2283 |
| 0.5584 | 1823.0 | 5469 | 1.2228 |
| 0.5584 | 1824.0 | 5472 | 1.2207 |
| 0.5584 | 1825.0 | 5475 | 1.2197 |
| 0.5584 | 1826.0 | 5478 | 1.2164 |
| 0.5584 | 1827.0 | 5481 | 1.2152 |
| 0.5584 | 1828.0 | 5484 | 1.2172 |
| 0.5584 | 1829.0 | 5487 | 1.2181 |
| 0.5584 | 1830.0 | 5490 | 1.2158 |
| 0.5584 | 1831.0 | 5493 | 1.2166 |
| 0.5584 | 1832.0 | 5496 | 1.2138 |
| 0.5584 | 1833.0 | 5499 | 1.2109 |
| 0.5585 | 1834.0 | 5502 | 1.2170 |
| 0.5585 | 1835.0 | 5505 | 1.2216 |
| 0.5585 | 1836.0 | 5508 | 1.2244 |
| 0.5585 | 1837.0 | 5511 | 1.2267 |
| 0.5585 | 1838.0 | 5514 | 1.2321 |
| 0.5585 | 1839.0 | 5517 | 1.2359 |
| 0.5585 | 1840.0 | 5520 | 1.2415 |
| 0.5585 | 1841.0 | 5523 | 1.2507 |
| 0.5585 | 1842.0 | 5526 | 1.2623 |
| 0.5585 | 1843.0 | 5529 | 1.2675 |
| 0.5585 | 1844.0 | 5532 | 1.2701 |
| 0.5585 | 1845.0 | 5535 | 1.2701 |
| 0.5585 | 1846.0 | 5538 | 1.2698 |
| 0.5585 | 1847.0 | 5541 | 1.2720 |
| 0.5585 | 1848.0 | 5544 | 1.2740 |
| 0.5585 | 1849.0 | 5547 | 1.2751 |
| 0.5585 | 1850.0 | 5550 | 1.2771 |
| 0.5585 | 1851.0 | 5553 | 1.2801 |
| 0.5585 | 1852.0 | 5556 | 1.2817 |
| 0.5585 | 1853.0 | 5559 | 1.2834 |
| 0.5585 | 1854.0 | 5562 | 1.2851 |
| 0.5585 | 1855.0 | 5565 | 1.2870 |
| 0.5585 | 1856.0 | 5568 | 1.2885 |
| 0.5585 | 1857.0 | 5571 | 1.2872 |
| 0.5585 | 1858.0 | 5574 | 1.2855 |
| 0.5585 | 1859.0 | 5577 | 1.2835 |
| 0.5585 | 1860.0 | 5580 | 1.2837 |
| 0.5585 | 1861.0 | 5583 | 1.2837 |
| 0.5585 | 1862.0 | 5586 | 1.2828 |
| 0.5585 | 1863.0 | 5589 | 1.2814 |
| 0.5585 | 1864.0 | 5592 | 1.2794 |
| 0.5585 | 1865.0 | 5595 | 1.2781 |
| 0.5585 | 1866.0 | 5598 | 1.2806 |
| 0.5585 | 1867.0 | 5601 | 1.2827 |
| 0.5585 | 1868.0 | 5604 | 1.2827 |
| 0.5585 | 1869.0 | 5607 | 1.2828 |
| 0.5585 | 1870.0 | 5610 | 1.2827 |
| 0.5585 | 1871.0 | 5613 | 1.2810 |
| 0.5585 | 1872.0 | 5616 | 1.2799 |
| 0.5585 | 1873.0 | 5619 | 1.2784 |
| 0.5585 | 1874.0 | 5622 | 1.2760 |
| 0.5585 | 1875.0 | 5625 | 1.2729 |
| 0.5585 | 1876.0 | 5628 | 1.2710 |
| 0.5585 | 1877.0 | 5631 | 1.2718 |
| 0.5585 | 1878.0 | 5634 | 1.2747 |
| 0.5585 | 1879.0 | 5637 | 1.2779 |
| 0.5585 | 1880.0 | 5640 | 1.2808 |
| 0.5585 | 1881.0 | 5643 | 1.2827 |
| 0.5585 | 1882.0 | 5646 | 1.2821 |
| 0.5585 | 1883.0 | 5649 | 1.2822 |
| 0.5585 | 1884.0 | 5652 | 1.2834 |
| 0.5585 | 1885.0 | 5655 | 1.2828 |
| 0.5585 | 1886.0 | 5658 | 1.2808 |
| 0.5585 | 1887.0 | 5661 | 1.2784 |
| 0.5585 | 1888.0 | 5664 | 1.2760 |
| 0.5585 | 1889.0 | 5667 | 1.2731 |
| 0.5585 | 1890.0 | 5670 | 1.2704 |
| 0.5585 | 1891.0 | 5673 | 1.2704 |
| 0.5585 | 1892.0 | 5676 | 1.2701 |
| 0.5585 | 1893.0 | 5679 | 1.2696 |
| 0.5585 | 1894.0 | 5682 | 1.2657 |
| 0.5585 | 1895.0 | 5685 | 1.2590 |
| 0.5585 | 1896.0 | 5688 | 1.2525 |
| 0.5585 | 1897.0 | 5691 | 1.2475 |
| 0.5585 | 1898.0 | 5694 | 1.2441 |
| 0.5585 | 1899.0 | 5697 | 1.2416 |
| 0.5585 | 1900.0 | 5700 | 1.2422 |
| 0.5585 | 1901.0 | 5703 | 1.2433 |
| 0.5585 | 1902.0 | 5706 | 1.2443 |
| 0.5585 | 1903.0 | 5709 | 1.2453 |
| 0.5585 | 1904.0 | 5712 | 1.2513 |
| 0.5585 | 1905.0 | 5715 | 1.2538 |
| 0.5585 | 1906.0 | 5718 | 1.2554 |
| 0.5585 | 1907.0 | 5721 | 1.2567 |
| 0.5585 | 1908.0 | 5724 | 1.2573 |
| 0.5585 | 1909.0 | 5727 | 1.2580 |
| 0.5585 | 1910.0 | 5730 | 1.2579 |
| 0.5585 | 1911.0 | 5733 | 1.2576 |
| 0.5585 | 1912.0 | 5736 | 1.2567 |
| 0.5585 | 1913.0 | 5739 | 1.2552 |
| 0.5585 | 1914.0 | 5742 | 1.2542 |
| 0.5585 | 1915.0 | 5745 | 1.2539 |
| 0.5585 | 1916.0 | 5748 | 1.2530 |
| 0.5585 | 1917.0 | 5751 | 1.2534 |
| 0.5585 | 1918.0 | 5754 | 1.2542 |
| 0.5585 | 1919.0 | 5757 | 1.2537 |
| 0.5585 | 1920.0 | 5760 | 1.2527 |
| 0.5585 | 1921.0 | 5763 | 1.2517 |
| 0.5585 | 1922.0 | 5766 | 1.2510 |
| 0.5585 | 1923.0 | 5769 | 1.2496 |
| 0.5585 | 1924.0 | 5772 | 1.2497 |
| 0.5585 | 1925.0 | 5775 | 1.2491 |
| 0.5585 | 1926.0 | 5778 | 1.2483 |
| 0.5585 | 1927.0 | 5781 | 1.2462 |
| 0.5585 | 1928.0 | 5784 | 1.2437 |
| 0.5585 | 1929.0 | 5787 | 1.2406 |
| 0.5585 | 1930.0 | 5790 | 1.2390 |
| 0.5585 | 1931.0 | 5793 | 1.2390 |
| 0.5585 | 1932.0 | 5796 | 1.2390 |
| 0.5585 | 1933.0 | 5799 | 1.2409 |
| 0.5585 | 1934.0 | 5802 | 1.2442 |
| 0.5585 | 1935.0 | 5805 | 1.2473 |
| 0.5585 | 1936.0 | 5808 | 1.2490 |
| 0.5585 | 1937.0 | 5811 | 1.2516 |
| 0.5585 | 1938.0 | 5814 | 1.2542 |
| 0.5585 | 1939.0 | 5817 | 1.2565 |
| 0.5585 | 1940.0 | 5820 | 1.2594 |
| 0.5585 | 1941.0 | 5823 | 1.2610 |
| 0.5585 | 1942.0 | 5826 | 1.2623 |
| 0.5585 | 1943.0 | 5829 | 1.2636 |
| 0.5585 | 1944.0 | 5832 | 1.2657 |
| 0.5585 | 1945.0 | 5835 | 1.2667 |
| 0.5585 | 1946.0 | 5838 | 1.2676 |
| 0.5585 | 1947.0 | 5841 | 1.2685 |
| 0.5585 | 1948.0 | 5844 | 1.2696 |
| 0.5585 | 1949.0 | 5847 | 1.2707 |
| 0.5585 | 1950.0 | 5850 | 1.2707 |
| 0.5585 | 1951.0 | 5853 | 1.2710 |
| 0.5585 | 1952.0 | 5856 | 1.2707 |
| 0.5585 | 1953.0 | 5859 | 1.2694 |
| 0.5585 | 1954.0 | 5862 | 1.2673 |
| 0.5585 | 1955.0 | 5865 | 1.2650 |
| 0.5585 | 1956.0 | 5868 | 1.2625 |
| 0.5585 | 1957.0 | 5871 | 1.2614 |
| 0.5585 | 1958.0 | 5874 | 1.2605 |
| 0.5585 | 1959.0 | 5877 | 1.2599 |
| 0.5585 | 1960.0 | 5880 | 1.2599 |
| 0.5585 | 1961.0 | 5883 | 1.2598 |
| 0.5585 | 1962.0 | 5886 | 1.2585 |
| 0.5585 | 1963.0 | 5889 | 1.2572 |
| 0.5585 | 1964.0 | 5892 | 1.2555 |
| 0.5585 | 1965.0 | 5895 | 1.2527 |
| 0.5585 | 1966.0 | 5898 | 1.2513 |
| 0.5585 | 1967.0 | 5901 | 1.2504 |
| 0.5585 | 1968.0 | 5904 | 1.2508 |
| 0.5585 | 1969.0 | 5907 | 1.2511 |
| 0.5585 | 1970.0 | 5910 | 1.2517 |
| 0.5585 | 1971.0 | 5913 | 1.2528 |
| 0.5585 | 1972.0 | 5916 | 1.2537 |
| 0.5585 | 1973.0 | 5919 | 1.2543 |
| 0.5585 | 1974.0 | 5922 | 1.2549 |
| 0.5585 | 1975.0 | 5925 | 1.2554 |
| 0.5585 | 1976.0 | 5928 | 1.2554 |
| 0.5585 | 1977.0 | 5931 | 1.2555 |
| 0.5585 | 1978.0 | 5934 | 1.2554 |
| 0.5585 | 1979.0 | 5937 | 1.2553 |
| 0.5585 | 1980.0 | 5940 | 1.2554 |
| 0.5585 | 1981.0 | 5943 | 1.2556 |
| 0.5585 | 1982.0 | 5946 | 1.2563 |
| 0.5585 | 1983.0 | 5949 | 1.2567 |
| 0.5585 | 1984.0 | 5952 | 1.2567 |
| 0.5585 | 1985.0 | 5955 | 1.2567 |
| 0.5585 | 1986.0 | 5958 | 1.2566 |
| 0.5585 | 1987.0 | 5961 | 1.2566 |
| 0.5585 | 1988.0 | 5964 | 1.2564 |
| 0.5585 | 1989.0 | 5967 | 1.2563 |
| 0.5585 | 1990.0 | 5970 | 1.2564 |
| 0.5585 | 1991.0 | 5973 | 1.2564 |
| 0.5585 | 1992.0 | 5976 | 1.2564 |
| 0.5585 | 1993.0 | 5979 | 1.2565 |
| 0.5585 | 1994.0 | 5982 | 1.2565 |
| 0.5585 | 1995.0 | 5985 | 1.2564 |
| 0.5585 | 1996.0 | 5988 | 1.2563 |
| 0.5585 | 1997.0 | 5991 | 1.2563 |
| 0.5585 | 1998.0 | 5994 | 1.2562 |
| 0.5585 | 1999.0 | 5997 | 1.2562 |
| 0.558 | 2000.0 | 6000 | 1.2562 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.7
- Tokenizers 0.15.0
|
QuizzerPrivate/lora-trained-xl
|
QuizzerPrivate
| 2024-03-08T00:22:36Z | 1 | 2 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-03-07T19:46:13Z |
---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks dog
widget:
- text: A photo of sks dog in a bucket
output:
url: image_0.png
- text: A photo of sks dog in a bucket
output:
url: image_1.png
- text: A photo of sks dog in a bucket
output:
url: image_2.png
- text: A photo of sks dog in a bucket
output:
url: image_3.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - QuizzerPrivate/lora-trained-xl
<Gallery />
## Model description
These are QuizzerPrivate/lora-trained-xl LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of sks dog` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/QuizzerPrivate/lora-trained-xl/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
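Pending the authors' own snippet, here is a minimal sketch assuming the standard `diffusers` SDXL LoRA API; the repo id, base model, VAE, and prompt come from this card, while everything else is a default choice:
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# The card notes training used the fp16-fix VAE, so load it for inference too.
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("QuizzerPrivate/lora-trained-xl")

# Use the instance prompt from the card to trigger the learned subject.
image = pipe("A photo of sks dog in a bucket", num_inference_steps=30).images[0]
image.save("sks_dog.png")
```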
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
ColeD0/Claud.ai-2
|
ColeD0
| 2024-03-08T00:19:56Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-03-07T19:53:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ycfNTU/bloomz-560m_PROMPT_TUNING_textgrading_CASUAL_LM_v1
|
ycfNTU
| 2024-03-08T00:19:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-08T00:19:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
shleeeee/mistral-ko-tech-science-v1
|
shleeeee
| 2024-03-08T00:18:13Z | 2,267 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"ko",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-10T05:02:38Z |
---
license: other
language:
- ko
pipeline_tag: text-generation
---
# Model Card for mistral-ko-tech-science-v1
It is a Mistral-7B model fine-tuned on Korean data.
## Model Details
* **Model Developers** : shleeeee (Seunghyeon Lee), oopsung (Sungwoo Park)
|
shleeeee/mistral-ko-OpenOrca-Platypus-v2
|
shleeeee
| 2024-03-08T00:17:46Z | 124 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"ko",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-18T06:57:43Z |
---
license: other
language:
- ko
pipeline_tag: text-generation
---
# Model Card for mistral-ko-OpenOrca-Platypus-v2
It is a Mistral-7B model fine-tuned on Korean data.
## Model Details
* **Model Developers** : shleeeee (Seunghyeon Lee), oopsung (Sungwoo Park)
|
shleeeee/mistral-ko-openorca-platypus-1epoch
|
shleeeee
| 2024-03-08T00:17:16Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"text-generation",
"ko",
"license:other",
"region:us"
] |
text-generation
| 2023-12-21T08:21:55Z |
---
library_name: peft
license: other
language:
- ko
pipeline_tag: text-generation
---
# Model Card for mistral-ko-openorca-platypus-1epoch
It is a Mistral-7B model fine-tuned on Korean data.
## Model Details
* **Model Developers** : shleeeee (Seunghyeon Lee), oopsung (Sungwoo Park)
## Training procedure
The following `bitsandbytes` quantization config was used during training (an equivalent `BitsAndBytesConfig` sketch follows the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
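For reference, the values above map onto `transformers`' `BitsAndBytesConfig` roughly as follows (a sketch, not the authors' actual training script):
```python
import torch
from transformers import BitsAndBytesConfig

# Mirror of the 4-bit quantization values listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
)
```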
### Framework versions
- PEFT 0.5.0.dev0
|
shleeeee/mistral-7b-ko-v1
|
shleeeee
| 2024-03-08T00:16:36Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"text-generation",
"ko",
"license:other",
"region:us"
] |
text-generation
| 2023-12-27T04:27:10Z |
---
library_name: peft
license: other
language:
- ko
pipeline_tag: text-generation
---
# Model Card for mistral-7b-ko-v1
It is a Mistral-7B model fine-tuned on Korean data.
## Model Details
* **Model Developers** : shleeeee (Seunghyeon Lee), oopsung (Sungwoo Park)
## Training procedure
The following `bitsandbytes` quantization config was used during training (a loading sketch follows the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
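Since this repo ships PEFT adapter weights rather than a merged checkpoint, here is a hedged loading sketch; the base model id `mistralai/Mistral-7B-v0.1` is an assumption based on the sibling cards, not stated in this one:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model first, then attach the adapter from this repo.
# NOTE: the base model id below is an assumption, not from this card.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "shleeeee/mistral-7b-ko-v1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```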
### Framework versions
- PEFT 0.5.0.dev0
|
OwOOwO/eacc_contTrain_m2_25
|
OwOOwO
| 2024-03-08T00:16:21Z | 89 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-08T00:13:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
shleeeee/mistral-ko-exo-mrc-v1
|
shleeeee
| 2024-03-08T00:15:03Z | 109 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"ko",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-11T08:10:49Z |
---
license: other
language:
- ko
pipeline_tag: text-generation
---
# Model Card for mistral-ko-exo-mrc-v1
It is a Mistral-7B model fine-tuned on Korean data.
## Model Details
* **Model Developers** : shleeeee (Seunghyeon Lee), oopsung (Sungwoo Park)
|
shleeeee/mistral-ko-7b-tech
|
shleeeee
| 2024-03-08T00:14:25Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"finetune",
"ko",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-29T15:35:30Z |
---
language:
- ko
pipeline_tag: text-generation
tags:
- finetune
license: other
---
# Model Card for mistral-ko-7b-tech
It is a Mistral-7B model fine-tuned on Korean data.
## Model Details
* **Model Developers** : shleeeee (Seunghyeon Lee), oopsung (Sungwoo Park)
* **Repository** : To be added
* **Model Architecture** : The mistral-ko-7b-tech model is a fine-tuned version of Mistral-7B-v0.1.
* **Lora target modules** : q_proj, k_proj, v_proj, o_proj, gate_proj
* **train_batch** : 4
* **Max_step** : 500
## Dataset
Korean Custom Dataset (2,000 examples)
## Prompt template: Mistral
```
<s>[INST]{['instruction']}[/INST]{['output']}</s>
```
## Usage
```
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("shleeeee/mistral-ko-7b-tech")
model = AutoModelForCausalLM.from_pretrained("shleeeee/mistral-ko-7b-tech")
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="shleeeee/mistral-ko-7b-tech")
```
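Building on the pipeline above, a small sketch (not from the original card) of applying the Mistral prompt template before generation:
```python
# Fill the card's prompt template; depending on tokenizer settings the
# leading <s> may be added automatically, in which case it can be dropped.
prompt = "<s>[INST]νκ΅­μ΄λ‘ κ°λ¨ν μκΈ°μκ°λ₯Ό ν΄ μ€.[/INST]"
result = pipe(prompt, max_new_tokens=128)
print(result[0]["generated_text"])
```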
## Evaluation

|
shleeeee/mistral-ko-exo-wiki-quiz-v1
|
shleeeee
| 2024-03-08T00:12:49Z | 96 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"ko",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-06T03:23:55Z |
---
license: other
language:
- ko
pipeline_tag: text-generation
---
# Model Card for mistral-ko-exo-wiki-quiz-v1
It is a Mistral-7B model fine-tuned on Korean data.
## Model Details
* **Model Developers** : shleeeee (Seunghyeon Lee), oopsung (Sungwoo Park)
|
shleeeee/mistral-ko-OpenOrca-2000
|
shleeeee
| 2024-03-08T00:11:32Z | 95 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"finetune",
"ko",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-04T13:17:54Z |
---
language:
- ko
pipeline_tag: text-generation
tags:
- finetune
---
# Model Card for mistral-ko-OpenOrca-2000
It is a Mistral-7B model fine-tuned on Korean data.
## Model Details
* **Model Developers** : shleeeee (Seunghyeon Lee), oopsung (Sungwoo Park)
* **Repository** : To be added
* **Model Architecture** : The shleeeee/mistral-ko-OpenOrca-2000 model is a fine-tuned version of Mistral-7B-v0.1.
* **Lora target modules** : q_proj, k_proj, v_proj, o_proj, gate_proj
* **train_batch** : 4
* **epochs** : 2
## Dataset
2,000 examples from the ko-OpenOrca dataset
## Prompt template: Mistral
```
<s>[INST]{['instruction']}[/INST]{['output']}</s>
```
## Usage
```
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("shleeeee/mistral-ko-OpenOrca-2000")
model = AutoModelForCausalLM.from_pretrained("shleeeee/mistral-ko-OpenOrca-2000")
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="shleeeee/mistral-ko-OpenOrca-2000")
```
## Evaluation
To be added
|
shleeeee/mistral-ko-7b-wiki-neft
|
shleeeee
| 2024-03-08T00:11:04Z | 2,286 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"finetune",
"ko",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-29T04:46:44Z |
---
language:
- ko
pipeline_tag: text-generation
tags:
- finetune
---
# Model Card for mistral-ko-7b-wiki-neft
It is a Mistral-7B model fine-tuned on Korean data with NEFTune.
## Model Details
* **Model Developers** : shleeeee (Seunghyeon Lee), oopsung (Sungwoo Park)
* **Repository** : To be added
* **Model Architecture** : The mistral-ko-7b-wiki-neft model is a fine-tuned version of Mistral-7B-v0.1.
* **Lora target modules** : q_proj, k_proj, v_proj, o_proj, gate_proj
* **train_batch** : 4
* **neftune_noise_alpha** : 5 (see the sketch after this list)
* **Max_step** : 1000
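A hedged sketch of wiring the settings above into a trainer; it assumes the `neftune_noise_alpha` option exposed by recent `transformers` `TrainingArguments` and is not the authors' actual script:
```python
from transformers import TrainingArguments

# Mirror the hyperparameters listed above; output_dir is a placeholder.
args = TrainingArguments(
    output_dir="mistral-ko-7b-wiki-neft",
    per_device_train_batch_size=4,  # train_batch : 4
    max_steps=1000,                 # Max_step : 1000
    neftune_noise_alpha=5,          # NEFT noise as listed in the card
)
```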
## Dataset
Korean Custom Dataset
## Prompt template: Mistral
```
<s>[INST]{['instruction']}[/INST]{['output']}</s>
```
## Usage
```
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("shleeeee/mistral-ko-7b-wiki-neft")
model = AutoModelForCausalLM.from_pretrained("shleeeee/mistral-ko-7b-wiki-neft")
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="shleeeee/mistral-ko-7b-wiki-neft")
```
## Evaluation

|
Holarissun/phi2-airl_sft-tldr-seqsampler
|
Holarissun
| 2024-03-08T00:06:49Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-03-08T00:06:42Z |
---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: phi2-airl_sft-tldr-seqsampler
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi2-airl_sft-tldr-seqsampler
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
OwOOwO/eacc_contTrain_m2_55_orig
|
OwOOwO
| 2024-03-08T00:01:33Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T23:59:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
imsarfaroz/fine-tuned-albert-tweets
|
imsarfaroz
| 2024-03-07T23:58:52Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"albert",
"text-classification",
"generated_from_trainer",
"base_model:albert/albert-base-v2",
"base_model:finetune:albert/albert-base-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-07T23:47:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: albert-base-v2
model-index:
- name: fine-tuned-albert-tweets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-albert-tweets
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6212
- Accuracy: 0.6785
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 179 | 0.6264 | 0.6377 |
| No log | 2.0 | 358 | 0.6212 | 0.6785 |
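To try the resulting checkpoint, a minimal inference sketch (not from the original card; the label names depend on the model config):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="imsarfaroz/fine-tuned-albert-tweets")
# Returns a list like [{'label': ..., 'score': ...}].
print(clf("Huge wildfire spreading near the city tonight"))
```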
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ferrazzipietro/Qwen1.5-14B-Chat__adapters_en.layer1_4_torch.bfloat16_32_64_0.05_8_0.0002
|
ferrazzipietro
| 2024-03-07T23:56:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T23:55:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
s14pe/poca-SoccerTwos
|
s14pe
| 2024-03-07T23:54:13Z | 21 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2024-03-07T23:53:39Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog πΆ to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: s14pe/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play π
|
MarkBW/no-bra-club
|
MarkBW
| 2024-03-07T23:52:25Z | 4 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"region:us"
] |
text-to-image
| 2024-03-07T23:52:22Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: "UNICODE\0\0P\0o\0s\0t\0p\0r\0o\0c\0e\0s\0s\0 \0u\0p\0s\0c\0a\0l\0e\0 \0b\0y\0:\0 \04\0,\0 \0P\0o\0s\0t\0p\0r\0o\0c\0e\0s\0s\0 \0u\0p\0s\0c\0a\0l\0e\0r\0:\0 \0R\0-\0E\0S\0R\0G\0A\0N\0 \04\0x\0+"
output:
url: images/wrefds.jpeg
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: dmnoy, crop top
---
# no-bra-club
<Gallery />
## Trigger words
You should use `dmnoy` and `crop top` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/MarkBW/no-bra-club/tree/main) them in the Files & versions tab.
|
adityahrudayam/T5_qa_model
|
adityahrudayam
| 2024-03-07T23:52:21Z | 32 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"question-answering",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-03-07T23:42:04Z |
---
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
model-index:
- name: T5_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5_qa_model
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
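The training script itself is not part of this card; as a rough sketch, the settings above map onto `transformers.TrainingArguments` like this (`output_dir` is a placeholder, and the Adam betas/epsilon shown are the library defaults, matching the list):

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above.
args = TrainingArguments(
    output_dir="T5_qa_model",        # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```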
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | nan |
| No log | 2.0 | 2 | nan |
| No log | 3.0 | 3 | nan |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
panos-span/ppo-SoccerTwos2
|
panos-span
| 2024-03-07T23:50:16Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2024-03-07T23:49:58Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog πΆ to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: panos-span/ppo-SoccerTwos2
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play π
|
andysalerno/openchat-nectar-0.1
|
andysalerno
| 2024-03-07T23:45:02Z | 13 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:berkeley-nest/Nectar",
"base_model:openchat/openchat-3.5-0106",
"base_model:finetune:openchat/openchat-3.5-0106",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-11T08:02:43Z |
---
license: apache-2.0
datasets:
- berkeley-nest/Nectar
base_model: openchat/openchat-3.5-0106
model-index:
- name: openchat-nectar-0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.21
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=andysalerno/openchat-nectar-0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 82.99
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=andysalerno/openchat-nectar-0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.17
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=andysalerno/openchat-nectar-0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 54.22
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=andysalerno/openchat-nectar-0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.37
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=andysalerno/openchat-nectar-0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.67
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=andysalerno/openchat-nectar-0.1
name: Open LLM Leaderboard
---
This is openchat/openchat-3.5-0106, tuned with DPO on a tiny subset of Nectar. Only 200 steps, so nowhere close to a full epoch.
Careful attention was paid to make sure the chat template was followed properly.
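Because correctness hinges on that chat template, the safest way to prompt the model is through the tokenizer's built-in template rather than hand-built strings; a minimal sketch (the example message and generation length are illustrative):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "andysalerno/openchat-nectar-0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# apply_chat_template renders the messages with the model's own template.
messages = [{"role": "user", "content": "Summarize DPO in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```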
Summary of versions:
**[openchat-nectar-0.1](https://huggingface.co/andysalerno/openchat-nectar-0.1)**
- 200 steps, no filtering on Nectar dataset, 5e-5 learning rate
**[openchat-nectar-0.2](https://huggingface.co/andysalerno/openchat-nectar-0.2)**
- empty repo, failed training. ignore it
**[openchat-nectar-0.3](https://huggingface.co/andysalerno/openchat-nectar-0.3)**
- 500 steps, no filtering on Nectar dataset, 5e-5 learning rate (same as 1 but with more steps)
**[openchat-nectar-0.4](https://huggingface.co/andysalerno/openchat-nectar-0.4)**
- 500 steps, filtered dataset to only include multi-chat-turn examples, used 4th ranking response as the "rejected" instead of 3rd, filtered out "good_natured=False", 5e-5 learning rate
**[openchat-nectar-0.5](https://huggingface.co/andysalerno/openchat-nectar-0.5)**
- 5000 steps (over a full epoch), filtered dataset to only include multi-chat-turn examples, used 4th ranking response as the "rejected" instead of 3rd, filtered out "good_natured=False", 5e-6 learning rate. Same as 0.4 but with 10x the steps, and 1/10th the learning rate
**[openchat-nectar-0.6](https://huggingface.co/andysalerno/openchat-nectar-0.6)**
- 500 steps, filtered dataset to only include multi-chat-turn examples, used 4th ranking response as the "rejected" instead of 3rd, filtered out "good_natured=False", 5e-5 learning rate. Same as 0.5 but with 1/10th the steps, and 10x the learning rate
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_andysalerno__openchat-nectar-0.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |69.94|
|AI2 Reasoning Challenge (25-Shot)|66.21|
|HellaSwag (10-Shot) |82.99|
|MMLU (5-Shot) |65.17|
|TruthfulQA (0-shot) |54.22|
|Winogrande (5-shot) |81.37|
|GSM8k (5-shot) |69.67|
|
dranger003/LWM-Text-Chat-128K-iMat.GGUF
|
dranger003
| 2024-03-07T23:33:58Z | 143 | 8 |
gguf
|
[
"gguf",
"text-generation",
"license:llama2",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-14T14:14:59Z |
---
license: llama2
pipeline_tag: text-generation
library_name: gguf
---
GGUF importance matrix (imatrix) quants for https://huggingface.co/LargeWorldModel/LWM-Text-Chat-128K
The importance matrix was trained for 100K tokens (200 batches of 512 tokens) using wiki.train.raw.
* The imatrix Q4-K quant fits with 32K context on 24GB and gives me ~100 t/s inference on a 3090.
* With IQ3_XXS it seems to fit ~37K context on 24GB (and it is even faster than Q4-K).
* With either quant on a 3090 it seems to decode context at well over 2000 t/s.
* Using a Q8 K-cache (instead of F16) you can fit up to 43-44K context, but inference speed drops slightly.
* Also, for some reason I need to set the repetition penalty to 1.0 to avoid the response being cut off.
| Layers | Context | [Template](https://github.com/LargeWorldModel/LWM/blob/9aaaa1e864bfcf31b66028e782395a22f4817535/scripts/eval_needle.py#L48) |
| --- | --- | --- |
| <pre>32</pre> | <pre>131072</pre> | <pre>You are a helpful assistant.<br>USER:<br>{context}<br>{question}<br>Don't give information outside the document or repeat your findings. Keep your response short and direct.<br>ASSISTANT:<br>{response}</pre> |
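For reference, a llama.cpp invocation matching the notes above might look like this (the quant file name and `-ngl` value are illustrative; `-ctk q8_0` selects the Q8 K-cache mentioned above):

```bash
# Hypothetical quant file name; substitute the one you downloaded.
./main -m lwm-text-chat-128k.Q4_K.gguf -ngl 33 -c 32768 --repeat-penalty 1.0
# For ~43K context on 24GB, trade a little speed for a Q8 K-cache:
./main -m lwm-text-chat-128k.Q4_K.gguf -ngl 33 -c 43008 -ctk q8_0 --repeat-penalty 1.0
```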
|
iampedroalz/gemma-2b-4bit-alpaca-spanish
|
iampedroalz
| 2024-03-07T23:33:25Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-03-07T23:31:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ChaoticNeutrals/Eris_Floramix_DPO_7B
|
ChaoticNeutrals
| 2024-03-07T23:30:49Z | 231 | 6 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED",
"dataset:ResplendentAI/Synthetic_Soul_1k",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T23:09:39Z |
---
library_name: transformers
license: other
datasets:
- athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED
- ResplendentAI/Synthetic_Soul_1k
language:
- en
---
# Eris Floramix DPO
This is a mix between Eris Remix DPO and Flora DPO, a finetune of the original Eris Remix on the Synthetic_Soul_1k dataset.
Applied this DPO dataset: https://huggingface.co/datasets/athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED
|
shamekhjr/ppo-Huggy
|
shamekhjr
| 2024-03-07T23:28:49Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-03-07T23:28:21Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog πΆ to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: shamekhjr/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play π
|
Alpaca69B/phi2-2b-absa
|
Alpaca69B
| 2024-03-07T23:21:02Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-30T02:56:41Z |
---
library_name: transformers
tags: []
---
# phi2-2b-absa: Fine-Tuned Aspect-Based Sentiment Analysis Model
## Model Description
The **phi2-2b-absa** model is a fine-tuned aspect-based sentiment analysis (ABSA) model based on the Microsoft Phi-2 model. It has been trained on the **semeval2016-full-absa-reviews-english-translated-resampled** dataset. The model predicts sentiments towards different aspects mentioned in a given sentence.
## Fine-Tuning Details
The fine-tuning process can be revisited on [Google Colab](https://colab.research.google.com/drive/1n3ykETLpHQPXwPhUcOe-z9cG3ThrDkSi?usp=sharing).
### Dataset
- **Name:** semeval2016-full-absa-reviews-english-translated-resampled
- **Description:** Annotated dataset for ABSA containing sentences, aspects, sentiments, and additional contextual text. It is split into train and test sets.
### Model Architecture
- **Base Model:** Microsoft Phi-2
- **Fine-Tuned Model:** phi2-2b-absa
### Fine-Tuning Parameters
- **LoRA Attention Dimension (lora_r):** 64
- **LoRA Scaling Parameter (lora_alpha):** 16
- **LoRA Dropout Probability (lora_dropout):** 0.1
### BitsAndBytes Quantization
- **Activate 4-bit Precision:** True
- **Compute Dtype for 4-bit Models:** float16
- **Quantization Type:** nf4
### Training Parameters
- **Number of Training Epochs:** 1
- **Batch Size per GPU for Training:** 4
- **Batch Size per GPU for Evaluation:** 4
- **Gradient Accumulation Steps:** 1
- **Learning Rate:** 2e-4
- **Weight Decay:** 0.001
- **Optimizer:** PagedAdamW (32-bit)
- **Learning Rate Scheduler:** Cosine
### SFT Parameters
- **Maximum Sequence Length:** None
- **Packing:** False
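The training notebook is not reproduced here, but the parameters above translate into `peft`/`transformers` configuration objects roughly as follows (the object names are ours, not from the original notebook):

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# BitsAndBytes quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # activate 4-bit precision
    bnb_4bit_compute_dtype=torch.float16,  # compute dtype for 4-bit models
    bnb_4bit_quant_type="nf4",             # quantization type
)

# LoRA settings listed above.
lora_config = LoraConfig(
    r=64,              # LoRA attention dimension
    lora_alpha=16,     # scaling parameter
    lora_dropout=0.1,  # dropout probability
    task_type="CAUSAL_LM",
)
```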
## How to Use
```python
import torch
from transformers import AutoTokenizer, pipeline

# This card's checkpoint; trust_remote_code is needed because Phi-2
# ships custom modeling code.
model = "Alpaca69B/phi2-2b-absa"
tokenizer = AutoTokenizer.from_pretrained(model)
generator = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

input_sentence = "the first thing that attracts attention is the warm reception and the smiling receptionists."
sequences = generator(
    f'### Human: {input_sentence} ### Assistant: aspect:',
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200,
)
print(sequences[0]['generated_text'])
```
Testing can be seen on [Google Colab](https://colab.research.google.com/drive/1eKdZYYWiivyeCQDsocGBstVODMLZyT-_?usp=sharing)
## Acknowledgments
- The fine-tuning process and model development were performed by Ben Kampmann.
---
|
dolainu/Nyanners_loraXL_Vtuber
|
dolainu
| 2024-03-07T23:15:36Z | 4 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2024-03-07T22:36:54Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
<lora:NyanXL_V1_50se:0.87>, nyanners1st, purple eyes, petite, closed mouth,
smug
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, bad anatomy, bad proportions, deformed, deformed
anatomy, deformed fingers, realistic
output:
url: images/09104-3065517054.png
- text: >-
<lora:NyanXL_V1_50se:0.87>, nyanners1st, purple eyes, petite, closed mouth,
smug, shirt lift, bed, legs up, pussy, hugging own legs
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, bad anatomy, bad proportions, deformed, deformed
anatomy, deformed fingers, realistic
output:
url: images/09101-2699702684.png
- text: >-
<lora:NyanXL_V1_50se:0.87>, nyanners1st, purple eyes, petite, closed mouth,
smug, shirt lift, bed, masturbating, pussy
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, bad anatomy, bad proportions, deformed, deformed
anatomy, deformed fingers, realistic
output:
url: images/09094-3843125815.png
- text: >-
score_9, score_8_up, score_7_up, <lora:NyanXL_V1_50se:0.87>, nyanners1st,
purple eyes, petite, closed mouth, smug, sitting, table, drink, hand on
cheek, looking at viewer, resting head
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, bad anatomy, bad proportions, deformed, deformed
anatomy, deformed fingers
output:
url: images/09082-3207580297.png
- text: >-
score_9, score_8_up, score_7_up, <lora:NyanXL_V1_50se:0.87>, nyanners1st,
medium hair, purple eyes, petite, closed mouth, smug, sitting, table, drink,
hand on cheek, looking at viewer, resting head
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, bad anatomy, bad proportions, deformed, deformed
anatomy, deformed fingers
output:
url: images/00028-4212975261.png
- text: >-
score_9, score_8_up, score_7_up, <lora:NyanXL_V1_50se:0.87>, nyanners1st,
medium hair, purple eyes, petite, closed mouth, smug, sitting, table, drink,
hand on cheek, looking at viewer, resting head
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, bad anatomy, bad proportions, deformed, deformed
anatomy, deformed fingers
output:
url: images/09078-3444304162.png
- text: >-
score_9, score_8_up, score_7_up, <lora:NyanXL_V1_50se:0.87>, nyanners1st,
purple eyes, petite, closed mouth, smug, kneeling, shirt lift
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, bad anatomy, bad proportions, deformed, deformed
anatomy, deformed fingers
output:
url: images/09085-1394382796.png
- text: >-
score_9, score_8_up, score_7_up, <lora:NyanXL_V1_50se:1>, nyanners2st, long
hair, kneeling, closed mouth, shirt lift
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, realistic, bad anatomy, bad proportions,
deformed, deformed anatomy, deformed fingers, motion lines
output:
url: images/09018-3981033982.png
- text: >-
score_9, score_8_up, score_7_up, <lora:NyanXL_V1_50se:0.87>, nyanners2st,
long hair, closed mouth, shirt lift, smug, petite, lying, bed
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, realistic, bad anatomy, bad proportions,
deformed, deformed anatomy, deformed fingers, motion lines
output:
url: images/09035-1317649319.png
- text: >-
score_9, score_8_up, score_7_up, <lora:NyanXL_V1_50se:0.87>, nyanners2st,
long hair, kneeling, closed mouth, shirt lift, smug, petite
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, realistic, bad anatomy, bad proportions,
deformed, deformed anatomy, deformed fingers, motion lines
output:
url: images/09026-560715627.png
- text: >-
score_9, score_8_up, score_7_up, <lora:NyanXL_V1_50se:0.87>, nyanners2st,
long hair, kneeling, closed mouth, shirt lift, smug, petite
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, realistic, bad anatomy, bad proportions,
deformed, deformed anatomy, deformed fingers, motion lines
output:
url: images/09024-4125556276.png
- text: >-
score_9, score_8_up, score_7_up, <lora:NyanXL_V1_50se:0.87>, nyanners2st,
long hair, purple eyes, petite, closed mouth, smug, sitting, table, drink,
hand on cheek, looking at viewer, resting head
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, bad anatomy, bad proportions, deformed, deformed
anatomy, deformed fingers
output:
url: images/09077-834539960.png
base_model: stablediffusionapi/pony-diffusion-v6-xl
instance_prompt: null
license: apache-2.0
---
# Nyanners
<Gallery />
## Model description
Works best with Ponydiffusion V6 XL
TESTED AT 0.87 STRENGTH
Prompts:
short hair ver.: "nyanners1st, purple eyes"---optional: "medium hair"
long hair ver.: "nyanners2st, long hair, purple eyes"
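Outside a WebUI, a diffusers sketch along these lines should work (untested; strength and prompt follow the notes above):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stablediffusionapi/pony-diffusion-v6-xl", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("dolainu/Nyanners_loraXL_Vtuber")
pipe.fuse_lora(lora_scale=0.87)  # tested strength

image = pipe("nyanners1st, purple eyes, medium hair, smug").images[0]
image.save("nyanners.png")
```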
## Download model
Weights for this model are available in Safetensors format.
[Download](/dolainu/Nyanners_lora_Vtuber/tree/main) them in the Files & versions tab.
|
dolainu/Natsuiro_Matsuri_loraXL_Vtuber
|
dolainu
| 2024-03-07T23:09:30Z | 3 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2024-03-07T23:09:23Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
score_9, <lora:NatsuiroMatsuriXL_V0.2:0.8>, namatsuri, 1girl, (petite),
green eyes, sitting, bikini
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, bad anatomy, bad proportions, deformed, deformed
anatomy, deformed fingers
output:
url: images/09658-1693014100.png
- text: >-
score_9, <lora:NatsuiroMatsuriXL_V0.2:0.8>, namatsuri, 1girl, (petite),
green eyes, sitting, crossed legs
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, bad anatomy, bad proportions, deformed, deformed
anatomy, deformed fingers
output:
url: images/09663-282707023.png
- text: >-
score_9, <lora:NatsuiroMatsuriXL_V0.2:0.8>, namatsuri, 1girl, (petite),
green eyes, shirt lift, nipples, kneeling, small breasts
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, bad anatomy, bad proportions, deformed, deformed
anatomy, deformed fingers, full body shot
output:
url: images/09673-1053123366.png
- text: >-
score_9, <lora:NatsuiroMatsuriXL_V0.2:0.8>, namatsuri, 1girl, (petite),
green eyes, shirt lift, nipples, kneeling, small breasts, condom wrapper in
mouth
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, bad anatomy, bad proportions, deformed, deformed
anatomy, deformed fingers, full body shot
output:
url: images/09675-1053123366.png
- text: >-
score_9, score_8_up, <lora:NatsuiroMatsuriXL_V0.2:0.8>, namatsuri, 1girl,
(petite), green eyes, lying, bed, spread legs, masturbating, pussy,
fingering, hand on breast, small breasts, nipples
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, bad anatomy, bad proportions, deformed, deformed
anatomy, deformed fingers, realistic
output:
url: images/09700-4195931212.png
- text: >-
score_9, score_8_up, <lora:NatsuiroMatsuriXL_V0.2R:0.8>, namatsuri, 1girl,
(petite), green eyes, lying, bed, spread legs, [[[spread pussy]]]
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, bad anatomy, bad proportions, deformed, deformed
anatomy, deformed fingers, realistic
output:
url: images/09705-4195931212.png
- text: >-
score_9, <lora:NatsuiroMatsuriXL_V0.2R:0.8>, namatsuri, 1girl, petite, green
eyes, leaning towards viewer
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, bad anatomy, bad proportions, deformed, deformed
anatomy, deformed fingers
output:
url: images/09710-2211139587.png
base_model: stablediffusionapi/pony-diffusion-v6-xl
instance_prompt: null
license: apache-2.0
---
# Natsuiro Matsuri
<Gallery />
## Model description
Works best with Ponydiffusion V6 XL
TESTED AT 0.8 STRENGTH.
Trigger Words:
"namatsuri, 1girl, (petite), green eyes"
## Download model
Weights for this model are available in Safetensors format.
[Download](/dolainu/Natsuiro_Matsuri_loraXL_Vtuber/tree/main) them in the Files & versions tab.
|
ArtMindia/artmindia3k
|
ArtMindia
| 2024-03-07T23:07:06Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"pytorch",
"mistral",
"question-answering",
"en",
"license:apache-2.0",
"region:us"
] |
question-answering
| 2023-10-28T00:43:48Z |
---
license: apache-2.0
language:
- en
library_name: adapter-transformers
metrics:
- accuracy
pipeline_tag: question-answering
---
This is just a test card with a few thousand rows of data. I wish I had more to add, but that is all.
|
Sebas012/mi-super-modelo
|
Sebas012
| 2024-03-07T23:06:44Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-07T23:00:10Z |
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: bert-base-cased
metrics:
- accuracy
model-index:
- name: mi-super-modelo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mi-super-modelo
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7111
- Accuracy: 0.15
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7596 | 0.5 | 5 | 1.7679 | 0.15 |
| 1.8268 | 1.0 | 10 | 1.7111 | 0.15 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
|
AIdenU/Gemma-7b-ko-Y24_v2.0
|
AIdenU
| 2024-03-07T23:04:05Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-05T00:12:43Z |
---
license: apache-2.0
language:
- ko
pipeline_tag: text-generation
tags:
- gemma
---
### BaseModel
- [google/gemma-7b](https://huggingface.co/google/gemma-7b)
### Model Generation
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("AIdenU/Gemma-7b-ko-Y24_v2.0", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("AIdenU/Gemma-7b-ko-Y24_v2.0", use_fast=True)

# Prompts translated from the original Korean.
systemPrompt = "You are a capable AI."
prompt = "If a centipede loses a leg, does it limp?"

outputs = model.generate(
    **tokenizer(
        f"### instruction: {systemPrompt}\n{prompt} \n### output: ",
        return_tensors='pt'
    ).to('cuda'),
    max_new_tokens=256,
    temperature=0.2,
    top_p=1,
    do_sample=True
)
print(tokenizer.decode(outputs[0]))
```
|
AIdenU/LLAMA-2-13b-koen-Y24_v1.0
|
AIdenU
| 2024-03-07T23:01:59Z | 211 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama2",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-21T01:25:26Z |
---
license: apache-2.0
language:
- ko
pipeline_tag: text-generation
tags:
- llama2
---
### BaseModel
- [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf)
### Model Generation
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("AIdenU/LLAMA-2-13b-koen-Y24_v1.0", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("AIdenU/LLAMA-2-13b-koen-Y24_v1.0", use_fast=True)

# Prompts translated from the original Korean.
systemPrompt = "You are a capable AI."
prompt = "If a centipede loses a leg, does it limp?"

outputs = model.generate(
    **tokenizer(
        f"[INST] <<SYS>>\n{systemPrompt}\n<</SYS>>\n\n{prompt} [/INST] ",
        return_tensors='pt'
    ).to('cuda'),
    max_new_tokens=256,
    temperature=0.2,
    top_p=1,
    do_sample=True
)
print(tokenizer.decode(outputs[0]))
```
|
AlexandreManai/dqn-SpaceInvadersNoFrameskip-v4
|
AlexandreManai
| 2024-03-07T22:59:30Z | 6 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-07T22:58:58Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 512.00 +/- 139.13
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AlexandreManai -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AlexandreManai -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga AlexandreManai
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
dolainu/SmugAlana_loraXL_Vtuber
|
dolainu
| 2024-03-07T22:58:46Z | 3 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2024-03-07T22:40:52Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
score_9, <lora:SmugAlanaV0.1:0.87>, smalana, 1girl, smug, kneeling, bed,
condom wrapper in mouth
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, bad anatomy, bad proportions, deformed, deformed
anatomy, deformed fingers
output:
url: images/09290-1373470824.png
- text: score_9, <lora:SmugAlanaV0.1:0.8>, smalana, 1girl, nude, bed
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, bad anatomy, bad proportions, deformed, deformed
anatomy, deformed fingers
output:
url: images/09328-362448161.png
- text: >-
score_9, <lora:SmugAlanaV0.1:0.8>, smalana, 1girl, sitting, clothes down,
nipple
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, bad anatomy, bad proportions, deformed, deformed
anatomy, deformed fingers
output:
url: images/09316-1813936606.png
- text: score_9, <lora:SmugAlanaV0.1:0.87>, smalana, 1girl, smug, sitting
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, bad anatomy, bad proportions, deformed, deformed
anatomy, deformed fingers
output:
url: images/09309-4054020249.png
- text: >-
<lora:SmugAlanaV0.1:0.8>, smalana, 1girl, <lora:Smooth Anime 2 Style
SDXL_LoRA_Pony Diffusion V6 XL:1>
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, bad anatomy, bad proportions, deformed, deformed
anatomy, deformed fingers
output:
url: images/09340-1383936240.png
- text: score_9, <lora:SmugAlanaV0.1:0.87>, smalana, 1girl, smug, sitting
parameters:
negative_prompt: >-
censored, unfinished, sketch, messy drawing, amateur drawing, thick
thighs, muscular female, bad anatomy, bad proportions, deformed, deformed
anatomy, deformed fingers
output:
url: images/09343-915960868.png
base_model: stablediffusionapi/pony-diffusion-v6-xl
instance_prompt: null
license: apache-2.0
---
# SmugAlana
<Gallery />
## Model description
Works best with Ponydiffusion V6 XL
TESTED BETWEEN '0.8 - 0.9' STRENGTH
Trigger words: smalana, 1girl
## Download model
Weights for this model are available in Safetensors format.
[Download](/dolainu/SmugAlana_lora_Vtuber/tree/main) them in the Files & versions tab.
|
colerobertson/wav2vec2-base-ogma-phoneme
|
colerobertson
| 2024-03-07T22:41:39Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-03-07T22:22:42Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-ogma-phoneme
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-ogma-phoneme
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Cer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:------------------------:|:-----:|:----:|:---------------:|:------:|
| 67.3394 | 1.0 | 5 | 62.1245 | 6.7376 |
| 73.2076 | 2.0 | 10 | nan | 6.7376 |
| -257055208286820768.0000 | 3.0 | 15 | nan | 6.7376 |
| 64.2241 | 4.0 | 20 | nan | 6.7376 |
| 65.3601 | 5.0 | 25 | 62.1245 | 6.7376 |
| 64.2295 | 6.0 | 30 | 62.1157 | 6.7376 |
| 45.425 | 7.0 | 35 | 62.1251 | 6.7376 |
| 50.6118 | 8.0 | 40 | nan | 6.7178 |
| 64.6394 | 9.0 | 45 | 62.0582 | 6.6188 |
| 48.7615 | 10.0 | 50 | nan | 6.6188 |
| 54.5817 | 11.0 | 55 | nan | 6.4950 |
| 48.1198 | 12.0 | 60 | nan | 6.4950 |
| 56.9202 | 13.0 | 65 | nan | 6.3465 |
| 57.3656 | 14.0 | 70 | nan | 6.4406 |
| 68.163 | 15.0 | 75 | 61.7497 | 6.2129 |
| 56.806 | 16.0 | 80 | nan | 6.2129 |
| 69.1218 | 17.0 | 85 | 61.6119 | 5.7574 |
| 55.5282 | 18.0 | 90 | 61.5413 | 5.4158 |
| -6752.4055 | 19.0 | 95 | 61.2303 | 4.9257 |
| 64.744 | 20.0 | 100 | 60.9641 | 4.4455 |
| 66.7382 | 21.0 | 105 | 60.3274 | 3.5198 |
| -21060.9078 | 22.0 | 110 | nan | 3.5198 |
| 51.2619 | 23.0 | 115 | 59.9896 | 3.1089 |
| 51.398 | 24.0 | 120 | nan | 2.7772 |
| 63.6242 | 25.0 | 125 | 59.3321 | 2.5149 |
| 59.6308 | 26.0 | 130 | 58.7697 | 2.1931 |
| 62.0615 | 27.0 | 135 | nan | 1.8366 |
| -46.2037 | 28.0 | 140 | 57.9474 | 1.7475 |
| 60.5632 | 29.0 | 145 | 57.5041 | 1.5941 |
| 55.4431 | 30.0 | 150 | 56.7507 | 1.4307 |
| 40.8661 | 31.0 | 155 | 56.6063 | 1.4059 |
| 63.784 | 32.0 | 160 | 56.1097 | 1.2327 |
| 42.2708 | 33.0 | 165 | nan | 1.2327 |
| 53.7813 | 34.0 | 170 | nan | 1.2426 |
| 57.459 | 35.0 | 175 | 55.8894 | 1.2228 |
| 58.9998 | 36.0 | 180 | nan | 1.0 |
| 0.0 | 37.0 | 185 | nan | 1.0 |
| 0.0 | 38.0 | 190 | nan | 1.0 |
| 0.0 | 39.0 | 195 | nan | 1.0 |
| 0.0 | 40.0 | 200 | nan | 1.0 |
| 0.0 | 41.0 | 205 | nan | 1.0 |
| 0.0 | 42.0 | 210 | nan | 1.0 |
| 0.0 | 43.0 | 215 | nan | 1.0 |
| 0.0 | 44.0 | 220 | nan | 1.0 |
| 0.0 | 45.0 | 225 | nan | 1.0 |
| 0.0 | 46.0 | 230 | nan | 1.0 |
| 0.0 | 47.0 | 235 | nan | 1.0 |
| 0.0 | 48.0 | 240 | nan | 1.0 |
| 0.0 | 49.0 | 245 | nan | 1.0 |
| 0.0 | 50.0 | 250 | nan | 1.0 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ferrazzipietro/Qwen1.5-14B-Chat__adapters_en.layer1_4_torch.bfloat16_32_32_0.01_4_0.0002
|
ferrazzipietro
| 2024-03-07T22:39:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T22:39:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ShubhamJain18/ppo-Huggy
|
ShubhamJain18
| 2024-03-07T22:35:47Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-03-07T22:34:03Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog πΆ to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ShubhamJain18/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play π
|
MaziyarPanahi/zephyr-7b-gemma-sft-v0.1-GGUF
|
MaziyarPanahi
| 2024-03-07T22:33:24Z | 54 | 2 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"tensorboard",
"safetensors",
"gemma",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"en",
"dataset:HuggingFaceH4/deita-10k-v0-sft",
"base_model:google/gemma-7b",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:HuggingFaceH4/zephyr-7b-gemma-sft-v0.1",
"base_model:quantized:HuggingFaceH4/zephyr-7b-gemma-sft-v0.1"
] |
text-generation
| 2024-03-07T22:03:29Z |
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- tensorboard
- safetensors
- gemma
- text-generation
- alignment-handbook
- trl
- sft
- generated_from_trainer
- conversational
- en
- dataset:HuggingFaceH4/deita-10k-v0-sft
- base_model:google/gemma-7b
- license:other
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: zephyr-7b-gemma-sft-v0.1-GGUF
base_model: HuggingFaceH4/zephyr-7b-gemma-sft-v0.1
inference: false
model_creator: HuggingFaceH4
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/zephyr-7b-gemma-sft-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/zephyr-7b-gemma-sft-v0.1-GGUF)
- Model creator: [HuggingFaceH4](https://huggingface.co/HuggingFaceH4)
- Original model: [HuggingFaceH4/zephyr-7b-gemma-sft-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-sft-v0.1)
## Description
[MaziyarPanahi/zephyr-7b-gemma-sft-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/zephyr-7b-gemma-sft-v0.1-GGUF) contains GGUF format model files for [HuggingFaceH4/zephyr-7b-gemma-sft-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-sft-v0.1).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are (a quick arithmetic check of the bpw figures follows the list):
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
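As a quick sanity check, the Q4_K figure can be reproduced from the block layout above (a minimal sketch; it assumes one fp16 super-block scale and one fp16 super-block min on top of the per-block metadata, which the list does not spell out, and exact struct layouts vary between llama.cpp versions):
```python
# Reproduce the GGML_TYPE_Q4_K figure of 4.5 bpw from the layout above.
# Assumption: one fp16 super-block scale + one fp16 super-block min.
weights = 8 * 32                   # 8 blocks x 32 weights per super-block
quant_bits = weights * 4           # 4-bit quants for every weight
block_meta_bits = 8 * (6 + 6)      # 6-bit scale + 6-bit min per block
super_meta_bits = 2 * 16           # fp16 super-block scale and min
print((quant_bits + block_meta_bits + super_meta_bits) / weights)  # 4.5
```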
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/zephyr-7b-gemma-sft-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/zephyr-7b-gemma-sft-v0.1-GGUF) and below it, a specific filename to download, such as: zephyr-7b-gemma-sft-v0.1-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/zephyr-7b-gemma-sft-v0.1-GGUF zephyr-7b-gemma-sft-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
</details>
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download MaziyarPanahi/zephyr-7b-gemma-sft-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/zephyr-7b-gemma-sft-v0.1-GGUF zephyr-7b-gemma-sft-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m zephyr-7b-gemma-sft-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./zephyr-7b-gemma-sft-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./zephyr-7b-gemma-sft-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain, followed by a minimal sketch:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
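A minimal end-to-end sketch of the first option, assuming the `LlamaCpp` wrapper lives in `langchain-community` (adjust the import for your LangChain version); parameters mirror the llama-cpp-python example above:
```python
# Minimal LangChain + llama-cpp-python sketch; untested against this repo.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./zephyr-7b-gemma-sft-v0.1-GGUF.Q4_K_M.gguf",  # downloaded file
    n_ctx=32768,       # max sequence length
    n_gpu_layers=35,   # set to 0 if you have no GPU acceleration
    temperature=0.7,
)
print(llm.invoke("Write a story about llamas."))
```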
|
tsavage68/mistralit2_1000_STEPS_1e7_SFT_SFT
|
tsavage68
| 2024-03-07T22:31:26Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T22:25:45Z |
---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: mistralit2_1000_STEPS_1e7_SFT_SFT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistralit2_1000_STEPS_1e7_SFT_SFT
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3239
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hypothetical code mapping follows the list):
- learning_rate: 1e-07
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
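As a reference, here is a hypothetical sketch of how these values map onto `transformers.TrainingArguments`; the actual training script was not released, so dataset and model wiring are omitted:
```python
# Hypothetical reconstruction of the hyperparameters listed above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="mistralit2_1000_STEPS_1e7_SFT_SFT",
    learning_rate=1e-7,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,  # 4 x 2 = total train batch size of 8
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
    seed=42,
)
```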
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.393 | 0.1 | 50 | 1.3650 |
| 0.9828 | 0.2 | 100 | 0.9080 |
| 0.3975 | 0.29 | 150 | 0.3765 |
| 0.3465 | 0.39 | 200 | 0.3516 |
| 0.3422 | 0.49 | 250 | 0.3418 |
| 0.3436 | 0.59 | 300 | 0.3365 |
| 0.3244 | 0.68 | 350 | 0.3329 |
| 0.3332 | 0.78 | 400 | 0.3298 |
| 0.3221 | 0.88 | 450 | 0.3275 |
| 0.3293 | 0.98 | 500 | 0.3260 |
| 0.3143 | 1.07 | 550 | 0.3251 |
| 0.3279 | 1.17 | 600 | 0.3246 |
| 0.3336 | 1.27 | 650 | 0.3243 |
| 0.3045 | 1.37 | 700 | 0.3241 |
| 0.3199 | 1.46 | 750 | 0.3240 |
| 0.3227 | 1.56 | 800 | 0.3240 |
| 0.3217 | 1.66 | 850 | 0.3239 |
| 0.3256 | 1.76 | 900 | 0.3239 |
| 0.3383 | 1.86 | 950 | 0.3239 |
| 0.3305 | 1.95 | 1000 | 0.3239 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.0.0+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2
|
masonjar/mixtral_test
|
masonjar
| 2024-03-07T22:31:22Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mixtral-8x7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-03-07T21:45:29Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
model-index:
- name: mixtral_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mixtral_test
This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on an unknown dataset.
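This repository contains a PEFT adapter rather than full model weights, so a hypothetical loading sketch (untested against this repo) looks like:
```python
# Hypothetical sketch: attach this PEFT adapter to its base model.
# Mixtral-8x7B needs substantial memory; quantized loading is a common
# workaround and is omitted here for brevity.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-Instruct-v0.1", device_map="auto"
)
model = PeftModel.from_pretrained(base, "masonjar/mixtral_test")
```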
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.0
- Datasets 2.15.0
- Tokenizers 0.15.2
|
Maqqq/OpenHermes-2.5-Mistral-7B-15
|
Maqqq
| 2024-03-07T22:26:33Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T21:55:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ferrazzipietro/Qwen1.5-14B-Chat__adapters_en.layer1_4_torch.bfloat16_32_32_0.05_16_0.0002
|
ferrazzipietro
| 2024-03-07T22:20:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T17:23:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Arczisan/christy-doa
|
Arczisan
| 2024-03-07T22:17:25Z | 2 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"region:us"
] |
text-to-image
| 2024-03-07T22:17:21Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/chisty.png
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: null
---
# Dead or Alive - Christy
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Arczisan/christy-doa/tree/main) them in the Files & versions tab.
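A minimal usage sketch, assuming the Safetensors file is a standard SD 1.5 LoRA as the tags indicate; the prompt is illustrative, since no instance prompt is documented:
```python
# Hypothetical usage sketch; weight filename and prompt are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Arczisan/christy-doa")
image = pipe("christy from dead or alive, portrait").images[0]
image.save("christy.png")
```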
|
s14pe/ppo-Pyramid
|
s14pe
| 2024-03-07T22:17:18Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2024-03-07T19:18:02Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog πΆ to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: s14pe/ppo-Pyramid
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play π
|
dataequity/DE-LM-7B
|
dataequity
| 2024-03-07T22:07:29Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deci",
"text-generation",
"conversational",
"custom_code",
"en",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-03-07T21:54:59Z |
---
license: apache-2.0
language:
- en
---
# DE-LM-7B
DE-LM-7B is a 7.04 billion parameter decoder-only text generation model, released under the Apache 2.0 license.
This is an instruction-tuned model built on top of Deci/DeciLM-7B, fine-tuned for data filtering and API generation.
### Model Description
- **Language(s) (NLP):** English
- **License:** Apache 2.0
## Model Architecture
| Parameters | Layers | Heads | Sequence Length | GQA num_key_value_heads* |
|:----------|:----------|:----------|:----------|:----------|
| 7.04 billion | 32 | 32 | 8192 | Variable |
## Uses
The model is intended for commercial and research use in English and can be fine-tuned for various tasks and languages.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "dataequity/DE-LM-7B"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", trust_remote_code=True).to(device)
inputs = tokenizer.encode("List the top 10 financial APIs", return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=100, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0]))
# The model can also be used via the text-generation pipeline interface
from transformers import pipeline
generator = pipeline("text-generation", "dataequity/DE-LM-7B", torch_dtype="auto", trust_remote_code=True, device=device)
outputs = generator("List the top 10 financial APIs", max_new_tokens=100, do_sample=True, top_p=0.95)
print(outputs[0]["generated_text"])
```
## Ethical Considerations and Limitations
DE-LM-7B is a new technology that comes with inherent risks associated with its use.
The testing conducted so far has been primarily in English and does not encompass all possible scenarios.
Like those of all large language models, DE-LM-7B's outputs are unpredictable, and the model may generate responses that are inaccurate, biased, or otherwise objectionable. Consequently, developers planning to use DE-LM-7B should undertake thorough safety testing and tuning designed explicitly for their intended applications of the model before deployment.
## Citation
```bibtex
@misc{DeciFoundationModels,
  title = {DeciLM-7B},
  author = {DeciAI Research Team},
  year = {2023},
  url = {https://huggingface.co/Deci/DeciLM-7B},
}
```
|
HuggingFaceM4/siglip-so400m-14-980-flash-attn2-navit
|
HuggingFaceM4
| 2024-03-07T22:05:47Z | 8,619 | 43 |
transformers
|
[
"transformers",
"safetensors",
"siglip",
"zero-shot-image-classification",
"custom_code",
"arxiv:2307.06304",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
zero-shot-image-classification
| 2024-01-30T19:31:08Z |
---
license: apache-2.0
---
Same as https://huggingface.co/HuggingFaceM4/siglip-so400m-14-384-flash-attn2 with two changes:
- increase max resolution to 980 x 980 (instead of 384 x 384) by interpolating the position embeddings
- implement the strategy in [NaViT](https://arxiv.org/abs/2307.06304) to allow a/ variable-resolution images, b/ aspect-ratio-preserved images
These changes only apply to the vision tower. No changes to the text tower.
Implementation is fully backward compatible with `https://huggingface.co/HuggingFaceM4/siglip-so400m-14-384-flash-attn2`; just don't specify the `patch_attention_mask`.
Usage:
```python
import torch
from modeling_siglip import SiglipVisionModel
DEVICE = torch.device("cuda:0")
PATCH_SIZE = 14
pixel_values = torch.randn(2, 3, 28, 42, dtype=torch.bfloat16, device=DEVICE)
# Per-pixel attention masks for the two padded images: image 0 occupies
# the top 14 x 42 pixels of its canvas, image 1 the left 28 x 28 pixels.
pixel_attention_mask = torch.zeros(2, 28, 42, dtype=torch.bool, device=DEVICE)
pixel_attention_mask[0, :14, :] = True
pixel_attention_mask[1, :, :28] = True
# Pool the pixel mask into a per-patch mask: a patch participates in
# attention if any pixel inside its PATCH_SIZE x PATCH_SIZE window is valid.
patches_subgrid = pixel_attention_mask.unfold(
    dimension=1, size=PATCH_SIZE, step=PATCH_SIZE
).unfold(dimension=2, size=PATCH_SIZE, step=PATCH_SIZE)
patch_attention_mask = (patches_subgrid.sum(dim=(-1, -2)) > 0).bool()
model = SiglipVisionModel.from_pretrained("HuggingFaceM4/siglip-so400m-14-980-flash-attn2-navit", _flash_attn_2_enabled=True)
model.train()
model.vision_model.to(DEVICE, dtype=torch.bfloat16)
output = model.vision_model(pixel_values=pixel_values, patch_attention_mask=patch_attention_mask)
```
|
Maqqq/OpenHermes-2.5-Mistral-7B-14
|
Maqqq
| 2024-03-07T21:58:58Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T19:57:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
twhoool02/Llama-2-7b-hf-AWQ
|
twhoool02
| 2024-03-07T21:58:42Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"AWQ",
"llama-2",
"en",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:quantized:meta-llama/Llama-2-7b-hf",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2024-03-03T21:37:40Z |
---
language: en
license: other
tags:
- facebook
- meta
- AWQ
- llama-2
- llama
base_model: meta-llama/Llama-2-7b-hf
model_name: Llama-2-7b-hf-AWQ
library:
- Transformers
- AWQ
arxiv: https://arxiv.org/abs/2306.00978
model_type: llama
pipeline_tag: text-generation
quantized_by: twhoool02
---
# Model Card for Llama-2-7b-hf-AWQ
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This model is a quantized version of the meta-llama/Llama-2-7b-hf model. The model was quantized using AWQ. A minimal loading sketch follows the description fields below.
- **Developed by:** Ted Whooley
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** llama
- **Language(s) (NLP):** en
- **License:** other
- **Finetuned from model [optional]:** meta-llama/Llama-2-7b-hf
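As noted above, a minimal hypothetical loading sketch (untested against this repo; loading AWQ checkpoints through `transformers` requires the `autoawq` package):
```python
# Hypothetical loading sketch for this AWQ-quantized checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "twhoool02/Llama-2-7b-hf-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```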
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Nusrat1234/Mistral-7B-User-Profile
|
Nusrat1234
| 2024-03-07T21:44:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T21:43:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ferrazzipietro/Qwen1.5-14B-Chat__adapters_en.layer1_4_torch.bfloat16_32_32_0.05_4_0.0002
|
ferrazzipietro
| 2024-03-07T21:43:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T16:45:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dominic1021/ohwxsarah
|
dominic1021
| 2024-03-07T21:40:04Z | 2 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2024-03-07T19:23:40Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: ohwxsarah woman
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
pypy/VGMShield
|
pypy
| 2024-03-07T21:32:06Z | 0 | 3 | null |
[
"Fake Video Detection",
"Fake Video Source Tracing",
"video-classification",
"dataset:OpenGVLab/InternVid",
"dataset:TempoFunk/webvid-10M",
"arxiv:2402.13126",
"license:apache-2.0",
"region:us"
] |
video-classification
| 2024-02-21T20:23:29Z |
---
license: apache-2.0
datasets:
- OpenGVLab/InternVid
- TempoFunk/webvid-10M
pipeline_tag: video-classification
tags:
- Fake Video Detection
- Fake Video Source Tracing
---
<div style="text-align: center;">
<img src="./symbol.png" alt="symbol" style="height: 100px;"/>
</div>
# VGMShield: Mitigating Misuse of Video Generative Models
This repository provides pre-trained checkpoints for evaluating our detection and source tracing models. Our paper can be found [here](https://arxiv.org/abs/2402.13126).
**Detection Model**:
[I3D](./detect/i3d/invid_i3d_i2v_i2v_best_model.pth) (label 0 = true/real video, label 1 = false/fake video)
[MAE](./detect/mae/invid_mae_i2v_i2v_best_model.pth) (label 0 = true/real video, label 1 = false/fake video)
[XCLIP](./detect/xclip/invid_xclip_i2v_i2v_best_model.pth) (label 0 = true/real video, label 1 = false/fake video)
[MAE-sora](./detect/mae/detection_ft_sora.pt) (label 0 = true/real video, label 1 = false/fake video)
**Source Tracing Model**
> Source tracing labels: 0 = Hotshot-xl, 1 = i2vgen-xl (i2v), 2 = i2vgen-xl (t2v), 3 = LaVie, 4 = SEINE, 5 = Show-1, 6 = Stable Video Diffusion, 7 = VideoCrafter (i2v), 8 = VideoCrafter (t2v)
[I3D](./source_tracing/i3d/invid_i3d_st_best_model.pth)-based source tracing model
[MAE](./source_tracing/mae/invid_mae_st_best_model.pth)-based source tracing model
[XCLIP](./source_tracing/xclip/invid_xclip_st_best_model.pth)-based source tracing model
[MAE](./source_tracing/mae/source_tracing_ft_sora.pt)-based source tracing model fine-tuned for Sora; Sora is label 9.
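For convenience, the label maps above can be transcribed into a plain Python mapping (copied from this card; not shipped with the checkpoints):
```python
# Labels transcribed from the lists above.
DETECTION_LABELS = {0: "real", 1: "fake"}
SOURCE_TRACING_LABELS = {
    0: "Hotshot-xl", 1: "i2vgen-xl (i2v)", 2: "i2vgen-xl (t2v)",
    3: "LaVie", 4: "SEINE", 5: "Show-1", 6: "Stable Video Diffusion",
    7: "VideoCrafter (i2v)", 8: "VideoCrafter (t2v)",
    9: "Sora (Sora-fine-tuned model only)",
}
```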
|
cmu-lti/sotopia-pi-mistral-7b-BC
|
cmu-lti
| 2024-03-07T21:30:09Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-03-07T21:24:20Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (reconstructed as code after the list):
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
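Reconstructed as a hypothetical `transformers.BitsAndBytesConfig` (parameter names follow the list above):
```python
# Hypothetical reconstruction of the quantization config listed above.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
```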
### Framework versions
- PEFT 0.5.0
|
ferrazzipietro/Qwen1.5-14B-Chat__adapters_en.layer1_4_torch.bfloat16_16_64_0.01_16_0.0002
|
ferrazzipietro
| 2024-03-07T21:24:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T16:26:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Shawt/Shawt
|
Shawt
| 2024-03-07T21:23:17Z | 1 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/sdxl-turbo",
"base_model:finetune:stabilityai/sdxl-turbo",
"region:us"
] |
text-to-image
| 2023-07-11T04:45:40Z |
---
base_model: stabilityai/sdxl-turbo
instance_prompt: <shawt>
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
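A minimal sampling sketch for this checkpoint on top of SDXL-Turbo, assuming a standard AutoTrain LoRA export; Turbo pipelines are typically run with very few steps and guidance disabled:
```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the SDXL-Turbo base, then attach this repo's DreamBooth LoRA weights
# (assumes a standard AutoTrain LoRA export)
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Shawt/Shawt")

# Instance prompt from this card's metadata; one step, no guidance (Turbo defaults)
image = pipe(prompt="<shawt>", num_inference_steps=1, guidance_scale=0.0).images[0]
image.save("shawt.png")
```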
|
MaziyarPanahi/phi-2-super-GGUF
|
MaziyarPanahi
| 2024-03-07T21:20:57Z | 101 | 4 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"phi",
"text-generation",
"convAI",
"conversational",
"custom_code",
"en",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"has_space",
"base_model:abacaj/phi-2-super",
"base_model:quantized:abacaj/phi-2-super"
] |
text-generation
| 2024-03-07T21:09:42Z |
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- phi
- text-generation
- convAI
- conversational
- custom_code
- en
- license:mit
- model-index
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- has_space
- text-generation
model_name: phi-2-super-GGUF
base_model: abacaj/phi-2-super
inference: false
model_creator: abacaj
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/phi-2-super-GGUF](https://huggingface.co/MaziyarPanahi/phi-2-super-GGUF)
- Model creator: [abacaj](https://huggingface.co/abacaj)
- Original model: [abacaj/phi-2-super](https://huggingface.co/abacaj/phi-2-super)
## Description
[MaziyarPanahi/phi-2-super-GGUF](https://huggingface.co/MaziyarPanahi/phi-2-super-GGUF) contains GGUF format model files for [abacaj/phi-2-super](https://huggingface.co/abacaj/phi-2-super).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
</details>

## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/phi-2-super-GGUF](https://huggingface.co/MaziyarPanahi/phi-2-super-GGUF) and below it, a specific filename to download, such as: phi-2-super-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/phi-2-super-GGUF phi-2-super-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download MaziyarPanahi/phi-2-super-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/phi-2-super-GGUF phi-2-super-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m phi-2-super-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 β Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./phi-2-super-GGUF.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./phi-2-super-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
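As a minimal sketch of the llama-cpp-python route with LangChain (assumes the Q4_K_M file was downloaded as shown earlier; parameter values are illustrative):
```python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./phi-2-super-GGUF.Q4_K_M.gguf",
    n_gpu_layers=35,  # set to 0 if no GPU acceleration is available
    n_ctx=2048,
)
print(llm.invoke("Write a one-sentence summary of the GGUF format."))
```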
|
mehdirafiei/SQLCODER16L
|
mehdirafiei
| 2024-03-07T21:18:40Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T21:10:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
panos-span/a2c-PandaReachDense-v3
|
panos-span
| 2024-03-07T21:18:19Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-07T21:14:32Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.20 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename assumes the conventional `<algo>-<env>.zip` name used by SB3 Hub uploads):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the trained checkpoint from the Hub (filename is an assumption)
checkpoint = load_from_hub(repo_id="panos-span/a2c-PandaReachDense-v3",
                           filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
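To sanity-check the loaded agent, a short rollout might look like this; it assumes `panda_gym` is installed (importing it registers the environment) and that `model` comes from the snippet above:
```python
import gymnasium as gym
import panda_gym  # noqa: F401 -- importing registers PandaReachDense-v3

env = gym.make("PandaReachDense-v3")
obs, _ = env.reset()
terminated = truncated = False
while not (terminated or truncated):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
env.close()
```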
|