| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
| Kamilatr/mistral_model | Kamilatr | 2024-01-21T08:57:19Z | 1 | 0 | peft | ["peft", "pytorch", "mistral", "arxiv:1910.09700", "base_model:Hugofernandez/Mistral-7B-v0.1-colab-sharded", "base_model:adapter:Hugofernandez/Mistral-7B-v0.1-colab-sharded", "region:us"] | null | 2024-01-21T08:44:55Z |
---
library_name: peft
base_model: Hugofernandez/Mistral-7B-v0.1-colab-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
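While the card leaves this section empty, a minimal sketch is shown below; it assumes this repository is a 🤗 PEFT adapter for the base model listed in the metadata (Hugofernandez/Mistral-7B-v0.1-colab-sharded) and that half-precision loading is acceptable.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Assumption: the repo holds a PEFT adapter; AutoPeftModelForCausalLM pulls in the
# recorded base model (Hugofernandez/Mistral-7B-v0.1-colab-sharded) automatically.
model = AutoPeftModelForCausalLM.from_pretrained(
    "Kamilatr/mistral_model",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Hugofernandez/Mistral-7B-v0.1-colab-sharded")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```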
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
| wave1art/ppo-LunarLander-v2 | wave1art | 2024-01-21T08:55:34Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2024-01-08T15:01:27Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 238.45 +/- 47.22
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption based on the usual `huggingface_sb3` naming):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it with SB3 (filename assumed).
checkpoint = load_from_hub("wave1art/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
| yomilimi/gamblingspam-koRoberta | yomilimi | 2024-01-21T08:55:33Z | 13 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:klue/roberta-base", "base_model:finetune:klue/roberta-base", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-01-21T08:51:32Z |
---
base_model: klue/roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: gamblingspam-koRoberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gamblingspam-koRoberta
This model is a fine-tuned version of [klue/roberta-base](https://huggingface.co/klue/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1861
- Accuracy: 0.935
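Since the card provides no usage example, the following is a hedged sketch using the 🤗 `pipeline` API; the label names depend on the model config and the example input is hypothetical.

```python
from transformers import pipeline

# Hedged sketch: load the fine-tuned classifier directly from the Hub.
classifier = pipeline("text-classification", model="yomilimi/gamblingspam-koRoberta")

# Hypothetical Korean spam-like input; label names come from the model config.
print(classifier("무료 포인트 지급! 지금 바로 접속하세요"))
```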
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 50 | 0.1973 | 0.935 |
| No log | 2.0 | 100 | 0.1889 | 0.935 |
| No log | 3.0 | 150 | 0.1861 | 0.935 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
| danangwijaya/IndoRetNet-Liputan6 | danangwijaya | 2024-01-21T08:45:12Z | 16 | 0 | transformers | ["transformers", "safetensors", "retnet", "text-generation", "generated_from_trainer", "id", "dataset:liputan6", "arxiv:2307.08621", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2024-01-20T17:53:18Z |
---
tags:
- generated_from_trainer
datasets:
- liputan6
model-index:
- name: IndoRetNet-Liputan6
results: []
license: apache-2.0
language:
- id
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IndoRetNet-Liputan6
This model is an Indonesian RetNet model trained on the Liputan6 dataset.
It uses the tokenizer from [IndoBERT](https://huggingface.co/indolem/indobert-base-uncased).
It achieves the following results on the evaluation set:
- Loss: 3.4936
## Model description
Demonstrates training and recurrent inference with a retentive network (https://arxiv.org/pdf/2307.08621.pdf).
The code uses Sehyun Choi's implementation of the retentive network (https://github.com/syncdoth/RetNet).
- **License:** Apache 2.0.
## Intended uses & limitations
Intended to demonstrate training and recurrent O(1) inference with a retentive network for the Indonesian language. A hedged loading sketch is shown below.
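The card does not include loading code; the sketch below is an assumption that the repository ships the custom RetNet modeling code (loadable with `trust_remote_code=True`) and that the IndoBERT tokenizer mentioned above is the right one to pair with it.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed usage: custom RetNet code loaded from the repo via trust_remote_code.
tokenizer = AutoTokenizer.from_pretrained("indolem/indobert-base-uncased")
model = AutoModelForCausalLM.from_pretrained(
    "danangwijaya/IndoRetNet-Liputan6", trust_remote_code=True
)

inputs = tokenizer("Jakarta adalah ibu kota", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```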
## Training and evaluation data
Uses the train and validation splits of the Liputan6 dataset provided by [NusaCrowd](https://github.com/IndoNLP/nusa-crowd).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0006
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.5053 | 0.17 | 1000 | 4.5145 |
| 4.1281 | 0.34 | 2000 | 4.1702 |
| 3.9452 | 0.52 | 3000 | 4.0094 |
| 3.8302 | 0.69 | 4000 | 3.8972 |
| 3.6955 | 0.86 | 5000 | 3.8144 |
| 3.589 | 1.03 | 6000 | 3.7600 |
| 3.5279 | 1.21 | 7000 | 3.7088 |
| 3.4598 | 1.38 | 8000 | 3.6670 |
| 3.4445 | 1.55 | 9000 | 3.6259 |
| 3.4098 | 1.72 | 10000 | 3.5904 |
| 3.3455 | 1.9 | 11000 | 3.5610 |
| 3.2306 | 2.07 | 12000 | 3.5406 |
| 3.261 | 2.24 | 13000 | 3.5216 |
| 3.2204 | 2.41 | 14000 | 3.5111 |
| 3.2321 | 2.59 | 15000 | 3.5001 |
| 3.2514 | 2.76 | 16000 | 3.4941 |
| 3.233 | 2.93 | 17000 | 3.4936 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
| CLMBR/old-existential-there-quantifier-transformer-1 | CLMBR | 2024-01-21T08:42:37Z | 2 | 0 | transformers | ["transformers", "pytorch", "opt", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-16T15:42:08Z |
---
tags:
- generated_from_trainer
model-index:
- name: existential-there-quantifier-transformer-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# existential-there-quantifier-transformer-1
This model is a fine-tuned version of an unspecified base model on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8839
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2616 | 0.03 | 76320 | 4.2240 |
| 4.0508 | 1.03 | 152640 | 4.0508 |
| 3.9401 | 0.03 | 228960 | 3.9758 |
| 3.865 | 1.03 | 305280 | 3.9339 |
| 3.8135 | 0.03 | 381600 | 3.9073 |
| 3.7714 | 1.03 | 457920 | 3.8895 |
| 3.7388 | 0.03 | 534240 | 3.8792 |
| 3.7131 | 1.03 | 610560 | 3.8728 |
| 3.684 | 0.03 | 686880 | 3.8681 |
| 3.6636 | 1.03 | 763200 | 3.8646 |
| 3.6394 | 0.03 | 839520 | 3.8636 |
| 3.617 | 1.03 | 915840 | 3.8631 |
| 3.5998 | 0.03 | 992160 | 3.8632 |
| 3.5818 | 1.03 | 1068480 | 3.8621 |
| 3.5593 | 0.03 | 1144800 | 3.8644 |
| 3.5503 | 1.03 | 1221120 | 3.8656 |
| 3.5342 | 0.03 | 1297440 | 3.8669 |
| 3.5177 | 0.03 | 1373760 | 3.8686 |
| 3.5074 | 1.03 | 1450080 | 3.8695 |
| 3.4918 | 0.03 | 1526400 | 3.8712 |
| 3.4844 | 1.03 | 1602720 | 3.8720 |
| 3.4726 | 0.03 | 1679040 | 3.8740 |
| 3.4666 | 0.03 | 1755360 | 3.8755 |
| 3.4589 | 0.03 | 1831680 | 3.8769 |
| 3.4472 | 1.03 | 1908000 | 3.8772 |
| 3.4392 | 0.03 | 1984320 | 3.8804 |
| 3.4254 | 1.03 | 2060640 | 3.8811 |
| 3.4139 | 0.03 | 2136960 | 3.8819 |
| 3.4048 | 1.03 | 2213280 | 3.8826 |
| 3.39 | 0.03 | 2289600 | 3.8845 |
| 3.3736 | 0.03 | 2365920 | 3.8846 |
| 3.3731 | 0.03 | 2442240 | 3.8849 |
| 3.3589 | 1.03 | 2518560 | 3.8856 |
| 3.3472 | 0.03 | 2594880 | 3.8856 |
| 3.3408 | 1.03 | 2671200 | 3.8866 |
| 3.3287 | 0.03 | 2747520 | 3.8863 |
| 3.3257 | 0.03 | 2823840 | 3.8861 |
| 3.318 | 0.03 | 2900160 | 3.8854 |
| 3.3149 | 1.03 | 2976480 | 3.8846 |
| 3.3129 | 0.02 | 3052726 | 3.8839 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
| CLMBR/old-existential-there-quantifier-transformer-2 | CLMBR | 2024-01-21T08:38:33Z | 1 | 0 | transformers | ["transformers", "pytorch", "opt", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-16T15:42:08Z |
---
tags:
- generated_from_trainer
model-index:
- name: existential-there-quantifier-transformer-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# existential-there-quantifier-transformer-2
This model is a fine-tuned version of an unspecified base model on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8856
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2526 | 0.03 | 76320 | 4.2151 |
| 4.0455 | 1.03 | 152640 | 4.0461 |
| 3.9381 | 0.03 | 228960 | 3.9721 |
| 3.8628 | 1.03 | 305280 | 3.9311 |
| 3.8124 | 0.03 | 381600 | 3.9065 |
| 3.7715 | 1.03 | 457920 | 3.8904 |
| 3.7381 | 0.03 | 534240 | 3.8783 |
| 3.7124 | 1.03 | 610560 | 3.8721 |
| 3.683 | 0.03 | 686880 | 3.8683 |
| 3.6615 | 1.03 | 763200 | 3.8654 |
| 3.6385 | 0.03 | 839520 | 3.8646 |
| 3.616 | 1.03 | 915840 | 3.8642 |
| 3.5965 | 0.03 | 992160 | 3.8636 |
| 3.578 | 1.03 | 1068480 | 3.8644 |
| 3.5575 | 0.03 | 1144800 | 3.8644 |
| 3.5489 | 1.03 | 1221120 | 3.8660 |
| 3.5313 | 0.03 | 1297440 | 3.8666 |
| 3.5161 | 0.03 | 1373760 | 3.8675 |
| 3.5035 | 1.03 | 1450080 | 3.8687 |
| 3.4905 | 0.03 | 1526400 | 3.8709 |
| 3.4821 | 0.03 | 1602720 | 3.8735 |
| 3.472 | 1.03 | 1679040 | 3.8752 |
| 3.4657 | 0.03 | 1755360 | 3.8762 |
| 3.4571 | 0.03 | 1831680 | 3.8779 |
| 3.4455 | 1.03 | 1908000 | 3.8795 |
| 3.4391 | 0.03 | 1984320 | 3.8816 |
| 3.4253 | 1.03 | 2060640 | 3.8817 |
| 3.4134 | 0.03 | 2136960 | 3.8836 |
| 3.4028 | 1.03 | 2213280 | 3.8853 |
| 3.3929 | 0.03 | 2289600 | 3.8861 |
| 3.3786 | 1.03 | 2365920 | 3.8866 |
| 3.3717 | 0.03 | 2442240 | 3.8879 |
| 3.3586 | 1.03 | 2518560 | 3.8886 |
| 3.3484 | 0.03 | 2594880 | 3.8890 |
| 3.34 | 1.03 | 2671200 | 3.8896 |
| 3.3287 | 0.03 | 2747520 | 3.8888 |
| 3.3257 | 1.03 | 2823840 | 3.8884 |
| 3.3182 | 0.03 | 2900160 | 3.8875 |
| 3.315 | 1.03 | 2976480 | 3.8866 |
| 3.3135 | 0.02 | 3052726 | 3.8856 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
| powep/ddpm-celebahq-finetuned-butterflies-2epochs | powep | 2024-01-21T08:34:35Z | 45 | 0 | diffusers | ["diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us"] | unconditional-image-generation | 2024-01-21T08:34:10Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
Describe your model here
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('powep/ddpm-celebahq-finetuned-butterflies-2epochs')
image = pipeline().images[0]
image
```
|
| m4ddki7/ppo-LunarLander-v2 | m4ddki7 | 2024-01-21T08:30:49Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2024-01-21T08:30:27Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 260.58 +/- 22.64
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (filename assumed from the usual `huggingface_sb3` convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint and load it with SB3 (filename assumed).
checkpoint = load_from_hub("m4ddki7/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
| PeterWright/MNIST_LeNet | PeterWright | 2024-01-21T08:28:33Z | 0 | 2 | null | ["license:mit", "region:us"] | null | 2024-01-21T08:00:27Z |
---
license: mit
---
## Introduction
MNIST_LeNet is a CNN model for handwritten digit recognition.
The model is trained on the classic MNIST dataset, which ships with PyTorch (torchvision) by default.
It achieves 99.5% accuracy on handwritten digit recognition.
## Hands on
```python
import torch
from PIL import Image
from torchvision import transforms

LeNet = torch.load('path/to/model/mnist_lenet.pt')
LeNet.eval()

# Preprocess your data to match MNIST inputs: grayscale, 28x28, tensor,
# standard MNIST normalization (assumed here).
transform = transforms.Compose([
    transforms.Grayscale(),
    transforms.Resize((28, 28)),
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,)),
])

# Load data and add a batch dimension
input_data = transform(Image.open('path/to/your/data')).unsqueeze(0)

# Predict with the model
with torch.no_grad():
    output = LeNet(input_data)

# Interpret results as class probabilities
prob = torch.nn.functional.softmax(output[0], dim=0)
```
## Reference
- [LeNet paper: Gradient-Based Learning Applied to Document Recognition (1998)](http://vision.stanford.edu/cs598_spring07/papers/Lecun98.pdf)
|
| leveldevai/TurdusBeagle-7B | leveldevai | 2024-01-21T08:17:46Z | 2,359 | 1 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "udkai/Turdus", "mlabonne/NeuralBeagle14-7B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-18T16:34:00Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- udkai/Turdus
- mlabonne/NeuralBeagle14-7B
---
# TurdusBeagle-7B
TurdusBeagle-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [udkai/Turdus](https://huggingface.co/udkai/Turdus)
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: udkai/Turdus
layer_range: [0, 32]
- model: mlabonne/NeuralBeagle14-7B
layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/NeuralBeagle14-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.45 # fallback for rest of tensors
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "leveldevai/TurdusBeagle-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
| leveldevai/MarcBeagle-7B | leveldevai | 2024-01-21T08:15:37Z | 1,361 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "flemmingmiguel/MarcMistral-7B", "leveldevai/TurdusBeagle-7B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-19T04:54:33Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- flemmingmiguel/MarcMistral-7B
- leveldevai/TurdusBeagle-7B
---
# MarcBeagle-7B
MarcBeagle-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [flemmingmiguel/MarcMistral-7B](https://huggingface.co/flemmingmiguel/MarcMistral-7B)
* [leveldevai/TurdusBeagle-7B](https://huggingface.co/leveldevai/TurdusBeagle-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: flemmingmiguel/MarcMistral-7B
layer_range: [0, 32]
- model: leveldevai/TurdusBeagle-7B
layer_range: [0, 32]
merge_method: slerp
base_model: leveldevai/TurdusBeagle-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.45 # fallback for rest of tensors
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "leveldevai/MarcBeagle-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
| roktimsardar123/Realistic_Vision_V6.0_B1_noVAE | roktimsardar123 | 2024-01-21T08:12:05Z | 6 | 1 | diffusers | ["diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2024-01-19T16:45:55Z |
---
license: creativeml-openrail-m
---
<b>This model is available on <a href="https://www.mage.space/">Mage.Space</a> (main sponsor)</b><br>
<b>Please read this!</b><br>
This is not yet the full version of the model (read the <b>"Model Description"</b> section).<br>
For version 6.0 it is recommended to use with VAE (to improve generation quality and get rid of artifacts): https://huggingface.co/stabilityai/sd-vae-ft-mse-original<br>
<b>Model Description</b><br>
Realistic Vision V6.0 "New Vision" is a global update for the Realistic Vision model, which will be released gradually in several beta versions until the full release. The model is aimed at realism and photorealism.<br>
CivitAI Page: https://civitai.com/models/4201/realistic-vision-v60-b1?modelVersionId=245598
<b>Resolutions (use lower resolution if you get a lot of mutations and stuff like that)</b><br>
- Face Portrait: 896x896<br>
- Portrait: 896x896, 768x1024<br>
- Half Body: 768x1024, 640x1152<br>
- Full Body: 896x896, 768x1024, 640x1152, 1024x768, 1152x640<br>
<b>Improvements</b>
- increased generation resolution to 896x896, 768x1024, 640x1152, 1024x768 and 1152x640 (note: in some cases there may still be mutations, duplications, etc.; these will be fixed in future versions).<br>
- improved SFW and NSFW generation for female anatomy (note: not all poses work correctly at such large resolutions; these will be fixed in future versions).<br>
<b>Recommended Workflow</b><br>
Images can be generated with or without Hires.Fix, but it will help improve the generation quality significantly. In some cases it is strictly recommended to use Hires.Fix, namely when generating full body and half body images (note: you can also use Restore Faces or ADetailer).<br>
<b>Recommended Generation Parameters</b><br>
Sampler: DPM++ SDE Karras (25+ steps) / DPM++ 2M SDE (50+ steps)<br>
Negative Prompt: (deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime), text, cropped, out of frame, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck<br>
<b>Recommended Hires.Fix Parameters</b><br>
Sampler: DPM++ SDE Karras or DPM++ 2M SDE<br>
Denoising steps: 10+ (DPM++ SDE Karras) / 20+ (DPM++ 2M SDE). Note: the lower the Hires steps value for a given sampler, the stronger the skin texture and the higher the chance of artifacts.<br>
Denoising strength: 0.1-0.3<br>
Upscaler: 4x-UltraSharp / 4x_NMKD-Superscale-SP_178000_G or another<br>
Upscale by: 1.1-2.0+<br>
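<b>Diffusers sketch (unofficial)</b><br>
The snippet below is a hedged sketch that wires up the recommendations above in 🧨 diffusers: the external MSE VAE, a DPM++ 2M SDE Karras-style scheduler, a portrait resolution, and the suggested negative prompt. The VAE repo id, the scheduler mapping, and the prompt are assumptions, not part of the original card.

```python
import torch
from diffusers import StableDiffusionPipeline, AutoencoderKL, DPMSolverMultistepScheduler

# Assumption: stabilityai/sd-vae-ft-mse is the diffusers-format build of the VAE linked above.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
    "roktimsardar123/Realistic_Vision_V6.0_B1_noVAE", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Assumption: DPM++ 2M SDE (Karras) maps to this scheduler configuration in diffusers.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, algorithm_type="sde-dpmsolver++", use_karras_sigmas=True
)

image = pipe(
    "RAW photo, portrait of a woman, natural lighting",  # hypothetical prompt
    negative_prompt="(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime), text, cropped, out of frame, worst quality, low quality",
    width=896,
    height=896,
    num_inference_steps=50,
).images[0]
image.save("result.png")
```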
|
| Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp | Weyaxi | 2024-01-21T08:03:27Z | 58 | 2 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "merge", "base_model:Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp", "base_model:merge:Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp", "base_model:openchat/openchat-3.5-1210", "base_model:merge:openchat/openchat-3.5-1210", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-12-24T14:30:22Z |
---
license: apache-2.0
tags:
- merge
base_model:
- Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
- openchat/openchat-3.5-1210
---
# OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp
This is the model for OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp. I used [mergekit](https://github.com/cg123/mergekit) to merge models.
# Yaml Config
```yaml
slices:
- sources:
- model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
layer_range: [0, 32]
- model: openchat/openchat-3.5-1210
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-v0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
tokenizer_source: union
dtype: bfloat16
```
|
| CLMBR/binding-c-command-transformer-2 | CLMBR | 2024-01-21T07:57:46Z | 7 | 0 | transformers | ["transformers", "pytorch", "opt", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-16T15:42:02Z |
---
tags:
- generated_from_trainer
model-index:
- name: binding-c-command-transformer-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# binding-c-command-transformer-2
This model is a fine-tuned version of an unspecified base model on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8612
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2236 | 0.03 | 76320 | 4.1937 |
| 4.0183 | 1.03 | 152640 | 4.0245 |
| 3.9082 | 0.03 | 228960 | 3.9500 |
| 3.8443 | 1.03 | 305280 | 3.9103 |
| 3.7908 | 0.03 | 381600 | 3.8840 |
| 3.7508 | 1.03 | 457920 | 3.8685 |
| 3.7217 | 0.03 | 534240 | 3.8586 |
| 3.6907 | 1.03 | 610560 | 3.8519 |
| 3.6607 | 0.03 | 686880 | 3.8475 |
| 3.6343 | 1.03 | 763200 | 3.8452 |
| 3.611 | 0.03 | 839520 | 3.8432 |
| 3.5893 | 1.03 | 915840 | 3.8427 |
| 3.5719 | 0.03 | 992160 | 3.8424 |
| 3.5509 | 0.03 | 1068480 | 3.8427 |
| 3.5351 | 1.03 | 1144800 | 3.8427 |
| 3.5269 | 0.03 | 1221120 | 3.8441 |
| 3.5066 | 1.03 | 1297440 | 3.8453 |
| 3.4967 | 0.03 | 1373760 | 3.8458 |
| 3.4799 | 1.03 | 1450080 | 3.8476 |
| 3.4756 | 0.03 | 1526400 | 3.8500 |
| 3.4629 | 1.03 | 1602720 | 3.8510 |
| 3.4559 | 0.03 | 1679040 | 3.8536 |
| 3.4494 | 1.03 | 1755360 | 3.8534 |
| 3.4387 | 0.03 | 1831680 | 3.8558 |
| 3.4247 | 0.03 | 1908000 | 3.8572 |
| 3.4118 | 1.03 | 1984320 | 3.8582 |
| 3.3992 | 0.03 | 2060640 | 3.8598 |
| 3.3894 | 1.03 | 2136960 | 3.8608 |
| 3.3803 | 0.03 | 2213280 | 3.8610 |
| 3.3632 | 1.03 | 2289600 | 3.8625 |
| 3.3574 | 0.03 | 2365920 | 3.8635 |
| 3.3492 | 0.03 | 2442240 | 3.8634 |
| 3.3364 | 0.03 | 2518560 | 3.8649 |
| 3.3296 | 1.03 | 2594880 | 3.8642 |
| 3.317 | 0.03 | 2671200 | 3.8647 |
| 3.3139 | 1.03 | 2747520 | 3.8653 |
| 3.308 | 0.03 | 2823840 | 3.8642 |
| 3.3023 | 1.03 | 2900160 | 3.8635 |
| 3.3002 | 0.03 | 2976480 | 3.8623 |
| 3.2921 | 1.02 | 3052726 | 3.8612 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
| pingyi2024/llama2-qlora-finetunined-french | pingyi2024 | 2024-01-21T07:55:26Z | 0 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:TinyPixel/Llama-2-7B-bf16-sharded", "base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded", "region:us"] | null | 2024-01-21T07:55:22Z |
---
library_name: peft
base_model: TinyPixel/Llama-2-7B-bf16-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
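In the absence of an official snippet, here is a minimal sketch under the assumption that this repo is a PEFT adapter for the base model listed in the metadata (TinyPixel/Llama-2-7B-bf16-sharded); the prompt and dtype choices are illustrative only.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: load the base model first, then attach the adapter from this repo.
base = AutoModelForCausalLM.from_pretrained(
    "TinyPixel/Llama-2-7B-bf16-sharded", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "pingyi2024/llama2-qlora-finetunined-french")
tokenizer = AutoTokenizer.from_pretrained("TinyPixel/Llama-2-7B-bf16-sharded")

inputs = tokenizer("Bonjour, comment", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```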
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
| sheyuv/paris-lora | sheyuv | 2024-01-21T07:54:11Z | 1 | 0 | diffusers | ["diffusers", "text-to-image", "autotrain", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "region:us"] | text-to-image | 2024-01-21T07:54:09Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of paris
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
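A hedged inference sketch follows; it assumes the repo stores diffusers-format LoRA weights on top of the SDXL base listed in the metadata and reuses the instance prompt "photo of paris" (the prompt suffix is hypothetical).

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Assumption: the repo uses the default diffusers LoRA weight layout.
pipe.load_lora_weights("sheyuv/paris-lora")

image = pipe("photo of paris, golden hour").images[0]  # instance prompt plus a hypothetical suffix
image.save("paris.png")
```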
|
| Spanicin/Fulcrum_Aura5 | Spanicin | 2024-01-21T07:54:03Z | 7 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "mistralai/Mistral-7B-v0.1", "samir-fama/SamirGPT-v1", "abacusai/Slerp-CM-mist-dpo", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-21T07:49:25Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- mistralai/Mistral-7B-v0.1
- samir-fama/SamirGPT-v1
- abacusai/Slerp-CM-mist-dpo
---
# Fulcrum_Aura5
Fulcrum_Aura5 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* [samir-fama/SamirGPT-v1](https://huggingface.co/samir-fama/SamirGPT-v1)
* [abacusai/Slerp-CM-mist-dpo](https://huggingface.co/abacusai/Slerp-CM-mist-dpo)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-v0.1
layer_range: [0, 32]
- model: samir-fama/SamirGPT-v1
layer_range: [0, 32]
parameters:
density: 0.53
weight: 0.4
- model: abacusai/Slerp-CM-mist-dpo
layer_range: [0, 32]
parameters:
density: 0.53
weight: 0.3
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Spanicin/Fulcrum_Aura5"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
| nervps/Glanc | nervps | 2024-01-21T07:40:20Z | 0 | 0 | null | ["music", "audio-to-audio", "ru", "license:unknown", "region:us"] | audio-to-audio | 2024-01-20T13:35:41Z |
---
license: unknown
language:
- ru
pipeline_tag: audio-to-audio
tags:
- music
---
|
| ntc-ai/SDXL-LoRA-slider.watercolor | ntc-ai | 2024-01-21T07:23:34Z | 16 | 0 | diffusers | ["diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us"] | text-to-image | 2024-01-21T07:23:31Z |
---
language:
- en
thumbnail: "images/evaluate/watercolor.../watercolor_17_3.0.png"
widget:
- text: watercolor
output:
url: images/watercolor_17_3.0.png
- text: watercolor
output:
url: images/watercolor_19_3.0.png
- text: watercolor
output:
url: images/watercolor_20_3.0.png
- text: watercolor
output:
url: images/watercolor_21_3.0.png
- text: watercolor
output:
url: images/watercolor_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "watercolor"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - watercolor (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/watercolor_17_-3.0.png" width=256 height=256 /> | <img src="images/watercolor_17_0.0.png" width=256 height=256 /> | <img src="images/watercolor_17_3.0.png" width=256 height=256 /> |
| <img src="images/watercolor_19_-3.0.png" width=256 height=256 /> | <img src="images/watercolor_19_0.0.png" width=256 height=256 /> | <img src="images/watercolor_19_3.0.png" width=256 height=256 /> |
| <img src="images/watercolor_20_-3.0.png" width=256 height=256 /> | <img src="images/watercolor_20_0.0.png" width=256 height=256 /> | <img src="images/watercolor_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
watercolor
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.watercolor', weight_name='watercolor.safetensors', adapter_name="watercolor")
# Activate the LoRA
pipe.set_adapters(["watercolor"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, watercolor"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 1140+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
| AshtonLKY/Whisper_ATC_v1 | AshtonLKY | 2024-01-21T07:14:04Z | 3 | 0 | transformers | ["transformers", "safetensors", "whisper", "automatic-speech-recognition", "asr-fyp", "generated_from_trainer", "en", "dataset:AshtonLKY/Whisper_ASR_ATC", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2024-01-14T04:50:51Z |
---
language:
- en
license: apache-2.0
base_model: openai/whisper-small
tags:
- asr-fyp
- generated_from_trainer
datasets:
- AshtonLKY/Whisper_ASR_ATC
model-index:
- name: Whisper_ASR_ATC
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper_ASR_ATC
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the AshtonLKY/augmented_audio dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1061
- eval_wer: 10.8325
- eval_runtime: 7410.9891
- eval_samples_per_second: 1.813
- eval_steps_per_second: 0.227
- epoch: 0.89
- step: 3000
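No usage snippet is included in the card; the following is a minimal sketch using the 🤗 automatic-speech-recognition pipeline. The audio file path is a placeholder and 16 kHz mono input is assumed.

```python
from transformers import pipeline

# Hedged sketch: transcribe an ATC recording with the fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition", model="AshtonLKY/Whisper_ATC_v1")
print(asr("path/to/atc_clip.wav", chunk_length_s=30))  # file path is a placeholder
```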
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 6000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
| Yhyu13/dolphin-2.6-mistral-7b-dpo-laser-function-calling-lora | Yhyu13 | 2024-01-21T07:12:54Z | 5 | 4 | peft | ["peft", "safetensors", "llama-factory", "lora", "generated_from_trainer", "base_model:cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser", "base_model:adapter:cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser", "license:apache-2.0", "region:us"] | null | 2024-01-21T07:10:54Z |
---
license: apache-2.0
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
model-index:
- name: dolphin-2.6-mistral-7b-dpo-laser-sft-glaive-function-calling-v2-ep1-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dolphin-2.6-mistral-7b-dpo-laser-sft-glaive-function-calling-v2-ep1-lora
This model is a fine-tuned version of [cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser) on the https://huggingface.co/datasets/Yhyu13/glaive-function-calling-v2-llama-factory-convert/blob/main/simple-function-calling-v2_converted_5000_with_function_call_only.json dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0605
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2548 | 0.09 | 100 | 0.1148 |
| 0.1149 | 0.18 | 200 | 0.0914 |
| 0.0871 | 0.27 | 300 | 0.0831 |
| 0.0865 | 0.35 | 400 | 0.0760 |
| 0.0802 | 0.44 | 500 | 0.0718 |
| 0.0689 | 0.53 | 600 | 0.0702 |
| 0.0649 | 0.62 | 700 | 0.0649 |
| 0.0637 | 0.71 | 800 | 0.0632 |
| 0.0698 | 0.8 | 900 | 0.0619 |
| 0.0648 | 0.88 | 1000 | 0.0608 |
| 0.0654 | 0.97 | 1100 | 0.0605 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
| WYNN747/Burmese-GPT-fresh1-2k | WYNN747 | 2024-01-21T06:56:49Z | 6 | 0 | transformers | ["transformers", "pytorch", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-21T06:45:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
| jow2/llama2-qlora-finetunined-french | jow2 | 2024-01-21T06:36:06Z | 3 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:TinyPixel/Llama-2-7B-bf16-sharded", "base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded", "region:us"] | null | 2024-01-21T06:35:59Z |
---
library_name: peft
base_model: TinyPixel/Llama-2-7B-bf16-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
HatimF/my_awesome_billsum_model
|
HatimF
| 2024-01-21T06:11:04Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-21T06:09:15Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: my_awesome_billsum_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
CLMBR/superlative-quantifier-lstm-0
|
CLMBR
| 2024-01-21T06:07:05Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rnn",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-01-16T15:28:30Z |
---
tags:
- generated_from_trainer
model-index:
- name: superlative-quantifier-lstm-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# superlative-quantifier-lstm-0
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9895
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.7768 | 0.03 | 76320 | 4.7625 |
| 4.4994 | 1.03 | 152640 | 4.4851 |
| 4.3586 | 0.03 | 228960 | 4.3523 |
| 4.2722 | 1.03 | 305280 | 4.2704 |
| 4.2128 | 2.03 | 381600 | 4.2144 |
| 4.1637 | 0.03 | 457920 | 4.1737 |
| 4.1288 | 1.03 | 534240 | 4.1432 |
| 4.0985 | 0.03 | 610560 | 4.1191 |
| 4.0699 | 1.03 | 686880 | 4.0997 |
| 4.0461 | 2.03 | 763200 | 4.0839 |
| 4.0238 | 0.03 | 839520 | 4.0716 |
| 4.0037 | 1.03 | 915840 | 4.0607 |
| 3.9934 | 0.03 | 992160 | 4.0516 |
| 3.9755 | 1.03 | 1068480 | 4.0440 |
| 3.9605 | 0.03 | 1144800 | 4.0370 |
| 3.9465 | 0.03 | 1221120 | 4.0317 |
| 3.9323 | 1.03 | 1297440 | 4.0269 |
| 3.9218 | 0.03 | 1373760 | 4.0221 |
| 3.9132 | 1.03 | 1450080 | 4.0180 |
| 3.9112 | 0.03 | 1526400 | 4.0147 |
| 3.9084 | 1.03 | 1602720 | 4.0120 |
| 3.9015 | 0.03 | 1679040 | 4.0093 |
| 3.8991 | 1.03 | 1755360 | 4.0071 |
| 3.8977 | 0.03 | 1831680 | 4.0053 |
| 3.8892 | 0.03 | 1908000 | 4.0033 |
| 3.8818 | 0.03 | 1984320 | 4.0013 |
| 3.8748 | 0.03 | 2060640 | 3.9992 |
| 3.8679 | 1.03 | 2136960 | 3.9976 |
| 3.8693 | 2.03 | 2213280 | 3.9966 |
| 3.8634 | 0.03 | 2289600 | 3.9957 |
| 3.8557 | 1.03 | 2365920 | 3.9948 |
| 3.85 | 0.03 | 2442240 | 3.9942 |
| 3.8426 | 1.03 | 2518560 | 3.9934 |
| 3.8418 | 0.03 | 2594880 | 3.9927 |
| 3.835 | 1.03 | 2671200 | 3.9919 |
| 3.8405 | 2.03 | 2747520 | 3.9913 |
| 3.8384 | 0.03 | 2823840 | 3.9907 |
| 3.8387 | 1.03 | 2900160 | 3.9903 |
| 3.8427 | 0.03 | 2976480 | 3.9898 |
| 3.8428 | 1.02 | 3052726 | 3.9895 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
PetroGPT/Severus-7B-DPO
|
PetroGPT
| 2024-01-21T06:02:55Z | 1,403 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-21T05:57:43Z |
---
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dscoder25/finetuning-sentiment-model
|
dscoder25
| 2024-01-21T06:00:18Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-21T05:20:14Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
CLMBR/old-existential-there-quantifier-lstm-2
|
CLMBR
| 2024-01-21T05:59:36Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rnn",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-01-16T15:44:02Z |
---
tags:
- generated_from_trainer
model-index:
- name: existential-there-quantifier-lstm-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# existential-there-quantifier-lstm-2
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9956
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.8395 | 0.03 | 76320 | 4.7977 |
| 4.5474 | 1.03 | 152640 | 4.5109 |
| 4.4015 | 0.03 | 228960 | 4.3723 |
| 4.3064 | 1.03 | 305280 | 4.2870 |
| 4.245 | 0.03 | 381600 | 4.2297 |
| 4.1964 | 1.03 | 457920 | 4.1873 |
| 4.1603 | 0.03 | 534240 | 4.1561 |
| 4.1312 | 1.03 | 610560 | 4.1313 |
| 4.102 | 0.03 | 686880 | 4.1114 |
| 4.0817 | 1.03 | 763200 | 4.0951 |
| 4.0587 | 0.03 | 839520 | 4.0820 |
| 4.0367 | 1.03 | 915840 | 4.0709 |
| 4.0242 | 0.03 | 992160 | 4.0620 |
| 4.008 | 1.03 | 1068480 | 4.0537 |
| 3.9878 | 0.03 | 1144800 | 4.0458 |
| 3.9807 | 1.03 | 1221120 | 4.0401 |
| 3.9693 | 0.03 | 1297440 | 4.0351 |
| 3.9571 | 1.03 | 1373760 | 4.0307 |
| 3.9487 | 0.03 | 1450080 | 4.0271 |
| 3.9399 | 1.03 | 1526400 | 4.0231 |
| 3.937 | 0.03 | 1602720 | 4.0198 |
| 3.931 | 1.03 | 1679040 | 4.0166 |
| 3.9284 | 0.03 | 1755360 | 4.0143 |
| 3.9273 | 0.03 | 1831680 | 4.0123 |
| 3.9169 | 1.03 | 1908000 | 4.0098 |
| 3.9161 | 0.03 | 1984320 | 4.0084 |
| 3.9107 | 1.03 | 2060640 | 4.0066 |
| 3.9021 | 0.03 | 2136960 | 4.0053 |
| 3.8983 | 1.03 | 2213280 | 4.0038 |
| 3.8927 | 0.03 | 2289600 | 4.0029 |
| 3.8835 | 1.03 | 2365920 | 4.0017 |
| 3.8847 | 0.03 | 2442240 | 4.0006 |
| 3.8779 | 0.03 | 2518560 | 3.9999 |
| 3.8749 | 0.03 | 2594880 | 3.9989 |
| 3.8715 | 1.03 | 2671200 | 3.9982 |
| 3.8664 | 0.03 | 2747520 | 3.9975 |
| 3.8704 | 1.03 | 2823840 | 3.9968 |
| 3.8676 | 0.03 | 2900160 | 3.9962 |
| 3.872 | 1.03 | 2976480 | 3.9960 |
| 3.8715 | 0.02 | 3052726 | 3.9956 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
danielhanchen/gguf_4bit
|
danielhanchen
| 2024-01-21T05:57:37Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"en",
"base_model:unsloth/mistral-7b",
"base_model:quantized:unsloth/mistral-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-20T17:58:21Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_model: unsloth/mistral-7b
---
# Uploaded model
- **Developed by:** danielhanchen
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
CLMBR/old-existential-there-quantifier-lstm-4
|
CLMBR
| 2024-01-21T05:48:57Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rnn",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-01-16T15:44:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: existential-there-quantifier-lstm-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# existential-there-quantifier-lstm-4
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9940
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.8393 | 0.03 | 76320 | 4.7994 |
| 4.545 | 1.03 | 152640 | 4.5108 |
| 4.4018 | 0.03 | 228960 | 4.3731 |
| 4.307 | 1.03 | 305280 | 4.2876 |
| 4.2466 | 0.03 | 381600 | 4.2296 |
| 4.1988 | 1.03 | 457920 | 4.1882 |
| 4.1617 | 0.03 | 534240 | 4.1563 |
| 4.1348 | 1.03 | 610560 | 4.1320 |
| 4.1044 | 0.03 | 686880 | 4.1122 |
| 4.0846 | 1.03 | 763200 | 4.0961 |
| 4.0615 | 0.03 | 839520 | 4.0829 |
| 4.0398 | 1.03 | 915840 | 4.0709 |
| 4.0253 | 0.03 | 992160 | 4.0623 |
| 4.009 | 1.03 | 1068480 | 4.0539 |
| 3.9899 | 0.03 | 1144800 | 4.0468 |
| 3.9844 | 0.03 | 1221120 | 4.0409 |
| 3.9692 | 1.03 | 1297440 | 4.0347 |
| 3.9603 | 0.03 | 1373760 | 4.0302 |
| 3.9517 | 1.03 | 1450080 | 4.0258 |
| 3.9412 | 0.03 | 1526400 | 4.0220 |
| 3.9384 | 0.03 | 1602720 | 4.0190 |
| 3.933 | 1.03 | 1679040 | 4.0162 |
| 3.9305 | 0.03 | 1755360 | 4.0140 |
| 3.9288 | 0.03 | 1831680 | 4.0118 |
| 3.921 | 1.03 | 1908000 | 4.0093 |
| 3.9189 | 0.03 | 1984320 | 4.0075 |
| 3.9116 | 1.03 | 2060640 | 4.0056 |
| 3.9036 | 0.03 | 2136960 | 4.0040 |
| 3.9001 | 1.03 | 2213280 | 4.0028 |
| 3.8925 | 0.03 | 2289600 | 4.0014 |
| 3.8851 | 1.03 | 2365920 | 4.0003 |
| 3.8851 | 0.03 | 2442240 | 3.9992 |
| 3.8813 | 0.03 | 2518560 | 3.9984 |
| 3.8756 | 0.03 | 2594880 | 3.9975 |
| 3.8726 | 1.03 | 2671200 | 3.9969 |
| 3.8673 | 0.03 | 2747520 | 3.9959 |
| 3.871 | 1.03 | 2823840 | 3.9953 |
| 3.8704 | 0.03 | 2900160 | 3.9947 |
| 3.8717 | 1.03 | 2976480 | 3.9943 |
| 3.8734 | 0.02 | 3052726 | 3.9940 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ttellner/distilbert-base-uncased-finetuned-emotion
|
ttellner
| 2024-01-21T05:31:50Z | 11 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-21T04:00:19Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.92
- name: F1
type: f1
value: 0.9195426549881595
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2160
- Accuracy: 0.92
- F1: 0.9195
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8294 | 1.0 | 250 | 0.3168 | 0.905 | 0.9035 |
| 0.251 | 2.0 | 500 | 0.2160 | 0.92 | 0.9195 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cpu
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ahebbar69/not-entertainment-1
|
ahebbar69
| 2024-01-21T05:30:22Z | 0 | 0 | null |
[
"safetensors",
"autotrain",
"text-generation",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-21T05:30:02Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
genaicore3434/Mistral-7b-instruct-v0.2-summ-sft-lp-e1
|
genaicore3434
| 2024-01-21T05:10:03Z | 0 | 0 | null |
[
"safetensors",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-01-21T05:06:18Z |
---
license: cc-by-nc-4.0
---
A description of how to load and test the model will be added soon. More details on training and data will be added as well.
### **Loading the Model**
Use the following Python code to load the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
TBD
```
### **Generating Text**
To generate text, use the following Python code:
```python
text = "Hi, my name is "
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
genaicore3434/MistralLite-summ-sft-e1
|
genaicore3434
| 2024-01-21T04:59:51Z | 0 | 0 | null |
[
"safetensors",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-01-21T04:57:36Z |
---
license: cc-by-nc-4.0
---
A description of how to load and test the model will be added soon. More details on training and data will be added as well.
### **Loading the Model**
Use the following Python code to load the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
TBD
```
### **Generating Text**
To generate text, use the following Python code:
```python
text = "Hi, my name is "
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
bartowski/kellemar-DPO-Orca-Distilled-7B-SLERP-exl2
|
bartowski
| 2024-01-21T04:58:25Z | 4 | 0 | null |
[
"text-generation",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"base_model:mlabonne/Marcoro14-7B-slerp",
"base_model:finetune:mlabonne/Marcoro14-7B-slerp",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-01-21T04:42:21Z |
---
base_model: mlabonne/Marcoro14-7B-slerp
license: apache-2.0
datasets:
- argilla/distilabel-intel-orca-dpo-pairs
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of kellemar-DPO-Orca-Distilled-7B-SLERP
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.
# The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Original model: https://huggingface.co/decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP
Model Size: 7B
| Branch | Bits | lm_head bits | Size | Description |
| ----- | ---- | ------- | ------ | ------------ |
| [8_0](https://huggingface.co/Bartowski/kellemar-DPO-Orca-Distilled-7B-SLERP-exl2/tree/8_0) | 8.0 | 8.0 | 9.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/Bartowski/kellemar-DPO-Orca-Distilled-7B-SLERP-exl2/tree/6_5) | 6.5 | 8.0 | 8.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/Bartowski/kellemar-DPO-Orca-Distilled-7B-SLERP-exl2/tree/5_0) | 5.0 | 6.0 | 7.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/Bartowski/kellemar-DPO-Orca-Distilled-7B-SLERP-exl2/tree/4_25) | 4.25 | 6.0 | 6.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/Bartowski/kellemar-DPO-Orca-Distilled-7B-SLERP-exl2/tree/3_5) | 3.5 | 6.0 | 6.1 GB | Lower quality, only use if you have to. |
All VRAM requirements estimated from 16k context. For 32k context add ~2 GB.
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/kellemar-DPO-Orca-Distilled-7B-SLERP-exl2 kellemar-DPO-Orca-Distilled-7B-SLERP-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `kellemar-DPO-Orca-Distilled-7B-SLERP-exl2`:
```shell
mkdir kellemar-DPO-Orca-Distilled-7B-SLERP-exl2
huggingface-cli download bartowski/kellemar-DPO-Orca-Distilled-7B-SLERP-exl2 --local-dir kellemar-DPO-Orca-Distilled-7B-SLERP-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir kellemar-DPO-Orca-Distilled-7B-SLERP-exl2-6_5
huggingface-cli download bartowski/kellemar-DPO-Orca-Distilled-7B-SLERP-exl2 --revision 6_5 --local-dir kellemar-DPO-Orca-Distilled-7B-SLERP-exl2-6_5 --local-dir-use-symlinks False
```
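If you prefer the Python API, a roughly equivalent sketch using `huggingface_hub` (same branch and target folder as the CLI example above) is:

```python
# Sketch: download a single quantization branch with the huggingface_hub Python API.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bartowski/kellemar-DPO-Orca-Distilled-7B-SLERP-exl2",
    revision="6_5",  # the bits-per-weight branch to fetch
    local_dir="kellemar-DPO-Orca-Distilled-7B-SLERP-exl2-6_5",
)
```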
|
haesun/distilbert-base-uncased-finetuned-emotion
|
haesun
| 2024-01-21T04:51:21Z | 16 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-31T14:41:11Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9275
- name: F1
type: f1
value: 0.9277315829088285
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2192
- Accuracy: 0.9275
- F1: 0.9277
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8729 | 1.0 | 250 | 0.3260 | 0.91 | 0.9098 |
| 0.257 | 2.0 | 500 | 0.2192 | 0.9275 | 0.9277 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
brittlewis12/phi-2-orange-GGUF
|
brittlewis12
| 2024-01-21T04:44:50Z | 7 | 5 | null |
[
"gguf",
"phi-msft",
"text-generation",
"en",
"dataset:Open-Orca/SlimOrca-Dedup",
"dataset:migtissera/Synthia-v1.3",
"dataset:LDJnr/Verified-Camel",
"dataset:LDJnr/Pure-Dove",
"dataset:LDJnr/Capybara",
"dataset:meta-math/MetaMathQA",
"dataset:Intel/orca_dpo_pairs",
"dataset:argilla/ultrafeedback-binarized-preferences-cleaned",
"base_model:rhysjones/phi-2-orange",
"base_model:quantized:rhysjones/phi-2-orange",
"license:mit",
"region:us"
] |
text-generation
| 2024-01-15T16:55:01Z |
---
base_model: rhysjones/phi-2-orange
inference: false
language:
- en
license: mit
model_creator: rhysjones
model_name: Phi-2 Orange
model_type: phi-msft
datasets:
- Open-Orca/SlimOrca-Dedup
- migtissera/Synthia-v1.3
- LDJnr/Verified-Camel
- LDJnr/Pure-Dove
- LDJnr/Capybara
- meta-math/MetaMathQA
- Intel/orca_dpo_pairs
- argilla/ultrafeedback-binarized-preferences-cleaned
pipeline_tag: text-generation
tags:
- phi-msft
prompt_template: "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
"
quantized_by: brittlewis12
---
# Phi-2 Orange GGUF

Original model: [Phi-2 Orange](https://huggingface.co/rhysjones/phi-2-orange)
Model creator: [Rhys Jones](https://huggingface.co/rhysjones)
This repo contains GGUF format model files for Rhys Jones' Phi-2 Orange.
### What is GGUF?
GGUF is a file format for representing AI models. It is the third version of the format, introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Converted using llama.cpp revision [de473f5](https://github.com/ggerganov/llama.cpp/commit/de473f5f8e19ba5e659cdf5af65fb9251dce16c5), the last compatible version before Microsoft's [incompatible modeling changes](https://huggingface.co/microsoft/phi-2/commit/cb2f4533604d8b67de604e7df03bfe6f3ca22869) were [introduced to llama.cpp](https://github.com/ggerganov/llama.cpp/commit/15ebe59210e7fd9817ff67f51fa1a5ee2d004294).
### Prompt template: ChatML
```
<|im_start|>system
{{system_message}}<|im_end|>
<|im_start|>user
{{prompt}}<|im_end|>
<|im_start|>assistant
```
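If you are assembling prompts yourself, a small illustrative Python helper (plain string formatting, not an official snippet; the example messages are hypothetical) looks like:

```python
# Illustrative helper: wrap a system message and user prompt in ChatML,
# mirroring the template above. The example strings are hypothetical.
def chatml_prompt(system_message: str, prompt: str) -> str:
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt("You are a helpful assistant.", "Explain GGUF in one sentence."))
```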
---
## Download & run with [cnvrs](https://twitter.com/cnvrsai) on iPhone, iPad, and Mac!

[cnvrs](https://testflight.apple.com/join/sFWReS7K) is the best app for private, local AI on your device:
- create & save **Characters** with custom system prompts & temperature settings
- download and experiment with any **GGUF model** you can [find on HuggingFace](https://huggingface.co/models?library=gguf)!
- make it your own with custom **Theme colors**
- powered by Metal ⚡️ & [Llama.cpp](https://github.com/ggerganov/llama.cpp), with **haptics** during response streaming!
- **try it out** yourself today, on [Testflight](https://testflight.apple.com/join/sFWReS7K)!
- follow [cnvrs on twitter](https://twitter.com/cnvrsai) to stay up to date
---
## Original Model Evaluations:
Evaluations done using mlabonne's [Colab notebook llm-autoeval](https://github.com/mlabonne/llm-autoeval).
Also check out the alternative leaderboard, YALL: [Yet_Another_LLM_Leaderboard](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard)
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|----------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[phi-2-orange](https://huggingface.co/rhysjones/phi-2-orange)| **33.37**| 71.33| 49.87| **37.3**| **47.97**|
|[phi-2-dpo](https://huggingface.co/lxuechen/phi-2-dpo)| 30.39| **71.68**| **50.75**| 34.9| 46.93|
|[dolphin-2_6-phi-2](https://huggingface.co/cognitivecomputations/dolphin-2_6-phi-2)| 33.12| 69.85| 47.39| 37.2| 46.89|
|[phi-2](https://huggingface.co/microsoft/phi-2)| 27.98| 70.8| 44.43| 35.21| 44.61|
|
brittlewis12/distilabeled-Marcoro14-7B-slerp-full-GGUF
|
brittlewis12
| 2024-01-21T04:43:58Z | 24 | 0 | null |
[
"gguf",
"distilabel",
"dpo",
"rlaif",
"rlhf",
"merge",
"mergekit",
"text-generation",
"en",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"base_model:argilla/distilabeled-Marcoro14-7B-slerp-full",
"base_model:quantized:argilla/distilabeled-Marcoro14-7B-slerp-full",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-01-16T03:12:05Z |
---
base_model: argilla/distilabeled-Marcoro14-7B-slerp-full
inference: false
license: apache-2.0
language:
- en
datasets:
- argilla/distilabel-intel-orca-dpo-pairs
tags:
- distilabel
- dpo
- rlaif
- rlhf
- merge
- mergekit
model_creator: argilla
model_name: distilabeled-Marcoro14-7B-slerp-full
model_type: mistral
pipeline_tag: text-generation
quantized_by: brittlewis12
---
# distilabeled-Marcoro14-7B-slerp-full GGUF
Original model: [distilabeled-Marcoro14-7B-slerp-full](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp-full)
Model creator: [Argilla](https://huggingface.co/argilla)
This repo contains GGUF format model files for Argilla’s distilabeled-Marcoro14-7B-slerp-full.
As described on the original model card:
> This model is a new DPO fine-tune of our new open dataset argilla/distilabel-intel-orca-dpo-pairs, on the mlabonne/Marcoro14-7B-slerp model. You can find more information of the "distilabeled" dataset used at this repo argilla/distilabeled-Hermes-2.5-Mistral-7B, and visit distilabel.
>
> The difference between this model and argilla/distilabeled-Marcoro14-7B-slerp is that this model has been fine-tuned for a whole epoch instead of 200 steps, so it has seen the whole dataset.
### What is GGUF?
GGUF is a file format for representing AI models. It is the third version of the format, introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Converted using llama.cpp build 1879 (revision [3e5ca79](https://github.com/ggerganov/llama.cpp/commit/3e5ca7931c68152e4ec18d126e9c832dd84914c8))
### Prompt template: Unknown
`{{prompt}}`
---
## Download & run with [cnvrs](https://twitter.com/cnvrsai) on iPhone, iPad, and Mac!

[cnvrs](https://testflight.apple.com/join/sFWReS7K) is the best app for private, local AI on your device:
- create & save **Characters** with custom system prompts & temperature settings
- download and experiment with any **GGUF model** you can [find on HuggingFace](https://huggingface.co/models?library=gguf)!
- make it your own with custom **Theme colors**
- powered by Metal ⚡️ & [Llama.cpp](https://github.com/ggerganov/llama.cpp), with **haptics** during response streaming!
- **try it out** yourself today, on [Testflight](https://testflight.apple.com/join/sFWReS7K)!
- follow [cnvrs on twitter](https://twitter.com/cnvrsai) to stay up to date
---
## Original Model Evaluations:
| Model |AGIEval|GPT4ALL|TruthfulQA|Bigbench|Average|
|-------------------------|------:|------:|---------:|-------:|------:|
|[argilla/distilabeled-Marcoro14-7B-slerp-full](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp-full)| 45.17| **76.59**| 64.68| **48.15**| **58.65**|
|[argilla/distilabeled-Marcoro14-7B-slerp](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp)| **45.4**| 76.47| **65.46**| 47.19| 58.63|
|[Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) | 44.66| 76.24| 64.15| 45.64| 57.67|
|[argilla/distilabeled-Hermes-2.5-Mistral-7B](https://huggingface.co/argilla/distilabeled-Hermes-2.5-Mistral-7B) | 44.64 | 73.35 | 55.96 | 42.21 | 54.04 |
|
brittlewis12/NeuralDaredevil-7B-GGUF
|
brittlewis12
| 2024-01-21T04:43:24Z | 28 | 2 | null |
[
"gguf",
"text-generation",
"en",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"base_model:mlabonne/NeuralDaredevil-7B",
"base_model:quantized:mlabonne/NeuralDaredevil-7B",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-01-15T20:07:49Z |
---
base_model: mlabonne/NeuralDaredevil-7B
inference: false
language:
- en
license: apache-2.0
model_creator: mlabonne
model_name: NeuralDaredevil-7B
model_type: mistral
datasets:
- argilla/distilabel-intel-orca-dpo-pairs
pipeline_tag: text-generation
prompt_template: "<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>
"
quantized_by: brittlewis12
---
# NeuralDaredevil-7B GGUF
Original model: [NeuralDaredevil-7B](https://huggingface.co/mlabonne/NeuralDaredevil-7B)
Model creator: [Maxime Labonne](https://huggingface.co/mlabonne)
This repo contains GGUF format model files for Maxime Labonne’s NeuralDaredevil-7B.
Maxime discussing NeuralDaredevil on [Twitter](https://twitter.com/maximelabonne/status/1746979986370666741):
> 🤝 NeuralDaredevil is actually trained using [argilla’s] excellent distilled version of the [orca-dpo-pairs](https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs) made with [distilabel](https://github.com/argilla-io/distilabel).
### What is GGUF?
GGUF is a file format for representing AI models. It is the third version of the format, introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Converted using llama.cpp build 1879 (revision [3e5ca79](https://github.com/ggerganov/llama.cpp/commit/3e5ca7931c68152e4ec18d126e9c832dd84914c8))
### Prompt template: Zephyr
Zephyr-style appears to work well!
```
<|system|>
{{system_message}}</s>
<|user|>
{{prompt}}</s>
<|assistant|>
```
---
## Download & run with [cnvrs](https://twitter.com/cnvrsai) on iPhone, iPad, and Mac!

[cnvrs](https://testflight.apple.com/join/sFWReS7K) is the best app for private, local AI on your device:
- create & save **Characters** with custom system prompts & temperature settings
- download and experiment with any **GGUF model** you can [find on HuggingFace](https://huggingface.co/models?library=gguf)!
- make it your own with custom **Theme colors**
- powered by Metal ⚡️ & [Llama.cpp](https://github.com/ggerganov/llama.cpp), with **haptics** during response streaming!
- **try it out** yourself today, on [Testflight](https://testflight.apple.com/join/sFWReS7K)!
- follow [cnvrs on twitter](https://twitter.com/cnvrsai) to stay up to date
---
## Original Model Evaluations:
Evaluations reported from mlabonne’s alternative leaderboard, YALL: [Yet_Another_LLM_Leaderboard](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|-------------------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[**NeuralDaredevil-7B**](https://huggingface.co/mlabonne/NeuralDaredevil-7B) | **45.23**| **76.2**| **67.61**| **48.52**| **59.39**|
|[Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 38.5 | 71.64 | 66.82 | 42.29 | 54.81 |
|[openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) | 44.17 | 73.72 | 52.53 | 44.4 | 53.71 |
|[OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) | 42.75 | 72.99 | 52.99 | 40.94 | 52.42 |
|
brittlewis12/stable-code-3b-GGUF
|
brittlewis12
| 2024-01-21T04:42:52Z | 66 | 2 | null |
[
"gguf",
"causal-lm",
"code",
"text-generation",
"en",
"dataset:tiiuae/falcon-refinedweb",
"dataset:bigcode/the-stack-github-issues",
"dataset:bigcode/commitpackft",
"dataset:bigcode/starcoderdata",
"dataset:EleutherAI/proof-pile-2",
"dataset:meta-math/MetaMathQA",
"base_model:stabilityai/stable-code-3b",
"base_model:quantized:stabilityai/stable-code-3b",
"license:other",
"region:us"
] |
text-generation
| 2024-01-17T12:23:22Z |
---
base_model: stabilityai/stable-code-3b
inference: false
language:
- en
license: other
license_link: https://huggingface.co/stabilityai/stable-code-3b/blob/main/LICENSE
model_creator: stabilityai
model_name: stable-code-3b
pipeline_tag: text-generation
datasets:
- tiiuae/falcon-refinedweb
- bigcode/the-stack-github-issues
- bigcode/commitpackft
- bigcode/starcoderdata
- EleutherAI/proof-pile-2
- meta-math/MetaMathQA
tags:
- causal-lm
- code
quantized_by: brittlewis12
---
# stable-code-3b GGUF
Original model: [stable-code-3b](https://huggingface.co/stabilityai/stable-code-3b)
Model creator: [StabilityAI](https://huggingface.co/stabilityai/stable-code-3b)
This repo contains GGUF format model files for StabilityAI’s stable-code-3b with 16k context.
> stable-code-3b is a 2.7 billion parameter decoder-only language model pre-trained on 1.3 trillion tokens of diverse textual and code datasets. stable-code-3b is trained on 18 programming languages (selected based on the 2023 StackOverflow Developer Survey) and demonstrates state-of-the-art performance (compared to models of similar size) on the MultiPL-E metrics across multiple programming languages tested using BigCode's Evaluation Harness.
### What is GGUF?
GGUF is a file format for representing AI models. It is the third version of the format, introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Converted using llama.cpp build 1897 (revision [2b3a665](https://github.com/ggerganov/llama.cpp/commit/2b3a665))
## Prompt template: Completion or Fill-in-Middle
### Completion
```
{{prompt}}
```
### Fill-in-Middle (FIM)
```
<fim_prefix>{{prefix code}}<fim_suffix>{{suffix code}}<fim_middle>
```
Example prompt with special prefix, suffix, and middle tokens in context:
```
<fim_prefix>def fib(n):
<fim_suffix>
else:
return fib(n - 2) + fib(n - 1)
<fim_middle>
```
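To make the token ordering concrete, here is a minimal illustrative Python sketch that assembles a FIM prompt from a prefix and suffix (the code fragments are hypothetical examples):

```python
# Illustrative helper: build a fill-in-middle prompt matching the template above.
# The model is expected to generate the missing middle after <fim_middle>.
def build_fim_prompt(prefix: str, suffix: str) -> str:
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

prompt = build_fim_prompt(
    prefix="def fib(n):\n    if n <= 1:\n        return n\n",
    suffix="\n    else:\n        return fib(n - 2) + fib(n - 1)\n",
)
print(prompt)
```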
---
## Download & run with [cnvrs](https://twitter.com/cnvrsai) on iPhone, iPad, and Mac!

[cnvrs](https://testflight.apple.com/join/sFWReS7K) is the best app for private, local AI on your device:
- create & save **Characters** with custom system prompts & temperature settings
- download and experiment with any **GGUF model** you can [find on HuggingFace](https://huggingface.co/models?library=gguf)!
- make it your own with custom **Theme colors**
- powered by Metal ⚡️ & [Llama.cpp](https://github.com/ggerganov/llama.cpp), with **haptics** during response streaming!
- **try it out** yourself today, on [Testflight](https://testflight.apple.com/join/sFWReS7K)!
- follow [cnvrs on twitter](https://twitter.com/cnvrsai) to stay up to date
---
# Original Model Evaluation

| Model | Size | Python | C++ | Javascript | Java | PHP | Rust |
|------------------|------|--------|------|------------|------|------|------|
| **Stable Code** | 3B | 32.4% | 30.9%| 32.1% | 32.1%| 24.2%| 23.0%|
| CodeLLama | 7B | 30.0% | 28.2%| 32.5% | 31.1%| 25.7%| 26.3%|
| Deepseek Coder | 1.3B | 28.6% | 29.2%| 28.7% | 29.0%| 23.6%| 18.5%|
| Wizard Coder | 3B | 31.6% | 25.6%| 26.2% | 25.8%| 25.3%| 20.4%|
| StarCoder | 3B | 21.6% | 19.8%| 21.5% | 20.5%| 19.0%| 16.9%|
| Replit Code V1.5 | 3B | 23.0% | 25.9%| 26.2% | 23.6%| 23.2%| 21.5%|
| Deci Coder | 1B | 19.1% | 6.8% | 18.4% | 16.7%| 2.1% | 1.7% |
|
brittlewis12/TopicNeuralHermes-2.5-Mistral-7B-GGUF
|
brittlewis12
| 2024-01-21T04:42:11Z | 10 | 1 | null |
[
"gguf",
"text-generation",
"en",
"dataset:bunkalab/topic_based_chatml_dpo_pairs",
"base_model:charlesdedampierre/TopicNeuralHermes-2.5-Mistral-7B",
"base_model:quantized:charlesdedampierre/TopicNeuralHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us",
"conversational"
] |
text-generation
| 2024-01-20T03:54:23Z |
---
base_model: charlesdedampierre/TopicNeuralHermes-2.5-Mistral-7B
inference: false
language:
- en
license: apache-2.0
model_creator: charlesdedampierre
model_name: TopicNeuralHermes-2.5-Mistral-7B
model_type: mistral
pipeline_tag: text-generation
prompt_template: |
<|im_start|>system
{{system_prompt}}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
datasets:
- bunkalab/topic_based_chatml_dpo_pairs
quantized_by: brittlewis12
---
# TopicNeuralHermes-2.5-Mistral-7B GGUF
Original model: [TopicNeuralHermes-2.5-Mistral-7B](https://huggingface.co/charlesdedampierre/TopicNeuralHermes-2.5-Mistral-7B)
Model creator: [charlesdedampierre](https://huggingface.co/charlesdedampierre)
This repo contains GGUF format model files for charlesdedampierre’s TopicNeuralHermes-2.5-Mistral-7B.
> TopicNeuralHermes 2.5 Mistral 7B is a refined model developed through fine-tuning with a specific subset of data, selected via Topic Modeling Techniques using [Bunkatopics](https://github.com/charlesdedampierre/BunkaTopics), as a continuation of [OpenHermes 2.5](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B).
> We found the 13 distinctive following topics described by 10 terms each:
>
> **Emotional Dynamics**: feelings, Quinn, Austin, minority women, teaching, schools, individual, personality, backgrounds, triggers.
>
> **Global Knowledge Queries**: question, information, geography, news articles, Step, answer, capital city, pipeline system, country, analogy.
>
> **Digital Interactions and Queries**: questions, question, PersonX, modem, answers, effect relationship, Quora, browser, answer, e-commerce.
>
> **Business and Cybersecurity**: email, businesses, initiatives, innovation, advertising papers, spam, breaches, antivirus, payments, prospects.
>
> **Lifestyle and Wellness**: sleep, exercise, gifts, shopping, Casey, stores, stress, headaches, options, mood.
>
> **Wildlife Ecology**: birds, prey, animals, species, infection, nest, eggs, bacteria, insects, kitty condo.
>
> **Environmental Science and Climate**: temperature, gases, greenhouse, emissions, perturbation, sulfur, dioxide, climate change, water, heat.
>
> **Maritime and Mechanical Engineering**: ship, bowling, propulsion, beam width, Filing cabinet, LED, lane, containment area, lawnmower, rotors.
>
> **Cultural and Social Dynamics**: Lindsey, museum, Kate, Rachel, Jason, Alex, Erin, conversation, Laura, exhibits.
>
> **Political Media Analysis**: media platforms, election, politics, teenagers, elections, White House, Barack Obama, nation, Confederate, depression.
>
> **International Relations and Policy**: cooperation, EU, nations, alliance, NATO, European Union, member states, policy, monarch, Brexit.
>
> **Astrophysics and Physical Sciences**: electrons, km, Moon, acceleration, orbit, friction, current, asteroid, electron, collector emitter.
>
> **Film Critique and Analysis**: movie review, film, reviewer, sentiment, critic, flaws, DVD, plot, opinion, originality.
### What is GGUF?
GGUF is a file format for representing AI models. It is the third version of the format, introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Converted using llama.cpp build 1925 (revision [381ee19](https://github.com/ggerganov/llama.cpp/commit/381ee195721d8e747ee31a60c0751822b3072f02))
### Prompt template: ChatML
```
<|im_start|>system
{{system_message}}<|im_end|>
<|im_start|>user
{{prompt}}<|im_end|>
<|im_start|>assistant
```
---
## Download & run with [cnvrs](https://twitter.com/cnvrsai) on iPhone, iPad, and Mac!

[cnvrs](https://testflight.apple.com/join/sFWReS7K) is the best app for private, local AI on your device:
- create & save **Characters** with custom system prompts & temperature settings
- download and experiment with any **GGUF model** you can [find on HuggingFace](https://huggingface.co/models?library=gguf)!
- make it your own with custom **Theme colors**
- powered by Metal ⚡️ & [Llama.cpp](https://github.com/ggerganov/llama.cpp), with **haptics** during response streaming!
- **try it out** yourself today, on [Testflight](https://testflight.apple.com/join/sFWReS7K)!
- follow [cnvrs on twitter](https://twitter.com/cnvrsai) to stay up to date
---
## Original Model Evaluations:

|
Locutusque/Qwen-14B-llamafied
|
Locutusque
| 2024-01-21T04:38:22Z | 10 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"zh",
"en",
"code",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-21T04:30:34Z |
---
license: other
license_name: qwen
license_link: LICENSE
language:
- zh
- en
- code
---
# Description
This is a llamafied version of the Qwen/Qwen-14B model, with theoretically no performance degradation since the tokenizer was not replaced. It was llamafied using a script found in the LLaMA-factory repository.
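Because the model is llamafied, it should load through the standard `transformers` causal-LM classes. A minimal sketch (generation settings are illustrative, not tested):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Locutusque/Qwen-14B-llamafied"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto", torch_dtype="auto")

# Simple generation check; prompt and settings are illustrative.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```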
# License
Make sure to comply with the non-commercial license given by Qwen.
|
SciPhi/Sensei-7B-V1
|
SciPhi
| 2024-01-21T04:37:24Z | 27 | 93 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-18T17:43:53Z |
---
{}
---
# Sensei-7B-V1 Model Card
Sensei-7B-V1 is a Large Language Model (LLM) fine-tuned from OpenPipe's mistral-ft-optimized-1218, which is based on Mistral-7B. Sensei-7B-V1 was fine-tuned with a fully synthetic dataset to specialize at performing retrieval-augmented generation (RAG) over detailed web search results. This model strives to specialize in using search, such as [AgentSearch](https://huggingface.co/datasets/SciPhi/AgentSearch-V1), to generate accurate and well-cited summaries from a range of search results, providing more accurate answers to user queries. Please refer to the [docs here](https://agent-search.readthedocs.io/en/latest/) for more information on how to run Sensei end-to-end.
Currently, Sensei is available via a hosted API at https://www.sciphi.ai. You can try a demonstration [here](https://search.sciphi.ai/).
## Model Architecture
Base Model: mistral-ft-optimized-1218
**Architecture Features:**
- Transformer-based model
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## Using the Model
It is recommended to use a single search query. The model will return an answer using search results as context.
Using the AgentSearch package, an example is shown below.
```
export SCIPHI_API_KEY=MY_SCIPHI_API_KEY
# Use `Sensei` for LLM RAG w/ AgentSearch
python -m agent_search.scripts.run_rag run --query="What is Fermat's last theorem?"
```
Alternatively, you may provide your own search context directly to the model by adhering to the following format:
```
### Instruction:
Your task is to perform retrieval augmented generation (RAG) over the given query and search results. Return your answer in a json format that includes a summary of the search results and a list of related queries.
Query:
{prompt}
\n\n
Search Results:
{context}
\n\n
Query:
{prompt}
### Response:
{"summary":
```
__Note__: The inclusion of the text '{"summary":' following the Response footer is intentional. This ensures that the model responds with the proper JSON format; omitting this leading prefix can cause small deviations. Combining the output with the leading string '{"summary":' results in properly formatted JSON with keys 'summary' and 'other_queries'.
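A short illustrative sketch of recombining the completion with the leading prefix before parsing (the `completion` string below is a hypothetical model output):

```python
import json

# Recombine the raw completion with the '{"summary":' prefix from the prompt,
# then parse the result. `completion` is a hypothetical model output.
completion = ' "Fermat\'s Last Theorem states that ...", "other_queries": ["Who proved Fermat\'s Last Theorem?"]}'
parsed = json.loads('{"summary":' + completion)
print(parsed["summary"])
print(parsed["other_queries"])
```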
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## References
1. OpenPipe AI. (2023). Model Card for mistral-ft-optimized-1218. The mistral-ft-1218 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters optimized for downstream fine-tuning on a variety of tasks. For full details, please refer to the release blog post. Model Architecture: Transformer with Grouped-Query Attention, Sliding-Window Attention, and Byte-fallback BPE tokenizer. [Link](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
|
pinkyponky/Mistral-7b-instruct-v0.2-summ-sft-e3
|
pinkyponky
| 2024-01-21T04:05:30Z | 0 | 0 | null |
[
"safetensors",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-01-21T04:04:27Z |
---
license: cc-by-nc-4.0
---
A description of how to load and test the model will be added soon. More details on training and data will be added as well.
### **Loading the Model**
Use the following Python code to load the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
TBD
```
### **Generating Text**
To generate text, use the following Python code:
```python
text = "Hi, my name is "
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
pinkyponky/Mistral-7b-instruct-v0.2-summ-sft-e1
|
pinkyponky
| 2024-01-21T03:59:32Z | 0 | 0 | null |
[
"safetensors",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-01-21T03:51:50Z |
---
license: cc-by-nc-4.0
---
A description of how to load and test the model will be added soon. More details on training and data will be added as well.
### **Loading the Model**
Use the following Python code to load the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
TBD
```
### **Generating Text**
To generate text, use the following Python code:
```python
text = "Hi, my name is "
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
SwiftEggTart/ppo-Huggy
|
SwiftEggTart
| 2024-01-21T03:55:13Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-01-21T03:55:07Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: SwiftEggTart/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
CLMBR/superlative-quantifier-lstm-2
|
CLMBR
| 2024-01-21T03:43:43Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rnn",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-01-16T15:29:18Z |
---
tags:
- generated_from_trainer
model-index:
- name: superlative-quantifier-lstm-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# superlative-quantifier-lstm-2
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9856
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.7739 | 0.03 | 76320 | 4.7591 |
| 4.4948 | 1.03 | 152640 | 4.4813 |
| 4.3521 | 0.03 | 228960 | 4.3474 |
| 4.2673 | 1.03 | 305280 | 4.2657 |
| 4.2044 | 0.03 | 381600 | 4.2106 |
| 4.1584 | 1.03 | 457920 | 4.1697 |
| 4.1237 | 0.03 | 534240 | 4.1397 |
| 4.0937 | 1.03 | 610560 | 4.1153 |
| 4.0646 | 0.03 | 686880 | 4.0961 |
| 4.0397 | 1.03 | 763200 | 4.0802 |
| 4.0188 | 0.03 | 839520 | 4.0677 |
| 3.9979 | 1.03 | 915840 | 4.0576 |
| 3.9875 | 0.03 | 992160 | 4.0478 |
| 3.9694 | 1.03 | 1068480 | 4.0406 |
| 3.9548 | 0.03 | 1144800 | 4.0342 |
| 3.9407 | 0.03 | 1221120 | 4.0289 |
| 3.9259 | 1.03 | 1297440 | 4.0236 |
| 3.9208 | 2.03 | 1373760 | 4.0188 |
| 3.9086 | 0.03 | 1450080 | 4.0158 |
| 3.9049 | 1.03 | 1526400 | 4.0122 |
| 3.9024 | 2.03 | 1602720 | 4.0090 |
| 3.8964 | 0.03 | 1679040 | 4.0069 |
| 3.8947 | 0.03 | 1755360 | 4.0043 |
| 3.8923 | 0.03 | 1831680 | 4.0024 |
| 3.8834 | 0.03 | 1908000 | 4.0003 |
| 3.8735 | 1.03 | 1984320 | 3.9990 |
| 3.8695 | 0.03 | 2060640 | 3.9974 |
| 3.8628 | 1.03 | 2136960 | 3.9960 |
| 3.8635 | 2.03 | 2213280 | 3.9948 |
| 3.8567 | 0.03 | 2289600 | 3.9936 |
| 3.8478 | 1.03 | 2365920 | 3.9924 |
| 3.8426 | 2.03 | 2442240 | 3.9914 |
| 3.8355 | 0.03 | 2518560 | 3.9904 |
| 3.8351 | 0.03 | 2594880 | 3.9893 |
| 3.8282 | 1.03 | 2671200 | 3.9884 |
| 3.8322 | 2.03 | 2747520 | 3.9875 |
| 3.8311 | 0.03 | 2823840 | 3.9872 |
| 3.8322 | 0.03 | 2900160 | 3.9865 |
| 3.8356 | 1.03 | 2976480 | 3.9860 |
| 3.8367 | 2.02 | 3052726 | 3.9856 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
drabindr/detr-resnet-50_finetuned_cppe5
|
drabindr
| 2024-01-21T03:40:20Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2024-01-21T03:05:31Z |
---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: detr-resnet-50_finetuned_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
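While the remaining sections are being filled in, the sketch below shows one plausible way to run inference with this checkpoint; it assumes the repository hosts a standard DETR image processor alongside the weights, and the image path is a placeholder.
```python
from transformers import pipeline

# Object-detection pipeline built on the fine-tuned checkpoint
# (assumes the repo contains the usual DETR processor and config files).
detector = pipeline("object-detection", model="drabindr/detr-resnet-50_finetuned_cppe5")

# "example.jpg" is a placeholder; each prediction carries a label, score, and bounding box.
for prediction in detector("example.jpg"):
    print(prediction["label"], round(prediction["score"], 3), prediction["box"])
```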
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Zintoulou/codellamafinetune2
|
Zintoulou
| 2024-01-21T03:38:15Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:finetune:codellama/CodeLlama-7b-Instruct-hf",
"license:llama2",
"region:us"
] | null | 2024-01-21T03:13:10Z |
---
license: llama2
base_model: codellama/CodeLlama-7b-Instruct-hf
tags:
- generated_from_trainer
model-index:
- name: codellamafinetune2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codellamafinetune2
This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0291
## Model description
More information needed
## Intended uses & limitations
More information needed
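As a hedged starting point (assuming this repository hosts the full fine-tuned weights rather than a PEFT adapter), text generation could look roughly like the sketch below; the prompt is purely illustrative.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Zintoulou/codellamafinetune2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package; drop it to load on CPU.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Illustrative prompt using the CodeLlama-Instruct [INST] format.
prompt = "[INST] Write a Python function that reverses a string. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```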
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.688 | 1.0 | 1 | 2.7652 |
| 2.2875 | 2.0 | 2 | 2.2572 |
| 1.8353 | 3.0 | 3 | 1.8551 |
| 1.4334 | 4.0 | 4 | 1.4766 |
| 1.0292 | 5.0 | 5 | 1.3120 |
| 0.7473 | 6.0 | 6 | 1.1319 |
| 0.6156 | 7.0 | 7 | 1.0788 |
| 0.4577 | 8.0 | 8 | 1.0291 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.13.3
|
bloedddd/creamburger
|
bloedddd
| 2024-01-21T03:27:52Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"license:cc",
"region:us"
] | null | 2024-01-21T03:26:29Z |
---
license: cc
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
this model is meant to do sex with you
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
sex model sexy sex
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
StanfordAIMI/RadLLaMA-7b
|
StanfordAIMI
| 2024-01-21T03:05:11Z | 174 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-20T19:48:37Z |
---
license: llama2
---
<div align="center">
<h1>
AIMI FMs: A Collection of Foundation Models in Radiology
</h1>
</div>
<p align="center">
📝 <a href="https://arxiv.org/" target="_blank">Paper</a> • 🤗 <a href="https://huggingface.co/StanfordAIMI/RadLLaMA-7b" target="_blank">Hugging Face</a> • 🧩 <a href="https://github.com/Stanford-AIMI/aimi-fms" target="_blank">Github</a> • 🪄 <a href="https://github.com/Stanford-AIMI/aimi-fms" target="_blank">Project</a>
</p>
<div align="center">
</div>
## ✨ Latest News
- [01/20/2024]: Model released on [Hugging Face](https://huggingface.co/StanfordAIMI/RadLLaMA-7b).
## 🎬 Get Started
```python
from transformers import AutoTokenizer
from transformers import AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("StanfordAIMI/RadLLaMA-7b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("StanfordAIMI/RadLLaMA-7b")
prompt = "Hi"
conv = [{"from": "human", "value": prompt}]
input_ids = tokenizer.apply_chat_template(conv, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_ids)
response = tokenizer.decode(outputs[0])
print(response)
```
## ✏️ Citation
```
@article{aimifms-2024,
title={},
author={},
journal={arXiv preprint arXiv:xxxx.xxxxx},
url={https://arxiv.org/abs/xxxx.xxxxx},
year={2024}
}
```
|
sageriverlee/recyclethis
|
sageriverlee
| 2024-01-21T03:02:54Z | 0 | 0 | null |
[
"dataset:thomasavare/waste-classification-v2",
"license:gpl-3.0",
"region:us"
] | null | 2024-01-21T02:54:03Z |
---
license: gpl-3.0
datasets:
- thomasavare/waste-classification-v2
---
|
rafi138/setfit-paraphrase-mpnet-base-v2-business-type
|
rafi138
| 2024-01-21T02:48:07Z | 46 | 0 |
setfit
|
[
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] |
text-classification
| 2024-01-17T16:35:31Z |
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: Dadon Hotel
- text: Joyi Homeo Hall
- text: Masum Egg Supplier
- text: Salam Automobiles
- text: Shoumik Enterprise
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/paraphrase-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.59
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 19 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:----------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Landmark | <ul><li>'Rajarbagh Shahjahanpur Moar'</li><li>'3 Number Mp Checkpost'</li><li>'Akua Madrasha Quarter Rail Crossing'</li></ul> |
| Religious Place | <ul><li>'Csd Masjid Jashore Cantonment'</li><li>'Baytut Tayyib Jame Masjid'</li><li>'Court Para Jame Masjid'</li></ul> |
| Residential | <ul><li>'Matrichaya'</li><li>'Chowdhury Mansion'</li><li>'Nabila Villa'</li></ul> |
| Government | <ul><li>'Rapid Action Battalion 3 (RAB)'</li><li>'Bangladesh Muktijoddha Sangsad Tejgaon Industrial Area Unit Command'</li><li>'Subhanighat Police Outpost'</li></ul> |
| Recreation | <ul><li>'5h Avenue Convention Hall'</li><li>'Purobi Chinema Hall'</li><li>'Shotodol Club'</li></ul> |
| Commercial | <ul><li>'Bikers Racing Zone'</li><li>'Sr Parcel Service Limited'</li><li>'Vegetables Market'</li></ul> |
| Education | <ul><li>'Dinajpur Polytechnic Institute'</li><li>'Gazipur Central Public Management Institute'</li><li>'Maskatadighi Government Primary School'</li></ul> |
| Bank | <ul><li>'Commercial Bank Of Ceylon Plc Shantinagar'</li><li>'Social Islami Bank Limited (SIBL) Mohammadpur'</li><li>'Dhaka Bank Limited Jashore Sadar'</li></ul> |
| Fuel | <ul><li>'Ferdaous Enterprise'</li><li>'Ahmed Bd Trade'</li><li>'Hasan Filling Station'</li></ul> |
| Hotel | <ul><li>'Hotel Arafat'</li><li>'World Beach Resort'</li><li>'Hotel Chalontika'</li></ul> |
| Shop | <ul><li>'Janosheba Medico'</li><li>'New Mone Rekho Tailors'</li><li>'Man Tailors'</li></ul> |
| Utility | <ul><li>'Pole No 86'</li><li>'Pole No 239'</li><li>'Salimullah Road Water Pump'</li></ul> |
| Construction | <ul><li>'N J Enterprise'</li><li>'Bk Traders'</li><li>'Sohel Glass'</li></ul> |
| Agricultural | <ul><li>'Mango Seedlings'</li><li>'Amaia Garden'</li><li>'Gardey'</li></ul> |
| Office | <ul><li>'Dalik Lekhak And Serviour'</li><li>'Assistant Police Commissioner Traffic Mohammadpur Zone Office'</li><li>'Challenger Elevator And Escalator Limited'</li></ul> |
| Healthcare | <ul><li>'Afroz Homeo Hall'</li><li>'Expert Dental Solution'</li><li>'Bhai Bon Homeo Hall'</li></ul> |
| Transportation | <ul><li>'Ideal Parking Jone'</li><li>'Hanif Bus Counter'</li><li>'Five Star Transport Agency'</li></ul> |
| Food | <ul><li>'Madhuban Hotel'</li><li>'Al Hamza Biriyani House'</li><li>'Gramer Bari'</li></ul> |
| Industry | <ul><li>'Rony Group'</li><li>'Net Factory'</li><li>'Kumudini Welfare Trust Of Bengal Bd Limited'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.59 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("rafi138/setfit-paraphrase-mpnet-base-v2-business-type")
# Run inference
preds = model("Dadon Hotel")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 1 | 3.3752 | 11 |
| Label | Training Sample Count |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------|
| Shop, Commercial, Government, Healthcare, Education, Food, Office, Religious Place, Bank, Transportation, Construction, Industry, Residential, Landmark, Recreation, Fuel, Hotel, Utility, Agricultural | 0 |
### Training Hyperparameters
- batch_size: (64, 64)
- num_epochs: (4, 4)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
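For reference, these values correspond roughly to the following `setfit.TrainingArguments` construction (a hedged sketch; the output directory is a placeholder and the train/eval datasets are omitted):
```python
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

# Hedged reconstruction of the listed hyperparameters; output_dir is a placeholder.
# distance_metric and eval_max_steps are left at their defaults (cosine_distance, -1).
args = TrainingArguments(
    output_dir="setfit-business-type",  # placeholder
    batch_size=(64, 64),
    num_epochs=(4, 4),
    max_steps=-1,
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    loss=CosineSimilarityLoss,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    load_best_model_at_end=True,
)
```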
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:---------:|:-------------:|:---------------:|
| 0.0000 | 1 | 0.2815 | - |
| 0.0019 | 50 | 0.2543 | - |
| 0.0038 | 100 | 0.2781 | - |
| 0.0057 | 150 | 0.2328 | - |
| 0.0076 | 200 | 0.2258 | - |
| 0.0095 | 250 | 0.1884 | - |
| 0.0115 | 300 | 0.2192 | - |
| 0.0134 | 350 | 0.1935 | - |
| 0.0153 | 400 | 0.2057 | - |
| 0.0172 | 450 | 0.1949 | - |
| 0.0191 | 500 | 0.1955 | - |
| 0.0210 | 550 | 0.1744 | - |
| 0.0229 | 600 | 0.1518 | - |
| 0.0248 | 650 | 0.1804 | - |
| 0.0267 | 700 | 0.1669 | - |
| 0.0286 | 750 | 0.1565 | - |
| 0.0306 | 800 | 0.14 | - |
| 0.0325 | 850 | 0.1648 | - |
| 0.0344 | 900 | 0.1441 | - |
| 0.0363 | 950 | 0.1237 | - |
| 0.0382 | 1000 | 0.1155 | - |
| 0.0401 | 1050 | 0.1303 | - |
| 0.0420 | 1100 | 0.1115 | - |
| 0.0439 | 1150 | 0.1533 | - |
| 0.0458 | 1200 | 0.1278 | - |
| 0.0477 | 1250 | 0.0851 | - |
| 0.0496 | 1300 | 0.0982 | - |
| 0.0516 | 1350 | 0.0925 | - |
| 0.0535 | 1400 | 0.0861 | - |
| 0.0554 | 1450 | 0.1121 | - |
| 0.0573 | 1500 | 0.1226 | - |
| 0.0592 | 1550 | 0.1049 | - |
| 0.0611 | 1600 | 0.1101 | - |
| 0.0630 | 1650 | 0.113 | - |
| 0.0649 | 1700 | 0.1061 | - |
| 0.0668 | 1750 | 0.0573 | - |
| 0.0687 | 1800 | 0.0665 | - |
| 0.0707 | 1850 | 0.0657 | - |
| 0.0726 | 1900 | 0.082 | - |
| 0.0745 | 1950 | 0.0654 | - |
| 0.0764 | 2000 | 0.0783 | - |
| 0.0783 | 2050 | 0.0713 | - |
| 0.0802 | 2100 | 0.0669 | - |
| 0.0821 | 2150 | 0.0536 | - |
| 0.0840 | 2200 | 0.0556 | - |
| 0.0859 | 2250 | 0.0704 | - |
| 0.0878 | 2300 | 0.063 | - |
| 0.0897 | 2350 | 0.0472 | - |
| 0.0917 | 2400 | 0.0562 | - |
| 0.0936 | 2450 | 0.0392 | - |
| 0.0955 | 2500 | 0.0465 | - |
| 0.0974 | 2550 | 0.0424 | - |
| 0.0993 | 2600 | 0.0453 | - |
| 0.1012 | 2650 | 0.0375 | - |
| 0.1031 | 2700 | 0.0445 | - |
| 0.1050 | 2750 | 0.0231 | - |
| 0.1069 | 2800 | 0.0386 | - |
| 0.1088 | 2850 | 0.0258 | - |
| 0.1108 | 2900 | 0.0344 | - |
| 0.1127 | 2950 | 0.0143 | - |
| 0.1146 | 3000 | 0.016 | - |
| 0.1165 | 3050 | 0.0216 | - |
| 0.1184 | 3100 | 0.0365 | - |
| 0.1203 | 3150 | 0.0238 | - |
| 0.1222 | 3200 | 0.0099 | - |
| 0.1241 | 3250 | 0.0211 | - |
| 0.1260 | 3300 | 0.0149 | - |
| 0.1279 | 3350 | 0.0406 | - |
| 0.1298 | 3400 | 0.0231 | - |
| 0.1318 | 3450 | 0.0266 | - |
| 0.1337 | 3500 | 0.0225 | - |
| 0.1356 | 3550 | 0.0203 | - |
| 0.1375 | 3600 | 0.0097 | - |
| 0.1394 | 3650 | 0.0143 | - |
| 0.1413 | 3700 | 0.0052 | - |
| 0.1432 | 3750 | 0.0094 | - |
| 0.1451 | 3800 | 0.0029 | - |
| 0.1470 | 3850 | 0.0065 | - |
| 0.1489 | 3900 | 0.0078 | - |
| 0.1508 | 3950 | 0.0109 | - |
| 0.1528 | 4000 | 0.0054 | - |
| 0.1547 | 4050 | 0.0069 | - |
| 0.1566 | 4100 | 0.0248 | - |
| 0.1585 | 4150 | 0.0069 | - |
| 0.1604 | 4200 | 0.0073 | - |
| 0.1623 | 4250 | 0.0054 | - |
| 0.1642 | 4300 | 0.0027 | - |
| 0.1661 | 4350 | 0.0028 | - |
| 0.1680 | 4400 | 0.0021 | - |
| 0.1699 | 4450 | 0.0038 | - |
| 0.1719 | 4500 | 0.0034 | - |
| 0.1738 | 4550 | 0.0158 | - |
| 0.1757 | 4600 | 0.0043 | - |
| 0.1776 | 4650 | 0.0055 | - |
| 0.1795 | 4700 | 0.0064 | - |
| 0.1814 | 4750 | 0.0021 | - |
| 0.1833 | 4800 | 0.0026 | - |
| 0.1852 | 4850 | 0.0112 | - |
| 0.1871 | 4900 | 0.0029 | - |
| 0.1890 | 4950 | 0.0176 | - |
| 0.1909 | 5000 | 0.0071 | - |
| 0.1929 | 5050 | 0.0097 | - |
| 0.1948 | 5100 | 0.0032 | - |
| 0.1967 | 5150 | 0.0044 | - |
| 0.1986 | 5200 | 0.0065 | - |
| 0.2005 | 5250 | 0.0176 | - |
| 0.2024 | 5300 | 0.0026 | - |
| 0.2043 | 5350 | 0.0036 | - |
| 0.2062 | 5400 | 0.0022 | - |
| 0.2081 | 5450 | 0.0014 | - |
| 0.2100 | 5500 | 0.0073 | - |
| 0.2120 | 5550 | 0.0009 | - |
| 0.2139 | 5600 | 0.0011 | - |
| 0.2158 | 5650 | 0.0022 | - |
| 0.2177 | 5700 | 0.0011 | - |
| 0.2196 | 5750 | 0.0074 | - |
| 0.2215 | 5800 | 0.0114 | - |
| 0.2234 | 5850 | 0.0013 | - |
| 0.2253 | 5900 | 0.0009 | - |
| 0.2272 | 5950 | 0.0018 | - |
| 0.2291 | 6000 | 0.001 | - |
| 0.2310 | 6050 | 0.0013 | - |
| 0.2330 | 6100 | 0.0058 | - |
| 0.2349 | 6150 | 0.0012 | - |
| 0.2368 | 6200 | 0.0013 | - |
| 0.2387 | 6250 | 0.0009 | - |
| 0.2406 | 6300 | 0.0012 | - |
| 0.2425 | 6350 | 0.0006 | - |
| 0.2444 | 6400 | 0.0013 | - |
| 0.2463 | 6450 | 0.0005 | - |
| 0.2482 | 6500 | 0.0032 | - |
| 0.2501 | 6550 | 0.0006 | - |
| 0.2521 | 6600 | 0.0007 | - |
| 0.2540 | 6650 | 0.0029 | - |
| 0.2559 | 6700 | 0.001 | - |
| 0.2578 | 6750 | 0.0005 | - |
| 0.2597 | 6800 | 0.001 | - |
| 0.2616 | 6850 | 0.0004 | - |
| 0.2635 | 6900 | 0.0007 | - |
| 0.2654 | 6950 | 0.0006 | - |
| 0.2673 | 7000 | 0.0034 | - |
| 0.2692 | 7050 | 0.0006 | - |
| 0.2711 | 7100 | 0.0006 | - |
| 0.2731 | 7150 | 0.0013 | - |
| 0.2750 | 7200 | 0.0017 | - |
| 0.2769 | 7250 | 0.0006 | - |
| 0.2788 | 7300 | 0.0007 | - |
| 0.2807 | 7350 | 0.0005 | - |
| 0.2826 | 7400 | 0.0009 | - |
| 0.2845 | 7450 | 0.0007 | - |
| 0.2864 | 7500 | 0.0008 | - |
| 0.2883 | 7550 | 0.001 | - |
| 0.2902 | 7600 | 0.0004 | - |
| 0.2922 | 7650 | 0.0004 | - |
| 0.2941 | 7700 | 0.0007 | - |
| 0.2960 | 7750 | 0.0004 | - |
| 0.2979 | 7800 | 0.0011 | - |
| 0.2998 | 7850 | 0.0005 | - |
| 0.3017 | 7900 | 0.0004 | - |
| 0.3036 | 7950 | 0.0003 | - |
| 0.3055 | 8000 | 0.001 | - |
| 0.3074 | 8050 | 0.0005 | - |
| 0.3093 | 8100 | 0.0003 | - |
| 0.3112 | 8150 | 0.0004 | - |
| 0.3132 | 8200 | 0.0003 | - |
| 0.3151 | 8250 | 0.007 | - |
| 0.3170 | 8300 | 0.0026 | - |
| 0.3189 | 8350 | 0.0004 | - |
| 0.3208 | 8400 | 0.0005 | - |
| 0.3227 | 8450 | 0.0005 | - |
| 0.3246 | 8500 | 0.0004 | - |
| 0.3265 | 8550 | 0.0003 | - |
| 0.3284 | 8600 | 0.0003 | - |
| 0.3303 | 8650 | 0.0003 | - |
| 0.3323 | 8700 | 0.0026 | - |
| 0.3342 | 8750 | 0.0003 | - |
| 0.3361 | 8800 | 0.0002 | - |
| 0.3380 | 8850 | 0.0002 | - |
| 0.3399 | 8900 | 0.0002 | - |
| 0.3418 | 8950 | 0.0004 | - |
| 0.3437 | 9000 | 0.0003 | - |
| 0.3456 | 9050 | 0.0005 | - |
| 0.3475 | 9100 | 0.0002 | - |
| 0.3494 | 9150 | 0.0052 | - |
| 0.3513 | 9200 | 0.0138 | - |
| 0.3533 | 9250 | 0.0006 | - |
| 0.3552 | 9300 | 0.0004 | - |
| 0.3571 | 9350 | 0.0019 | - |
| 0.3590 | 9400 | 0.0003 | - |
| 0.3609 | 9450 | 0.0002 | - |
| 0.3628 | 9500 | 0.0002 | - |
| 0.3647 | 9550 | 0.0003 | - |
| 0.3666 | 9600 | 0.0003 | - |
| 0.3685 | 9650 | 0.0269 | - |
| 0.3704 | 9700 | 0.0004 | - |
| 0.3724 | 9750 | 0.0002 | - |
| 0.3743 | 9800 | 0.0005 | - |
| 0.3762 | 9850 | 0.0002 | - |
| 0.3781 | 9900 | 0.0003 | - |
| 0.3800 | 9950 | 0.0002 | - |
| 0.3819 | 10000 | 0.0002 | - |
| 0.3838 | 10050 | 0.0002 | - |
| 0.3857 | 10100 | 0.0158 | - |
| 0.3876 | 10150 | 0.0004 | - |
| 0.3895 | 10200 | 0.0094 | - |
| 0.3914 | 10250 | 0.0098 | - |
| 0.3934 | 10300 | 0.0003 | - |
| 0.3953 | 10350 | 0.0003 | - |
| 0.3972 | 10400 | 0.0004 | - |
| 0.3991 | 10450 | 0.0003 | - |
| 0.4010 | 10500 | 0.0003 | - |
| 0.4029 | 10550 | 0.0002 | - |
| 0.4048 | 10600 | 0.0006 | - |
| 0.4067 | 10650 | 0.0004 | - |
| 0.4086 | 10700 | 0.0002 | - |
| 0.4105 | 10750 | 0.0005 | - |
| 0.4124 | 10800 | 0.0002 | - |
| 0.4144 | 10850 | 0.0002 | - |
| 0.4163 | 10900 | 0.0002 | - |
| 0.4182 | 10950 | 0.0002 | - |
| 0.4201 | 11000 | 0.0002 | - |
| 0.4220 | 11050 | 0.0002 | - |
| 0.4239 | 11100 | 0.0002 | - |
| 0.4258 | 11150 | 0.0002 | - |
| 0.4277 | 11200 | 0.0002 | - |
| 0.4296 | 11250 | 0.0002 | - |
| 0.4315 | 11300 | 0.0001 | - |
| 0.4335 | 11350 | 0.0001 | - |
| 0.4354 | 11400 | 0.0001 | - |
| 0.4373 | 11450 | 0.0002 | - |
| 0.4392 | 11500 | 0.0001 | - |
| 0.4411 | 11550 | 0.0002 | - |
| 0.4430 | 11600 | 0.0001 | - |
| 0.4449 | 11650 | 0.0001 | - |
| 0.4468 | 11700 | 0.0002 | - |
| 0.4487 | 11750 | 0.0001 | - |
| 0.4506 | 11800 | 0.0001 | - |
| 0.4525 | 11850 | 0.0002 | - |
| 0.4545 | 11900 | 0.0006 | - |
| 0.4564 | 11950 | 0.0002 | - |
| 0.4583 | 12000 | 0.0001 | - |
| 0.4602 | 12050 | 0.0001 | - |
| 0.4621 | 12100 | 0.0002 | - |
| 0.4640 | 12150 | 0.0002 | - |
| 0.4659 | 12200 | 0.0001 | - |
| 0.4678 | 12250 | 0.0001 | - |
| 0.4697 | 12300 | 0.0002 | - |
| 0.4716 | 12350 | 0.0004 | - |
| 0.4736 | 12400 | 0.0003 | - |
| 0.4755 | 12450 | 0.0279 | - |
| 0.4774 | 12500 | 0.0072 | - |
| 0.4793 | 12550 | 0.006 | - |
| 0.4812 | 12600 | 0.0325 | - |
| 0.4831 | 12650 | 0.0012 | - |
| 0.4850 | 12700 | 0.0015 | - |
| 0.4869 | 12750 | 0.0013 | - |
| 0.4888 | 12800 | 0.0002 | - |
| 0.4907 | 12850 | 0.0003 | - |
| 0.4926 | 12900 | 0.0003 | - |
| 0.4946 | 12950 | 0.001 | - |
| 0.4965 | 13000 | 0.0004 | - |
| 0.4984 | 13050 | 0.0002 | - |
| 0.5003 | 13100 | 0.0002 | - |
| 0.5022 | 13150 | 0.0002 | - |
| 0.5041 | 13200 | 0.0003 | - |
| 0.5060 | 13250 | 0.0001 | - |
| 0.5079 | 13300 | 0.0001 | - |
| 0.5098 | 13350 | 0.0002 | - |
| 0.5117 | 13400 | 0.0001 | - |
| 0.5137 | 13450 | 0.0002 | - |
| 0.5156 | 13500 | 0.0001 | - |
| 0.5175 | 13550 | 0.0002 | - |
| 0.5194 | 13600 | 0.0002 | - |
| 0.5213 | 13650 | 0.0002 | - |
| 0.5232 | 13700 | 0.0001 | - |
| 0.5251 | 13750 | 0.0002 | - |
| 0.5270 | 13800 | 0.0002 | - |
| 0.5289 | 13850 | 0.0001 | - |
| 0.5308 | 13900 | 0.0001 | - |
| 0.5327 | 13950 | 0.0002 | - |
| 0.5347 | 14000 | 0.0001 | - |
| 0.5366 | 14050 | 0.0001 | - |
| 0.5385 | 14100 | 0.0001 | - |
| 0.5404 | 14150 | 0.0001 | - |
| 0.5423 | 14200 | 0.0149 | - |
| 0.5442 | 14250 | 0.0002 | - |
| 0.5461 | 14300 | 0.0009 | - |
| 0.5480 | 14350 | 0.0057 | - |
| 0.5499 | 14400 | 0.0089 | - |
| 0.5518 | 14450 | 0.0154 | - |
| 0.5538 | 14500 | 0.0003 | - |
| 0.5557 | 14550 | 0.0016 | - |
| 0.5576 | 14600 | 0.0001 | - |
| 0.5595 | 14650 | 0.0001 | - |
| 0.5614 | 14700 | 0.0002 | - |
| 0.5633 | 14750 | 0.0002 | - |
| 0.5652 | 14800 | 0.0004 | - |
| 0.5671 | 14850 | 0.0004 | - |
| 0.5690 | 14900 | 0.0004 | - |
| 0.5709 | 14950 | 0.0002 | - |
| 0.5728 | 15000 | 0.0001 | - |
| 0.5748 | 15050 | 0.0001 | - |
| 0.5767 | 15100 | 0.0002 | - |
| 0.5786 | 15150 | 0.0002 | - |
| 0.5805 | 15200 | 0.0001 | - |
| 0.5824 | 15250 | 0.0002 | - |
| 0.5843 | 15300 | 0.0001 | - |
| 0.5862 | 15350 | 0.0002 | - |
| 0.5881 | 15400 | 0.0001 | - |
| 0.5900 | 15450 | 0.0113 | - |
| 0.5919 | 15500 | 0.0002 | - |
| 0.5939 | 15550 | 0.0002 | - |
| 0.5958 | 15600 | 0.0001 | - |
| 0.5977 | 15650 | 0.0001 | - |
| 0.5996 | 15700 | 0.0004 | - |
| 0.6015 | 15750 | 0.0009 | - |
| 0.6034 | 15800 | 0.001 | - |
| 0.6053 | 15850 | 0.0002 | - |
| 0.6072 | 15900 | 0.0002 | - |
| 0.6091 | 15950 | 0.0003 | - |
| 0.6110 | 16000 | 0.0002 | - |
| 0.6129 | 16050 | 0.0004 | - |
| 0.6149 | 16100 | 0.0001 | - |
| 0.6168 | 16150 | 0.0001 | - |
| 0.6187 | 16200 | 0.0001 | - |
| 0.6206 | 16250 | 0.0001 | - |
| 0.6225 | 16300 | 0.0001 | - |
| 0.6244 | 16350 | 0.0001 | - |
| 0.6263 | 16400 | 0.0001 | - |
| 0.6282 | 16450 | 0.0008 | - |
| 0.6301 | 16500 | 0.0001 | - |
| 0.6320 | 16550 | 0.0001 | - |
| 0.6340 | 16600 | 0.0001 | - |
| 0.6359 | 16650 | 0.0001 | - |
| 0.6378 | 16700 | 0.0001 | - |
| 0.6397 | 16750 | 0.0001 | - |
| 0.6416 | 16800 | 0.0001 | - |
| 0.6435 | 16850 | 0.0004 | - |
| 0.6454 | 16900 | 0.0001 | - |
| 0.6473 | 16950 | 0.0001 | - |
| 0.6492 | 17000 | 0.0001 | - |
| 0.6511 | 17050 | 0.0001 | - |
| 0.6530 | 17100 | 0.0001 | - |
| 0.6550 | 17150 | 0.0003 | - |
| 0.6569 | 17200 | 0.0003 | - |
| 0.6588 | 17250 | 0.0002 | - |
| 0.6607 | 17300 | 0.0002 | - |
| 0.6626 | 17350 | 0.0011 | - |
| 0.6645 | 17400 | 0.0003 | - |
| 0.6664 | 17450 | 0.0001 | - |
| 0.6683 | 17500 | 0.0003 | - |
| 0.6702 | 17550 | 0.0001 | - |
| 0.6721 | 17600 | 0.0002 | - |
| 0.6741 | 17650 | 0.0001 | - |
| 0.6760 | 17700 | 0.0001 | - |
| 0.6779 | 17750 | 0.0001 | - |
| 0.6798 | 17800 | 0.0001 | - |
| 0.6817 | 17850 | 0.0001 | - |
| 0.6836 | 17900 | 0.0001 | - |
| 0.6855 | 17950 | 0.0001 | - |
| 0.6874 | 18000 | 0.0001 | - |
| 0.6893 | 18050 | 0.0001 | - |
| 0.6912 | 18100 | 0.0001 | - |
| 0.6931 | 18150 | 0.0023 | - |
| 0.6951 | 18200 | 0.0001 | - |
| 0.6970 | 18250 | 0.0001 | - |
| 0.6989 | 18300 | 0.0001 | - |
| 0.7008 | 18350 | 0.0001 | - |
| 0.7027 | 18400 | 0.0001 | - |
| 0.7046 | 18450 | 0.0001 | - |
| 0.7065 | 18500 | 0.0001 | - |
| 0.7084 | 18550 | 0.0001 | - |
| 0.7103 | 18600 | 0.0001 | - |
| 0.7122 | 18650 | 0.0001 | - |
| 0.7141 | 18700 | 0.0001 | - |
| 0.7161 | 18750 | 0.0001 | - |
| 0.7180 | 18800 | 0.0001 | - |
| 0.7199 | 18850 | 0.0001 | - |
| 0.7218 | 18900 | 0.0001 | - |
| 0.7237 | 18950 | 0.0001 | - |
| 0.7256 | 19000 | 0.0001 | - |
| 0.7275 | 19050 | 0.0001 | - |
| 0.7294 | 19100 | 0.0104 | - |
| 0.7313 | 19150 | 0.0041 | - |
| 0.7332 | 19200 | 0.0045 | - |
| 0.7352 | 19250 | 0.0002 | - |
| 0.7371 | 19300 | 0.0007 | - |
| 0.7390 | 19350 | 0.0021 | - |
| 0.7409 | 19400 | 0.0001 | - |
| 0.7428 | 19450 | 0.0001 | - |
| 0.7447 | 19500 | 0.0001 | - |
| 0.7466 | 19550 | 0.0001 | - |
| 0.7485 | 19600 | 0.0001 | - |
| 0.7504 | 19650 | 0.0001 | - |
| 0.7523 | 19700 | 0.0001 | - |
| 0.7542 | 19750 | 0.0001 | - |
| 0.7562 | 19800 | 0.0001 | - |
| 0.7581 | 19850 | 0.0001 | - |
| 0.7600 | 19900 | 0.0001 | - |
| 0.7619 | 19950 | 0.0001 | - |
| 0.7638 | 20000 | 0.0001 | - |
| 0.7657 | 20050 | 0.0001 | - |
| 0.7676 | 20100 | 0.0001 | - |
| 0.7695 | 20150 | 0.0001 | - |
| 0.7714 | 20200 | 0.0001 | - |
| 0.7733 | 20250 | 0.0001 | - |
| 0.7753 | 20300 | 0.0001 | - |
| 0.7772 | 20350 | 0.0001 | - |
| 0.7791 | 20400 | 0.0001 | - |
| 0.7810 | 20450 | 0.0001 | - |
| 0.7829 | 20500 | 0.0001 | - |
| 0.7848 | 20550 | 0.0001 | - |
| 0.7867 | 20600 | 0.0072 | - |
| 0.7886 | 20650 | 0.0034 | - |
| 0.7905 | 20700 | 0.0001 | - |
| 0.7924 | 20750 | 0.0002 | - |
| 0.7943 | 20800 | 0.0001 | - |
| 0.7963 | 20850 | 0.0001 | - |
| 0.7982 | 20900 | 0.0004 | - |
| 0.8001 | 20950 | 0.0001 | - |
| 0.8020 | 21000 | 0.0001 | - |
| 0.8039 | 21050 | 0.0001 | - |
| 0.8058 | 21100 | 0.0001 | - |
| 0.8077 | 21150 | 0.0001 | - |
| 0.8096 | 21200 | 0.0001 | - |
| 0.8115 | 21250 | 0.0001 | - |
| 0.8134 | 21300 | 0.0001 | - |
| 0.8154 | 21350 | 0.0001 | - |
| 0.8173 | 21400 | 0.0001 | - |
| 0.8192 | 21450 | 0.0001 | - |
| 0.8211 | 21500 | 0.0002 | - |
| 0.8230 | 21550 | 0.0002 | - |
| 0.8249 | 21600 | 0.0001 | - |
| 0.8268 | 21650 | 0.0001 | - |
| 0.8287 | 21700 | 0.0001 | - |
| 0.8306 | 21750 | 0.0001 | - |
| 0.8325 | 21800 | 0.0001 | - |
| 0.8344 | 21850 | 0.0019 | - |
| 0.8364 | 21900 | 0.0001 | - |
| 0.8383 | 21950 | 0.0001 | - |
| 0.8402 | 22000 | 0.0001 | - |
| 0.8421 | 22050 | 0.0001 | - |
| 0.8440 | 22100 | 0.0001 | - |
| 0.8459 | 22150 | 0.0001 | - |
| 0.8478 | 22200 | 0.0001 | - |
| 0.8497 | 22250 | 0.0001 | - |
| 0.8516 | 22300 | 0.0001 | - |
| 0.8535 | 22350 | 0.0001 | - |
| 0.8555 | 22400 | 0.0001 | - |
| 0.8574 | 22450 | 0.0001 | - |
| 0.8593 | 22500 | 0.002 | - |
| 0.8612 | 22550 | 0.0001 | - |
| 0.8631 | 22600 | 0.0001 | - |
| 0.8650 | 22650 | 0.0001 | - |
| 0.8669 | 22700 | 0.0001 | - |
| 0.8688 | 22750 | 0.0001 | - |
| 0.8707 | 22800 | 0.0001 | - |
| 0.8726 | 22850 | 0.0001 | - |
| 0.8745 | 22900 | 0.0001 | - |
| 0.8765 | 22950 | 0.0001 | - |
| 0.8784 | 23000 | 0.0001 | - |
| 0.8803 | 23050 | 0.0001 | - |
| 0.8822 | 23100 | 0.0001 | - |
| 0.8841 | 23150 | 0.0001 | - |
| 0.8860 | 23200 | 0.0001 | - |
| 0.8879 | 23250 | 0.0001 | - |
| 0.8898 | 23300 | 0.0001 | - |
| 0.8917 | 23350 | 0.0001 | - |
| 0.8936 | 23400 | 0.0001 | - |
| 0.8956 | 23450 | 0.0001 | - |
| 0.8975 | 23500 | 0.0001 | - |
| 0.8994 | 23550 | 0.0001 | - |
| 0.9013 | 23600 | 0.0001 | - |
| 0.9032 | 23650 | 0.0001 | - |
| 0.9051 | 23700 | 0.0001 | - |
| 0.9070 | 23750 | 0.0001 | - |
| 0.9089 | 23800 | 0.0001 | - |
| 0.9108 | 23850 | 0.0001 | - |
| 0.9127 | 23900 | 0.0001 | - |
| 0.9146 | 23950 | 0.0001 | - |
| 0.9166 | 24000 | 0.0001 | - |
| 0.9185 | 24050 | 0.0001 | - |
| 0.9204 | 24100 | 0.0015 | - |
| 0.9223 | 24150 | 0.0001 | - |
| 0.9242 | 24200 | 0.0009 | - |
| 0.9261 | 24250 | 0.0002 | - |
| 0.9280 | 24300 | 0.0001 | - |
| 0.9299 | 24350 | 0.0001 | - |
| 0.9318 | 24400 | 0.0001 | - |
| 0.9337 | 24450 | 0.0001 | - |
| 0.9357 | 24500 | 0.0001 | - |
| 0.9376 | 24550 | 0.0001 | - |
| 0.9395 | 24600 | 0.0001 | - |
| 0.9414 | 24650 | 0.0004 | - |
| 0.9433 | 24700 | 0.0001 | - |
| 0.9452 | 24750 | 0.0001 | - |
| 0.9471 | 24800 | 0.0001 | - |
| 0.9490 | 24850 | 0.0001 | - |
| 0.9509 | 24900 | 0.0001 | - |
| 0.9528 | 24950 | 0.0001 | - |
| 0.9547 | 25000 | 0.0001 | - |
| 0.9567 | 25050 | 0.0001 | - |
| 0.9586 | 25100 | 0.0001 | - |
| 0.9605 | 25150 | 0.0001 | - |
| 0.9624 | 25200 | 0.0001 | - |
| 0.9643 | 25250 | 0.0001 | - |
| 0.9662 | 25300 | 0.0001 | - |
| 0.9681 | 25350 | 0.0001 | - |
| 0.9700 | 25400 | 0.0001 | - |
| 0.9719 | 25450 | 0.0001 | - |
| 0.9738 | 25500 | 0.0001 | - |
| 0.9757 | 25550 | 0.0001 | - |
| 0.9777 | 25600 | 0.0277 | - |
| 0.9796 | 25650 | 0.0008 | - |
| 0.9815 | 25700 | 0.0001 | - |
| 0.9834 | 25750 | 0.0001 | - |
| 0.9853 | 25800 | 0.0001 | - |
| 0.9872 | 25850 | 0.0001 | - |
| 0.9891 | 25900 | 0.0001 | - |
| 0.9910 | 25950 | 0.0001 | - |
| 0.9929 | 26000 | 0.0001 | - |
| 0.9948 | 26050 | 0.0001 | - |
| 0.9968 | 26100 | 0.0001 | - |
| 0.9987 | 26150 | 0.0001 | - |
| 1.0 | 26185 | - | 0.3529 |
| 1.0006 | 26200 | 0.0001 | - |
| 1.0025 | 26250 | 0.0001 | - |
| 1.0044 | 26300 | 0.0001 | - |
| 1.0063 | 26350 | 0.0001 | - |
| 1.0082 | 26400 | 0.0001 | - |
| 1.0101 | 26450 | 0.0001 | - |
| 1.0120 | 26500 | 0.0001 | - |
| 1.0139 | 26550 | 0.0001 | - |
| 1.0158 | 26600 | 0.0001 | - |
| 1.0178 | 26650 | 0.0001 | - |
| 1.0197 | 26700 | 0.0001 | - |
| 1.0216 | 26750 | 0.0001 | - |
| 1.0235 | 26800 | 0.0001 | - |
| 1.0254 | 26850 | 0.0001 | - |
| 1.0273 | 26900 | 0.0001 | - |
| 1.0292 | 26950 | 0.0001 | - |
| 1.0311 | 27000 | 0.0001 | - |
| 1.0330 | 27050 | 0.0001 | - |
| 1.0349 | 27100 | 0.0002 | - |
| 1.0369 | 27150 | 0.0001 | - |
| 1.0388 | 27200 | 0.0001 | - |
| 1.0407 | 27250 | 0.0001 | - |
| 1.0426 | 27300 | 0.0001 | - |
| 1.0445 | 27350 | 0.0001 | - |
| 1.0464 | 27400 | 0.0001 | - |
| 1.0483 | 27450 | 0.0001 | - |
| 1.0502 | 27500 | 0.0001 | - |
| 1.0521 | 27550 | 0.0001 | - |
| 1.0540 | 27600 | 0.0001 | - |
| 1.0559 | 27650 | 0.0001 | - |
| 1.0579 | 27700 | 0.0001 | - |
| 1.0598 | 27750 | 0.0001 | - |
| 1.0617 | 27800 | 0.0001 | - |
| 1.0636 | 27850 | 0.0001 | - |
| 1.0655 | 27900 | 0.0001 | - |
| 1.0674 | 27950 | 0.0001 | - |
| 1.0693 | 28000 | 0.0001 | - |
| 1.0712 | 28050 | 0.0001 | - |
| 1.0731 | 28100 | 0.0001 | - |
| 1.0750 | 28150 | 0.0003 | - |
| 1.0770 | 28200 | 0.0001 | - |
| 1.0789 | 28250 | 0.0001 | - |
| 1.0808 | 28300 | 0.0002 | - |
| 1.0827 | 28350 | 0.0009 | - |
| 1.0846 | 28400 | 0.0003 | - |
| 1.0865 | 28450 | 0.0001 | - |
| 1.0884 | 28500 | 0.0001 | - |
| 1.0903 | 28550 | 0.0001 | - |
| 1.0922 | 28600 | 0.0001 | - |
| 1.0941 | 28650 | 0.0001 | - |
| 1.0960 | 28700 | 0.0001 | - |
| 1.0980 | 28750 | 0.0001 | - |
| 1.0999 | 28800 | 0.0001 | - |
| 1.1018 | 28850 | 0.0001 | - |
| 1.1037 | 28900 | 0.0001 | - |
| 1.1056 | 28950 | 0.0001 | - |
| 1.1075 | 29000 | 0.0001 | - |
| 1.1094 | 29050 | 0.0001 | - |
| 1.1113 | 29100 | 0.0001 | - |
| 1.1132 | 29150 | 0.0001 | - |
| 1.1151 | 29200 | 0.0001 | - |
| 1.1171 | 29250 | 0.0001 | - |
| 1.1190 | 29300 | 0.0001 | - |
| 1.1209 | 29350 | 0.0001 | - |
| 1.1228 | 29400 | 0.0001 | - |
| 1.1247 | 29450 | 0.0001 | - |
| 1.1266 | 29500 | 0.0001 | - |
| 1.1285 | 29550 | 0.0001 | - |
| 1.1304 | 29600 | 0.0001 | - |
| 1.1323 | 29650 | 0.0001 | - |
| 1.1342 | 29700 | 0.0001 | - |
| 1.1361 | 29750 | 0.0001 | - |
| 1.1381 | 29800 | 0.0001 | - |
| 1.1400 | 29850 | 0.0001 | - |
| 1.1419 | 29900 | 0.0001 | - |
| 1.1438 | 29950 | 0.0001 | - |
| 1.1457 | 30000 | 0.0001 | - |
| 1.1476 | 30050 | 0.0001 | - |
| 1.1495 | 30100 | 0.0001 | - |
| 1.1514 | 30150 | 0.0001 | - |
| 1.1533 | 30200 | 0.0001 | - |
| 1.1552 | 30250 | 0.0001 | - |
| 1.1572 | 30300 | 0.0001 | - |
| 1.1591 | 30350 | 0.0001 | - |
| 1.1610 | 30400 | 0.0001 | - |
| 1.1629 | 30450 | 0.0001 | - |
| 1.1648 | 30500 | 0.0001 | - |
| 1.1667 | 30550 | 0.0001 | - |
| 1.1686 | 30600 | 0.0001 | - |
| 1.1705 | 30650 | 0.0001 | - |
| 1.1724 | 30700 | 0.0001 | - |
| 1.1743 | 30750 | 0.0001 | - |
| 1.1762 | 30800 | 0.0001 | - |
| 1.1782 | 30850 | 0.0001 | - |
| 1.1801 | 30900 | 0.0001 | - |
| 1.1820 | 30950 | 0.0001 | - |
| 1.1839 | 31000 | 0.0001 | - |
| 1.1858 | 31050 | 0.0001 | - |
| 1.1877 | 31100 | 0.0001 | - |
| 1.1896 | 31150 | 0.0001 | - |
| 1.1915 | 31200 | 0.0001 | - |
| 1.1934 | 31250 | 0.0001 | - |
| 1.1953 | 31300 | 0.0008 | - |
| 1.1973 | 31350 | 0.0006 | - |
| 1.1992 | 31400 | 0.0001 | - |
| 1.2011 | 31450 | 0.0001 | - |
| 1.2030 | 31500 | 0.0001 | - |
| 1.2049 | 31550 | 0.0001 | - |
| 1.2068 | 31600 | 0.0001 | - |
| 1.2087 | 31650 | 0.0001 | - |
| 1.2106 | 31700 | 0.0001 | - |
| 1.2125 | 31750 | 0.0001 | - |
| 1.2144 | 31800 | 0.0001 | - |
| 1.2163 | 31850 | 0.0001 | - |
| 1.2183 | 31900 | 0.0001 | - |
| 1.2202 | 31950 | 0.0001 | - |
| 1.2221 | 32000 | 0.0001 | - |
| 1.2240 | 32050 | 0.0001 | - |
| 1.2259 | 32100 | 0.0001 | - |
| 1.2278 | 32150 | 0.0001 | - |
| 1.2297 | 32200 | 0.0001 | - |
| 1.2316 | 32250 | 0.0124 | - |
| 1.2335 | 32300 | 0.0001 | - |
| 1.2354 | 32350 | 0.0001 | - |
| 1.2373 | 32400 | 0.0001 | - |
| 1.2393 | 32450 | 0.0002 | - |
| 1.2412 | 32500 | 0.0001 | - |
| 1.2431 | 32550 | 0.0001 | - |
| 1.2450 | 32600 | 0.0001 | - |
| 1.2469 | 32650 | 0.0001 | - |
| 1.2488 | 32700 | 0.0001 | - |
| 1.2507 | 32750 | 0.0001 | - |
| 1.2526 | 32800 | 0.0001 | - |
| 1.2545 | 32850 | 0.0001 | - |
| 1.2564 | 32900 | 0.0001 | - |
| 1.2584 | 32950 | 0.0001 | - |
| 1.2603 | 33000 | 0.0001 | - |
| 1.2622 | 33050 | 0.0001 | - |
| 1.2641 | 33100 | 0.0001 | - |
| 1.2660 | 33150 | 0.0001 | - |
| 1.2679 | 33200 | 0.0001 | - |
| 1.2698 | 33250 | 0.0001 | - |
| 1.2717 | 33300 | 0.0001 | - |
| 1.2736 | 33350 | 0.0001 | - |
| 1.2755 | 33400 | 0.0001 | - |
| 1.2774 | 33450 | 0.0001 | - |
| 1.2794 | 33500 | 0.0001 | - |
| 1.2813 | 33550 | 0.0001 | - |
| 1.2832 | 33600 | 0.0001 | - |
| 1.2851 | 33650 | 0.0001 | - |
| 1.2870 | 33700 | 0.0001 | - |
| 1.2889 | 33750 | 0.0001 | - |
| 1.2908 | 33800 | 0.0001 | - |
| 1.2927 | 33850 | 0.0001 | - |
| 1.2946 | 33900 | 0.0001 | - |
| 1.2965 | 33950 | 0.0001 | - |
| 1.2985 | 34000 | 0.0001 | - |
| 1.3004 | 34050 | 0.0001 | - |
| 1.3023 | 34100 | 0.0001 | - |
| 1.3042 | 34150 | 0.0001 | - |
| 1.3061 | 34200 | 0.0004 | - |
| 1.3080 | 34250 | 0.0001 | - |
| 1.3099 | 34300 | 0.0001 | - |
| 1.3118 | 34350 | 0.0001 | - |
| 1.3137 | 34400 | 0.0001 | - |
| 1.3156 | 34450 | 0.0001 | - |
| 1.3175 | 34500 | 0.0001 | - |
| 1.3195 | 34550 | 0.0001 | - |
| 1.3214 | 34600 | 0.0001 | - |
| 1.3233 | 34650 | 0.0001 | - |
| 1.3252 | 34700 | 0.0001 | - |
| 1.3271 | 34750 | 0.0001 | - |
| 1.3290 | 34800 | 0.0001 | - |
| 1.3309 | 34850 | 0.0001 | - |
| 1.3328 | 34900 | 0.0001 | - |
| 1.3347 | 34950 | 0.0001 | - |
| 1.3366 | 35000 | 0.0001 | - |
| 1.3386 | 35050 | 0.0001 | - |
| 1.3405 | 35100 | 0.0001 | - |
| 1.3424 | 35150 | 0.0001 | - |
| 1.3443 | 35200 | 0.0001 | - |
| 1.3462 | 35250 | 0.0001 | - |
| 1.3481 | 35300 | 0.0001 | - |
| 1.3500 | 35350 | 0.0001 | - |
| 1.3519 | 35400 | 0.0001 | - |
| 1.3538 | 35450 | 0.0001 | - |
| 1.3557 | 35500 | 0.0001 | - |
| 1.3576 | 35550 | 0.0001 | - |
| 1.3596 | 35600 | 0.0001 | - |
| 1.3615 | 35650 | 0.0001 | - |
| 1.3634 | 35700 | 0.0001 | - |
| 1.3653 | 35750 | 0.0001 | - |
| 1.3672 | 35800 | 0.0001 | - |
| 1.3691 | 35850 | 0.0001 | - |
| 1.3710 | 35900 | 0.0001 | - |
| 1.3729 | 35950 | 0.0001 | - |
| 1.3748 | 36000 | 0.0001 | - |
| 1.3767 | 36050 | 0.0001 | - |
| 1.3787 | 36100 | 0.0001 | - |
| 1.3806 | 36150 | 0.0001 | - |
| 1.3825 | 36200 | 0.0001 | - |
| 1.3844 | 36250 | 0.0001 | - |
| 1.3863 | 36300 | 0.0001 | - |
| 1.3882 | 36350 | 0.0001 | - |
| 1.3901 | 36400 | 0.0001 | - |
| 1.3920 | 36450 | 0.0001 | - |
| 1.3939 | 36500 | 0.0001 | - |
| 1.3958 | 36550 | 0.0001 | - |
| 1.3977 | 36600 | 0.0001 | - |
| 1.3997 | 36650 | 0.0001 | - |
| 1.4016 | 36700 | 0.0119 | - |
| 1.4035 | 36750 | 0.0007 | - |
| 1.4054 | 36800 | 0.0002 | - |
| 1.4073 | 36850 | 0.0002 | - |
| 1.4092 | 36900 | 0.0001 | - |
| 1.4111 | 36950 | 0.0022 | - |
| 1.4130 | 37000 | 0.0001 | - |
| 1.4149 | 37050 | 0.0001 | - |
| 1.4168 | 37100 | 0.0001 | - |
| 1.4188 | 37150 | 0.0001 | - |
| 1.4207 | 37200 | 0.0001 | - |
| 1.4226 | 37250 | 0.0001 | - |
| 1.4245 | 37300 | 0.0001 | - |
| 1.4264 | 37350 | 0.0001 | - |
| 1.4283 | 37400 | 0.0001 | - |
| 1.4302 | 37450 | 0.0001 | - |
| 1.4321 | 37500 | 0.0001 | - |
| 1.4340 | 37550 | 0.0001 | - |
| 1.4359 | 37600 | 0.0001 | - |
| 1.4378 | 37650 | 0.0001 | - |
| 1.4398 | 37700 | 0.0001 | - |
| 1.4417 | 37750 | 0.0001 | - |
| 1.4436 | 37800 | 0.0001 | - |
| 1.4455 | 37850 | 0.0001 | - |
| 1.4474 | 37900 | 0.0001 | - |
| 1.4493 | 37950 | 0.0001 | - |
| 1.4512 | 38000 | 0.0001 | - |
| 1.4531 | 38050 | 0.0001 | - |
| 1.4550 | 38100 | 0.0001 | - |
| 1.4569 | 38150 | 0.0001 | - |
| 1.4589 | 38200 | 0.0001 | - |
| 1.4608 | 38250 | 0.0001 | - |
| 1.4627 | 38300 | 0.0001 | - |
| 1.4646 | 38350 | 0.0001 | - |
| 1.4665 | 38400 | 0.0001 | - |
| 1.4684 | 38450 | 0.0001 | - |
| 1.4703 | 38500 | 0.0001 | - |
| 1.4722 | 38550 | 0.0001 | - |
| 1.4741 | 38600 | 0.0001 | - |
| 1.4760 | 38650 | 0.0001 | - |
| 1.4779 | 38700 | 0.0001 | - |
| 1.4799 | 38750 | 0.0001 | - |
| 1.4818 | 38800 | 0.0001 | - |
| 1.4837 | 38850 | 0.0001 | - |
| 1.4856 | 38900 | 0.0001 | - |
| 1.4875 | 38950 | 0.0001 | - |
| 1.4894 | 39000 | 0.0001 | - |
| 1.4913 | 39050 | 0.0001 | - |
| 1.4932 | 39100 | 0.0001 | - |
| 1.4951 | 39150 | 0.0001 | - |
| 1.4970 | 39200 | 0.0001 | - |
| 1.4989 | 39250 | 0.0001 | - |
| 1.5009 | 39300 | 0.0001 | - |
| 1.5028 | 39350 | 0.0003 | - |
| 1.5047 | 39400 | 0.0001 | - |
| 1.5066 | 39450 | 0.0001 | - |
| 1.5085 | 39500 | 0.0001 | - |
| 1.5104 | 39550 | 0.0001 | - |
| 1.5123 | 39600 | 0.0001 | - |
| 1.5142 | 39650 | 0.0001 | - |
| 1.5161 | 39700 | 0.0001 | - |
| 1.5180 | 39750 | 0.0001 | - |
| 1.5200 | 39800 | 0.0001 | - |
| 1.5219 | 39850 | 0.0001 | - |
| 1.5238 | 39900 | 0.0001 | - |
| 1.5257 | 39950 | 0.0001 | - |
| 1.5276 | 40000 | 0.0001 | - |
| 1.5295 | 40050 | 0.0001 | - |
| 1.5314 | 40100 | 0.0001 | - |
| 1.5333 | 40150 | 0.0001 | - |
| 1.5352 | 40200 | 0.0001 | - |
| 1.5371 | 40250 | 0.0001 | - |
| 1.5390 | 40300 | 0.0068 | - |
| 1.5410 | 40350 | 0.0001 | - |
| 1.5429 | 40400 | 0.0001 | - |
| 1.5448 | 40450 | 0.0001 | - |
| 1.5467 | 40500 | 0.0001 | - |
| 1.5486 | 40550 | 0.0001 | - |
| 1.5505 | 40600 | 0.0001 | - |
| 1.5524 | 40650 | 0.0001 | - |
| 1.5543 | 40700 | 0.0001 | - |
| 1.5562 | 40750 | 0.0001 | - |
| 1.5581 | 40800 | 0.0001 | - |
| 1.5601 | 40850 | 0.0001 | - |
| 1.5620 | 40900 | 0.0001 | - |
| 1.5639 | 40950 | 0.0001 | - |
| 1.5658 | 41000 | 0.0001 | - |
| 1.5677 | 41050 | 0.0088 | - |
| 1.5696 | 41100 | 0.0001 | - |
| 1.5715 | 41150 | 0.0001 | - |
| 1.5734 | 41200 | 0.0001 | - |
| 1.5753 | 41250 | 0.0001 | - |
| 1.5772 | 41300 | 0.0001 | - |
| 1.5791 | 41350 | 0.0001 | - |
| 1.5811 | 41400 | 0.0001 | - |
| 1.5830 | 41450 | 0.0001 | - |
| 1.5849 | 41500 | 0.0001 | - |
| 1.5868 | 41550 | 0.0001 | - |
| 1.5887 | 41600 | 0.0001 | - |
| 1.5906 | 41650 | 0.0001 | - |
| 1.5925 | 41700 | 0.0001 | - |
| 1.5944 | 41750 | 0.0 | - |
| 1.5963 | 41800 | 0.0001 | - |
| 1.5982 | 41850 | 0.0001 | - |
| 1.6002 | 41900 | 0.0001 | - |
| 1.6021 | 41950 | 0.0001 | - |
| 1.6040 | 42000 | 0.0001 | - |
| 1.6059 | 42050 | 0.0001 | - |
| 1.6078 | 42100 | 0.0 | - |
| 1.6097 | 42150 | 0.0001 | - |
| 1.6116 | 42200 | 0.0001 | - |
| 1.6135 | 42250 | 0.0001 | - |
| 1.6154 | 42300 | 0.0001 | - |
| 1.6173 | 42350 | 0.0001 | - |
| 1.6192 | 42400 | 0.0001 | - |
| 1.6212 | 42450 | 0.0001 | - |
| 1.6231 | 42500 | 0.0001 | - |
| 1.6250 | 42550 | 0.0001 | - |
| 1.6269 | 42600 | 0.0001 | - |
| 1.6288 | 42650 | 0.0001 | - |
| 1.6307 | 42700 | 0.0001 | - |
| 1.6326 | 42750 | 0.0001 | - |
| 1.6345 | 42800 | 0.0001 | - |
| 1.6364 | 42850 | 0.0001 | - |
| 1.6383 | 42900 | 0.0011 | - |
| 1.6403 | 42950 | 0.0001 | - |
| 1.6422 | 43000 | 0.0003 | - |
| 1.6441 | 43050 | 0.0001 | - |
| 1.6460 | 43100 | 0.0307 | - |
| 1.6479 | 43150 | 0.0001 | - |
| 1.6498 | 43200 | 0.0153 | - |
| 1.6517 | 43250 | 0.0001 | - |
| 1.6536 | 43300 | 0.0155 | - |
| 1.6555 | 43350 | 0.0002 | - |
| 1.6574 | 43400 | 0.0001 | - |
| 1.6593 | 43450 | 0.0001 | - |
| 1.6613 | 43500 | 0.0001 | - |
| 1.6632 | 43550 | 0.0001 | - |
| 1.6651 | 43600 | 0.0001 | - |
| 1.6670 | 43650 | 0.0001 | - |
| 1.6689 | 43700 | 0.0153 | - |
| 1.6708 | 43750 | 0.0001 | - |
| 1.6727 | 43800 | 0.0001 | - |
| 1.6746 | 43850 | 0.0093 | - |
| 1.6765 | 43900 | 0.0001 | - |
| 1.6784 | 43950 | 0.0001 | - |
| 1.6804 | 44000 | 0.004 | - |
| 1.6823 | 44050 | 0.0003 | - |
| 1.6842 | 44100 | 0.0001 | - |
| 1.6861 | 44150 | 0.0001 | - |
| 1.6880 | 44200 | 0.0001 | - |
| 1.6899 | 44250 | 0.0001 | - |
| 1.6918 | 44300 | 0.0001 | - |
| 1.6937 | 44350 | 0.0001 | - |
| 1.6956 | 44400 | 0.0001 | - |
| 1.6975 | 44450 | 0.0001 | - |
| 1.6994 | 44500 | 0.0001 | - |
| 1.7014 | 44550 | 0.0001 | - |
| 1.7033 | 44600 | 0.0001 | - |
| 1.7052 | 44650 | 0.0001 | - |
| 1.7071 | 44700 | 0.0001 | - |
| 1.7090 | 44750 | 0.0001 | - |
| 1.7109 | 44800 | 0.0001 | - |
| 1.7128 | 44850 | 0.0001 | - |
| 1.7147 | 44900 | 0.0001 | - |
| 1.7166 | 44950 | 0.0001 | - |
| 1.7185 | 45000 | 0.0001 | - |
| 1.7205 | 45050 | 0.0001 | - |
| 1.7224 | 45100 | 0.0001 | - |
| 1.7243 | 45150 | 0.0001 | - |
| 1.7262 | 45200 | 0.0001 | - |
| 1.7281 | 45250 | 0.0001 | - |
| 1.7300 | 45300 | 0.0001 | - |
| 1.7319 | 45350 | 0.0001 | - |
| 1.7338 | 45400 | 0.0001 | - |
| 1.7357 | 45450 | 0.0001 | - |
| 1.7376 | 45500 | 0.0001 | - |
| 1.7395 | 45550 | 0.0001 | - |
| 1.7415 | 45600 | 0.0001 | - |
| 1.7434 | 45650 | 0.0001 | - |
| 1.7453 | 45700 | 0.0001 | - |
| 1.7472 | 45750 | 0.0001 | - |
| 1.7491 | 45800 | 0.0001 | - |
| 1.7510 | 45850 | 0.0001 | - |
| 1.7529 | 45900 | 0.0001 | - |
| 1.7548 | 45950 | 0.0001 | - |
| 1.7567 | 46000 | 0.0001 | - |
| 1.7586 | 46050 | 0.0001 | - |
| 1.7605 | 46100 | 0.0001 | - |
| 1.7625 | 46150 | 0.0001 | - |
| 1.7644 | 46200 | 0.0001 | - |
| 1.7663 | 46250 | 0.0001 | - |
| 1.7682 | 46300 | 0.0001 | - |
| 1.7701 | 46350 | 0.0 | - |
| 1.7720 | 46400 | 0.0001 | - |
| 1.7739 | 46450 | 0.0001 | - |
| 1.7758 | 46500 | 0.0001 | - |
| 1.7777 | 46550 | 0.0001 | - |
| 1.7796 | 46600 | 0.0001 | - |
| 1.7816 | 46650 | 0.0001 | - |
| 1.7835 | 46700 | 0.0001 | - |
| 1.7854 | 46750 | 0.0001 | - |
| 1.7873 | 46800 | 0.0001 | - |
| 1.7892 | 46850 | 0.0001 | - |
| 1.7911 | 46900 | 0.0001 | - |
| 1.7930 | 46950 | 0.0001 | - |
| 1.7949 | 47000 | 0.0001 | - |
| 1.7968 | 47050 | 0.0001 | - |
| 1.7987 | 47100 | 0.0001 | - |
| 1.8006 | 47150 | 0.0001 | - |
| 1.8026 | 47200 | 0.0001 | - |
| 1.8045 | 47250 | 0.0001 | - |
| 1.8064 | 47300 | 0.0 | - |
| 1.8083 | 47350 | 0.0001 | - |
| 1.8102 | 47400 | 0.0001 | - |
| 1.8121 | 47450 | 0.0001 | - |
| 1.8140 | 47500 | 0.0001 | - |
| 1.8159 | 47550 | 0.0001 | - |
| 1.8178 | 47600 | 0.0001 | - |
| 1.8197 | 47650 | 0.0001 | - |
| 1.8217 | 47700 | 0.0001 | - |
| 1.8236 | 47750 | 0.0001 | - |
| 1.8255 | 47800 | 0.0001 | - |
| 1.8274 | 47850 | 0.0001 | - |
| 1.8293 | 47900 | 0.0001 | - |
| 1.8312 | 47950 | 0.0001 | - |
| 1.8331 | 48000 | 0.0001 | - |
| 1.8350 | 48050 | 0.0001 | - |
| 1.8369 | 48100 | 0.0 | - |
| 1.8388 | 48150 | 0.0001 | - |
| 1.8407 | 48200 | 0.0001 | - |
| 1.8427 | 48250 | 0.0001 | - |
| 1.8446 | 48300 | 0.0001 | - |
| 1.8465 | 48350 | 0.0001 | - |
| 1.8484 | 48400 | 0.0001 | - |
| 1.8503 | 48450 | 0.0001 | - |
| 1.8522 | 48500 | 0.0001 | - |
| 1.8541 | 48550 | 0.0001 | - |
| 1.8560 | 48600 | 0.0001 | - |
| 1.8579 | 48650 | 0.0001 | - |
| 1.8598 | 48700 | 0.0001 | - |
| 1.8618 | 48750 | 0.0001 | - |
| 1.8637 | 48800 | 0.0001 | - |
| 1.8656 | 48850 | 0.0001 | - |
| 1.8675 | 48900 | 0.0001 | - |
| 1.8694 | 48950 | 0.0001 | - |
| 1.8713 | 49000 | 0.0001 | - |
| 1.8732 | 49050 | 0.0001 | - |
| 1.8751 | 49100 | 0.0001 | - |
| 1.8770 | 49150 | 0.0001 | - |
| 1.8789 | 49200 | 0.0001 | - |
| 1.8808 | 49250 | 0.0001 | - |
| 1.8828 | 49300 | 0.0001 | - |
| 1.8847 | 49350 | 0.0001 | - |
| 1.8866 | 49400 | 0.0001 | - |
| 1.8885 | 49450 | 0.0001 | - |
| 1.8904 | 49500 | 0.0001 | - |
| 1.8923 | 49550 | 0.0001 | - |
| 1.8942 | 49600 | 0.0001 | - |
| 1.8961 | 49650 | 0.0001 | - |
| 1.8980 | 49700 | 0.0001 | - |
| 1.8999 | 49750 | 0.0001 | - |
| 1.9019 | 49800 | 0.0001 | - |
| 1.9038 | 49850 | 0.0001 | - |
| 1.9057 | 49900 | 0.0001 | - |
| 1.9076 | 49950 | 0.0001 | - |
| 1.9095 | 50000 | 0.0001 | - |
| 1.9114 | 50050 | 0.0001 | - |
| 1.9133 | 50100 | 0.0001 | - |
| 1.9152 | 50150 | 0.0001 | - |
| 1.9171 | 50200 | 0.0002 | - |
| 1.9190 | 50250 | 0.0001 | - |
| 1.9209 | 50300 | 0.0001 | - |
| 1.9229 | 50350 | 0.0001 | - |
| 1.9248 | 50400 | 0.0001 | - |
| 1.9267 | 50450 | 0.0001 | - |
| 1.9286 | 50500 | 0.0001 | - |
| 1.9305 | 50550 | 0.0001 | - |
| 1.9324 | 50600 | 0.0001 | - |
| 1.9343 | 50650 | 0.0001 | - |
| 1.9362 | 50700 | 0.0001 | - |
| 1.9381 | 50750 | 0.0001 | - |
| 1.9400 | 50800 | 0.0001 | - |
| 1.9420 | 50850 | 0.0001 | - |
| 1.9439 | 50900 | 0.0001 | - |
| 1.9458 | 50950 | 0.0001 | - |
| 1.9477 | 51000 | 0.0001 | - |
| 1.9496 | 51050 | 0.0001 | - |
| 1.9515 | 51100 | 0.0001 | - |
| 1.9534 | 51150 | 0.0001 | - |
| 1.9553 | 51200 | 0.0001 | - |
| 1.9572 | 51250 | 0.0 | - |
| 1.9591 | 51300 | 0.0001 | - |
| 1.9610 | 51350 | 0.0001 | - |
| 1.9630 | 51400 | 0.0001 | - |
| 1.9649 | 51450 | 0.0001 | - |
| 1.9668 | 51500 | 0.0001 | - |
| 1.9687 | 51550 | 0.0001 | - |
| 1.9706 | 51600 | 0.0 | - |
| 1.9725 | 51650 | 0.0002 | - |
| 1.9744 | 51700 | 0.0001 | - |
| 1.9763 | 51750 | 0.0001 | - |
| 1.9782 | 51800 | 0.0001 | - |
| 1.9801 | 51850 | 0.0001 | - |
| 1.9821 | 51900 | 0.0001 | - |
| 1.9840 | 51950 | 0.0001 | - |
| 1.9859 | 52000 | 0.0001 | - |
| 1.9878 | 52050 | 0.0001 | - |
| 1.9897 | 52100 | 0.0001 | - |
| 1.9916 | 52150 | 0.0001 | - |
| 1.9935 | 52200 | 0.0001 | - |
| 1.9954 | 52250 | 0.0001 | - |
| 1.9973 | 52300 | 0.0001 | - |
| 1.9992 | 52350 | 0.0001 | - |
| **2.0** | **52370** | **-** | **0.3262** |
| 2.0011 | 52400 | 0.0001 | - |
| 2.0031 | 52450 | 0.0001 | - |
| 2.0050 | 52500 | 0.0001 | - |
| 2.0069 | 52550 | 0.0001 | - |
| 2.0088 | 52600 | 0.0001 | - |
| 2.0107 | 52650 | 0.0001 | - |
| 2.0126 | 52700 | 0.0001 | - |
| 2.0145 | 52750 | 0.0001 | - |
| 2.0164 | 52800 | 0.0001 | - |
| 2.0183 | 52850 | 0.0001 | - |
| 2.0202 | 52900 | 0.0001 | - |
| 2.0222 | 52950 | 0.0001 | - |
| 2.0241 | 53000 | 0.0001 | - |
| 2.0260 | 53050 | 0.0001 | - |
| 2.0279 | 53100 | 0.0001 | - |
| 2.0298 | 53150 | 0.0001 | - |
| 2.0317 | 53200 | 0.0001 | - |
| 2.0336 | 53250 | 0.0001 | - |
| 2.0355 | 53300 | 0.0001 | - |
| 2.0374 | 53350 | 0.0001 | - |
| 2.0393 | 53400 | 0.0001 | - |
| 2.0412 | 53450 | 0.0001 | - |
| 2.0432 | 53500 | 0.0001 | - |
| 2.0451 | 53550 | 0.0001 | - |
| 2.0470 | 53600 | 0.0001 | - |
| 2.0489 | 53650 | 0.0001 | - |
| 2.0508 | 53700 | 0.0001 | - |
| 2.0527 | 53750 | 0.0001 | - |
| 2.0546 | 53800 | 0.0001 | - |
| 2.0565 | 53850 | 0.0001 | - |
| 2.0584 | 53900 | 0.0001 | - |
| 2.0603 | 53950 | 0.0001 | - |
| 2.0622 | 54000 | 0.0001 | - |
| 2.0642 | 54050 | 0.0001 | - |
| 2.0661 | 54100 | 0.0001 | - |
| 2.0680 | 54150 | 0.0001 | - |
| 2.0699 | 54200 | 0.0001 | - |
| 2.0718 | 54250 | 0.0004 | - |
| 2.0737 | 54300 | 0.0001 | - |
| 2.0756 | 54350 | 0.0001 | - |
| 2.0775 | 54400 | 0.0001 | - |
| 2.0794 | 54450 | 0.0001 | - |
| 2.0813 | 54500 | 0.0001 | - |
| 2.0833 | 54550 | 0.0001 | - |
| 2.0852 | 54600 | 0.0001 | - |
| 2.0871 | 54650 | 0.0001 | - |
| 2.0890 | 54700 | 0.0 | - |
| 2.0909 | 54750 | 0.0001 | - |
| 2.0928 | 54800 | 0.0001 | - |
| 2.0947 | 54850 | 0.0001 | - |
| 2.0966 | 54900 | 0.0001 | - |
| 2.0985 | 54950 | 0.0001 | - |
| 2.1004 | 55000 | 0.0001 | - |
| 2.1023 | 55050 | 0.0 | - |
| 2.1043 | 55100 | 0.0001 | - |
| 2.1062 | 55150 | 0.0001 | - |
| 2.1081 | 55200 | 0.0001 | - |
| 2.1100 | 55250 | 0.0004 | - |
| 2.1119 | 55300 | 0.0001 | - |
| 2.1138 | 55350 | 0.0001 | - |
| 2.1157 | 55400 | 0.0001 | - |
| 2.1176 | 55450 | 0.0001 | - |
| 2.1195 | 55500 | 0.0001 | - |
| 2.1214 | 55550 | 0.0001 | - |
| 2.1234 | 55600 | 0.0001 | - |
| 2.1253 | 55650 | 0.0001 | - |
| 2.1272 | 55700 | 0.0001 | - |
| 2.1291 | 55750 | 0.0001 | - |
| 2.1310 | 55800 | 0.0001 | - |
| 2.1329 | 55850 | 0.0001 | - |
| 2.1348 | 55900 | 0.0001 | - |
| 2.1367 | 55950 | 0.0001 | - |
| 2.1386 | 56000 | 0.0001 | - |
| 2.1405 | 56050 | 0.0001 | - |
| 2.1424 | 56100 | 0.0001 | - |
| 2.1444 | 56150 | 0.0001 | - |
| 2.1463 | 56200 | 0.0 | - |
| 2.1482 | 56250 | 0.0001 | - |
| 2.1501 | 56300 | 0.0001 | - |
| 2.1520 | 56350 | 0.0001 | - |
| 2.1539 | 56400 | 0.0 | - |
| 2.1558 | 56450 | 0.0001 | - |
| 2.1577 | 56500 | 0.0001 | - |
| 2.1596 | 56550 | 0.0001 | - |
| 2.1615 | 56600 | 0.0001 | - |
| 2.1635 | 56650 | 0.0 | - |
| 2.1654 | 56700 | 0.0001 | - |
| 2.1673 | 56750 | 0.0001 | - |
| 2.1692 | 56800 | 0.0 | - |
| 2.1711 | 56850 | 0.0 | - |
| 2.1730 | 56900 | 0.0001 | - |
| 2.1749 | 56950 | 0.0001 | - |
| 2.1768 | 57000 | 0.0 | - |
| 2.1787 | 57050 | 0.0001 | - |
| 2.1806 | 57100 | 0.0001 | - |
| 2.1825 | 57150 | 0.0001 | - |
| 2.1845 | 57200 | 0.0001 | - |
| 2.1864 | 57250 | 0.0 | - |
| 2.1883 | 57300 | 0.0001 | - |
| 2.1902 | 57350 | 0.0001 | - |
| 2.1921 | 57400 | 0.0001 | - |
| 2.1940 | 57450 | 0.0001 | - |
| 2.1959 | 57500 | 0.0001 | - |
| 2.1978 | 57550 | 0.0001 | - |
| 2.1997 | 57600 | 0.0001 | - |
| 2.2016 | 57650 | 0.0 | - |
| 2.2036 | 57700 | 0.0001 | - |
| 2.2055 | 57750 | 0.0001 | - |
| 2.2074 | 57800 | 0.0001 | - |
| 2.2093 | 57850 | 0.0001 | - |
| 2.2112 | 57900 | 0.0001 | - |
| 2.2131 | 57950 | 0.0001 | - |
| 2.2150 | 58000 | 0.0 | - |
| 2.2169 | 58050 | 0.0 | - |
| 2.2188 | 58100 | 0.0001 | - |
| 2.2207 | 58150 | 0.0001 | - |
| 2.2226 | 58200 | 0.0001 | - |
| 2.2246 | 58250 | 0.0001 | - |
| 2.2265 | 58300 | 0.0001 | - |
| 2.2284 | 58350 | 0.0001 | - |
| 2.2303 | 58400 | 0.0001 | - |
| 2.2322 | 58450 | 0.0001 | - |
| 2.2341 | 58500 | 0.0001 | - |
| 2.2360 | 58550 | 0.0001 | - |
| 2.2379 | 58600 | 0.0 | - |
| 2.2398 | 58650 | 0.0001 | - |
| 2.2417 | 58700 | 0.0001 | - |
| 2.2437 | 58750 | 0.0001 | - |
| 2.2456 | 58800 | 0.0001 | - |
| 2.2475 | 58850 | 0.0001 | - |
| 2.2494 | 58900 | 0.0001 | - |
| 2.2513 | 58950 | 0.0001 | - |
| 2.2532 | 59000 | 0.0001 | - |
| 2.2551 | 59050 | 0.0001 | - |
| 2.2570 | 59100 | 0.0001 | - |
| 2.2589 | 59150 | 0.0001 | - |
| 2.2608 | 59200 | 0.0001 | - |
| 2.2627 | 59250 | 0.0001 | - |
| 2.2647 | 59300 | 0.0001 | - |
| 2.2666 | 59350 | 0.0001 | - |
| 2.2685 | 59400 | 0.0001 | - |
| 2.2704 | 59450 | 0.0 | - |
| 2.2723 | 59500 | 0.0001 | - |
| 2.2742 | 59550 | 0.0001 | - |
| 2.2761 | 59600 | 0.0005 | - |
| 2.2780 | 59650 | 0.0001 | - |
| 2.2799 | 59700 | 0.0001 | - |
| 2.2818 | 59750 | 0.0 | - |
| 2.2838 | 59800 | 0.0001 | - |
| 2.2857 | 59850 | 0.0001 | - |
| 2.2876 | 59900 | 0.0001 | - |
| 2.2895 | 59950 | 0.0002 | - |
| 2.2914 | 60000 | 0.0001 | - |
| 2.2933 | 60050 | 0.0 | - |
| 2.2952 | 60100 | 0.0 | - |
| 2.2971 | 60150 | 0.0 | - |
| 2.2990 | 60200 | 0.0001 | - |
| 2.3009 | 60250 | 0.0001 | - |
| 2.3028 | 60300 | 0.0001 | - |
| 2.3048 | 60350 | 0.0 | - |
| 2.3067 | 60400 | 0.0 | - |
| 2.3086 | 60450 | 0.0001 | - |
| 2.3105 | 60500 | 0.0001 | - |
| 2.3124 | 60550 | 0.0001 | - |
| 2.3143 | 60600 | 0.0001 | - |
| 2.3162 | 60650 | 0.0001 | - |
| 2.3181 | 60700 | 0.0001 | - |
| 2.3200 | 60750 | 0.0 | - |
| 2.3219 | 60800 | 0.0001 | - |
| 2.3238 | 60850 | 0.0 | - |
| 2.3258 | 60900 | 0.0 | - |
| 2.3277 | 60950 | 0.0001 | - |
| 2.3296 | 61000 | 0.0001 | - |
| 2.3315 | 61050 | 0.0001 | - |
| 2.3334 | 61100 | 0.0001 | - |
| 2.3353 | 61150 | 0.0 | - |
| 2.3372 | 61200 | 0.0001 | - |
| 2.3391 | 61250 | 0.0001 | - |
| 2.3410 | 61300 | 0.0001 | - |
| 2.3429 | 61350 | 0.0 | - |
| 2.3449 | 61400 | 0.0001 | - |
| 2.3468 | 61450 | 0.0 | - |
| 2.3487 | 61500 | 0.0001 | - |
| 2.3506 | 61550 | 0.0001 | - |
| 2.3525 | 61600 | 0.0001 | - |
| 2.3544 | 61650 | 0.0001 | - |
| 2.3563 | 61700 | 0.0001 | - |
| 2.3582 | 61750 | 0.0 | - |
| 2.3601 | 61800 | 0.0001 | - |
| 2.3620 | 61850 | 0.0 | - |
| 2.3639 | 61900 | 0.0 | - |
| 2.3659 | 61950 | 0.0 | - |
| 2.3678 | 62000 | 0.0 | - |
| 2.3697 | 62050 | 0.0001 | - |
| 2.3716 | 62100 | 0.0 | - |
| 2.3735 | 62150 | 0.0 | - |
| 2.3754 | 62200 | 0.0001 | - |
| 2.3773 | 62250 | 0.0001 | - |
| 2.3792 | 62300 | 0.0 | - |
| 2.3811 | 62350 | 0.0001 | - |
| 2.3830 | 62400 | 0.0001 | - |
| 2.3850 | 62450 | 0.0001 | - |
| 2.3869 | 62500 | 0.0 | - |
| 2.3888 | 62550 | 0.0 | - |
| 2.3907 | 62600 | 0.0001 | - |
| 2.3926 | 62650 | 0.0001 | - |
| 2.3945 | 62700 | 0.0001 | - |
| 2.3964 | 62750 | 0.0001 | - |
| 2.3983 | 62800 | 0.0001 | - |
| 2.4002 | 62850 | 0.0 | - |
| 2.4021 | 62900 | 0.0001 | - |
| 2.4040 | 62950 | 0.0001 | - |
| 2.4060 | 63000 | 0.0 | - |
| 2.4079 | 63050 | 0.0 | - |
| 2.4098 | 63100 | 0.0 | - |
| 2.4117 | 63150 | 0.0001 | - |
| 2.4136 | 63200 | 0.0001 | - |
| 2.4155 | 63250 | 0.0001 | - |
| 2.4174 | 63300 | 0.0 | - |
| 2.4193 | 63350 | 0.0001 | - |
| 2.4212 | 63400 | 0.0001 | - |
| 2.4231 | 63450 | 0.0 | - |
| 2.4251 | 63500 | 0.0001 | - |
| 2.4270 | 63550 | 0.0 | - |
| 2.4289 | 63600 | 0.0001 | - |
| 2.4308 | 63650 | 0.0001 | - |
| 2.4327 | 63700 | 0.0001 | - |
| 2.4346 | 63750 | 0.0001 | - |
| 2.4365 | 63800 | 0.0001 | - |
| 2.4384 | 63850 | 0.0001 | - |
| 2.4403 | 63900 | 0.0 | - |
| 2.4422 | 63950 | 0.0001 | - |
| 2.4441 | 64000 | 0.0 | - |
| 2.4461 | 64050 | 0.0 | - |
| 2.4480 | 64100 | 0.0001 | - |
| 2.4499 | 64150 | 0.0001 | - |
| 2.4518 | 64200 | 0.0 | - |
| 2.4537 | 64250 | 0.0 | - |
| 2.4556 | 64300 | 0.0001 | - |
| 2.4575 | 64350 | 0.0 | - |
| 2.4594 | 64400 | 0.0001 | - |
| 2.4613 | 64450 | 0.0 | - |
| 2.4632 | 64500 | 0.0001 | - |
| 2.4652 | 64550 | 0.0 | - |
| 2.4671 | 64600 | 0.0 | - |
| 2.4690 | 64650 | 0.0001 | - |
| 2.4709 | 64700 | 0.0 | - |
| 2.4728 | 64750 | 0.0001 | - |
| 2.4747 | 64800 | 0.0 | - |
| 2.4766 | 64850 | 0.0001 | - |
| 2.4785 | 64900 | 0.0 | - |
| 2.4804 | 64950 | 0.0 | - |
| 2.4823 | 65000 | 0.0 | - |
| 2.4842 | 65050 | 0.0 | - |
| 2.4862 | 65100 | 0.0001 | - |
| 2.4881 | 65150 | 0.0 | - |
| 2.4900 | 65200 | 0.0 | - |
| 2.4919 | 65250 | 0.0001 | - |
| 2.4938 | 65300 | 0.0 | - |
| 2.4957 | 65350 | 0.0 | - |
| 2.4976 | 65400 | 0.0001 | - |
| 2.4995 | 65450 | 0.0001 | - |
| 2.5014 | 65500 | 0.0 | - |
| 2.5033 | 65550 | 0.0001 | - |
| 2.5053 | 65600 | 0.0 | - |
| 2.5072 | 65650 | 0.0 | - |
| 2.5091 | 65700 | 0.0 | - |
| 2.5110 | 65750 | 0.0001 | - |
| 2.5129 | 65800 | 0.0 | - |
| 2.5148 | 65850 | 0.0 | - |
| 2.5167 | 65900 | 0.0001 | - |
| 2.5186 | 65950 | 0.0 | - |
| 2.5205 | 66000 | 0.0 | - |
| 2.5224 | 66050 | 0.0 | - |
| 2.5243 | 66100 | 0.0 | - |
| 2.5263 | 66150 | 0.0001 | - |
| 2.5282 | 66200 | 0.0 | - |
| 2.5301 | 66250 | 0.0001 | - |
| 2.5320 | 66300 | 0.0 | - |
| 2.5339 | 66350 | 0.0 | - |
| 2.5358 | 66400 | 0.0001 | - |
| 2.5377 | 66450 | 0.0 | - |
| 2.5396 | 66500 | 0.0 | - |
| 2.5415 | 66550 | 0.0 | - |
| 2.5434 | 66600 | 0.0001 | - |
| 2.5454 | 66650 | 0.0 | - |
| 2.5473 | 66700 | 0.0001 | - |
| 2.5492 | 66750 | 0.0001 | - |
| 2.5511 | 66800 | 0.0001 | - |
| 2.5530 | 66850 | 0.0 | - |
| 2.5549 | 66900 | 0.0 | - |
| 2.5568 | 66950 | 0.0 | - |
| 2.5587 | 67000 | 0.0 | - |
| 2.5606 | 67050 | 0.0 | - |
| 2.5625 | 67100 | 0.0001 | - |
| 2.5644 | 67150 | 0.0 | - |
| 2.5664 | 67200 | 0.0001 | - |
| 2.5683 | 67250 | 0.0 | - |
| 2.5702 | 67300 | 0.0001 | - |
| 2.5721 | 67350 | 0.0 | - |
| 2.5740 | 67400 | 0.0 | - |
| 2.5759 | 67450 | 0.0 | - |
| 2.5778 | 67500 | 0.0001 | - |
| 2.5797 | 67550 | 0.0001 | - |
| 2.5816 | 67600 | 0.0 | - |
| 2.5835 | 67650 | 0.0 | - |
| 2.5854 | 67700 | 0.0001 | - |
| 2.5874 | 67750 | 0.0001 | - |
| 2.5893 | 67800 | 0.0 | - |
| 2.5912 | 67850 | 0.0 | - |
| 2.5931 | 67900 | 0.0 | - |
| 2.5950 | 67950 | 0.0 | - |
| 2.5969 | 68000 | 0.0 | - |
| 2.5988 | 68050 | 0.0 | - |
| 2.6007 | 68100 | 0.0 | - |
| 2.6026 | 68150 | 0.0 | - |
| 2.6045 | 68200 | 0.0 | - |
| 2.6065 | 68250 | 0.0001 | - |
| 2.6084 | 68300 | 0.0 | - |
| 2.6103 | 68350 | 0.0001 | - |
| 2.6122 | 68400 | 0.0 | - |
| 2.6141 | 68450 | 0.0 | - |
| 2.6160 | 68500 | 0.0 | - |
| 2.6179 | 68550 | 0.0 | - |
| 2.6198 | 68600 | 0.0 | - |
| 2.6217 | 68650 | 0.0001 | - |
| 2.6236 | 68700 | 0.0001 | - |
| 2.6255 | 68750 | 0.0 | - |
| 2.6275 | 68800 | 0.0 | - |
| 2.6294 | 68850 | 0.0001 | - |
| 2.6313 | 68900 | 0.0 | - |
| 2.6332 | 68950 | 0.0001 | - |
| 2.6351 | 69000 | 0.0 | - |
| 2.6370 | 69050 | 0.0 | - |
| 2.6389 | 69100 | 0.0 | - |
| 2.6408 | 69150 | 0.0 | - |
| 2.6427 | 69200 | 0.0 | - |
| 2.6446 | 69250 | 0.0 | - |
| 2.6466 | 69300 | 0.0 | - |
| 2.6485 | 69350 | 0.0 | - |
| 2.6504 | 69400 | 0.0001 | - |
| 2.6523 | 69450 | 0.0 | - |
| 2.6542 | 69500 | 0.0 | - |
| 2.6561 | 69550 | 0.0001 | - |
| 2.6580 | 69600 | 0.0 | - |
| 2.6599 | 69650 | 0.0001 | - |
| 2.6618 | 69700 | 0.0001 | - |
| 2.6637 | 69750 | 0.0 | - |
| 2.6656 | 69800 | 0.0 | - |
| 2.6676 | 69850 | 0.0 | - |
| 2.6695 | 69900 | 0.0 | - |
| 2.6714 | 69950 | 0.0 | - |
| 2.6733 | 70000 | 0.0 | - |
| 2.6752 | 70050 | 0.0001 | - |
| 2.6771 | 70100 | 0.0001 | - |
| 2.6790 | 70150 | 0.0 | - |
| 2.6809 | 70200 | 0.0001 | - |
| 2.6828 | 70250 | 0.0 | - |
| 2.6847 | 70300 | 0.0 | - |
| 2.6867 | 70350 | 0.0 | - |
| 2.6886 | 70400 | 0.0 | - |
| 2.6905 | 70450 | 0.0 | - |
| 2.6924 | 70500 | 0.0 | - |
| 2.6943 | 70550 | 0.0001 | - |
| 2.6962 | 70600 | 0.0 | - |
| 2.6981 | 70650 | 0.0 | - |
| 2.7000 | 70700 | 0.0001 | - |
| 2.7019 | 70750 | 0.0001 | - |
| 2.7038 | 70800 | 0.0 | - |
| 2.7057 | 70850 | 0.0001 | - |
| 2.7077 | 70900 | 0.0 | - |
| 2.7096 | 70950 | 0.0 | - |
| 2.7115 | 71000 | 0.0 | - |
| 2.7134 | 71050 | 0.0 | - |
| 2.7153 | 71100 | 0.0 | - |
| 2.7172 | 71150 | 0.0 | - |
| 2.7191 | 71200 | 0.0 | - |
| 2.7210 | 71250 | 0.0 | - |
| 2.7229 | 71300 | 0.0 | - |
| 2.7248 | 71350 | 0.0 | - |
| 2.7268 | 71400 | 0.0 | - |
| 2.7287 | 71450 | 0.0001 | - |
| 2.7306 | 71500 | 0.0 | - |
| 2.7325 | 71550 | 0.0 | - |
| 2.7344 | 71600 | 0.0 | - |
| 2.7363 | 71650 | 0.0 | - |
| 2.7382 | 71700 | 0.0 | - |
| 2.7401 | 71750 | 0.0 | - |
| 2.7420 | 71800 | 0.0 | - |
| 2.7439 | 71850 | 0.0 | - |
| 2.7458 | 71900 | 0.0001 | - |
| 2.7478 | 71950 | 0.0 | - |
| 2.7497 | 72000 | 0.0 | - |
| 2.7516 | 72050 | 0.0 | - |
| 2.7535 | 72100 | 0.0 | - |
| 2.7554 | 72150 | 0.0 | - |
| 2.7573 | 72200 | 0.0 | - |
| 2.7592 | 72250 | 0.0 | - |
| 2.7611 | 72300 | 0.0 | - |
| 2.7630 | 72350 | 0.0 | - |
| 2.7649 | 72400 | 0.0 | - |
| 2.7669 | 72450 | 0.0 | - |
| 2.7688 | 72500 | 0.0 | - |
| 2.7707 | 72550 | 0.0 | - |
| 2.7726 | 72600 | 0.0 | - |
| 2.7745 | 72650 | 0.0 | - |
| 2.7764 | 72700 | 0.0 | - |
| 2.7783 | 72750 | 0.0001 | - |
| 2.7802 | 72800 | 0.0 | - |
| 2.7821 | 72850 | 0.0 | - |
| 2.7840 | 72900 | 0.0 | - |
| 2.7859 | 72950 | 0.0 | - |
| 2.7879 | 73000 | 0.0 | - |
| 2.7898 | 73050 | 0.0 | - |
| 2.7917 | 73100 | 0.0 | - |
| 2.7936 | 73150 | 0.0 | - |
| 2.7955 | 73200 | 0.0 | - |
| 2.7974 | 73250 | 0.0 | - |
| 2.7993 | 73300 | 0.0 | - |
| 2.8012 | 73350 | 0.0 | - |
| 2.8031 | 73400 | 0.0 | - |
| 2.8050 | 73450 | 0.0 | - |
| 2.8070 | 73500 | 0.0 | - |
| 2.8089 | 73550 | 0.0 | - |
| 2.8108 | 73600 | 0.0 | - |
| 2.8127 | 73650 | 0.0 | - |
| 2.8146 | 73700 | 0.0 | - |
| 2.8165 | 73750 | 0.0 | - |
| 2.8184 | 73800 | 0.0 | - |
| 2.8203 | 73850 | 0.0 | - |
| 2.8222 | 73900 | 0.0 | - |
| 2.8241 | 73950 | 0.0 | - |
| 2.8260 | 74000 | 0.0 | - |
| 2.8280 | 74050 | 0.0 | - |
| 2.8299 | 74100 | 0.0 | - |
| 2.8318 | 74150 | 0.0 | - |
| 2.8337 | 74200 | 0.0 | - |
| 2.8356 | 74250 | 0.0 | - |
| 2.8375 | 74300 | 0.0 | - |
| 2.8394 | 74350 | 0.0 | - |
| 2.8413 | 74400 | 0.0001 | - |
| 2.8432 | 74450 | 0.0 | - |
| 2.8451 | 74500 | 0.0 | - |
| 2.8470 | 74550 | 0.0 | - |
| 2.8490 | 74600 | 0.0 | - |
| 2.8509 | 74650 | 0.0 | - |
| 2.8528 | 74700 | 0.0 | - |
| 2.8547 | 74750 | 0.0 | - |
| 2.8566 | 74800 | 0.0 | - |
| 2.8585 | 74850 | 0.0 | - |
| 2.8604 | 74900 | 0.0 | - |
| 2.8623 | 74950 | 0.0 | - |
| 2.8642 | 75000 | 0.0 | - |
| 2.8661 | 75050 | 0.0 | - |
| 2.8681 | 75100 | 0.0 | - |
| 2.8700 | 75150 | 0.0 | - |
| 2.8719 | 75200 | 0.0 | - |
| 2.8738 | 75250 | 0.0 | - |
| 2.8757 | 75300 | 0.0 | - |
| 2.8776 | 75350 | 0.0001 | - |
| 2.8795 | 75400 | 0.0 | - |
| 2.8814 | 75450 | 0.0 | - |
| 2.8833 | 75500 | 0.0 | - |
| 2.8852 | 75550 | 0.0 | - |
| 2.8871 | 75600 | 0.0 | - |
| 2.8891 | 75650 | 0.0 | - |
| 2.8910 | 75700 | 0.0 | - |
| 2.8929 | 75750 | 0.0 | - |
| 2.8948 | 75800 | 0.0 | - |
| 2.8967 | 75850 | 0.0 | - |
| 2.8986 | 75900 | 0.0 | - |
| 2.9005 | 75950 | 0.0 | - |
| 2.9024 | 76000 | 0.0 | - |
| 2.9043 | 76050 | 0.0 | - |
| 2.9062 | 76100 | 0.0 | - |
| 2.9082 | 76150 | 0.0 | - |
| 2.9101 | 76200 | 0.0 | - |
| 2.9120 | 76250 | 0.0 | - |
| 2.9139 | 76300 | 0.0 | - |
| 2.9158 | 76350 | 0.0 | - |
| 2.9177 | 76400 | 0.0 | - |
| 2.9196 | 76450 | 0.0001 | - |
| 2.9215 | 76500 | 0.0 | - |
| 2.9234 | 76550 | 0.0001 | - |
| 2.9253 | 76600 | 0.0 | - |
| 2.9272 | 76650 | 0.0 | - |
| 2.9292 | 76700 | 0.0 | - |
| 2.9311 | 76750 | 0.0 | - |
| 2.9330 | 76800 | 0.0 | - |
| 2.9349 | 76850 | 0.0 | - |
| 2.9368 | 76900 | 0.0 | - |
| 2.9387 | 76950 | 0.0 | - |
| 2.9406 | 77000 | 0.0 | - |
| 2.9425 | 77050 | 0.0 | - |
| 2.9444 | 77100 | 0.0 | - |
| 2.9463 | 77150 | 0.0 | - |
| 2.9483 | 77200 | 0.0 | - |
| 2.9502 | 77250 | 0.0 | - |
| 2.9521 | 77300 | 0.0 | - |
| 2.9540 | 77350 | 0.0 | - |
| 2.9559 | 77400 | 0.0 | - |
| 2.9578 | 77450 | 0.0 | - |
| 2.9597 | 77500 | 0.0 | - |
| 2.9616 | 77550 | 0.0 | - |
| 2.9635 | 77600 | 0.0 | - |
| 2.9654 | 77650 | 0.0 | - |
| 2.9673 | 77700 | 0.0 | - |
| 2.9693 | 77750 | 0.0 | - |
| 2.9712 | 77800 | 0.0001 | - |
| 2.9731 | 77850 | 0.0 | - |
| 2.9750 | 77900 | 0.0 | - |
| 2.9769 | 77950 | 0.0 | - |
| 2.9788 | 78000 | 0.0 | - |
| 2.9807 | 78050 | 0.0 | - |
| 2.9826 | 78100 | 0.0 | - |
| 2.9845 | 78150 | 0.0 | - |
| 2.9864 | 78200 | 0.0 | - |
| 2.9884 | 78250 | 0.0001 | - |
| 2.9903 | 78300 | 0.0 | - |
| 2.9922 | 78350 | 0.0 | - |
| 2.9941 | 78400 | 0.0001 | - |
| 2.9960 | 78450 | 0.0 | - |
| 2.9979 | 78500 | 0.0 | - |
| 2.9998 | 78550 | 0.0 | - |
| 3.0 | 78555 | - | 0.3541 |
| 3.0017 | 78600 | 0.0 | - |
| 3.0036 | 78650 | 0.0 | - |
| 3.0055 | 78700 | 0.0 | - |
| 3.0074 | 78750 | 0.0 | - |
| 3.0094 | 78800 | 0.0 | - |
| 3.0113 | 78850 | 0.0 | - |
| 3.0132 | 78900 | 0.0 | - |
| 3.0151 | 78950 | 0.0 | - |
| 3.0170 | 79000 | 0.0 | - |
| 3.0189 | 79050 | 0.0 | - |
| 3.0208 | 79100 | 0.0 | - |
| 3.0227 | 79150 | 0.0002 | - |
| 3.0246 | 79200 | 0.0 | - |
| 3.0265 | 79250 | 0.0 | - |
| 3.0285 | 79300 | 0.0 | - |
| 3.0304 | 79350 | 0.0 | - |
| 3.0323 | 79400 | 0.0 | - |
| 3.0342 | 79450 | 0.0 | - |
| 3.0361 | 79500 | 0.0 | - |
| 3.0380 | 79550 | 0.0 | - |
| 3.0399 | 79600 | 0.0 | - |
| 3.0418 | 79650 | 0.0 | - |
| 3.0437 | 79700 | 0.0 | - |
| 3.0456 | 79750 | 0.0 | - |
| 3.0475 | 79800 | 0.0 | - |
| 3.0495 | 79850 | 0.0 | - |
| 3.0514 | 79900 | 0.0 | - |
| 3.0533 | 79950 | 0.0 | - |
| 3.0552 | 80000 | 0.0 | - |
| 3.0571 | 80050 | 0.0 | - |
| 3.0590 | 80100 | 0.0 | - |
| 3.0609 | 80150 | 0.0 | - |
| 3.0628 | 80200 | 0.0 | - |
| 3.0647 | 80250 | 0.0 | - |
| 3.0666 | 80300 | 0.0 | - |
| 3.0686 | 80350 | 0.0 | - |
| 3.0705 | 80400 | 0.0 | - |
| 3.0724 | 80450 | 0.0 | - |
| 3.0743 | 80500 | 0.0 | - |
| 3.0762 | 80550 | 0.0 | - |
| 3.0781 | 80600 | 0.0 | - |
| 3.0800 | 80650 | 0.0 | - |
| 3.0819 | 80700 | 0.0 | - |
| 3.0838 | 80750 | 0.0 | - |
| 3.0857 | 80800 | 0.0 | - |
| 3.0876 | 80850 | 0.0 | - |
| 3.0896 | 80900 | 0.0 | - |
| 3.0915 | 80950 | 0.0 | - |
| 3.0934 | 81000 | 0.0 | - |
| 3.0953 | 81050 | 0.0 | - |
| 3.0972 | 81100 | 0.0 | - |
| 3.0991 | 81150 | 0.0 | - |
| 3.1010 | 81200 | 0.0 | - |
| 3.1029 | 81250 | 0.0 | - |
| 3.1048 | 81300 | 0.0 | - |
| 3.1067 | 81350 | 0.0 | - |
| 3.1086 | 81400 | 0.0 | - |
| 3.1106 | 81450 | 0.0 | - |
| 3.1125 | 81500 | 0.0 | - |
| 3.1144 | 81550 | 0.0 | - |
| 3.1163 | 81600 | 0.0 | - |
| 3.1182 | 81650 | 0.0 | - |
| 3.1201 | 81700 | 0.0 | - |
| 3.1220 | 81750 | 0.0 | - |
| 3.1239 | 81800 | 0.0 | - |
| 3.1258 | 81850 | 0.0 | - |
| 3.1277 | 81900 | 0.0 | - |
| 3.1297 | 81950 | 0.0 | - |
| 3.1316 | 82000 | 0.0 | - |
| 3.1335 | 82050 | 0.0 | - |
| 3.1354 | 82100 | 0.0 | - |
| 3.1373 | 82150 | 0.0 | - |
| 3.1392 | 82200 | 0.0 | - |
| 3.1411 | 82250 | 0.0 | - |
| 3.1430 | 82300 | 0.0 | - |
| 3.1449 | 82350 | 0.0 | - |
| 3.1468 | 82400 | 0.0 | - |
| 3.1487 | 82450 | 0.0 | - |
| 3.1507 | 82500 | 0.0 | - |
| 3.1526 | 82550 | 0.0 | - |
| 3.1545 | 82600 | 0.0 | - |
| 3.1564 | 82650 | 0.0 | - |
| 3.1583 | 82700 | 0.0 | - |
| 3.1602 | 82750 | 0.0 | - |
| 3.1621 | 82800 | 0.0 | - |
| 3.1640 | 82850 | 0.0 | - |
| 3.1659 | 82900 | 0.0 | - |
| 3.1678 | 82950 | 0.0 | - |
| 3.1698 | 83000 | 0.0 | - |
| 3.1717 | 83050 | 0.0 | - |
| 3.1736 | 83100 | 0.0 | - |
| 3.1755 | 83150 | 0.0 | - |
| 3.1774 | 83200 | 0.0 | - |
| 3.1793 | 83250 | 0.0 | - |
| 3.1812 | 83300 | 0.0 | - |
| 3.1831 | 83350 | 0.0 | - |
| 3.1850 | 83400 | 0.0 | - |
| 3.1869 | 83450 | 0.0 | - |
| 3.1888 | 83500 | 0.0 | - |
| 3.1908 | 83550 | 0.0 | - |
| 3.1927 | 83600 | 0.0 | - |
| 3.1946 | 83650 | 0.0 | - |
| 3.1965 | 83700 | 0.0 | - |
| 3.1984 | 83750 | 0.0 | - |
| 3.2003 | 83800 | 0.0 | - |
| 3.2022 | 83850 | 0.0 | - |
| 3.2041 | 83900 | 0.0 | - |
| 3.2060 | 83950 | 0.0 | - |
| 3.2079 | 84000 | 0.0 | - |
| 3.2099 | 84050 | 0.0 | - |
| 3.2118 | 84100 | 0.0 | - |
| 3.2137 | 84150 | 0.0 | - |
| 3.2156 | 84200 | 0.0 | - |
| 3.2175 | 84250 | 0.0 | - |
| 3.2194 | 84300 | 0.0 | - |
| 3.2213 | 84350 | 0.0 | - |
| 3.2232 | 84400 | 0.0 | - |
| 3.2251 | 84450 | 0.0001 | - |
| 3.2270 | 84500 | 0.0 | - |
| 3.2289 | 84550 | 0.0 | - |
| 3.2309 | 84600 | 0.0 | - |
| 3.2328 | 84650 | 0.0 | - |
| 3.2347 | 84700 | 0.0 | - |
| 3.2366 | 84750 | 0.0 | - |
| 3.2385 | 84800 | 0.0 | - |
| 3.2404 | 84850 | 0.0 | - |
| 3.2423 | 84900 | 0.0 | - |
| 3.2442 | 84950 | 0.0 | - |
| 3.2461 | 85000 | 0.0 | - |
| 3.2480 | 85050 | 0.0 | - |
| 3.2500 | 85100 | 0.0 | - |
| 3.2519 | 85150 | 0.0 | - |
| 3.2538 | 85200 | 0.0 | - |
| 3.2557 | 85250 | 0.0 | - |
| 3.2576 | 85300 | 0.0 | - |
| 3.2595 | 85350 | 0.0 | - |
| 3.2614 | 85400 | 0.0 | - |
| 3.2633 | 85450 | 0.0 | - |
| 3.2652 | 85500 | 0.0 | - |
| 3.2671 | 85550 | 0.0 | - |
| 3.2690 | 85600 | 0.0 | - |
| 3.2710 | 85650 | 0.0 | - |
| 3.2729 | 85700 | 0.0 | - |
| 3.2748 | 85750 | 0.0 | - |
| 3.2767 | 85800 | 0.0 | - |
| 3.2786 | 85850 | 0.0 | - |
| 3.2805 | 85900 | 0.0 | - |
| 3.2824 | 85950 | 0.0 | - |
| 3.2843 | 86000 | 0.0 | - |
| 3.2862 | 86050 | 0.0 | - |
| 3.2881 | 86100 | 0.0 | - |
| 3.2901 | 86150 | 0.0 | - |
| 3.2920 | 86200 | 0.0 | - |
| 3.2939 | 86250 | 0.0 | - |
| 3.2958 | 86300 | 0.0 | - |
| 3.2977 | 86350 | 0.0 | - |
| 3.2996 | 86400 | 0.0 | - |
| 3.3015 | 86450 | 0.0 | - |
| 3.3034 | 86500 | 0.0 | - |
| 3.3053 | 86550 | 0.0 | - |
| 3.3072 | 86600 | 0.0 | - |
| 3.3091 | 86650 | 0.0 | - |
| 3.3111 | 86700 | 0.0 | - |
| 3.3130 | 86750 | 0.0 | - |
| 3.3149 | 86800 | 0.0 | - |
| 3.3168 | 86850 | 0.0 | - |
| 3.3187 | 86900 | 0.0 | - |
| 3.3206 | 86950 | 0.0 | - |
| 3.3225 | 87000 | 0.0 | - |
| 3.3244 | 87050 | 0.0 | - |
| 3.3263 | 87100 | 0.0 | - |
| 3.3282 | 87150 | 0.0 | - |
| 3.3302 | 87200 | 0.0 | - |
| 3.3321 | 87250 | 0.0 | - |
| 3.3340 | 87300 | 0.0 | - |
| 3.3359 | 87350 | 0.0 | - |
| 3.3378 | 87400 | 0.0 | - |
| 3.3397 | 87450 | 0.0 | - |
| 3.3416 | 87500 | 0.0 | - |
| 3.3435 | 87550 | 0.0 | - |
| 3.3454 | 87600 | 0.0 | - |
| 3.3473 | 87650 | 0.0 | - |
| 3.3492 | 87700 | 0.0 | - |
| 3.3512 | 87750 | 0.0 | - |
| 3.3531 | 87800 | 0.0 | - |
| 3.3550 | 87850 | 0.0 | - |
| 3.3569 | 87900 | 0.0 | - |
| 3.3588 | 87950 | 0.0 | - |
| 3.3607 | 88000 | 0.0 | - |
| 3.3626 | 88050 | 0.0 | - |
| 3.3645 | 88100 | 0.0 | - |
| 3.3664 | 88150 | 0.0 | - |
| 3.3683 | 88200 | 0.0 | - |
| 3.3703 | 88250 | 0.0 | - |
| 3.3722 | 88300 | 0.0 | - |
| 3.3741 | 88350 | 0.0 | - |
| 3.3760 | 88400 | 0.0 | - |
| 3.3779 | 88450 | 0.0 | - |
| 3.3798 | 88500 | 0.0 | - |
| 3.3817 | 88550 | 0.0 | - |
| 3.3836 | 88600 | 0.0 | - |
| 3.3855 | 88650 | 0.0 | - |
| 3.3874 | 88700 | 0.0 | - |
| 3.3893 | 88750 | 0.0 | - |
| 3.3913 | 88800 | 0.0 | - |
| 3.3932 | 88850 | 0.0 | - |
| 3.3951 | 88900 | 0.0 | - |
| 3.3970 | 88950 | 0.0 | - |
| 3.3989 | 89000 | 0.0 | - |
| 3.4008 | 89050 | 0.0 | - |
| 3.4027 | 89100 | 0.0 | - |
| 3.4046 | 89150 | 0.0 | - |
| 3.4065 | 89200 | 0.0 | - |
| 3.4084 | 89250 | 0.0 | - |
| 3.4103 | 89300 | 0.0 | - |
| 3.4123 | 89350 | 0.0 | - |
| 3.4142 | 89400 | 0.0 | - |
| 3.4161 | 89450 | 0.0 | - |
| 3.4180 | 89500 | 0.0 | - |
| 3.4199 | 89550 | 0.0 | - |
| 3.4218 | 89600 | 0.0 | - |
| 3.4237 | 89650 | 0.0 | - |
| 3.4256 | 89700 | 0.0 | - |
| 3.4275 | 89750 | 0.0 | - |
| 3.4294 | 89800 | 0.0 | - |
| 3.4314 | 89850 | 0.0 | - |
| 3.4333 | 89900 | 0.0 | - |
| 3.4352 | 89950 | 0.0 | - |
| 3.4371 | 90000 | 0.0 | - |
| 3.4390 | 90050 | 0.0 | - |
| 3.4409 | 90100 | 0.0 | - |
| 3.4428 | 90150 | 0.0 | - |
| 3.4447 | 90200 | 0.0 | - |
| 3.4466 | 90250 | 0.0 | - |
| 3.4485 | 90300 | 0.0 | - |
| 3.4504 | 90350 | 0.0 | - |
| 3.4524 | 90400 | 0.0 | - |
| 3.4543 | 90450 | 0.0 | - |
| 3.4562 | 90500 | 0.0 | - |
| 3.4581 | 90550 | 0.0 | - |
| 3.4600 | 90600 | 0.0 | - |
| 3.4619 | 90650 | 0.0 | - |
| 3.4638 | 90700 | 0.0 | - |
| 3.4657 | 90750 | 0.0 | - |
| 3.4676 | 90800 | 0.0 | - |
| 3.4695 | 90850 | 0.0 | - |
| 3.4715 | 90900 | 0.0 | - |
| 3.4734 | 90950 | 0.0 | - |
| 3.4753 | 91000 | 0.0 | - |
| 3.4772 | 91050 | 0.0 | - |
| 3.4791 | 91100 | 0.0 | - |
| 3.4810 | 91150 | 0.0 | - |
| 3.4829 | 91200 | 0.0 | - |
| 3.4848 | 91250 | 0.0 | - |
| 3.4867 | 91300 | 0.0 | - |
| 3.4886 | 91350 | 0.0 | - |
| 3.4905 | 91400 | 0.0 | - |
| 3.4925 | 91450 | 0.0 | - |
| 3.4944 | 91500 | 0.0 | - |
| 3.4963 | 91550 | 0.0 | - |
| 3.4982 | 91600 | 0.0 | - |
| 3.5001 | 91650 | 0.0 | - |
| 3.5020 | 91700 | 0.0 | - |
| 3.5039 | 91750 | 0.0 | - |
| 3.5058 | 91800 | 0.0 | - |
| 3.5077 | 91850 | 0.0 | - |
| 3.5096 | 91900 | 0.0 | - |
| 3.5116 | 91950 | 0.0 | - |
| 3.5135 | 92000 | 0.0 | - |
| 3.5154 | 92050 | 0.0 | - |
| 3.5173 | 92100 | 0.0 | - |
| 3.5192 | 92150 | 0.0 | - |
| 3.5211 | 92200 | 0.0 | - |
| 3.5230 | 92250 | 0.0 | - |
| 3.5249 | 92300 | 0.0 | - |
| 3.5268 | 92350 | 0.0 | - |
| 3.5287 | 92400 | 0.0 | - |
| 3.5306 | 92450 | 0.0 | - |
| 3.5326 | 92500 | 0.0 | - |
| 3.5345 | 92550 | 0.0 | - |
| 3.5364 | 92600 | 0.0 | - |
| 3.5383 | 92650 | 0.0 | - |
| 3.5402 | 92700 | 0.0 | - |
| 3.5421 | 92750 | 0.0 | - |
| 3.5440 | 92800 | 0.0 | - |
| 3.5459 | 92850 | 0.0 | - |
| 3.5478 | 92900 | 0.0 | - |
| 3.5497 | 92950 | 0.0 | - |
| 3.5517 | 93000 | 0.0 | - |
| 3.5536 | 93050 | 0.0 | - |
| 3.5555 | 93100 | 0.0 | - |
| 3.5574 | 93150 | 0.0 | - |
| 3.5593 | 93200 | 0.0 | - |
| 3.5612 | 93250 | 0.0 | - |
| 3.5631 | 93300 | 0.0 | - |
| 3.5650 | 93350 | 0.0 | - |
| 3.5669 | 93400 | 0.0 | - |
| 3.5688 | 93450 | 0.0 | - |
| 3.5707 | 93500 | 0.0 | - |
| 3.5727 | 93550 | 0.0 | - |
| 3.5746 | 93600 | 0.0 | - |
| 3.5765 | 93650 | 0.0 | - |
| 3.5784 | 93700 | 0.0 | - |
| 3.5803 | 93750 | 0.0 | - |
| 3.5822 | 93800 | 0.0 | - |
| 3.5841 | 93850 | 0.0 | - |
| 3.5860 | 93900 | 0.0 | - |
| 3.5879 | 93950 | 0.0 | - |
| 3.5898 | 94000 | 0.0 | - |
| 3.5918 | 94050 | 0.0 | - |
| 3.5937 | 94100 | 0.0 | - |
| 3.5956 | 94150 | 0.0 | - |
| 3.5975 | 94200 | 0.0 | - |
| 3.5994 | 94250 | 0.0 | - |
| 3.6013 | 94300 | 0.0 | - |
| 3.6032 | 94350 | 0.0 | - |
| 3.6051 | 94400 | 0.0 | - |
| 3.6070 | 94450 | 0.0 | - |
| 3.6089 | 94500 | 0.0 | - |
| 3.6108 | 94550 | 0.0 | - |
| 3.6128 | 94600 | 0.0 | - |
| 3.6147 | 94650 | 0.0 | - |
| 3.6166 | 94700 | 0.0 | - |
| 3.6185 | 94750 | 0.0 | - |
| 3.6204 | 94800 | 0.0 | - |
| 3.6223 | 94850 | 0.0 | - |
| 3.6242 | 94900 | 0.0 | - |
| 3.6261 | 94950 | 0.0 | - |
| 3.6280 | 95000 | 0.0 | - |
| 3.6299 | 95050 | 0.0 | - |
| 3.6319 | 95100 | 0.0 | - |
| 3.6338 | 95150 | 0.0 | - |
| 3.6357 | 95200 | 0.0 | - |
| 3.6376 | 95250 | 0.0 | - |
| 3.6395 | 95300 | 0.0 | - |
| 3.6414 | 95350 | 0.0 | - |
| 3.6433 | 95400 | 0.0 | - |
| 3.6452 | 95450 | 0.0 | - |
| 3.6471 | 95500 | 0.0 | - |
| 3.6490 | 95550 | 0.0 | - |
| 3.6509 | 95600 | 0.0 | - |
| 3.6529 | 95650 | 0.0 | - |
| 3.6548 | 95700 | 0.0 | - |
| 3.6567 | 95750 | 0.0 | - |
| 3.6586 | 95800 | 0.0 | - |
| 3.6605 | 95850 | 0.0 | - |
| 3.6624 | 95900 | 0.0 | - |
| 3.6643 | 95950 | 0.0 | - |
| 3.6662 | 96000 | 0.0 | - |
| 3.6681 | 96050 | 0.0 | - |
| 3.6700 | 96100 | 0.0 | - |
| 3.6719 | 96150 | 0.0 | - |
| 3.6739 | 96200 | 0.0 | - |
| 3.6758 | 96250 | 0.0 | - |
| 3.6777 | 96300 | 0.0 | - |
| 3.6796 | 96350 | 0.0 | - |
| 3.6815 | 96400 | 0.0 | - |
| 3.6834 | 96450 | 0.0 | - |
| 3.6853 | 96500 | 0.0 | - |
| 3.6872 | 96550 | 0.0 | - |
| 3.6891 | 96600 | 0.0 | - |
| 3.6910 | 96650 | 0.0 | - |
| 3.6930 | 96700 | 0.0 | - |
| 3.6949 | 96750 | 0.0 | - |
| 3.6968 | 96800 | 0.0 | - |
| 3.6987 | 96850 | 0.0 | - |
| 3.7006 | 96900 | 0.0 | - |
| 3.7025 | 96950 | 0.0 | - |
| 3.7044 | 97000 | 0.0 | - |
| 3.7063 | 97050 | 0.0 | - |
| 3.7082 | 97100 | 0.0 | - |
| 3.7101 | 97150 | 0.0 | - |
| 3.7120 | 97200 | 0.0 | - |
| 3.7140 | 97250 | 0.0 | - |
| 3.7159 | 97300 | 0.0 | - |
| 3.7178 | 97350 | 0.0 | - |
| 3.7197 | 97400 | 0.0 | - |
| 3.7216 | 97450 | 0.0 | - |
| 3.7235 | 97500 | 0.0 | - |
| 3.7254 | 97550 | 0.0 | - |
| 3.7273 | 97600 | 0.0 | - |
| 3.7292 | 97650 | 0.0 | - |
| 3.7311 | 97700 | 0.0 | - |
| 3.7331 | 97750 | 0.0 | - |
| 3.7350 | 97800 | 0.0 | - |
| 3.7369 | 97850 | 0.0 | - |
| 3.7388 | 97900 | 0.0 | - |
| 3.7407 | 97950 | 0.0 | - |
| 3.7426 | 98000 | 0.0 | - |
| 3.7445 | 98050 | 0.0 | - |
| 3.7464 | 98100 | 0.0 | - |
| 3.7483 | 98150 | 0.0 | - |
| 3.7502 | 98200 | 0.0 | - |
| 3.7521 | 98250 | 0.0 | - |
| 3.7541 | 98300 | 0.0 | - |
| 3.7560 | 98350 | 0.0 | - |
| 3.7579 | 98400 | 0.0 | - |
| 3.7598 | 98450 | 0.0 | - |
| 3.7617 | 98500 | 0.0 | - |
| 3.7636 | 98550 | 0.0 | - |
| 3.7655 | 98600 | 0.0 | - |
| 3.7674 | 98650 | 0.0 | - |
| 3.7693 | 98700 | 0.0 | - |
| 3.7712 | 98750 | 0.0 | - |
| 3.7732 | 98800 | 0.0 | - |
| 3.7751 | 98850 | 0.0 | - |
| 3.7770 | 98900 | 0.0 | - |
| 3.7789 | 98950 | 0.0 | - |
| 3.7808 | 99000 | 0.0 | - |
| 3.7827 | 99050 | 0.0 | - |
| 3.7846 | 99100 | 0.0 | - |
| 3.7865 | 99150 | 0.0 | - |
| 3.7884 | 99200 | 0.0 | - |
| 3.7903 | 99250 | 0.0 | - |
| 3.7922 | 99300 | 0.0 | - |
| 3.7942 | 99350 | 0.0 | - |
| 3.7961 | 99400 | 0.0 | - |
| 3.7980 | 99450 | 0.0 | - |
| 3.7999 | 99500 | 0.0 | - |
| 3.8018 | 99550 | 0.0 | - |
| 3.8037 | 99600 | 0.0 | - |
| 3.8056 | 99650 | 0.0 | - |
| 3.8075 | 99700 | 0.0 | - |
| 3.8094 | 99750 | 0.0 | - |
| 3.8113 | 99800 | 0.0 | - |
| 3.8133 | 99850 | 0.0 | - |
| 3.8152 | 99900 | 0.0 | - |
| 3.8171 | 99950 | 0.0 | - |
| 3.8190 | 100000 | 0.0 | - |
| 3.8209 | 100050 | 0.0 | - |
| 3.8228 | 100100 | 0.0 | - |
| 3.8247 | 100150 | 0.0 | - |
| 3.8266 | 100200 | 0.0 | - |
| 3.8285 | 100250 | 0.0 | - |
| 3.8304 | 100300 | 0.0 | - |
| 3.8323 | 100350 | 0.0 | - |
| 3.8343 | 100400 | 0.0 | - |
| 3.8362 | 100450 | 0.0 | - |
| 3.8381 | 100500 | 0.0 | - |
| 3.8400 | 100550 | 0.0 | - |
| 3.8419 | 100600 | 0.0 | - |
| 3.8438 | 100650 | 0.0 | - |
| 3.8457 | 100700 | 0.0 | - |
| 3.8476 | 100750 | 0.0 | - |
| 3.8495 | 100800 | 0.0 | - |
| 3.8514 | 100850 | 0.0 | - |
| 3.8534 | 100900 | 0.0 | - |
| 3.8553 | 100950 | 0.0 | - |
| 3.8572 | 101000 | 0.0 | - |
| 3.8591 | 101050 | 0.0 | - |
| 3.8610 | 101100 | 0.0 | - |
| 3.8629 | 101150 | 0.0 | - |
| 3.8648 | 101200 | 0.0 | - |
| 3.8667 | 101250 | 0.0 | - |
| 3.8686 | 101300 | 0.0 | - |
| 3.8705 | 101350 | 0.0 | - |
| 3.8724 | 101400 | 0.0 | - |
| 3.8744 | 101450 | 0.0 | - |
| 3.8763 | 101500 | 0.0 | - |
| 3.8782 | 101550 | 0.0 | - |
| 3.8801 | 101600 | 0.0 | - |
| 3.8820 | 101650 | 0.0 | - |
| 3.8839 | 101700 | 0.0 | - |
| 3.8858 | 101750 | 0.0 | - |
| 3.8877 | 101800 | 0.0 | - |
| 3.8896 | 101850 | 0.0 | - |
| 3.8915 | 101900 | 0.0 | - |
| 3.8935 | 101950 | 0.0 | - |
| 3.8954 | 102000 | 0.0 | - |
| 3.8973 | 102050 | 0.0 | - |
| 3.8992 | 102100 | 0.0 | - |
| 3.9011 | 102150 | 0.0 | - |
| 3.9030 | 102200 | 0.0 | - |
| 3.9049 | 102250 | 0.0 | - |
| 3.9068 | 102300 | 0.0 | - |
| 3.9087 | 102350 | 0.0 | - |
| 3.9106 | 102400 | 0.0 | - |
| 3.9125 | 102450 | 0.0 | - |
| 3.9145 | 102500 | 0.0 | - |
| 3.9164 | 102550 | 0.0 | - |
| 3.9183 | 102600 | 0.0 | - |
| 3.9202 | 102650 | 0.0 | - |
| 3.9221 | 102700 | 0.0 | - |
| 3.9240 | 102750 | 0.0 | - |
| 3.9259 | 102800 | 0.0 | - |
| 3.9278 | 102850 | 0.0 | - |
| 3.9297 | 102900 | 0.0 | - |
| 3.9316 | 102950 | 0.0 | - |
| 3.9335 | 103000 | 0.0 | - |
| 3.9355 | 103050 | 0.0 | - |
| 3.9374 | 103100 | 0.0 | - |
| 3.9393 | 103150 | 0.0 | - |
| 3.9412 | 103200 | 0.0 | - |
| 3.9431 | 103250 | 0.0 | - |
| 3.9450 | 103300 | 0.0 | - |
| 3.9469 | 103350 | 0.0 | - |
| 3.9488 | 103400 | 0.0 | - |
| 3.9507 | 103450 | 0.0 | - |
| 3.9526 | 103500 | 0.0 | - |
| 3.9546 | 103550 | 0.0 | - |
| 3.9565 | 103600 | 0.0 | - |
| 3.9584 | 103650 | 0.0 | - |
| 3.9603 | 103700 | 0.0 | - |
| 3.9622 | 103750 | 0.0 | - |
| 3.9641 | 103800 | 0.0 | - |
| 3.9660 | 103850 | 0.0 | - |
| 3.9679 | 103900 | 0.0 | - |
| 3.9698 | 103950 | 0.0 | - |
| 3.9717 | 104000 | 0.0 | - |
| 3.9736 | 104050 | 0.0 | - |
| 3.9756 | 104100 | 0.0 | - |
| 3.9775 | 104150 | 0.0 | - |
| 3.9794 | 104200 | 0.0 | - |
| 3.9813 | 104250 | 0.0 | - |
| 3.9832 | 104300 | 0.0 | - |
| 3.9851 | 104350 | 0.0 | - |
| 3.9870 | 104400 | 0.0 | - |
| 3.9889 | 104450 | 0.0 | - |
| 3.9908 | 104500 | 0.0 | - |
| 3.9927 | 104550 | 0.0 | - |
| 3.9947 | 104600 | 0.0 | - |
| 3.9966 | 104650 | 0.0 | - |
| 3.9985 | 104700 | 0.0 | - |
| 4.0 | 104740 | - | 0.3749 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.2.2
- Transformers: 4.36.2
- PyTorch: 2.1.2+cu121
- Datasets: 2.16.1
- Tokenizers: 0.15.0
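A checkpoint trained with the stack above can typically be loaded for inference with the `setfit` library, as sketched below; the repository id is a placeholder, not the id of this model.

```python
from setfit import SetFitModel

# Placeholder repository id; substitute the actual checkpoint id.
model = SetFitModel.from_pretrained("your-username/your-setfit-model")

# SetFit predicts one label per input sentence.
preds = model.predict([
    "This sentence clearly matches the target class.",
    "This one does not.",
])
print(preds)
```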
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
Zintoulou/codellamafinetune1
|
Zintoulou
| 2024-01-21T02:45:59Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:finetune:codellama/CodeLlama-7b-Instruct-hf",
"license:llama2",
"region:us"
] | null | 2024-01-21T02:44:32Z |
---
license: llama2
base_model: codellama/CodeLlama-7b-Instruct-hf
tags:
- generated_from_trainer
model-index:
- name: codellamafinetune1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codellamafinetune1
This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9399
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
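For reference, the configuration above maps roughly onto the following `transformers.TrainingArguments`; the output directory is a placeholder, not a value taken from the run.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="codellamafinetune1",  # placeholder output path
    learning_rate=1e-3,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=8,
)
```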
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.688 | 1.0 | 1 | 2.7732 |
| 2.2839 | 2.0 | 2 | 2.2213 |
| 1.8066 | 3.0 | 3 | 1.9344 |
| 1.495 | 4.0 | 4 | 1.5703 |
| 1.1257 | 5.0 | 5 | 1.3163 |
| 0.8016 | 6.0 | 6 | 1.0887 |
| 0.5227 | 7.0 | 7 | 0.9648 |
| 0.3552 | 8.0 | 8 | 0.9399 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.13.3
|
Zintoulou/codellamafinetune
|
Zintoulou
| 2024-01-21T02:21:01Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:finetune:codellama/CodeLlama-7b-Instruct-hf",
"license:llama2",
"region:us"
] | null | 2024-01-21T01:21:08Z |
---
license: llama2
base_model: codellama/CodeLlama-7b-Instruct-hf
tags:
- generated_from_trainer
model-index:
- name: codellamafinetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codellamafinetune
This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0044
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.688 | 1.0 | 1 | 2.6557 |
| 2.1862 | 2.0 | 2 | 2.0614 |
| 1.6295 | 3.0 | 3 | 1.5751 |
| 1.1295 | 4.0 | 4 | 1.1448 |
| 0.596 | 5.0 | 5 | 0.9583 |
| 0.2845 | 6.0 | 6 | 0.9543 |
| 0.1825 | 7.0 | 7 | 1.0156 |
| 0.1624 | 8.0 | 8 | 1.0044 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.13.3
|
Ram07/mistral-dpo
|
Ram07
| 2024-01-21T02:06:37Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:TheBloke/OpenHermes-2-Mistral-7B-GPTQ",
"base_model:finetune:TheBloke/OpenHermes-2-Mistral-7B-GPTQ",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-01-20T15:56:45Z |
---
license: apache-2.0
base_model: TheBloke/OpenHermes-2-Mistral-7B-GPTQ
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: mistral-dpo
results: []
pipeline_tag: conversational
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-dpo
This model is a fine-tuned version of [TheBloke/OpenHermes-2-Mistral-7B-GPTQ](https://huggingface.co/TheBloke/OpenHermes-2-Mistral-7B-GPTQ) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0559
- Rewards/chosen: -0.6622
- Rewards/rejected: -5.8356
- Rewards/accuracies: 1.0
- Rewards/margins: 5.1735
- Logps/rejected: -138.0126
- Logps/chosen: -105.3292
- Logits/rejected: -2.5356
- Logits/chosen: -2.7185
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 50
- mixed_precision_training: Native AMP
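In TRL terms, the run above corresponds roughly to a `DPOTrainer` set up as sketched below. The preference dataset, output directory, and LoRA adapter settings are placeholders or library defaults rather than details reported in the card, and loading the GPTQ base requires the optimum/auto-gptq extras.

```python
from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "TheBloke/OpenHermes-2-Mistral-7B-GPTQ"
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")  # needs auto-gptq/optimum
tokenizer = AutoTokenizer.from_pretrained(base)

# Tiny illustrative preference set with the prompt/chosen/rejected columns DPO expects.
preference_dataset = Dataset.from_dict({
    "prompt": ["Summarise: the cat sat on the mat."],
    "chosen": ["A cat sat on a mat."],
    "rejected": ["Dogs are great."],
})

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
args = TrainingArguments(
    output_dir="mistral-dpo",
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    warmup_steps=2,
    max_steps=50,
    lr_scheduler_type="linear",
    fp16=True,  # "Native AMP" mixed precision
    seed=42,
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=preference_dataset,
    tokenizer=tokenizer,
    peft_config=LoraConfig(task_type="CAUSAL_LM"),  # assumed adapter config, not from the card
)
trainer.train()
```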
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6666 | 0.01 | 10 | 0.5490 | 0.3763 | 0.0083 | 1.0 | 0.3680 | -79.5733 | -94.9446 | -2.6386 | -2.7333 |
| 0.439 | 0.01 | 20 | 0.2792 | 1.0686 | -0.2159 | 1.0 | 1.2845 | -81.8148 | -88.0209 | -2.6245 | -2.7868 |
| 0.1683 | 0.02 | 30 | 0.1116 | 1.0530 | -2.2150 | 1.0 | 3.2680 | -101.8059 | -88.1772 | -2.6157 | -2.7924 |
| 0.54 | 0.03 | 40 | 0.0719 | -0.1064 | -4.6952 | 1.0 | 4.5888 | -126.6084 | -99.7713 | -2.5649 | -2.7384 |
| 0.0965 | 0.03 | 50 | 0.0559 | -0.6622 | -5.8356 | 1.0 | 5.1735 | -138.0126 | -105.3292 | -2.5356 | -2.7185 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.0
|
SanjiWatsuki/zephyr-3.8b-wip
|
SanjiWatsuki
| 2024-01-21T02:02:11Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-21T01:59:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Coooori/llama_checkpoint-3500
|
Coooori
| 2024-01-21T01:48:33Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-01-21T01:48:31Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
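The same settings can be expressed as a `BitsAndBytesConfig` when loading the base model; the base checkpoint below is a placeholder, since the card does not name the model this adapter targets.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the 4-bit quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# Placeholder base checkpoint; substitute the adapter's actual base model.
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
```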
### Framework versions
- PEFT 0.4.0
|
miguelcarv/resnet-152-text-detector
|
miguelcarv
| 2024-01-21T01:43:52Z | 311 | 0 |
transformers
|
[
"transformers",
"safetensors",
"resnet",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-19T20:01:50Z |
# Model Card for ResNet-152 Text Detector
This model was trained to quickly classify whether or not an image contains legible text. It was trained as a binary classification problem on the COCO-Text dataset together with some images from LLaVAR, for a total of ~140k images, of which 50% contained legible text and 50% did not.
# Model Details
## How to Get Started with the Model
```python
from PIL import Image
import requests
import torch
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Load the fine-tuned binary text/no-text classifier.
model = AutoModelForImageClassification.from_pretrained(
    "miguelcarv/resnet-152-text-detector",
)
# The processor only rescales and normalizes; resizing is done manually below.
processor = AutoImageProcessor.from_pretrained("microsoft/resnet-50", do_resize=False)

# Fetch a sample COCO image and resize it to the training resolution (300x300).
url = "http://images.cocodataset.org/train2017/000000044520.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert('RGB').resize((300, 300))

inputs = processor(image, return_tensors="pt").pixel_values

# Run inference without tracking gradients.
with torch.no_grad():
    outputs = model(inputs)

# Convert logits to class probabilities.
logits_per_image = outputs.logits
probs = logits_per_image.softmax(dim=1)
print(probs)
# tensor([[0.1085, 0.8915]])
```
# Training Details
- Trained for three epochs
- Resolution: 300x300
- Learning rate: 5e-5
- Optimizer: AdamW
- Batch size: 64
- Trained with FP32
|
CLMBR/old-existential-there-quantifier-lstm-3
|
CLMBR
| 2024-01-21T01:21:52Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rnn",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-01-16T15:44:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: existential-there-quantifier-lstm-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# existential-there-quantifier-lstm-3
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9908
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.8327 | 0.03 | 76320 | 4.7897 |
| 4.5393 | 1.03 | 152640 | 4.5018 |
| 4.394 | 0.03 | 228960 | 4.3650 |
| 4.2992 | 1.03 | 305280 | 4.2801 |
| 4.2379 | 0.03 | 381600 | 4.2230 |
| 4.1886 | 1.03 | 457920 | 4.1810 |
| 4.153 | 0.03 | 534240 | 4.1506 |
| 4.1246 | 1.03 | 610560 | 4.1257 |
| 4.094 | 0.03 | 686880 | 4.1058 |
| 4.0747 | 1.03 | 763200 | 4.0905 |
| 4.0519 | 0.03 | 839520 | 4.0770 |
| 4.0312 | 1.03 | 915840 | 4.0655 |
| 4.017 | 0.03 | 992160 | 4.0566 |
| 3.9986 | 1.03 | 1068480 | 4.0483 |
| 3.981 | 0.03 | 1144800 | 4.0414 |
| 3.9739 | 1.03 | 1221120 | 4.0354 |
| 3.9606 | 0.03 | 1297440 | 4.0303 |
| 3.9499 | 1.03 | 1373760 | 4.0261 |
| 3.9415 | 0.03 | 1450080 | 4.0219 |
| 3.932 | 1.03 | 1526400 | 4.0187 |
| 3.9294 | 0.03 | 1602720 | 4.0159 |
| 3.9226 | 1.03 | 1679040 | 4.0131 |
| 3.9195 | 0.03 | 1755360 | 4.0094 |
| 3.9186 | 1.03 | 1831680 | 4.0071 |
| 3.9098 | 0.03 | 1908000 | 4.0055 |
| 3.9087 | 1.03 | 1984320 | 4.0036 |
| 3.8995 | 0.03 | 2060640 | 4.0021 |
| 3.8935 | 1.03 | 2136960 | 4.0009 |
| 3.8906 | 0.03 | 2213280 | 3.9996 |
| 3.8854 | 1.03 | 2289600 | 3.9981 |
| 3.875 | 0.03 | 2365920 | 3.9971 |
| 3.8766 | 1.03 | 2442240 | 3.9962 |
| 3.8714 | 0.03 | 2518560 | 3.9950 |
| 3.864 | 0.03 | 2594880 | 3.9944 |
| 3.8625 | 1.03 | 2671200 | 3.9934 |
| 3.8576 | 0.03 | 2747520 | 3.9927 |
| 3.8596 | 1.03 | 2823840 | 3.9921 |
| 3.8591 | 0.03 | 2900160 | 3.9915 |
| 3.8616 | 1.03 | 2976480 | 3.9911 |
| 3.8633 | 0.02 | 3052726 | 3.9908 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
beibeif/poca-SoccerTwos_v1
|
beibeif
| 2024-01-21T01:20:43Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2024-01-21T01:20:39Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to teach you how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: beibeif/poca-SoccerTwos_v1
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
CLMBR/binding-c-command-transformer-4
|
CLMBR
| 2024-01-21T01:10:21Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-16T15:41:42Z |
---
tags:
- generated_from_trainer
model-index:
- name: binding-c-command-transformer-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# binding-c-command-transformer-4
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8623
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2297 | 0.03 | 76320 | 4.1982 |
| 4.0224 | 1.03 | 152640 | 4.0272 |
| 3.9115 | 0.03 | 228960 | 3.9530 |
| 3.8463 | 1.03 | 305280 | 3.9113 |
| 3.7908 | 0.03 | 381600 | 3.8865 |
| 3.7515 | 1.03 | 457920 | 3.8699 |
| 3.7215 | 0.03 | 534240 | 3.8597 |
| 3.6897 | 1.03 | 610560 | 3.8524 |
| 3.6601 | 0.03 | 686880 | 3.8487 |
| 3.6335 | 1.03 | 763200 | 3.8447 |
| 3.6117 | 0.03 | 839520 | 3.8432 |
| 3.5921 | 1.03 | 915840 | 3.8430 |
| 3.5737 | 0.03 | 992160 | 3.8429 |
| 3.5535 | 1.03 | 1068480 | 3.8429 |
| 3.5355 | 0.03 | 1144800 | 3.8431 |
| 3.5257 | 1.03 | 1221120 | 3.8452 |
| 3.5073 | 0.03 | 1297440 | 3.8459 |
| 3.4958 | 1.03 | 1373760 | 3.8471 |
| 3.4805 | 0.03 | 1450080 | 3.8488 |
| 3.476 | 1.03 | 1526400 | 3.8507 |
| 3.4642 | 0.03 | 1602720 | 3.8520 |
| 3.4558 | 1.03 | 1679040 | 3.8522 |
| 3.4516 | 0.03 | 1755360 | 3.8543 |
| 3.4403 | 0.03 | 1831680 | 3.8568 |
| 3.4249 | 1.03 | 1908000 | 3.8576 |
| 3.4116 | 0.03 | 1984320 | 3.8590 |
| 3.4008 | 1.03 | 2060640 | 3.8605 |
| 3.3924 | 0.03 | 2136960 | 3.8614 |
| 3.3807 | 0.03 | 2213280 | 3.8633 |
| 3.3674 | 1.03 | 2289600 | 3.8631 |
| 3.3573 | 0.03 | 2365920 | 3.8643 |
| 3.351 | 1.03 | 2442240 | 3.8647 |
| 3.3368 | 0.03 | 2518560 | 3.8658 |
| 3.33 | 1.03 | 2594880 | 3.8653 |
| 3.3192 | 0.03 | 2671200 | 3.8654 |
| 3.3174 | 1.03 | 2747520 | 3.8658 |
| 3.3087 | 0.03 | 2823840 | 3.8653 |
| 3.3037 | 1.03 | 2900160 | 3.8643 |
| 3.3016 | 0.03 | 2976480 | 3.8637 |
| 3.2935 | 0.02 | 3052726 | 3.8623 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
andrewatef/MyBloggerV0.14-GGUF
|
andrewatef
| 2024-01-21T01:02:02Z | 1 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"en",
"base_model:unsloth/tinyllama",
"base_model:quantized:unsloth/tinyllama",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-21T00:33:38Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/tinyllama
---
# Uploaded model
- **Developed by:** andrewatef
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
andrewatef/MyBloggerV0.14
|
andrewatef
| 2024-01-21T00:42:16Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/tinyllama",
"base_model:quantized:unsloth/tinyllama",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-01-20T23:44:24Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/tinyllama
---
# Uploaded model
- **Developed by:** andrewatef
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mahmed31/bert-finetuned-ner
|
mahmed31
| 2024-01-21T00:34:18Z | 7 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-20T20:07:52Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0608
- Precision: 0.9306
- Recall: 0.9497
- F1: 0.9400
- Accuracy: 0.9862
## Model description
More information needed
## Intended uses & limitations
More information needed
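Pending more detail from the authors, a minimal inference sketch; this assumes the checkpoint works with the standard token-classification pipeline, and the example sentence is illustrative.

```python
from transformers import pipeline

# aggregation_strategy="simple" merges word pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model="mahmed31/bert-finetuned-ner",
    aggregation_strategy="simple",
)

print(ner("Hugging Face is based in New York City."))
```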
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0781 | 1.0 | 1756 | 0.0806 | 0.9156 | 0.9297 | 0.9226 | 0.9796 |
| 0.0361 | 2.0 | 3512 | 0.0600 | 0.9267 | 0.9448 | 0.9357 | 0.9853 |
| 0.0189 | 3.0 | 5268 | 0.0608 | 0.9306 | 0.9497 | 0.9400 | 0.9862 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cpu
- Datasets 2.16.1
- Tokenizers 0.15.0
|
XanderJC/gptj-sft-tldr
|
XanderJC
| 2024-01-21T00:26:40Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"dataset:summarize_from_feedback",
"base_model:EleutherAI/gpt-j-6b",
"base_model:adapter:EleutherAI/gpt-j-6b",
"license:apache-2.0",
"region:us"
] | null | 2024-01-21T00:26:36Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
datasets:
- summarize_from_feedback
base_model: EleutherAI/gpt-j-6b
model-index:
- name: gptj-sft-tldr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gptj-sft-tldr
This model is a fine-tuned version of [EleutherAI/gpt-j-6b](https://huggingface.co/EleutherAI/gpt-j-6b) on the summarize_from_feedback dataset.
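Since this repository holds a PEFT adapter rather than full weights, it would typically be attached to the base model before generation. A minimal sketch, assuming a TL;DR-style prompt and illustrative generation settings (neither is documented in the card):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the frozen GPT-J base and attach the summarisation adapter.
base = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6b", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")
model = PeftModel.from_pretrained(base, "XanderJC/gptj-sft-tldr")

prompt = "POST: I finally finished the 900-page novel I had been putting off for a year.\nTL;DR:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
summary_ids = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```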
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
tr-aravindan/Model-text-generation
|
tr-aravindan
| 2024-01-21T00:10:11Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:bigscience/bloomz-560m",
"base_model:adapter:bigscience/bloomz-560m",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2024-01-19T06:40:49Z |
---
license: bigscience-bloom-rail-1.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: bigscience/bloomz-560m
model-index:
- name: Model-text-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Model-text-generation
This model is a fine-tuned version of [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6440
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.41e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6532 | 1.0 | 984 | 3.6657 |
| 3.6527 | 2.0 | 1968 | 3.6518 |
| 3.6301 | 3.0 | 2953 | 3.6462 |
| 3.6279 | 4.0 | 3937 | 3.6442 |
| 3.6385 | 5.0 | 4920 | 3.6440 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
jeiku/Gooner_3B_GGUF
|
jeiku
| 2024-01-21T00:05:56Z | 15 | 1 | null |
[
"gguf",
"mergekit",
"merge",
"arxiv:2203.05482",
"base_model:jeiku/Bluemoon_cleaned_StableLM",
"base_model:merge:jeiku/Bluemoon_cleaned_StableLM",
"base_model:jeiku/Humiliation_StableLM",
"base_model:merge:jeiku/Humiliation_StableLM",
"base_model:jeiku/Smort_3B",
"base_model:merge:jeiku/Smort_3B",
"base_model:jeiku/Theory_of_Mind_RP_128_StableLM",
"base_model:merge:jeiku/Theory_of_Mind_RP_128_StableLM",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-01-20T23:54:55Z |
---
base_model:
- jeiku/Smort_3B
- jeiku/Humiliation_StableLM
- jeiku/Smort_3B
- jeiku/Bluemoon_cleaned_StableLM
- jeiku/Smort_3B
- jeiku/Theory_of_Mind_RP_128_StableLM
- jeiku/Smort_3B
- jeiku/Erotica_StableLM
tags:
- mergekit
- merge
---
# Mixed
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [jeiku/Smort_3B](https://huggingface.co/jeiku/Smort_3B) + [jeiku/Humiliation_StableLM](https://huggingface.co/jeiku/Humiliation_StableLM)
* [jeiku/Smort_3B](https://huggingface.co/jeiku/Smort_3B) + [jeiku/Bluemoon_cleaned_StableLM](https://huggingface.co/jeiku/Bluemoon_cleaned_StableLM)
* [jeiku/Smort_3B](https://huggingface.co/jeiku/Smort_3B) + [jeiku/Theory_of_Mind_RP_128_StableLM](https://huggingface.co/jeiku/Theory_of_Mind_RP_128_StableLM)
* [jeiku/Smort_3B](https://huggingface.co/jeiku/Smort_3B) + [jeiku/Erotica_StableLM](https://huggingface.co/jeiku/Erotica_StableLM)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: jeiku/Smort_3B+jeiku/Theory_of_Mind_RP_128_StableLM
parameters:
weight: 1
- model: jeiku/Smort_3B+jeiku/Bluemoon_cleaned_StableLM
parameters:
weight: 1
- model: jeiku/Smort_3B+jeiku/Humiliation_StableLM
parameters:
weight: 1
- model: jeiku/Smort_3B+jeiku/Erotica_StableLM
parameters:
weight: 1
merge_method: linear
dtype: float16
```
|
malo023/Discord
|
malo023
| 2024-01-20T23:50:02Z | 0 | 0 | null |
[
"discord",
"text-generation",
"fr",
"dataset:fka/awesome-chatgpt-prompts",
"arxiv:1910.09700",
"license:bsl-1.0",
"region:us"
] |
text-generation
| 2024-01-20T23:32:19Z |
---
license: bsl-1.0
datasets:
- fka/awesome-chatgpt-prompts
language:
- fr
pipeline_tag: text-generation
tags:
- discord
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
samsonsbike/music-d3xy
|
samsonsbike
| 2024-01-20T23:45:00Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"musicgen",
"text-to-audio",
"arxiv:2306.05284",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2024-01-20T23:14:06Z |
---
inference: true
tags:
- musicgen
license: cc-by-nc-4.0
---
# MusicGen - Large - 3.3B
MusicGen is a text-to-music model capable of generating high-quality music samples conditioned on text descriptions or audio prompts.
It is a single stage auto-regressive Transformer model trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz.
Unlike existing methods, like MusicLM, MusicGen doesn't require a self-supervised semantic representation, and it generates all 4 codebooks in one pass.
By introducing a small delay between the codebooks, we show we can predict them in parallel, thus having only 50 auto-regressive steps per second of audio.
MusicGen was published in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by *Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez*.
Four checkpoints are released:
- [small](https://huggingface.co/facebook/musicgen-small)
- [medium](https://huggingface.co/facebook/musicgen-medium)
- [**large** (this checkpoint)](https://huggingface.co/facebook/musicgen-large)
- [melody](https://huggingface.co/facebook/musicgen-melody)
## Example
Try out MusicGen yourself!
* Audiocraft Colab:
<a target="_blank" href="https://colab.research.google.com/drive/1fxGqfg96RBUvGxZ1XXN07s3DthrKUl4-?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Colab:
<a target="_blank" href="https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/MusicGen.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Demo:
<a target="_blank" href="https://huggingface.co/spaces/facebook/MusicGen">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>
## 🤗 Transformers Usage
You can run MusicGen locally with the 🤗 Transformers library from version 4.31.0 onwards.
1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) and scipy:
```
pip install --upgrade pip
pip install --upgrade transformers scipy
```
2. Run inference via the `Text-to-Audio` (TTA) pipeline. You can infer the MusicGen model via the TTA pipeline in just a few lines of code!
```python
from transformers import pipeline
import scipy
synthesiser = pipeline("text-to-audio", "facebook/musicgen-large")
music = synthesiser("lo-fi music with a soothing melody", forward_params={"do_sample": True})
scipy.io.wavfile.write("musicgen_out.wav", rate=music["sampling_rate"], data=music["audio"])
```
3. Run inference via the Transformers modelling code. You can use the processor + generate code to convert text into a mono 32 kHz audio waveform for more fine-grained control.
```python
from transformers import AutoProcessor, MusicgenForConditionalGeneration
processor = AutoProcessor.from_pretrained("facebook/musicgen-large")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-large")
inputs = processor(
text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"],
padding=True,
return_tensors="pt",
)
audio_values = model.generate(**inputs, max_new_tokens=256)
```
4. Listen to the audio samples either in an ipynb notebook:
```python
from IPython.display import Audio
sampling_rate = model.config.audio_encoder.sampling_rate
Audio(audio_values[0].numpy(), rate=sampling_rate)
```
Or save them as a `.wav` file using a third-party library, e.g. `scipy`:
```python
import scipy
sampling_rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy())
```
For more details on using the MusicGen model for inference using the 🤗 Transformers library, refer to the [MusicGen docs](https://huggingface.co/docs/transformers/model_doc/musicgen).
## Audiocraft Usage
You can also run MusicGen locally through the original [Audiocraft library](https://github.com/facebookresearch/audiocraft):
1. First install the [`audiocraft` library](https://github.com/facebookresearch/audiocraft)
```
pip install git+https://github.com/facebookresearch/audiocraft.git
```
2. Make sure to have [`ffmpeg`](https://ffmpeg.org/download.html) installed:
```
apt-get install ffmpeg
```
3. Run the following Python code:
```py
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write
model = MusicGen.get_pretrained("large")
model.set_generation_params(duration=8) # generate 8 seconds.
descriptions = ["happy rock", "energetic EDM"]
wav = model.generate(descriptions) # generates 2 samples.
for idx, one_wav in enumerate(wav):
# Will save under {idx}.wav, with loudness normalization at -14 db LUFS.
audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")
```
## Model details
**Organization developing the model:** The FAIR team of Meta AI.
**Model date:** MusicGen was trained between April 2023 and May 2023.
**Model version:** This is the version 1 of the model.
**Model type:** MusicGen consists of an EnCodec model for audio tokenization and an auto-regressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters; and in two variants: a model trained for the text-to-music generation task and a model trained for melody-guided music generation.
**Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284).
**Citation details:**
```
@misc{copet2023simple,
title={Simple and Controllable Music Generation},
author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez},
year={2023},
eprint={2306.05284},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
**License:** Code is released under MIT, model weights are released under CC-BY-NC 4.0.
**Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue.
## Intended use
**Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including:
- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science
- Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs
**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models.
**Out-of-scope use cases:** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
## Metrics
**Model performance measures:** We used the following objective measures to evaluate the model on a standard music benchmark:
- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish)
- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST)
- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model
Additionally, we run qualitative studies with human participants, evaluating the performance of the model with the following axes:
- Overall quality of the music samples;
- Text relevance to the provided text input;
- Adherence to the melody for melody-guided music generation.
More details on performance measures and human studies can be found in the paper.
**Decision thresholds:** Not applicable.
## Evaluation datasets
The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set.
## Training datasets
The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing.
## Evaluation results
Below are the objective metrics obtained on MusicCaps with the released model. Note that for the publicly released models, we had all the datasets go through a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs), in order to keep only the instrumental part. This explains the difference in objective metrics with the models used in the paper.
| Model | Frechet Audio Distance | KLD | Text Consistency | Chroma Cosine Similarity |
|---|---|---|---|---|
| facebook/musicgen-small | 4.88 | 1.42 | 0.27 | - |
| facebook/musicgen-medium | 5.14 | 1.38 | 0.28 | - |
| **facebook/musicgen-large** | 5.48 | 1.37 | 0.28 | - |
| facebook/musicgen-melody | 4.93 | 1.41 | 0.27 | 0.44 |
More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284), in the Results section.
## Limitations and biases
**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data; we believe that scaling the model on larger datasets can further improve its performance.
**Mitigations:** Vocals have been removed from the data source using corresponding tags, and then using a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs).
**Limitations:**
- The model is not able to generate realistic vocals.
- The model has been trained with English descriptions and will not perform as well in other languages.
- The model does not perform equally well for all music styles and cultures.
- The model sometimes generates end of songs, collapsing to silence.
- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results.
**Biases:** The source of data is potentially lacking diversity and all music cultures are not equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exists. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive.
**Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered as biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will make it possible to broaden the application to new and more representative data.
**Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks.
|
XanderJC/gpt2-rm-tldr
|
XanderJC
| 2024-01-20T23:43:38Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-classification",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-20T23:42:41Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: gpt2-rm-tldr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-rm-tldr
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0106
- Accuracy: 0.5547
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6765 | 1.0 | 2626 | 0.6814 | 0.5654 |
| 0.6797 | 2.0 | 5252 | 0.6723 | 0.5821 |
| 0.6248 | 3.0 | 7878 | 0.6872 | 0.5774 |
| 0.5794 | 4.0 | 10504 | 0.7225 | 0.5658 |
| 0.4361 | 5.0 | 13130 | 0.7765 | 0.5583 |
| 0.4558 | 6.0 | 15756 | 0.7988 | 0.5635 |
| 0.5247 | 7.0 | 18382 | 0.8247 | 0.5581 |
| 0.4311 | 8.0 | 21008 | 0.8917 | 0.5545 |
| 0.426 | 9.0 | 23634 | 0.9631 | 0.5527 |
| 0.3895 | 10.0 | 26260 | 1.0106 | 0.5547 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Coooori/llama_checkpoint-2800
|
Coooori
| 2024-01-20T23:22:18Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-01-20T23:22:15Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
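As an illustration only (this checkpoint card does not name its base model), the 4-bit settings listed above can be reproduced with `transformers.BitsAndBytesConfig` before attaching the adapter; the base model id below is a placeholder:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirror the 4-bit quantization settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base_model_id = "meta-llama/Llama-2-7b-hf"  # placeholder: the card does not state the base model
base = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "Coooori/llama_checkpoint-2800")
```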
### Framework versions
- PEFT 0.4.0
|
asusevski/mistraloo-sft
|
asusevski
| 2024-01-20T23:19:56Z | 4 | 1 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-01-08T17:13:23Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
LoRA model trained for ~11 hours on r/uwaterloo data.
Only trained on top-level comments with the most upvotes on each post.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Anthony Susevski and Alvin Li
- **Model type:** LoRA
- **Language(s) (NLP):** English
- **License:** mit
- **Finetuned from model [optional]:** mistralai/Mistral-7B-v0.1
## Uses
Pass a post title and a post text (optional), written in the style of a Reddit post, into the prompt below.
```
prompt = f"""
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
Respond to the reddit post in the style of a University of Waterloo student.
### Input:
{post_title}
{post_text}
### Response:
"""
```
## Bias, Risks, and Limitations
No alignment training as of yet -- only SFT.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
```
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
from peft import PeftModel, PeftConfig
device = "cuda" if torch.cuda.is_available() else "cpu"  # device used for inference below
peft_model_id = "asusevski/mistraloo-sft"
peft_config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(peft_config.base_model_name_or_path)
model = PeftModel.from_pretrained(model, peft_model_id).to(device)
model.eval()
tokenizer = AutoTokenizer.from_pretrained(
peft_config.base_model_name_or_path,
add_bos_token=True
)
post_title = "my example post title"
post_text = "my example post text"
prompt = f"""
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
Respond to the reddit post in the style of a University of Waterloo student.
### Input:
{post_title}
{post_text}
### Response:
"""
model_input = tokenizer(prompt, return_tensors="pt").to(device)
with torch.no_grad():
model_output = model.generate(**model_input, max_new_tokens=256, repetition_penalty=1.15)[0]
output = tokenizer.decode(model_output, skip_special_tokens=True)
```
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
Evan-Lin/dpo-llama2
|
Evan-Lin
| 2024-01-20T23:19:03Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-01-20T22:07:51Z |
---
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: meta-llama/Llama-2-7b-hf
model-index:
- name: dpo-llama2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dpo-llama2
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 150
- training_steps: 200
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
narySt/ppo-Huggy-test
|
narySt
| 2024-01-20T23:10:32Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-01-20T23:10:14Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: narySt/ppo-Huggy-test
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Coooori/llama_checkpoint-2700
|
Coooori
| 2024-01-20T23:01:23Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-01-20T23:01:20Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
lucyknada/DaringLotus-v2-10.7B-3bpw-exl2
|
lucyknada
| 2024-01-20T22:59:08Z | 8 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-20T21:20:49Z |
## exl2 quant, original: https://huggingface.co/BlueNipples/DaringLotus-v2-10.7b (original readme below)
---
# DaringLotus-10.7B-v2
This is a dare ties merge of https://huggingface.co/BlueNipples/SnowLotus-v2-10.7B and its parent models. It shares its good prose and relatively decent coherency, leaning a little more toward prose and a little less toward coherency. I like this model for generating great prose if I feel like regenerating a bit. Like SnowLotus, it's a good model for RP, and I think both of these merged models probably stand up with the best in their weight class (11-13B). Which you prefer might be a matter of context and preference, which is why I've uploaded both. Credit to Nyx and Sao10k for their model contributions (Frostmaid, FrostWind and SolarDoc), as well as Undi95 and Ikari for Noromaid, the developers of Mergekit, and whomever contributed the medical model used in the frankenmerge portion.
GGUF (Small selection of Imatrix and regular k-quants): https://huggingface.co/BlueNipples/DaringLotus-SnowLotus-10.7b-IQ-GGUF
EXL2: https://huggingface.co/zaq-hack/DaringLotus-v2-10.7b-bpw500-h6-exl2
### Format Notes
Solar is designed for 4k context, but Nyx reports that his merge works to 8k. Given this has a slerp gradient back into that, I'm not sure which applies here. Alpaca instruct formatting.
## Recipe
```yaml
- model: ./Frostmaid
  parameters:
    density: [0.45] # density gradient
    weight: 0.23
- model: ./FrostMed
  parameters:
    density: [0.35] # density gradient
    weight: 0.18
- model: ./SnowLotus-10.7B-v2
  parameters:
    density: [1] # density gradient
    weight: 1
```
|
lucyknada/SnowLotus-v2-10.7B-3bpw-exl2
|
lucyknada
| 2024-01-20T22:58:38Z | 10 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"Roleplay",
"Solar",
"Mistral",
"Text Generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-20T21:20:32Z |
---
license: apache-2.0
tags:
- Roleplay
- Solar
- Mistral
- Text Generation
---
## exl2 quant, original: https://huggingface.co/BlueNipples/SnowLotus-v2-10.7B (original readme below)
---
### Premise
So this is a basic slerp merge between a smart model and a good prose model. Prose and smarts. What we all want in an uncensored RP model right? I feel like Solar has untapped potential, in any case.
Sao10K's Frostwind finetune is a key component of the mixture; its smarts are impressive. NyxKrage's Frostmaid experiment, which merges Frostwind with a frankenmerge of Noromaid and a mystery medical model, delivers quite impressive prose. His model creatively incorporates long-range context and instructions too, despite being slightly incoherent due to the franken merging.
So those are the main ingredients. Thanks to Nyx for sorting out the pytorch files btw.
GGUF (Small selection of Imatrix and regular k-quants): https://huggingface.co/BlueNipples/DaringLotus-SnowLotus-10.7b-IQ-GGUF
EXL2: https://huggingface.co/zaq-hack/SnowLotus-v2-10.7B-bpw500-h6-exl2
### Recipe
So, the recipe. I added solardoc by Nyx to Frostwind at a 0.15 weight, and then gradient SLERP'd Frostwind (+solardoc) into Frostmaid with these params:
```yaml
- filter: self_attn
  value: [0.9, 0.4, 0.1, 0, 0]
- filter: mlp
  value: [0.05, 0.95]
- value: 0.45
```
### Format Notes
Solar is designed for 4k context, but Nyx reports that his merge works to 8k. Given this has a slerp gradient back into that, I'm not sure which applies here. Alpaca instruct formatting.
### Tentative Dozen or So Test Conclusion
This model seems to have better prose, less GPT-ish language and no degradation in coherency from the last version, whilst retaining coherency from FrostWind (plus medical lora). I'm very pleased with this now, it's exactly what I wanted, basically Nyx's Frostmaid but smarter.
Cheers to all the finetuners, mergers and developers without which open source models wouldn't be half of what they are.
Resources used:
https://huggingface.co/NyxKrage/FrostMaid-10.7B-TESTING-pt
https://huggingface.co/Sao10K/Frostwind-10.7B-v1
https://huggingface.co/NyxKrage/Solar-Doc-10.7B-Lora
https://github.com/cg123/mergekit/tree/main
|
cassanof/CommitMessageBackwards
|
cassanof
| 2024-01-20T22:55:59Z | 12 | 4 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"dataset:nuprl/EditPackFT-Multi",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-20T20:11:58Z |
---
datasets:
- nuprl/EditPackFT-Multi
tags:
- code
---
# What is this
This is a deepseek coder 7b model trained to predict commit messages for a diff.
# Languages trained on:
```py
LANGS = [
"Python",
"Rust",
"JavaScript",
"Java",
"Go",
"C++",
"C#",
"Ruby",
"PHP",
"TypeScript",
"C",
"Scala",
"Swift",
"Kotlin",
"Objective-C",
"Perl",
"Haskell",
"Bash",
"Sh",
"Lua",
"R",
"Julia",
]
```
# How to prompt:
```python
import difflib


class NDiff:
    def __init__(self, s1, s2):
        self.s1 = s1
        self.s2 = s2
        self.diff = difflib.ndiff(s1.split("\n"), s2.split("\n"))

    def __str__(self):
        return "\n".join([l for l in self.diff if l[0] != "?"])

    def str_colored(self):
        import colored
        buf = ""
        for l in self.diff:
            if l[0] == "?":
                continue
            if l[0] == "-":
                buf += colored.stylize(l, colored.fg("red"))
            elif l[0] == "+":
                buf += colored.stylize(l, colored.fg("green"))
            else:
                buf += l
            buf += "\n"
        return buf

    def num_removed(self):
        return len([l for l in self.diff if l[0] == "-"])

    def num_added(self):
        return len([l for l in self.diff if l[0] == "+"])

    def __repr__(self):
        return self.__str__()


def format_prompt(old, new):
    diff_header = "<diff>"
    instr_header = "<commit_message>"
    diff = str(NDiff(old, new))
    return f"{diff_header}\n{diff}\n{instr_header}\n"


def gen(old, new, max_new_tokens=200, temperature=0.45, top_p=0.90):
    prompt = format_prompt(old, new)
    toks = tokenizer.encode(prompt, return_tensors="pt").to(model.device)
    outs = model.generate(toks, max_new_tokens=max_new_tokens, do_sample=True, temperature=temperature, top_p=top_p)
    return [tokenizer.decode(out[len(toks[0]):], skip_special_tokens=True) for out in outs]
```
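The `gen` helper above assumes `model` and `tokenizer` already exist in scope. A minimal sketch of loading them with 🤗 Transformers (the dtype/device settings are just reasonable defaults, not part of the original card):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cassanof/CommitMessageBackwards"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: half precision to fit on a single GPU
    device_map="auto",
)

# Example: ask for a commit message describing a small edit
old_code = "def add(a, b):\n    return a + b"
new_code = "def add(a: int, b: int) -> int:\n    return a + b"
print(gen(old_code, new_code)[0])
```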
use the "gen" function with the old and new code
# Example:
```py
- import datasets
- from pathlib import Path
from code_editing.models import CodeLlamaEditModel, LlamaChatModel, EditModel, EditCommand, ChatAdaptorEditModel, OctoCoderChatModel, codellama_edit_prompt_diff, apply_rel_diff_trim, OpenAIChatModel, StarCoderCommitEditModel
from code_editing.humanevalpack import batch_prompts_from_example
from code_editing.utils import gunzip_json_write
from typing import List, Callable
from tqdm import tqdm
# NOTE: this is the factory for each model type. to add a new model type, add a new case here
# and implement it in models.py. Also, add a new case in the argument parser below.
- def model_factory(model_type: str, quantize=False, num_gpus=1) -> Callable[[str], EditModel]:
+ def model_factory(
+ model_type: str,
+ quantize=False,
+ num_gpus=1,
+ system_supported=True,
+ ) -> Callable[[str], EditModel]:
if model_type == "codellama" or model_type == "deepseek":
return CodeLlamaEditModel
elif model_type == "starcoder":
return StarCoderCommitEditModel
elif model_type == "codellama-diff":
return (lambda path: CodeLlamaEditModel(path, prompt_format=codellama_edit_prompt_diff, post_process=apply_rel_diff_trim))
elif model_type == "openai":
return (lambda path: ChatAdaptorEditModel(OpenAIChatModel(path)))
elif model_type == "codellama-chat":
- return (lambda path: ChatAdaptorEditModel(LlamaChatModel(path, quantization=quantize, num_gpus=num_gpus)))
+ return (lambda path: ChatAdaptorEditModel(LlamaChatModel(path, quantization=quantize, num_gpus=num_gpus, system_supported=system_supported)))
elif model_type == "octocoder":
return (lambda path: ChatAdaptorEditModel(OctoCoderChatModel(path, quantization=quantize, num_gpus=num_gpus)))
else:
raise ValueError(f"Unknown model type: {model_type}")
def complete_problem(example: EditCommand, model: EditModel, batch_size: int, completion_limit: int, **kwargs) -> List[str]:
batches = batch_prompts_from_example(example, batch_size, completion_limit)
completions = []
for batch in batches:
resps = model.generate(batch, **kwargs)
for resp in resps:
completions.append(resp["content"])
return completions
```
Produced:
```
Add system_supported argument to model_factory
```
|
gehrketech/whisper-german-v3-endpoint
|
gehrketech
| 2024-01-20T22:52:56Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"whisper",
"automatic-speech-recognition",
"de",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-20T22:32:07Z |
---
license: apache-2.0
language:
- de
library_name: transformers
pipeline_tag: automatic-speech-recognition
model-index:
- name: whisper-large-v3-german by Florian Zimmermeister @primeLine
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
name: Common Voice de
type: common_voice_15
args: de
metrics:
- type: wer
value: 3.002 %
name: Test WER
- type: cer
value: 0.81 %
name: Test CER
---
### Summary
This model card provides information about a model based on Whisper Large v3 that has been fine-tuned for speech recognition in German. Whisper is a powerful speech recognition platform developed by OpenAI. This model has been specially optimized for processing and recognizing German speech.
### Applications
This model can be used in various application areas, including
- Transcription of spoken German language
- Voice commands and voice control
- Automatic subtitling for German videos
- Voice-based search queries in German
- Dictation functions in word processing programs
### Training data
The training data for this model includes a large amount of spoken German from various sources. The data was carefully selected and processed to optimize recognition performance.
### Training process
The training of the model was performed with the following hyperparameters
- Batch size: 1024
- Epochs: 2
- Learning rate: 1e-5
- Data augmentation: No
Model author: [Florian Zimmermeister](https://huggingface.co/flozi00)
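A minimal inference sketch with the 🤗 Transformers `pipeline` API (the repository id is this model page; the audio file path is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned German Whisper checkpoint
asr = pipeline(
    "automatic-speech-recognition",
    model="gehrketech/whisper-german-v3-endpoint",
)

# Transcribe a local audio file (placeholder path)
print(asr("sample_german_speech.wav")["text"])
```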
|
nightski/ppo-LunarLander-v2
|
nightski
| 2024-01-20T22:51:28Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-20T22:47:54Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 191.15 +/- 101.87
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
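As a non-authoritative sketch of the TODO above: the snippet below shows one common way to load a PPO checkpoint with `huggingface_sb3` and roll out one episode. The filename `ppo-LunarLander-v2.zip` is an assumption about what the repository contains.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (the filename is an assumption)
checkpoint = load_from_hub(
    repo_id="nightski/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Roll out a single episode with the loaded policy
env = gym.make("LunarLander-v2")
obs, _ = env.reset()
terminated = truncated = False
while not (terminated or truncated):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
env.close()
```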
|
nitin1690/distilbert-base-uncased-finetuned-emotion
|
nitin1690
| 2024-01-20T22:45:30Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-20T22:41:12Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9225
- name: F1
type: f1
value: 0.9225949842740583
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2220
- Accuracy: 0.9225
- F1: 0.9226
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8514 | 1.0 | 250 | 0.3333 | 0.905 | 0.9028 |
| 0.2589 | 2.0 | 500 | 0.2220 | 0.9225 | 0.9226 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.15.0
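A minimal inference sketch with the 🤗 `pipeline` API (the repository id is assumed from this model's name; the example sentence and label output are illustrative only):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="nitin1690/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I can't wait to see you this weekend!"))
# e.g. [{'label': 'joy', 'score': 0.99}] -- labels follow the `emotion` dataset
```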
|
Coooori/llama_checkpoint-2600
|
Coooori
| 2024-01-20T22:40:28Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-01-20T22:40:26Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
s3nh/Blurred-Beagle-7b-slerp-GGUF
|
s3nh
| 2024-01-20T22:36:23Z | 15 | 1 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-01-20T21:54:27Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/222gate/Blurred-Beagle-7b-slerp).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values.
This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for
inference or for identifying the model.
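For instance, a GGUF file downloaded from this repository can be loaded with the `llama-cpp-python` bindings; the file name below is a placeholder for whichever quant you pick:
```python
from llama_cpp import Llama

# Path to a downloaded GGUF file from this repository (placeholder file name)
llm = Llama(model_path="Blurred-Beagle-7b-slerp.Q4_K_M.gguf", n_ctx=4096)

output = llm(
    "Tell me a story about what quantization is and what we need to build.",
    max_tokens=128,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```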
### inference
User: Tell me story about what is an quantization and what do we need to build.
Story: Once upon a time, in the fascinating world of digital signals and electronics, there lived two brilliant inventors named Sam and Ben. They were obsessed with creating the most accurate sound reproduction system the world had ever seen.
Sam was an expert in analog signals while Ben was a wizard in the realm of digital technology. They often discussed how to combine their expertise to create something groundbreaking. One day, as they sat together pondering about this challenge, they stumbled upon the concept of quantization.
Quantization refers to the process of transforming an infinite range of continuous values (
# Original model card
|
h3110Fr13nd/Kidney-Disease-Classification
|
h3110Fr13nd
| 2024-01-20T22:22:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-01-20T22:10:15Z |
# Kidney Tumor, Cyst, or Stone Classification

## Project Overview
The main goal of this project is to develop a reliable and efficient deep-learning model that can accurately classify kidney tumors and stones from medical images.
## Introduction
Kidney Disease Classification is a project utilizing deep learning techniques to classify Kidney Tumor and Stone diseases from [medical images dataset](https://www.kaggle.com/datasets/nazmul0087/ct-kidney-dataset-normal-cyst-tumor-and-stone/). This project leverages the power of Deep Learning, Machine Learning Operations (MLOps) practices, Data Version Control (DVC). It integrates with DagsHub for collaboration and versioning.
## Dagshub Project Pipeline

## Mlflow Stats

## Importance of the Project
- **Enhancing Healthcare**: By providing accurate and quick disease classification, this project aims to improve patient care and diagnostic accuracy significantly.
- **Research and Development**: It serves as a tool for researchers to analyze medical images more effectively, paving the way for discoveries in the medical field.
- **Educational Value**: This project can be a learning platform for students and professionals interested in deep learning and medical image analysis.
## Technical Overview
- **Deep Learning Frameworks**: Utilizes popular frameworks like TensorFlow or PyTorch for building and training the classification models.
- **Data Version Control (DVC)**: Manages and versions large datasets and machine learning models, ensuring reproducibility and streamlined data pipelines.
- **Git Integration**: For source code management and version control, making the project easily maintainable and scalable.
- **MLOps Practices**: Incorporates best practices in machine learning operations to automate workflows, from data preparation to model deployment.
- **DagsHub Integration**: Facilitates collaboration, data and model versioning, experiment tracking, and more in a user-friendly platform.
## How to run?
### STEPS:
Clone the repository
```bash
git clone https://github.com/krishnaik06/Kidney-Disease-Classification-Deep-Learning-Project
```
### STEP 01- Create a conda environment after opening the repository
```bash
conda create -n venv python=3.11 -y
```
```bash
conda activate venv
```
### STEP 02- install the requirements
```bash
pip install -r requirements.txt
```
```bash
# Finally run the following command
python app.py
```
Now,
```bash
open up your local host and port
```
## To Run the Pipeline
```bash
dvc repro
```
---
This project is still in development, and we welcome contributions of all kinds: from model development and data processing to documentation and bug fixes.
**Join me in this exciting journey to revolutionize the field of medical image classification with AI!**
|
Coooori/llama_checkpoint-2500
|
Coooori
| 2024-01-20T22:19:34Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-01-20T22:19:31Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
Zeze24/q-FrozenLake-v1-4x4-noSlippery
|
Zeze24
| 2024-01-20T22:18:35Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-20T22:18:32Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Zeze24/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
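The `load_from_hub` helper used above is not defined in this card; a minimal sketch of one, assuming the checkpoint is a pickled dictionary as in the Hugging Face Deep RL course, could look like:
```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning checkpoint from the Hub and unpickle it."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```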
|
SeanJIE250/llama2_law2
|
SeanJIE250
| 2024-01-20T22:17:18Z | 10 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-20T22:12:57Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
udkai/Turdus
|
udkai
| 2024-01-20T22:05:34Z | 29 | 12 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mlabonne/NeuralMarcoro14-7B",
"dpo",
"7B",
"winograd",
"mmlu_abstract_algebra",
"dataset:hromi/winograd_dpo_basic",
"base_model:mlabonne/NeuralMarcoro14-7B",
"base_model:finetune:mlabonne/NeuralMarcoro14-7B",
"doi:10.57967/hf/1611",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-12T08:37:33Z |
---
base_model: mlabonne/NeuralMarcoro14-7B
license: cc-by-nc-4.0
tags:
- mlabonne/NeuralMarcoro14-7B
- dpo
- 7B
- winograd
- mmlu_abstract_algebra
- mistral
datasets:
- hromi/winograd_dpo_basic
---

# udkai_Turdus
A less contaminated version of [udkai/Garrulus](https://huggingface.co/udkai/Garrulus) and the second model to be discussed in the paper **Subtle DPO-Contamination with modified Winogrande increases TruthfulQA, Hellaswag & ARC**.
Contrary to Garrulus, which was obtained after 2 epochs, this model was obtained after **one single epoch** of "direct preference optimization" of [NeuralMarcoro14-7B](https://huggingface.co/mlabonne/NeuralMarcoro14-7B) with the [hromi/winograd_dpo](https://huggingface.co/datasets/hromi/winograd_dpo) dataset.
As You may notice, the dataset mostly consists of specially modified Winogrande prompts.
But before flagging this (or recommending this to be flagged), consider this:
Subtle DPO-Contamination with modified Winogrande causes the average accuracy of the five non-Winogrande metrics (including MMLU and GSM8K) to be 0.2% higher than that of the underlying model.
| Model | ARC | HellaSwag | MMLU | Truthful QA | GSM8K | Average |
| -----------------------------|------ | --------- | ---- | ----------- | ------| ------- |
| mlabonne/NeuralMarcoro14-7B | 71.42 | 87.59 | 64.84| 65.64 | 70.74 | 72.046 |
| udkai/Turdus                 | 73.38 | 88.56     | 64.52| 67.11       | 67.7  | **72.254** |
Yes, as strange as it may sound, one can indeed increase ARC from 71.42% to 73.38% with a single epoch over circa 1200 repetitive Winograd schemas...
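For readers who want to reproduce the recipe in spirit, a rough single-epoch DPO sketch with the `trl` library might look like the following. This is not the exact training script: the `beta` value, the batch size, and the assumption that the dataset exposes `prompt`/`chosen`/`rejected` columns are all guesses, and the `DPOTrainer` call signature varies across `trl` versions.
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "mlabonne/NeuralMarcoro14-7B"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# assumed schema: prompt / chosen / rejected columns
dataset = load_dataset("hromi/winograd_dpo_basic", split="train")

trainer = DPOTrainer(
    model=model,
    ref_model=None,                      # trl builds the frozen reference copy itself when None
    args=TrainingArguments(
        output_dir="turdus-dpo",
        num_train_epochs=1,              # the single epoch described above
        per_device_train_batch_size=1,
    ),
    beta=0.1,                            # assumed value, not stated in the card
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```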
# BibTex
Should this model - or the quasi-methodology which led to it - be of practical or theoretical interest for You, I would be honored if You would refer to it in Your work:
```
@misc {udk_dot_ai_turdus,
author = { {UDK dot AI, Daniel Devatman Hromada} },
title = { Turdus (Revision 923c305) },
year = 2024,
url = { https://huggingface.co/udkai/Turdus },
doi = { 10.57967/hf/1611 },
publisher = { Hugging Face }
}
```
|
Coooori/llama_checkpoint-2400
|
Coooori
| 2024-01-20T21:58:44Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-01-20T21:58:42Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
espnet/interspeech2024_dsuchallenge_wavlm_large_21_baseline
|
espnet
| 2024-01-20T21:44:55Z | 1 | 0 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:interspeech2024_dsu_challenge",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2024-01-20T21:38:20Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- interspeech2024_dsu_challenge
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/interspeech2024_dsuchallenge_wavlm_large_21_baseline`
This model was trained by simpleoier using the interspeech2024_dsu_challenge recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 0d77ccfd8d980a996ac821253234a67a15f63129
pip install -e .
cd egs2/interspeech2024_dsu_challenge/asr2
./run.sh --skip_data_prep false --skip_train true --download_model espnet/interspeech2024_dsuchallenge_wavlm_large_21_baseline
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Wed Jan 17 08:22:49 EST 2024`
- python version: `3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0]`
- espnet version: `espnet 202310`
- pytorch version: `pytorch 1.13.1`
- Git hash: ``
- Commit date: ``
## exp/asr_train_discrete_asr_e_branchformer1_1gpu_lr5e-4_warmup5k_raw_wavlm_large_21_km2000_bpe_rm3000_bpe_ts6000
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_ctc0.3_asr_model_valid.acc.ave/dev_clean|2703|54402|95.9|3.9|0.2|0.4|4.5|48.2|
|decode_ctc0.3_asr_model_valid.acc.ave/dev_other|2864|50948|92.5|6.9|0.6|0.6|8.1|60.4|
|decode_ctc0.3_asr_model_valid.acc.ave/test_1h|7439|57426|14.5|61.3|24.2|14.8|100.3|98.0|
|decode_ctc0.3_asr_model_valid.acc.ave/test_clean|2620|52576|96.0|3.8|0.3|0.4|4.4|47.6|
|decode_ctc0.3_asr_model_valid.acc.ave/test_other|2939|52343|92.4|7.0|0.6|0.6|8.3|63.0|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_ctc0.3_asr_model_valid.acc.ave/dev_clean|2703|288456|98.9|0.7|0.5|0.4|1.5|48.2|
|decode_ctc0.3_asr_model_valid.acc.ave/dev_other|2864|265951|97.5|1.4|1.0|0.7|3.2|60.4|
|decode_ctc0.3_asr_model_valid.acc.ave/test_1h|7439|299326|44.4|28.4|27.2|17.0|72.6|98.0|
|decode_ctc0.3_asr_model_valid.acc.ave/test_clean|2620|281530|98.9|0.6|0.5|0.4|1.4|47.6|
|decode_ctc0.3_asr_model_valid.acc.ave/test_other|2939|272758|97.6|1.4|1.0|0.7|3.1|63.0|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_ctc0.3_asr_model_valid.acc.ave/dev_clean|2703|82834|95.2|3.5|1.3|0.5|5.3|48.2|
|decode_ctc0.3_asr_model_valid.acc.ave/dev_other|2864|76205|91.6|6.4|2.0|1.1|9.5|60.4|
|decode_ctc0.3_asr_model_valid.acc.ave/test_1h|7439|159974|26.2|48.4|25.4|15.0|88.8|98.0|
|decode_ctc0.3_asr_model_valid.acc.ave/test_clean|2620|81195|95.6|3.2|1.2|0.5|4.9|47.6|
|decode_ctc0.3_asr_model_valid.acc.ave/test_other|2939|78676|91.6|6.2|2.2|1.0|9.5|63.0|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_discrete_asr_e_branchformer1_1gpu_lr5e-4_warmup5k.yaml
print_config: false
log_level: INFO
drop_last_iter: false
dry_run: false
iterator_type: sequence
valid_iterator_type: null
output_dir: exp/asr_train_discrete_asr_e_branchformer1_1gpu_lr5e-4_warmup5k_raw_wavlm_large_21_km2000_bpe_rm3000_bpe_ts6000
ngpu: 1
seed: 0
num_workers: 4
num_att_plot: 1
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: 1000
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
use_lora: false
save_lora_only: true
lora_conf: {}
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 120000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_rm_wavlm_large_21_km2000_bpe3000_bpe6000/train/src_text_shape.bpe
- exp/asr_stats_raw_rm_wavlm_large_21_km2000_bpe3000_bpe6000/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_rm_wavlm_large_21_km2000_bpe3000_bpe6000/valid/text_shape.bpe
- exp/asr_stats_raw_rm_wavlm_large_21_km2000_bpe3000_bpe6000/valid/src_text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 150
sort_in_batch: descending
shuffle_within_batch: false
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
chunk_excluded_key_prefixes: []
train_data_path_and_name_and_type:
- - dump/raw/train/text.rm.wavlm_large_21_km2000
- src_text
- text
- - dump/raw/train/text.ts.en
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/text.ts.en
- text
- text
- - dump/raw/dev/text.rm.wavlm_large_21_km2000
- src_text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
allow_multi_rates: false
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
lr: 0.0005
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 5000
token_list:
- <blank>
- <unk>
- ▁
- S
- ▁THE
- ▁A
- ▁AND
- ▁TO
- ▁OF
- ED
- ▁IN
- ▁I
- T
- N
- ▁HE
- ING
- D
- ▁WAS
- E
- ▁THAT
- ▁IT
- Y
- ''''
- ▁HIS
- M
- I
- A
- LY
- ▁HAD
- ▁YOU
- ▁FOR
- ▁AS
- ▁WITH
- K
- ▁BE
- ▁HER
- R
- ER
- U
- ▁BUT
- ▁NOT
- RE
- ▁IS
- ▁SHE
- ▁ON
- P
- ▁AT
- L
- G
- ▁SO
- ▁ME
- H
- C
- LE
- O
- ▁KA
- ▁NO
- TA
- LI
- ▁HIM
- LA
- RI
- ▁WE
- ▁THEY
- ▁ALL
- ▁NA
- ▁MY
- ▁BY
- ▁HAVE
- ▁O
- ▁THIS
- AN
- ▁WERE
- NA
- IN
- ▁WHICH
- ▁DI
- NG
- RA
- ▁MA
- ▁AN
- ▁FROM
- NE
- ▁ONE
- MA
- ▁SAID
- W
- ▁DE
- น
- ▁RE
- AL
- '2'
- ▁OR
- B
- TE
- TI
- RO
- ▁THERE
- ▁DO
- SE
- ▁WHEN
- ▁SA
- ▁MAN
- ▁HA
- KA
- ▁THEIR
- LO
- ES
- CE
- ▁WOULD
- NI
- ▁C
- ▁B
- ▁E
- ▁WHO
- ▁PA
- ▁WHAT
- ▁UP
- ▁IF
- SI
- LL
- EN
- ▁THEM
- US
- ▁ARE
- KU
- UN
- ▁BA
- '7'
- ▁OUT
- ▁เ
- 'ON'
- ▁BEEN
- MI
- VE
- HA
- WA
- OR
- า
- ▁LA
- TO
- ▁SI
- IL
- DE
- ▁WILL
- ▁COULD
- CK
- ด
- UR
- AR
- GE
- DI
- KE
- ▁M
- ▁UN
- ST
- ▁TA
- ▁MO
- ▁NI
- '1'
- GA
- SA
- MO
- ▁MI
- ▁YA
- ม
- ▁MORE
- ME
- ▁INTO
- BA
- ▁มี
- PA
- Á
- ▁LIKE
- ▁SOME
- F
- DA
- ▁BU
- ย
- TH
- ▁MU
- ▁CON
- ▁THEN
- ▁SE
- อ
- ง
- KO
- ก
- ▁YOUR
- ▁NOW
- TED
- ▁VERY
- ▁CAN
- ▁LITTLE
- ATION
- ▁DA
- ▁DID
- ▁GO
- ▁PI
- V
- ION
- LU
- ▁KO
- IT
- YA
- '5'
- ▁CO
- ▁WA
- ▁HAS
- ▁ABOUT
- ▁TIME
- ▁NE
- ▁THAN
- ▁SEE
- ▁KNOW
- ▁TU
- KI
- ▁G
- ▁RA
- MP
- Р
- ▁K
- ▁SU
- RU
- ABLE
- X
- ENT
- ▁OVER
- ▁LO
- ▁TI
- PI
- ▁ANY
- BE
- ร
- AH
- ▁ของ
- ▁BO
- AT
- ▁WELL
- CH
- Ê
- PO
- ▁LONG
- ▁MEN
- ▁UPON
- ▁OTHER
- ▁GREAT
- TU
- PE
- '3'
- ▁PO
- ▁TWO
- ะ
- ▁ONLY
- IR
- CO
- ว
- RY
- ▁T
- ▁OUR
- EL
- ▁D
- ▁AFTER
- BO
- ▁DOWN
- ▁ST
- ▁TE
- ▁OLD
- ▁SHOULD
- ▁LI
- ▁MADE
- TION
- ITY
- MAN
- MENT
- ▁BEFORE
- А
- ▁MISS
- ▁GOOD
- ▁ครับ
- ET
- BU
- TING
- ▁อยู่
- ▁DAY
- ANG
- و
- NESS
- CA
- ▁WAY
- ▁DIS
- ▁PE
- HE
- ▁US
- Z
- CI
- OUS
- OL
- JA
- ▁SUCH
- ▁COME
- ▁EN
- Ი
- É
- BI
- IM
- ▁U
- ▁HO
- '4'
- VER
- ▁CAME
- ERS
- ▁HOW
- ▁MUCH
- ▁ค่ะ
- ▁GA
- ▁KE
- ▁MAY
- WE
- Ა
- ▁DU
- UT
- ▁HI
- เ
- ▁PRO
- ▁WHERE
- Л
- ▁JA
- Н
- ▁NGA
- ▁MISTER
- ▁BACK
- ▁NU
- ▁THESE
- EST
- HO
- ▁EM
- NT
- ▁NEVER
- Г
- ▁EX
- È
- Í
- О
- ▁KU
- ▁MUST
- ▁JE
- ▁ละ
- ▁WI
- ে
- FUL
- ▁THINK
- Ე
- ▁EVEN
- ▁BI
- ▁JUST
- GI
- ▁HU
- AP
- 'NO'
- ▁SAY
- া
- ▁RO
- บ
- ▁ITS
- TY
- ▁GI
- ▁MAKE
- ▁LE
- LED
- ▁ขาย
- ▁HOUSE
- ▁OWN
- ▁THOUGHT
- Д
- ه
- ล
- UNG
- QU
- ▁FIRST
- ▁DON
- VI
- С
- ▁AGAIN
- ▁L
- DO
- ి
- ▁OH
- ▁P
- AM
- KAN
- IC
- ▁บาท
- ▁KI
- ি
- ▁MIGHT
- AD
- AK
- ▁WENT
- ATE
- ่
- ▁HIMSELF
- ISH
- ▁AM
- ▁THROUGH
- Q
- ▁VA
- Ó
- UL
- VA
- GO
- YO
- ৰ
- À
- ▁HAND
- ี
- LING
- LESS
- Ò
- ▁VI
- ▁FA
- ▁FAR
- ▁F
- ▁PER
- ANT
- QA
- NY
- ్
- CHA
- ู
- AS
- ▁EVERY
- ▁HERE
- ▁HEAD
- TER
- ▁GET
- Î
- RED
- ▁W
- ▁N
- ▁LIFE
- ▁หนึ่ง
- Т
- YE
- ▁TOO
- DER
- MU
- ▁THOSE
- ▁CA
- ▁WITHOUT
- ▁EYES
- ▁OFF
- ▁MOST
- ▁AWAY
- ส
- IS
- Ო
- М
- Е
- ▁MANY
- র
- GU
- OK
- ▁NEW
- ▁SAW
- ANCE
- ▁ห
- UM
- ন
- WI
- ▁YOUNG
- ো
- ி
- ل
- ▁STILL
- ▁RU
- ▁BEING
- ু
- ▁UNDER
- ▁NIGHT
- ر
- ▁RIGHT
- ম
- ▁LAST
- У
- ▁PART
- ▁TAKE
- ▁FACE
- ▁TELL
- HI
- ▁PLACE
- ▁PUT
- ▁FOUND
- J
- Მ
- ▁YET
- ▁WHILE
- ▁PEOPLE
- Х
- ▁NOTHING
- AY
- ▁GRA
- OM
- ্
- ▁FI
- ▁WORK
- MB
- ▁ต
- ▁JU
- ▁THREE
- FF
- ARD
- SU
- ா
- ▁LOVE
- ห
- ▁THOUGH
- ې
- ▁PRE
- ZE
- ▁MAR
- ই
- CU
- '8'
- TON
- ▁FE
- ▁ROOM
- ▁PU
- HU
- TOR
- ▁TRA
- จ
- DU
- TIC
- ▁SP
- ▁SAME
- JE
- ం
- ا
- ป
- ▁ASKED
- VO
- ▁LOOK
- 的
- PER
- ▁EVER
- ▁YE
- ENCE
- ▁HEART
- ▁ก
- Ş
- ต
- າ
- SON
- ▁LEFT
- ้
- ▁FATHER
- ▁ANOTHER
- ▁GOT
- ▁LET
- ▁ส
- ▁TH
- ▁ขวด
- ▁CAR
- ุ
- NDA
- ZA
- ▁SHALL
- อง
- ▁BAR
- ▁V
- ▁ONCE
- MBA
- ท
- ▁ALWAYS
- JU
- RAN
- ▁ห่อ
- IST
- ▁WHY
- ی
- ิ
- ▁SEEMED
- ID
- Ы
- CHE
- ▁TOOK
- ▁JO
- ▁GIVE
- ນ
- ▁ค
- ▁MOMENT
- ▁BECAUSE
- ค
- SO
- ▁โ
- IA
- ▁DOOR
- ▁MIND
- LAN
- ▁HOME
- OS
- ▁END
- Э
- ▁TEN
- ▁สอง
- ▁TOLD
- ือ
- LES
- NDE
- ▁SHA
- LD
- Ვ
- ▁H
- ONG
- ▁LOOKED
- IVE
- ▁HEARD
- EK
- ▁SOON
- ▁MOTHER
- NING
- ▁SOMETHING
- ้า
- ▁APP
- ่า
- ን
- ত
- ▁LIGHT
- IAN
- ▁THINGS
- ▁QUE
- AU
- ▁KING
- IES
- ▁WANT
- ▁อาดอ
- ▁CH
- Თ
- ా
- ▁IMP
- ▁HIGH
- ▁THING
- ▁CHA
- TEN
- ▁SIDE
- ▁GOING
- ▁NAME
- ▁PAR
- ZI
- Ì
- ▁FIND
- Რ
- ດ
- ราะ
- ▁WORLD
- ன்
- ▁PRI
- И
- RT
- FI
- ກ
- ▁COMP
- ISE
- І
- ▁RI
- Ш
- ▁그
- م
- ▁YU
- ี่
- Ọ̀
- ▁EE
- ATED
- JO
- UD
- మ
- ▁REST
- ซ
- ు
- ▁COM
- GAN
- ▁CHE
- ▁CARE
- HAN
- ▁WISH
- ే
- ່
- ት
- FER
- LAND
- ▁WATER
- JI
- ▁YES
- FA
- ▁KIND
- ▁SHOW
- ▁BETTER
- แ
- ▁FO
- ▁LU
- ▁WAR
- WAN
- TCH
- س
- ▁SIR
- ▁KNEW
- ▁จ
- ▁WOMAN
- ▁ร
- ▁HARD
- BLE
- ▁আ
- ▁EACH
- KIN
- ▁ป
- AC
- ▁ANG
- IK
- FU
- ▁AGAINST
- ▁HAVING
- ອ
- ▁FEW
- Ს
- ▁GEN
- ▁BEGAN
- AI
- ▁FOUR
- ້
- АЙ
- ▁SING
- ለ
- ▁YEARS
- ▁AL
- త
- Й
- ▁ENOUGH
- ▁SET
- ক
- LAR
- АН
- WARD
- ▁PRESENT
- ▁OPEN
- ARY
- З
- ▁VOICE
- STER
- IONS
- Უ
- ▁MIN
- ▁WHITE
- ROW
- ল
- ▁EHHE
- IE
- ▁WHOLE
- ▁YO
- ▁አ
- INE
- ব
- ▁NOR
- ▁BELIEVE
- GIN
- 你
- ▁GIRL
- Ж
- OT
- ▁J
- ▁แ
- TRA
- ▁SUN
- Ẹ́
- ▁HUNDRED
- 이
- RON
- ▁DONE
- TURE
- กุ
- ฮ
- ▁BRA
- ▁CALLED
- ▁HOPE
- ▁AH
- ▁MORNING
- క
- NDI
- ▁NEAR
- BB
- 가
- ▁STE
- ▁TAI
- ข
- ▁STA
- ▁Ì
- ▁WALK
- ▁EL
- ▁SEEN
- ▁BETWEEN
- VED
- ▁นิ
- ▁บ
- ▁FORM
- THER
- ▁TRI
- ▁CLOSE
- ም
- ANA
- ▁STATE
- VING
- ▁FELT
- ▁CHI
- ል
- YI
- WO
- IP
- Ლ
- ▁อ
- ▁POWER
- TAN
- 고
- ம்
- WU
- ЭЭ
- 어
- ДА
- ▁มา
- ▁GU
- IF
- ▁HERSELF
- FOR
- ’
- ▁HALF
- বা
- ▁TOWARD
- Ọ́
- ▁BOTH
- ▁হ
- ▁POINT
- ▁À
- ▁AMONG
- ▁DOES
- ວ
- ▁HOWEVER
- ▁ALSO
- NYA
- ▁TURNED
- KUNA
- ▁POOR
- IGN
- ▁COURSE
- ▁JI
- ▁PERHAPS
- ▁NG
- ▁GE
- HAM
- ▁কি
- ห์
- ▁ORDER
- ▁SEA
- ▁REPLIED
- ▁QUITE
- ▁OL
- ▁MATTER
- ▁MYSELF
- TIVE
- ு
- ▁SURE
- 一
- ▁SPEAK
- Ө
- Ọ
- ITE
- RING
- LT
- ▁AR
- IOUS
- ▁CRE
- ํา
- ல
- KING
- CY
- TSI
- ช
- ▁ক
- URE
- ▁SMALL
- ▁GOD
- ▁สาม
- ▁น
- ▁EST
- ▁PERSON
- ▁GAVE
- వ
- ▁KEEP
- ▁CU
- ▁ALMOST
- Ú
- NCE
- TIN
- EM
- MEN
- ▁บู
- ▁PEN
- COR
- Ṣ
- OD
- ▁MER
- IGHT
- Ნ
- ▁SON
- FT
- ▁ท
- 다
- ▁PLAY
- ▁Х
- ▁TER
- VAL
- ت
- MER
- ▁WHOM
- ▁NEED
- TUR
- ▁WHI
- ์
- QUE
- ๊
- BY
- ▁DEAR
- ບ
- NGO
- ▁DES
- ງ
- OP
- CHI
- ▁POR
- RIES
- 不
- ▁TOGETHER
- ▁UNTIL
- DAY
- Ç
- ው
- ຫ
- ▁HANDS
- ▁SINCE
- ▁MON
- BER
- Š
- ▁LAND
- ي
- ▁CHILD
- ▁FEET
- FULLY
- ▁অঁ
- ▁NEXT
- ▁ANYTHING
- ▁WO
- 我
- ▁สิบ
- ▁BEST
- ▁GENERAL
- ▁FIVE
- ▁WORDS
- ▁DAN
- 는
- ▁BLACK
- АР
- ื่น
- ▁FACT
- ▁SAT
- Დ
- ▁BOY
- డ
- ▁LAY
- Ხ
- ▁MEAN
- PP
- ▁BROUGHT
- LAI
- ▁ALONG
- AGE
- ▁STOOD
- ▁WOOD
- ய
- Ü
- RIN
- CENT
- ▁IYA
- 지
- ▁LEAVE
- AKE
- ▁FRIEND
- В
- ▁SEN
- ▁LARGE
- ▁DAYS
- ▁SUB
- ▁SAN
- อินทรีย์
- ▁نه
- ▁HORSE
- ▁HELP
- মা
- ▁HEAR
- ▁CONSIDER
- IYA
- ▁RUN
- ▁AIR
- ▁CHAR
- ▁CALL
- ▁LIVE
- ▁ROUND
- ARI
- PU
- ▁READ
- ▁FULL
- ▁SIX
- ▁USE
- DY
- ▁DAR
- ▁JOHN
- ๋
- ▁OU
- THI
- দ
- ▁MONEY
- RIC
- কে
- ▁CHO
- ANE
- ▁หม
- МА
- NCH
- ▁TWENTY
- ே
- PUN
- UP
- ▁FIRE
- ▁MASTER
- ▁NATURE
- นา
- RESS
- ມ
- య
- স
- TRI
- ▁LESS
- ট
- লে
- ▁SENT
- ENG
- WAY
- ▁BEN
- ▁LAW
- ▁LAN
- ▁LOOKING
- Ẹ̀
- ▁GUA
- ATIONS
- ▁CE
- ▁RATHER
- ▁FEAR
- ▁WORD
- ▁GLO
- Ẹ
- ▁SHORT
- ▁VO
- ▁FAIR
- RS
- UK
- ▁LANG
- ▁MIS
- LAM
- ▁FL
- LER
- LLA
- ▁IDEA
- ▁স
- АА
- พ
- ரு
- ▁CASE
- ▁ᲓᲐ
- VES
- ▁COUNTRY
- ▁INDEED
- Გ
- ▁PASSED
- Ė
- PELA
- ▁INTEREST
- ALLY
- ▁QU
- ▁PAS
- ▁SOUND
- เล
- ▁ບໍ່
- ስ
- TTER
- ن
- ▁SÍ
- NU
- い
- IZ
- ▁FALL
- ▁PLAN
- ITIES
- ん
- ▁CRIED
- ▁CAP
- ▁COUNT
- ▁INTER
- ▁QUESTION
- ่ง
- RANG
- ▁GROW
- ర
- ছে
- 에
- ▁UNCLE
- ▁พ
- LLOW
- ▁TAKEN
- ▁REAL
- ▁Б
- PING
- ▁LADY
- ்
- ICAL
- NED
- টা
- ▁ไ
- ▁க
- ▁GONE
- ลา
- LIN
- ▁ACT
- ▁THOUSAND
- 하
- GRA
- ▁OTHERS
- MPA
- ▁เจ้า
- ▁REASON
- ▁DOCTOR
- LANG
- ▁না
- ▁AROUND
- ▁CLEAR
- ▁ব
- IANN
- SHED
- ▁CERTAIN
- Ù
- ▁SH
- ILY
- ▁WHOSE
- ▁ANSWERED
- ▁我
- খ
- ▁THEMSELVES
- ▁ᲰᲝ
- ▁DEATH
- ▁RAN
- ▁TRUE
- ▁ነው
- ▁WINDOW
- ▁WIFE
- Ž
- ▁BEHIND
- 有
- ▁CHILDREN
- UG
- ▁ᲠᲐ
- ▁BROTHER
- ▁NGI
- UC
- ▁REALLY
- ▁ЮМ
- ▁TEA
- ب
- প
- ▁PRA
- ▁啊
- ▁STEP
- TES
- নে
- ▁GROUND
- ิน
- ▁TILL
- RAY
- MENTS
- DURING
- ZO
- CUR
- ▁WOMEN
- ▁APA
- ▁OFTEN
- ▁PAN
- CHO
- ড
- FIN
- ▁ร้อย
- ▁DOUBT
- ▁TALK
- INA
- ▁LETTER
- ▁KAY
- ▁د
- ▁RED
- YAN
- ▁ล
- ▁ARM
- ▁SIGN
- ▁EAR
- AW
- ີ
- ▁ALREADY
- ▁KAN
- AUGHT
- বে
- ▁WONDER
- ▁PUR
- ▁م
- ▁А
- HON
- ▁ORA
- ▁FOOT
- ▁BOOK
- HAR
- ▁FELL
- ▁WATCH
- তে
- ▁HOLD
- 是
- ▁STREET
- ▁GRE
- ▁NÍ
- ▁LEG
- ▁KON
- ▁FLA
- ▁สุ
- ▁প
- ก้า
- న
- ▁THUS
- ▁FINE
- PHE
- ▁БАЙНА
- ▁BECAME
- ▁MANNER
- LEY
- DEN
- TERN
- ▁SHI
- ▁SIGHT
- ▁LORD
- ▁PARA
- ดา
- ▁TOWN
- SIDE
- ▁น้ํา
- GUE
- ▁BODY
- IH
- ▁DIDN
- ▁FEELING
- ▁KWA
- ▁WON
- ▁VE
- ో
- ▁ตาก
- ▁STRONG
- ▁CANNOT
- WIN
- ▁RETURNED
- ▁ЗА
- ▁PAIN
- ▁PAT
- লা
- ▁EIGHT
- ▁ALONE
- ▁BED
- స
- ถ
- ARA
- ALI
- ▁EVERYTHING
- FE
- NDO
- ▁BIG
- ▁แม่
- ▁ILL
- க்க
- PR
- ▁COMING
- ▁HAT
- Ờ
- ▁GIVEN
- ▁SECOND
- ต๋
- ДЕ
- KEN
- خ
- በ
- ÑA
- MBI
- ▁EZ
- Ą
- ▁ABOVE
- চ
- ద
- ச
- Ğ
- ▁REMEMBER
- ้อ
- TANG
- ▁DEAD
- ▁OB
- ▁你
- ▁MEET
- กร
- ▁ک
- ▁LINE
- ▁BEAUTIFUL
- Ქ
- ▁EXPECT
- ▁SLEEP
- ▁SEVEN
- LAH
- PAN
- GEN
- ▁DARK
- ▁CI
- ె
- Қ
- ▁IMA
- ▁SUPPOSE
- П
- ▁EVENING
- ▁EYE
- UH
- PAS
- Ც
- ▁BER
- ▁CITY
- ▁FELLOW
- ▁HELD
- ▁CAUSE
- ▁HUMAN
- ▁POU
- IG
- ▁PH
- த
- DRA
- ነ
- చ
- ያ
- ▁MET
- ▁ROSE
- ▁ART
- ▁FEEL
- SAN
- ▁AC
- ▁TURN
- ▁FREE
- ▁তো
- নি
- শ
- ▁SCHOOL
- ▁SOMETIMES
- ▁ப
- ంట
- ▁HOUR
- อย
- ற
- ▁PIN
- ▁OO
- ▁FORCE
- ▁YEAR
- ▁CUR
- ▁SISTER
- ک
- ▁UM
- ▁UNDERSTAND
- ▁DREAM
- IYOR
- ▁DEEP
- ▁SAYS
- ▁HAIR
- ▁DRAW
- ▁STRANGE
- জ
- ▁LEAST
- ▁KEPT
- ▁SPOKE
- ▁PASS
- ይ
- ▁হ্যাঁ
- ▁OBSERV
- จํา
- ่ะ
- บา
- ী
- ▁กล้วย
- ▁CAPTAIN
- ▁Q
- Კ
- ▁DRESS
- ᲕᲘ
- ▁SUR
- ர
- ▁FISH
- ▁BAD
- ▁FAMILY
- PPED
- ▁BIR
- হ
- ᲘᲡ
- ▁WALL
- ▁BEAR
- ASI
- นี้
- ▁BECOME
- ▁LEARN
- ▁ও
- ᲛᲐ
- ம
- ▁OBJECT
- ▁ÀWỌ
- ▁MM
- ้าย
- ▁UNA
- ር
- ▁তা
- ▁TABLE
- ▁OG
- ▁SORT
- SHIP
- ▁WHETHER
- ▁MAKING
- ش
- ▁PLEASE
- ப்ப
- ▁MAG
- ▁NUMBER
- ▁BON
- 도
- RUS
- CAL
- స్
- ▁ES
- ړ
- 在
- ▁GREEN
- Я
- ና
- ▁DIFFERENT
- ▁MOUNTAIN
- ▁EARTH
- ULA
- ▁OFFICE
- KHI
- ▁ANSWER
- ▁WIND
- ▁LAUGH
- ህ
- ▁ซาว
- ▁EITHER
- ▁FRIENDS
- ▁YANG
- ங்க
- ▁SUDDENLY
- ᲐᲠ
- ГҮЙ
- お
- ▁PAY
- ▁BRING
- ▁WITHIN
- ▁RETURN
- য়
- ▁VISIT
- ▁EH
- ன
- ▁TR
- ▁CHURCH
- ▁ตรา
- ▁BESIDE
- ▁BAL
- ิง
- ▁RING
- ▁PRINCE
- ▁SPIRIT
- ▁ITSELF
- ▁THOU
- ▁STORY
- ▁PAST
- ▁NGE
- PORT
- க்கு
- வ
- MBO
- ▁LOW
- গ
- 아
- ీ
- ▁DAUGHTER
- ▁வ
- IBLE
- ▁SY
- LIK
- ฟาร
- ▁SEVERAL
- ์ม
- TEL
- ▁ELSE
- ▁LOST
- ▁เกือ
- ▁AKO
- ▁ROAD
- ▁FUN
- ລ
- ▁SAM
- ▁APPEARED
- ▁HILL
- ▁NÓ
- ▁HAPPY
- ▁CHU
- TIM
- ▁POSSIBLE
- ▁REC
- หม
- ▁БОЛ
- ▁USED
- 라
- ▁SNOW
- ▁CUT
- ▁RIVER
- ▁ASK
- ▁ښه
- ▁GLAD
- د
- ▁WEEK
- DAN
- ▁Ა
- IZED
- ▁ห้า
- ▁UH
- ▁ANO
- ▁สี่
- ▁STAR
- ▁SCR
- ໂ
- の
- ▁PLA
- AWA
- BAN
- ▁COLD
- ▁STAND
- ັ
- ▁SUBJECT
- ▁او
- ▁WAIT
- ▁CONTINUED
- ▁FLOW
- GON
- ຮ
- ▁TROUBLE
- ▁아
- ని
- ▁CHANCE
- VIN
- ӨӨ
- Ñ
- ደ
- ர்
- மா
- KEUN
- ▁TAN
- รี
- YU
- ঁ
- Ả
- ▁BLUE
- ▁JOY
- ▁LISTEN
- ▁DESIRE
- য়ে
- ᲐᲜ
- RÍ
- ▁LATE
- ▁ใน
- ▁REACHED
- ▁KNOWN
- ▁SKI
- อบ
- TRO
- ெ
- ▁LÀ
- ▁দি
- ริ
- ▁LEAD
- AG
- ও
- โ
- ▁SAVE
- ▁AGE
- ▁MEANS
- ▁ته
- WN
- ▁QUI
- ▁KHÔNG
- ▁BUSINESS
- ▁FUR
- ▁FOLLOWED
- LLY
- Ч
- መ
- 서
- ▁COURT
- ▁PETER
- ▁TREE
- ▁SOUL
- ▁GRAND
- ▁IR
- ோ
- Ô
- ▁EIGHTEEN
- ▁THEREFORE
- ▁DANGER
- ຍ
- THOUGH
- ▁WILD
- LIGHT
- ▁NORTH
- ▁SAK
- あ
- ట
- ▁MARK
- ▁RICH
- '0'
- ▁เป็น
- ▁EXCEPT
- ▁GARDEN
- ДЫ
- ▁WANTED
- ▁ACROSS
- う
- ▁আছে
- ▁የ
- ▁TOUCH
- Ɔ
- ▁خو
- ▁PERFECT
- ULI
- ▁NYA
- ▁CERTAINLY
- WAR
- ▁LONGER
- ผ
- KHU
- ▁HUSBAND
- ▁OCCASION
- ▁BILL
- ▁SEEM
- ▁ENGLISH
- ላ
- ▁HẼE
- ▁거
- ІН
- ▁ARMS
- 을
- ▁CHAY
- ▁পা
- ▁PRINCESS
- ▁FRA
- IO
- ▁CHARACTER
- ▁DIFFICULT
- ▁OUGHT
- ▁SHIP
- HIN
- ▁ఆఁ
- ▁ໄປ
- MBER
- ไ
- ЭН
- து
- TIK
- ЫН
- ▁QUIET
- ENS
- INI
- IAL
- ▁COL
- ز
- ብ
- ▁ஆஹ்
- ▁ถุง
- Ტ
- ゃ
- ▁PRETTY
- ▁VIEW
- แก่
- ATIVE
- KHO
- ట్
- LÉ
- ▁Л
- ৈ
- ▁REMARK
- ▁SUFFER
- ▁ข้าวหอมมะลิ
- ยา
- ▁TIMES
- UX
- ▁SECRET
- ▁SWEET
- ▁OKE
- ▁SENSE
- ▁READY
- ▁DISCOVER
- ▁REGARD
- ▁CARRIED
- য়া
- ▁RID
- ดี
- ▁CHANGE
- К
- ▁ĐI
- ▁ເອີ
- ▁چې
- ▁COMMON
- RAL
- ▁ААА
- ▁SIN
- ▁К
- ▁EFFECT
- ▁போ
- ▁MÀ
- ▁INDIAN
- เพ
- ▁系
- ▁LIVED
- ▁LATER
- ▁PAPER
- ະ
- 게
- ▁MHM
- ▁TÍ
- GUA
- ▁จะ
- 爱
- ▁SAD
- ان
- WELL
- ▁PROF
- ▁BAK
- ▁MONTH
- ▁CAST
- বি
- BIL
- ▁ప
- ▁NATURAL
- ▁ADDED
- ▁EAT
- ▁TRU
- ▁NGO
- ▁NANG
- ▁FRONT
- ▁TSIT
- ▁HUNG
- ▁MANG
- ปลา
- ▁น้ําผึ้ง
- ▁RESPECT
- ▁SUCCESS
- ▁บ้าน
- ▁BROWN
- ANGAN
- ANI
- ▁TAKING
- RAC
- ABLY
- ศ
- ▁PUBLIC
- ▁SURPRISE
- ▁BREATH
- か
- ▁NH
- నా
- LUNG
- ▁PARTY
- KAR
- ▁THANK
- ▁나
- ຄ
- ▁กล่อง
- ▁STAY
- ▁ที่
- ▁TRANS
- ▁IKI
- ది
- ▁CAMP
- Ấ
- ▁แดมอ
- 기
- 은
- ▁TRUTH
- OUGH
- ▁SOUTH
- ▁TRIED
- ▁START
- BAR
- DHI
- ▁PARTICULAR
- ▁PURPOSE
- ▁SAL
- ▁EQUAL
- లే
- ▁ป้อ
- ▁FRO
- ▁GAL
- PHI
- Შ
- ▁PREPAR
- ▁ANA
- ▁WARM
- ▁DIRECT
- ้ง
- ▁ENTERED
- ENED
- TTLE
- ▁ИӘ
- ተ
- 니
- ▁እንደ
- ▁USUAL
- ▁STONE
- யா
- ▁BANK
- ▁RECEIVED
- ▁FORWARD
- ▁AMA
- ▁CÓ
- ້າ
- ▁TREES
- ▁GUN
- ▁CRY
- ▁SUGGEST
- ▁แปด
- ▁FIGURE
- ▁COMFORT
- ▁PLAIN
- ந்த
- ள
- RATION
- ▁SOFT
- ▁THY
- ▁ENTER
- HOT
- ISM
- ▁HAYI
- ÚN
- ▁PAPA
- BILITY
- EVER
- ▁গ
- னு
- ▁WALA
- 就
- ▁BLOOD
- ▁POSITION
- டி
- ▁SAFE
- ILE
- UB
- IAU
- ▁GIRLS
- ▁এই
- พัน
- ▁BEYOND
- ▁COMMAND
- ▁PROMISE
- ᲕᲔ
- ▁LIVING
- ▁MANA
- ▁HOURS
- ЫП
- 리
- బ
- ▁ТИЙМ
- た
- ▁ENGLAND
- ▁Ọ
- ▁BAI
- ▁কর
- ▁DELIGHT
- గ
- ை
- ວ່າ
- ໃ
- ▁PANG
- ▁Ừ
- Ų
- に
- WEN
- ▁이
- ▁ААН
- TUK
- NCHIK
- ▁AGO
- ▁MAIN
- ▁BELL
- ▁ᲒᲐ
- ▁SER
- ▁OPENED
- ሽ
- ▁አዎ
- ▁БА
- ட
- ▁DOING
- HANG
- ้อง
- ▁TREAT
- ▁SANE
- ᲚᲘ
- ▁PALE
- ▁پ
- 了
- ▁EXPERIENCE
- ▁CLASS
- ప
- FO
- ▁বল
- PUT
- ▁SIT
- ▁SERVICE
- ▁ENJOY
- ▁CHIEF
- ▁เคย
- ÀN
- ▁FER
- ▁AGREE
- ▁SAYING
- ▁ఆ
- ▁REMAIN
- ▁KANG
- డు
- ▁FOREST
- ▁ข้าว
- HING
- ▁PLEASURE
- な
- ▁WORTH
- ▁COMPANION
- ▁FAST
- ▁CARRY
- ▁MAL
- HOOD
- ▁MILES
- ก่อ
- ▁STRUCK
- ▁یې
- ▁HELLO
- ▁FIGHT
- ▁DEAL
- ▁SEAT
- 都
- ▁BEAUTY
- ▁CROSS
- ▁SILENCE
- ▁INSTANT
- க
- ▁NDE
- ட்ட
- তো
- HOO
- ▁อุ้ย
- Û
- ▁ข้าวกล้อง
- ИН
- ตร
- 去
- 好
- ▁CROWD
- ▁نو
- THE
- ▁BOYS
- ▁BAY
- RAH
- ▁NATION
- ▁SAIL
- রে
- CHU
- ▁BAN
- ທ
- ▁รับ
- し
- ర్
- ار
- KIM
- ▁ยา
- IFIED
- ▁QUA
- ▁ໄດ້
- Į
- ▁BRIGHT
- ▁НЬ
- ▁ZA
- ▁ᲐᲠ
- ▁MINUTES
- ຊ
- ▁SAKA
- ▁FINGER
- ᲓᲐ
- ▁IHE
- ▁SANG
- ▁ACCOUNT
- ▁SERVANT
- ▁YOURSELF
- ▁ຢູ່
- ረ
- ▁ກະ
- ▁کو
- ▁УУ
- САН
- ▁STANDING
- ▁ABLE
- ▁ROCK
- ▁NEARLY
- КЕ
- ▁BIT
- ะห์
- ீ
- ሰ
- て
- 到
- ▁নাই
- ▁THROW
- ҚА
- ▁কা
- 면
- ▁ATTENTION
- ▁CONDITION
- ▁MOUTH
- ▁TRAVEL
- ▁را
- ▁که
- ▁FAT
- ▁NON
- ตี
- ▁SMILE
- ▁YOUTH
- য
- เมียน
- ▁PICTURE
- ▁FURTHER
- ▁BOAT
- ▁NAA
- ▁VEN
- ▁ТА
- ட்டு
- ▁APPROACH
- ▁ຕ
- ▁EARLY
- ▁HAPPENED
- EG
- จังหวัด
- ሁ
- 人
- ▁ปุก
- ื
- ▁IMMEDIATELY
- ▁FLU
- ఎ
- ▁DRIVE
- LOG
- ▁GREW
- NTEN
- ማ
- ▁OPINION
- ▁COMPANY
- ▁PRAY
- GGED
- ▁YON
- ▁BOW
- ▁FORTH
- ▁EAST
- ▁НЭГ
- ま
- ▁NEITHER
- ▁MMHM
- ▁ສ
- ติ
- Ბ
- หย
- ▁THOR
- ▁NINE
- ▁ROLL
- ▁NONE
- ▁ACCEPT
- ค่ะ
- ▁GOLD
- ▁CHAIR
- ▁SEEMS
- ▁FOLLOW
- RIP
- ษ
- ټ
- ▁FLOOR
- ▁GLANCE
- DDING
- ను
- KONG
- ▁ขอ
- ้ว
- Ã
- ▁RỒI
- ▁THIRTY
- ▁THIRD
- SCRIBE
- ▁WIDE
- ▁GATHER
- ▁ÇI
- ▁THICK
- แต
- ▁TAIL
- গে
- ໍ
- ▁AFFAIR
- 요
- 해
- ▁FRESH
- ▁HEAVEN
- ได้
- ▁BEAT
- না
- ▁STOP
- ▁MAMA
- TAQ
- ▁MINE
- ల
- ▁MUSIC
- ▁LENGTH
- ▁НЕ
- ▁COMPLETE
- ▁GRACE
- ▁HABIT
- CKET
- ТЫ
- ৰি
- ОО
- こ
- ▁SHOUT
- ▁STOPPED
- ▁FILLED
- ັນ
- ▁GUARD
- ▁TRO
- HOR
- ▁QUEEN
- ら
- ኝ
- ▁AFRAID
- わ
- ▁CLOUD
- ▁دی
- కు
- ▁UTA
- ິ
- ூ
- ▁EVIDENT
- き
- ▁CREATURE
- ▁WOUND
- ▁STARTED
- ▁HUNT
- ▁UTTER
- 나
- ته
- Ế
- ▁DOG
- วัน
- ▁FIFTY
- ▁ไป
- ▁SAINT
- ZZ
- ▁ANNE
- ▁FIT
- ▁MOON
- న్న
- ฆ
- 个
- ๊ะ
- ▁যা
- ▁CONTINU
- СА
- ▁PRESS
- ራ
- と
- く
- ▁SELF
- ▁PIECE
- ▁OKAY
- ▁MAH
- ▁VER
- ▁KORO
- ▁HALL
- MBE
- ▁SN
- ▁LIE
- ▁STAP
- 过
- غ
- ▁EXCLAIMED
- ▁ЮУ
- ▁ATTEMPT
- 心
- ▁PROCEED
- ▁GUESS
- ▁YEN
- ګ
- ▁GOVERNMENT
- ▁REPEAT
- తా
- ▁BIRD
- ▁พื
- ▁EXPRESSION
- ІҢ
- ግ
- 唔
- ▁INSTEAD
- ▁BREAK
- ▁SILENT
- ▁APPEARANCE
- దు
- ▁SPRING
- ▁WONDERFUL
- ພ
- Ạ
- ▁EXPLAIN
- ▁RESULT
- ▁ANIMAL
- ▁БИ
- LẸ̀
- TSIN
- ▁BORN
- ▁GRAVE
- หา
- ▁MASS
- ▁ТЭГЭЭД
- ▁แสน
- 想
- ▁ESCAPE
- ▁هو
- రా
- ▁SITTING
- ▁LOVED
- ครับ
- ▁நா
- ▁OUTSIDE
- ▁হয়
- ຈ
- ยัง
- ຂ
- ฟักข้าว
- ▁ขนม
- ▁เข่า
- ▁MOVED
- ▁WEST
- ▁GEL
- BANG
- ▁TRY
- ች
- ከ
- ▁IMPOSSIBLE
- り
- ▁CORNER
- ▁LONDON
- ▁DEMAND
- ▁WHATEVER
- NGGO
- লি
- 한
- 天
- ▁COVERED
- ▁ДЭЭ
- CLOCK
- ▁TEARS
- ▁ERÊ
- ▁MAKA
- ▁JANE
- ▁JOIN
- RENG
- ంది
- ும்
- ேன்
- ▁เม็ด
- ▁DETERMIN
- ▁MADAME
- ▁PROPERTY
- ▁WRITE
- ▁HALO
- ▁SUIT
- PANG
- ▁PATH
- ▁EXPRESS
- ▁BROKEN
- TSO
- ▁এক
- ▁MEASURE
- ▁ATTEND
- ▁TALKING
- ▁XWE
- ອງ
- లు
- ▁POCKET
- แก
- ᲠᲔ
- ТА
- ▁BAND
- ▁APPEAR
- ▁POSSESS
- ▁PERSONAL
- Ц
- ▁هغه
- МЕН
- ▁WINTER
- ▁SCARCE
- ▁FÈ
- ▁HAPPEN
- ▁እን
- ገ
- ▁ACCORDING
- ▁CIRCUMSTANCE
- ▁ปิ๋น
- ▁FRENCH
- ▁CÁI
- ▁ATTACK
- ▁SHARP
- ▁ROMAN
- ছিল
- BORU
- DUL
- ▁MWEN
- ▁LAUGHED
- ▁ЖА
- ▁REMAINED
- ▁SERVE
- え
- も
- Ń
- ▁กระป๋อง
- 마
- ▁VILLAGE
- ち
- ▁AFTERNOON
- ▁БАЙГАА
- ▁VALLEY
- ▁MARRIED
- ▁SHORE
- ▁POPUL
- ▁FORGET
- రు
- ▁FOOD
- ▁THÌ
- ▁QUICK
- ▁LAID
- บัญชี
- Ề
- ▁EFFORT
- ▁HAPPINESS
- ▁MAJOR
- ▁DISTANCE
- ▁FRANK
- ▁هم
- ▁STORM
- ▁PERCEIV
- ▁BOUND
- ▁PLACED
- ▁ARMY
- ลัด
- ድ
- ▁کښې
- ▁உம்ம்
- ▁ไม่
- ▁ISABEL
- ▁WRONG
- ▁BLOW
- ▁BELOW
- ▁BOX
- ▁БАР
- ▁TAR
- ▁RACE
- ال
- டு
- ภ
- ▁คุกกี้
- พิเศษ
- ▁PROBABLY
- 要
- ▁QUARTER
- ▁ADMIT
- ▁FAITH
- ▁GENTLEMAN
- ▁SKY
- వా
- ▁دې
- ปา
- GGER
- কা
- ▁YEAH
- ▁MARY
- ▁TÓ
- Ố
- ▁PLEASANT
- ▁SOCIETY
- ▁คัด
- హ
- さ
- ▁GROUP
- ▁STRAIGHT
- 着
- จาย
- การ
- ▁FORTUNE
- TSAI
- ข้าว
- ITUDE
- หอมมะลิ
- ▁STRENGTH
- ▁ມັນ
- Б
- ட்
- ▁ENTIRELY
- ▁NECESSARY
- ▁ҒОЙ
- 야
- 있
- ▁DINNER
- ▁DREW
- ANGA
- ▁MEANT
- కి
- ▁QUICKLY
- ᲔᲑᲘ
- ▁AMERICAN
- ண
- ▁SEND
- หนองคู
- ▁INFLUENCE
- ▁BEGINNING
- ▁ຊິ
- ▁CHAPTER
- ▁EASY
- ັກ
- ▁BROKE
- ▁TRAIN
- ▁REACH
- າຍ
- つ
- ধ
- 사
- ょ
- ▁SCENE
- ▁PULL
- ▁น้อง
- ▁GIVING
- তি
- ▁SLIGHT
- ▁COLOR
- ▁MEMBER
- HOOK
- Წ
- Ұ
- ▁PRODUCE
- ▁SILVER
- ▁PAUS
- ▁DIRECTION
- ▁WAITING
- กล้อง
- ไห้
- ▁AUTHOR
- ▁DREAD
- ▁HISTORY
- ▁SINGLE
- ▁BATTLE
- ▁SHUT
- ГЕ
- Ắ
- ▁CONVERSATION
- ▁ESPECIALLY
- ▁ນີ້
- 까
- ొ
- ▁EASILY
- ▁BREAD
- ▁PEACE
- ▁OBLIG
- ▁FLY
- ▁MORAL
- ▁ACTION
- ฟ
- ▁TERRIBLE
- ▁தான்
- ▁REQUIRE
- ▁به
- ▁ซอง
- లో
- ᲐᲡ
- నే
- ▁ده
- ▁АЛ
- ▁MILL
- ▁AWAKE
- ▁STRANGER
- ชาย
- ▁دا
- ▁HARM
- ААД
- ▁TURNING
- ▁TRYING
- 들
- ▁HEAVY
- 会
- ▁EAGER
- ▁አይ
- ▁GAME
- ▁MBAE
- ▁RUSH
- వు
- ▁LUCK
- กลุ่ม
- ▁จีพลัส
- ▁แห้ง
- ▁SIMPLY
- ▁SIMPLE
- ண்
- ▁BELONG
- ▁وا
- ▁CONTENT
- ▁БАЙ
- ▁KILLED
- ▁REPORT
- ▁KUR
- ▁SPAR
- ▁SICK
- ▁LOUD
- NGGAL
- ▁BAŞ
- ▁LAKE
- ▁JOURNEY
- ▁আৰু
- োৱা
- ▁ANXIOUS
- ▁BILONG
- ▁STICK
- له
- ▁LIPS
- ТЕ
- IOH
- ப
- ้ย
- ູ
- แม่บ้าน
- ▁วังภูหมอก
- జ
- ธ
- ▁DECIDED
- ▁PALACE
- ▁BURN
- ▁LAGI
- ▁NJE
- ▁MAID
- ▁MOVE
- รินทร์
- ታ
- ወ
- ▁ADDRESS
- ▁STREAM
- ▁EVIL
- ▁IMAGINE
- ▁SLOWLY
- ▁CHANGED
- னா
- ▁REPRESENT
- ▁যে
- ▁MENTION
- ▁ก็อด
- ▁FOLLOWING
- ▁CATCH
- ร้อง
- IDAK
- ▁MARRY
- ▁SUPER
- ▁CONCERN
- ▁SEARCH
- ▁FAVOR
- ▁TEMPER
- ▁ปลาร
- ▁HANDSOME
- ల్
- জা
- ▁แท
- LÚ
- ▁THIN
- ▁อา
- ▁PASSION
- ▁SHAPE
- ▁ຫັ້ນ
- 보
- ▁HÍNA
- ▁SUMMER
- ▁CIVIL
- ▁PRESENCE
- ▁SERIOUS
- ▁SHOP
- ▁SMILED
- ▁SPOT
- ▁MOTION
- KHUAN
- ▁AUNT
- ▁DUTY
- ▁หก
- รีบู
- Ệ
- ▁SUFFICIENT
- ▁СОЛ
- ▁আমি
- ▁SHADOW
- ▁BROAD
- ▁MISTAKE
- గా
- เค
- ᲨᲘ
- ▁ALLOWED
- ▁SHOT
- ᲓᲘ
- ▁GAIN
- ▁MINUTE
- রা
- ▁INDIVIDUAL
- ▁ARRIVED
- ▁MARRIAGE
- ▁COUSIN
- ▁SLAVE
- ▁ASSIST
- ▁อะ
- คร
- ▁UYA
- ▁WEAK
- วัด
- ▁TWELVE
- ▁DEPART
- ▁RAISED
- TSHU
- ▁TRUST
- ▁SUDDEN
- ▁CHRIST
- เบอร์
- ▁UNDERSTOOD
- ▁DEGREE
- で
- ▁HONOUR
- ▁GLASS
- Ң
- CARA
- ▁LOWER
- Ẽ
- ປ
- Ფ
- ▁CONSCIOUS
- ▁เจ็ด
- よ
- 내
- 안
- 得
- ▁NÁÀ
- ▁SUPPORT
- ▁NARROW
- ▁BATH
- ▁KILL
- KOH
- ▁SPENT
- ح
- ▁REFLECT
- ▁น่าม
- క్క
- ▁WELEH
- ▁FRANCE
- ▁CALM
- ื่อ
- ้ม
- ว่า
- กัด
- ▁INCREASE
- ▁FRI
- ▁HONOR
- ▁FIRM
- ▁GOLDEN
- ▁POST
- ỚI
- ▁LATTER
- ▁YONG
- ▁GRASS
- ▁PÉ
- BÛ
- 으
- ▁เกษตร
- ▁ŞEY
- লো
- ᲑᲐ
- ธนาคาร
- ▁ADVANTAGE
- ▁FASHION
- ▁SWORD
- 래
- ▁সেই
- ▁ENEMY
- ▁VARIOUS
- ▁NASIDA
- ▁SOCIAL
- ▁TASTE
- ▁ᲡᲐ
- ▁BITTER
- ▁MOVEMENT
- สุวรรณ
- ▁เติ้ล
- ▁அவ
- ▁ຫວາ
- 说
- ▁DEVELOP
- み
- ▁MURDER
- ▁LADIES
- ▁YORK
- ▁ARRANGE
- ▁YELLOW
- ▁PURSU
- HREW
- ไชยศิ
- Ū
- Ჩ
- ▁กระปุก
- ▁CONDUCT
- ▁STRETCH
- ▁PREVENT
- ▁VICTOR
- ▁SITUATION
- ▁FINALLY
- ▁মই
- ▁RELATION
- ອນ
- ▁ভাল
- ีผล
- ▁ห้าง
- ᲜᲐ
- ▁MARCH
- ▁TENDER
- ЕЙ
- ▁MILLION
- ున్నా
- Ĩ
- ▁DECLARED
- สมุนไพร
- ▁KNOWLEDGE
- ▁DROPPED
- ▁ມາ
- ▁PROPOS
- ▁RISE
- ▁RULE
- ▁กะ
- ▁INVIT
- Ь
- ږ
- ณ
- ▁ЖАТЫР
- উ
- บือละห์
- 네
- ▁CARRIAGE
- ▁GAYOD
- МЫН
- ல்
- ఏ
- ▁ปะ
- ای
- ▁POND
- หุ้นส่วน
- 시
- Ơ
- Ღ
- ▁EVERYBODY
- 일
- や
- 多
- ▁ລະ
- ▁LEAVING
- ▁UWIS
- ▁دي
- దా
- SCRIPT
- FOLD
- ্যা
- ూ
- ▁근데
- 那
- ▁COLLECT
- ▁ANCIENT
- ▁PRISONER
- ▁RAPID
- న్నా
- ▁په
- ▁DANCE
- ของดี
- เปอร์เซ็นต์
- ባ
- ▁ESTABLISH
- ▁என்ன
- ▁DISAPPEAR
- ▁JUDGE
- ▁FAINT
- 里
- ኔ
- 거
- 来
- 로
- 下
- ফ
- 能
- ญ
- ج
- ف
- 를
- Ở
- 上
- 오
- 자
- 只
- 没
- 么
- ዚ
- ቀ
- 为
- 구
- 时
- 这
- 었
- ع
- چ
- Ồ
- る
- 情
- 也
- ভ
- Õ
- ষ
- 만
- 인
- ສ
- ښ
- Ộ
- 啲
- 너
- 때
- 무
- 했
- 开
- 又
- ቃ
- ክ
- が
- ொ
- ై
- 自
- ኛ
- Ữ
- 哦
- 대
- 여
- は
- 边
- پ
- Ị
- 啦
- 知
- 수
- 远
- 地
- 还
- ひ
- 后
- め
- 再
- ሱ
- Ầ
- 같
- 无
- 可
- 려
- 생
- 제
- ຖ
- Ớ
- 如
- 주
- ሳ
- 见
- 话
- 되
- 走
- Ε
- っ
- 起
- 让
- 데
- Ჰ
- ຜ
- 像
- 样
- Ę
- ጋ
- ৱ
- ష
- 간
- ኮ
- ጣ
- す
- Ể
- 看
- 几
- 点
- ጥ
- 听
- Ზ
- ዳ
- ঐ
- ຸ
- ሉ
- 生
- 의
- ق
- ቤ
- ፈ
- 러
- 런
- 回
- ふ
- 以
- የ
- 정
- れ
- ຕ
- 道
- 嘛
- 而
- じ
- ໄ
- Ɛ
- Ủ
- ろ
- ど
- Ợ
- 出
- Ừ
- 感
- 원
- 말
- 세
- ね
- 却
- だ
- 年
- ዛ
- ډ
- ሺ
- 对
- 日
- 두
- ず
- 음
- 笑
- 系
- 소
- 风
- け
- ぴ
- 分
- 呢
- 든
- 모
- 慢
- 手
- 眼
- 相
- ሚ
- ঠ
- ণ
- 真
- ዋ
- 别
- 最
- 애
- ぎ
- ቸ
- 드
- 랑
- 울
- 차
- ぐ
- ^
- ஷ
- ሩ
- 左
- 할
- ፍ
- そ
- 头
- ጠ
- Ụ
- 嗯
- 산
- 운
- ঝ
- ካ
- 放
- 겠
- ং
- ሀ
- 谁
- 明
- 间
- 비
- 학
- び
- 우
- 카
- 定
- 己
- 늘
- 전
- 더
- ዝ
- Ỏ
- 많
- 离
- 개
- 星
- Č
- Ý
- 海
- 상
- Ჯ
- 달
- 미
- せ
- 然
- ص
- ஜ
- 之
- 觉
- 很
- 成
- ゆ
- ሄ
- ሪ
- ኩ
- 타
- 花
- 부
- ば
- 前
- 世
- 和
- 太
- 光
- 把
- 금
- 물
- 스
- 저
- 처
- 次
- 当
- 中
- ຶ
- 家
- 未
- 각
- 름
- 막
- 봐
- 신
- 白
- 노
- 已
- ዬ
- 언
- ழ
- 空
- 住
- 럼
- Ё
- 两
- 梦
- 做
- ط
- ሌ
- 咪
- 度
- 냥
- 던
- 동
- 란
- శ
- 温
- 落
- 经
- 给
- Ằ
- 月
- Ჭ
- ቱ
- 流
- 먹
- 望
- 等
- 大
- 小
- 变
- 动
- 讲
- 雨
- 날
- 알
- 약
- 장
- ご
- 美
- ຟ
- ቅ
- 发
- 面
- 길
- 바
- 히
- 失
- 子
- 色
- 걸
- Ổ
- 路
- ዐ
- む
- 同
- 꼬
- 봤
- 치
- 声
- 留
- 每
- 抱
- 带
- 快
- Ǹ
- ሥ
- Ỉ
- 信
- 先
- 老
- 难
- 건
- 디
- 반
- 파
- 方
- 曾
- 泪
- 晚
- አ
- 打
- 总
- 十
- ئ
- ۍ
- ቢ
- Ỗ
- 굴
- 르
- 응
- 期
- 他
- 所
- 言
- ቶ
- 拥
- 歌
- 伤
- 问
- 云
- 更
- ض
- 从
- 忘
- 올
- Ự
- 少
- 意
- 长
- 怕
- 界
- 身
- 乐
- 永
- 계
- ኑ
- 念
- 野
- 살
- ሮ
- 于
- 现
- 被
- ぼ
- ฤ
- ሻ
- ቻ
- Ặ
- 买
- 山
- 님
- 른
- 워
- ৌ
- 교
- 直
- ฉ
- 些
- 候
- 것
- 냐
- 밤
- 실
- 와
- 좀
- 유
- 喜
- 쿠
- 欢
- 水
- ዜ
- 电
- 遇
- 난
- 맞
- 배
- 속
- 않
- 진
- 짝
- 화
- ঙ
- ጊ
- 呀
- 哪
- 怎
- 위
- 중
- 算
- 微
- 依
- 青
- べ
- 清
- 返
- 매
- 별
- 솔
- 줄
- 랄
- 夜
- 才
- 完
- ሬ
- 但
- 即
- 忆
- 愿
- 문
- 방
- ሞ
- ቆ
- 钟
- 轻
- 暖
- 何
- 许
- ሎ
- ぽ
- 背
- 누
- 因
- 行
- 단
- 돼
- 명
- 엔
- 직
- 집
- 청
- 痛
- 深
- 春
- 实
- 终
- ఓ
- ቺ
- 본
- 빛
- 새
- 입
- ฝ
- 受
- 口
- 터
- ざ
- 그
- 安
- 근
- ໋
- 停
- 怀
- 车
- 쳐
- 트
- ሊ
- ሙ
- ሲ
- Ẫ
- 节
- 갑
- 갔
- 년
- 눈
- 린
- 분
- 柔
- 千
- 向
- ஸ
- 单
- 事
- ቼ
- ኳ
- 喺
- 待
- 食
- 강
- 레
- 예
- 절
- 죠
- 容
- 错
- 느
- 种
- 反
- 静
- 唱
- 火
- 近
- ژ
- 苦
- 회
- 루
- 버
- 불
- 왔
- 甜
- 飞
- 满
- Ũ
- ጀ
- 此
- ໊
- ጂ
- ፊ
- 够
- 热
- 께
- 록
- 몇
- 적
- 져
- 쫌
- 행
- 刻
- 牵
- 音
- 往
- 双
- 法
- ణ
- ሠ
- ኪ
- ጉ
- ጭ
- 用
- 结
- 며
- 영
- 외
- 조
- ':'
- ̣
- ሜ
- ቂ
- ぬ
- 겨
- 귀
- ధ
- ぞ
- 儿
- 哭
- 越
- ̀
- 跟
- 假
- 叫
- 阳
- ጎ
- 作
- 原
- 思
- 竟
- 답
- 偷
- 它
- 气
- 沉
- 理
- 细
- 转
- 重
- 높
- 밖
- 선
- 식
- 씩
- 연
- 잎
- 종
- 피
- 久
- 入
- 外
- 脸
- 靠
- 城
- 醒
- 找
- 早
- 写
- 偶
- 友
- 该
- 渐
- 곱
- Ф
- ຝ
- ፋ
- Ử
- 吧
- 告
- 긴
- 재
- 프
- 녀
- 성
- 테
- 三
- 装
- 夏
- ほ
- 角
- 寻
- 睡
- ஐ
- ኋ
- 과
- 求
- 玩
- 香
- 곰
- 머
- 빨
- 쪽
- 코
- 콩
- 亲
- 今
- 女
- 紧
- 온
- 호
- 默
- 机
- 勇
- 请
- 首
- 关
- 掉
- 全
- 岁
- 活
- 감
- 공
- 六
- 蓝
- ዴ
- ጅ
- ጆ
- ጤ
- Ỡ
- 使
- 包
- 啱
- 嚟
- 她
- 画
- 钱
- 雪
- 국
- 글
- 쁜
- 업
- 息
- 随
- 与
- 底
- 烟
- 滴
- ぜ
- 步
- Ю
- 比
- భ
- げ
- 学
- 将
- 希
- 正
- 闲
- ሷ
- ቡ
- ዕ
- Ễ
- 味
- 尽
- 整
- 条
- 解
- 进
- 슬
- 용
- 체
- 케
- ጃ
- 场
- 웃
- 似
- 红
- 计
- 疯
- 语
- 엎
- 万
- 必
- 敢
- 旧
- 秋
- 街
- 切
- 排
- 遥
- 담
- ኸ
- ዱ
- 力
- 秒
- 결
- 된
- ฬ
- ቁ
- ኖ
- 五
- 埋
- 平
- 懂
- 旁
- 漫
- 飘
- 렸
- 받
- 쉬
- 슨
- 양
- 철
- 침
- ጫ
- 습
- 片
- 绝
- 四
- 害
- 守
- 约
- 书
- 傻
- 北
- 否
- 酒
- 季
- 残
- 照
- آ
- 累
- 꾸
- 발
- 천
- ሸ
- ኞ
- ዙ
- ፌ
- 乌
- 吗
- 始
- 市
- 应
- 恨
- 独
- 线
- 诺
- 透
- 격
- 경
- 따
- 맛
- 몸
- 썰
- 였
- 질
- 크
- 후
- 工
- 迷
- 토
- 丽
- 影
- 句
- 恋
- 需
- 黑
- 散
- 奔
- 啊
- 们
- 张
- 目
- 亮
- 忍
- 群
- 鱼
- 强
- 挂
- 넘
- ث
- ሂ
- ሴ
- 倾
- 管
- 荡
- 갈
- 십
- 엉
- 커
- ৎ
- 另
- 晴
- 穿
- 若
- 谎
- 闹
- 阵
- 목
- 월
- 편
- ৃ
- 公
- 处
- 干
- 合
- 坐
- 怪
- 易
- 站
- 认
- 狂
- 至
- 体
- 提
- 笔
- 收
- 阴
- 追
- 高
- ぱ
- 二
- 断
- 球
- 耳
- 诗
- 遍
- 配
- 预
- 복
- 짜
- ظ
- ఖ
- ኒ
- 习
- 冷
- 特
- 졸
- ሔ
- Ჟ
- 任
- 休
- 便
- 哩
- 字
- 报
- 改
- 灵
- 烧
- 神
- 纸
- 联
- 部
- 롱
- 색
- 씨
- 추
- 悲
- 浪
- 肯
- 西
- 东
- 初
- 半
- 局
- 脑
- 距
- 缘
- 聊
- 非
- 承
- ዎ
- 灯
- 彩
- 惜
- 接
- 交
- 保
- 孤
- 运
- 代
- 圈
- 憾
- 差
- 纯
- 连
- 逃
- 九
- 其
- 南
- 号
- 江
- 演
- 톡
- 혼
- ఐ
- ዶ
- ጓ
- ぶ
- 乱
- 决
- 叶
- 响
- 奇
- 尾
- 屋
- 林
- 模
- 训
- 论
- 迹
- 靓
- 났
- 등
- 떤
- 앞
- 통
- 희
- 传
- 八
- 化
- 曲
- 窗
- 表
- 证
- 립
- 송
- 태
- 台
- 恩
- 楼
- 并
- আ
- 熟
- 怨
- 送
- 景
- థ
- 떡
- 右
- 坏
- 娘
- 本
- 足
- 通
- 隔
- 利
- 名
- 常
- 数
- 碎
- 门
- ذ
- ቹ
- ጡ
- ፕ
- Ỹ
- 仔
- 倦
- 剩
- 封
- 尘
- 执
- 晨
- 泡
- 猫
- 痕
- 谅
- 谓
- 超
- 跳
- 轮
- 醉
- 망
- 붙
- 순
- 옥
- 옹
- 움
- 증
- 쪼
- 축
- 팔
- 럽
- 七
- 莫
- 选
- 항
- 噢
- 妈
- 尔
- 灰
- 躲
- 刚
- 握
- 零
- 挡
- 死
- 贴
- 杯
- ఆ
- 围
- 绕
- 拿
- 丝
- 悠
- 旅
- 百
- 止
- 观
- 吻
- 喵
- 堂
- 怜
- 懒
- 戏
- 草
- 顺
- ௌ
- ዮ
- 골
- 딩
- ቋ
- 呼
- 存
- 摆
- 斗
- 油
- 般
- 视
- 점
- 향
- ঃ
- ౌ
- ጪ
- Ẩ
- 企
- 养
- 哥
- 妙
- 惯
- 搞
- 擦
- 木
- 朵
- 波
- 注
- 淡
- 班
- 英
- 茶
- 贵
- 迎
- 锁
- 题
- 饭
- 马
- 骨
- 관
- 깔
- 끔
- 둥
- 떻
- 랐
- 룩
- 먼
- 민
- 벽
- 셨
- 얀
- 억
- 임
- ฎ
- ፒ
- 达
- 闪
- 颗
- 긋
- 嘴
- 撑
- 男
- 短
- 突
- 续
- 荒
- 识
- 诉
- 黄
- 低
- 折
- 舍
- 寄
- 朝
- 祝
- 课
- 挥
- 瓶
- 礼
- 幻
- 战
- 试
- 琴
- 닷
- 伞
- 剑
- 卷
- 吸
- 哈
- 惊
- 拒
- 梁
- 燃
- 租
- 第
- 羽
- 脚
- ጌ
- 品
- 喝
- 漂
- 铁
- 메
- 밥
- 키
- 페
- ̩
- ሶ
- ቄ
- ዪ
- Ẻ
- 享
- 价
- 伯
- 傍
- 冬
- 升
- 吞
- 国
- 急
- 房
- 抬
- 指
- 新
- 昏
- 替
- 服
- 涌
- 游
- 滚
- 田
- 眸
- 码
- 篇
- 芳
- 豆
- 退
- 避
- 酸
- 鲜
- 궁
- 깐
- 댁
- 덕
- 뜨
- 벗
- 베
- 석
- 숲
- 역
- 짓
- 쭉
- 쯤
- 찜
- 출
- 클
- 폰
- 활
- ቴ
- 급
- 댕
- 력
- 준
- 합
- ቲ
- 争
- 余
- 吵
- 唯
- 尝
- 旋
- 甘
- 놓
- 충
- 乎
- 盛
- 纷
- 辉
- 偏
- 挽
- 洋
- 立
- 颠
- 忙
- 藏
- 暗
- 跌
- 倒
- 含
- 层
- 古
- 格
- 临
- 极
- 脏
- 酷
- 魂
- 资
- 吃
- 根
- 毛
- 沙
- 碰
- 舒
- 蝶
- 辜
- 院
- 修
- 染
- 柠
- 烽
- 移
- 血
- 途
- 颜
- 魔
- 릴
- 법
- 패
- ሏ
- ሯ
- ኗ
- ዊ
- ዓ
- ፏ
- Ỳ
- 伏
- 借
- 共
- 冒
- 冲
- 功
- 叹
- 君
- 圆
- 垂
- 寒
- 寸
- 座
- 扬
- 抗
- 拉
- 换
- 揾
- 教
- 斑
- 浮
- 添
- 港
- 潮
- 烈
- 牌
- 牙
- 瘦
- 眶
- 砰
- 祷
- 穷
- 答
- 纪
- 绿
- 翻
- 肉
- 胜
- 苍
- 象
- 赖
- 辰
- 逐
- 镜
- 限
- 须
- 餐
- 骑
- 骚
- 鸦
- 겁
- 넌
- 놈
- 닐
- 될
- 뜩
- 렴
- 론
- 롭
- 쁘
- 심
- 씬
- 악
- 짐
- 쩔
- 탈
- 탕
- 튼
- 판
- 현
- 셋
- 쟁
- 환
- 唐
- 性
- 涯
- 物
- 珍
- 疼
- 缠
- 夕
- 설
- 쳤
- ፎ
- 卖
- 套
- 汤
- 良
- 솜
- 瑞
- 稳
- 缺
- 伴
- 唤
- 序
- 归
- 挑
- 翅
- 薄
- 咸
- 义
- 件
- 列
- 勾
- 嘿
- 属
- 岔
- 广
- 弹
- 掩
- 搭
- 欠
- 猜
- 符
- 腕
- 阔
- 낙
- 펼
- Ï
- 华
- 嫁
- 幽
- 抓
- 暂
- 烂
- 珊
- 疲
- 翼
- 触
- 逆
- 闻
- 킨
- 商
- 흥
- ً
- ቭ
- ቮ
- ዩ
- ፅ
- Ḿ
- Ỷ
- 丘
- 严
- 介
- 伪
- 位
- 冻
- 净
- 凉
- 刘
- 刺
- 博
- 厚
- 呵
- 嘈
- 团
- 壳
- 奏
- 姐
- 婆
- 宝
- 宫
- 宴
- 密
- 尊
- 川
- 店
- 延
- 引
- 徒
- 悦
- 惑
- 抖
- 抵
- 抹
- 拆
- 拖
- 拼
- 救
- 暴
- 束
- 校
- 款
- 毫
- 洗
- 测
- 湖
- 湿
- 灌
- 煌
- 熄
- 熬
- 犹
- 环
- 皮
- 盖
- 眠
- 票
- 秘
- 稻
- 窝
- 纵
- 绍
- 缝
- 考
- 者
- 舟
- 虹
- 警
- 讨
- 词
- 负
- 躯
- 载
- 逝
- 逼
- 量
- 针
- 际
- 陷
- 馆
- 鬼
- 麻
- 黎
- 龙
- 걍
- 껏
- 꿀
- 끊
- 낀
- 낼
- 똥
- 램
- 럴
- 렀
- 맨
- 몽
- 변
- 브
- 블
- 뿡
- 샘
- 싹
- 써
- 접
- 졌
- 줬
- 즈
- 짹
- 쨌
- 쫙
- 찡
- 채
- 컵
- 켜
- 틀
- 티
- 팅
- 폼
- 품
- 픈
- 读
- 견
- 멘
- 뻐
- 헤
- ዞ
- ፆ
- シ
- 主
- 刹
- 智
- 朗
- 权
- 炼
- 盏
- 릭
- 승
- 份
- 加
- 孩
- 摇
- 欲
- 造
- 金
- 隐
- 菊
- 黏
- ́
- 伟
- 婚
- 弄
- 招
- 毁
- 毕
- 激
- 踢
- 鼻
- 嗅
- 妞
- 尖
- 异
- 弦
- 弯
- 彻
- 烛
- 甸
- 眷
- 练
- 荣
- 蝉
- 雁
- 骗
- 齿
- 육
- ఘ
- 塞
- 帘
- 悄
- 拾
- 搁
- 晶
- 漠
- 竹
- 篱
- 羞
- 肠
- 闯
- 띄
- 뭇
- ቨ
- 핀
- '6'
- ሟ
- ሹ
- ዷ
- ጮ
- ”
- 业
- 乞
- 乡
- 井
- 亦
- 仰
- 俯
- 兮
- 兴
- 军
- 凝
- 凭
- 刮
- 剧
- 午
- 卡
- 卫
- 取
- 叛
- 叠
- 司
- 吼
- 嗌
- 困
- 块
- 坠
- 堆
- 堪
- 墨
- 奢
- 妖
- 姜
- 姿
- 嫌
- 嫣
- 宜
- 宠
- 客
- 寥
- 尺
- 岛
- 岸
- 巾
- 师
- 弟
- 弥
- 悬
- 悭
- 悸
- 惘
- 扁
- 扇
- 抢
- 抽
- 拳
- 探
- 推
- 插
- 支
- 文
- 料
- 斯
- 昼
- 暮
- 李
- 枯
- 某
- 栏
- 案
- 桥
- 欺
- 歉
- 沦
- 沸
- 泊
- 泥
- 淀
- 渊
- 源
- 溢
- 滞
- 滩
- 澈
- 澜
- 灿
- 炎
- 烦
- 煎
- 煲
- 狗
- 珠
- 瓷
- 盈
- 盒
- 盘
- 瞰
- 石
- 破
- 碌
- 碟
- 禁
- 程
- 箱
- 糊
- 糖
- 纹
- 织
- 绒
- 维
- 罢
- 罪
- 职
- 股
- 脊
- 腰
- 腾
- 膛
- 舞
- 船
- 茧
- 莞
- 莲
- 菜
- 蔚
- 蛊
- 蜡
- 融
- 衡
- 衣
- 衫
- 袋
- 讯
- 详
- 谈
- 谷
- 购
- 费
- 赔
- 赠
- 趁
- 趣
- 蹦
- 轰
- 辑
- 输
- 辛
- 辣
- 迟
- 逻
- 铺
- 锦
- 闭
- 闷
- 阻
- 附
- 陆
- 降
- 鞋
- 韵
- 顶
- 顾
- 顿
- 颤
- 馈
- 驶
- 验
- 骼
- 鸡
- 鸣
- 鸥
- 麽
- 검
- 궐
- 긍
- 껄
- 껴
- 꼈
- 꼴
- 꽤
- 끗
- 능
- 덜
- 덟
- 둔
- 딜
- 땐
- 떳
- 뚜
- 뚱
- 뜰
- 뜻
- 띤
- 랫
- 례
- 료
- 링
- 맥
- 맺
- 밀
- 범
- 볶
- 섹
- 앨
- 엾
- 옛
- 존
- 줍
- 찌
- 첫
- 춰
- 칠
- 켰
- 쾌
- 큼
- 텁
- 톨
- 플
- 허
- 험
- 헴
- 홉
- 힌
- 봄
- 뻤
- 쩌
- 巴
- 忽
- 愧
- 投
- 柳
- 滥
- 犯
- 调
- 끌
- 值
- 嫩
- 宿
- 废
- 建
- 恶
- 旦
- 板
- 治
- 爸
- 玉
- 疗
- 眯
- 瞒
- 设
- 蹈
- 辆
- 段
- 覆
- 乖
- 梯
- 举
- 힙
- 힐
- 효
- 혹
- 헬
- 퓨
- 탐
- 큰
- 츠
- 책
- 짠
- 잤
- 웨
- 엘
- 엑
- 앉
- 씀
- 썼
- 쌀
- 싱
- 숙
- 삼
- 뿌
- 뽑
- 뻔
- 벙
- 백
- 멋
- 락
- 똑
- 딴
- 뒤
- 녕
- 납
- 김
- 값
- 齐
- 麦
- 鸽
- 韶
- 隶
- 陶
- 阑
- 释
- 逸
- 辘
- 轳
- 赐
- 豫
- 谢
- 诶
- 衬
- 蝴
- 虽
- 虑
- 莎
- 腻
- 肥
- 聚
- 聆
- 翱
- 缤
- 稀
- 积
- 社
- 矜
- 盼
- 痣
- 疆
- 畅
- 甩
- 猪
- 父
- 烫
- 灭
- 溃
- 渴
- 淹
- 淌
- 涩
- 汉
- 樱
- 森
- 棉
- 朽
- 曦
- 晰
- 敞
- 摹
- 摸
- 掠
- 捞
- 挤
- 抉
- 慌
- 愁
- 恒
- 式
- 廿
- 廓
- 宾
- 室
- 宋
- 孔
- 契
- 夸
- 士
- 垫
- 土
- 嘉
- 喇
- 喀
- 啥
- 哼
- 厌
- 勒
- 冚
- 兰
- 兜
- 兑
- 俗
- 伸
- 丰
- “
- Ẵ
- ፖ
- ፐ
- ጽ
- ጵ
- ጄ
- ቪ
- ሑ
- ຣ
- ฑ
- 흘
- 핸
- 필
- 풀
- 퍼
- 탁
- 컴
- 춤
- 착
- 찢
- 죽
- 좁
- 읽
- 빼
- 봅
- 병
- 맘
- 땅
- 딸
- 둠
- 눌
- 녔
- 냇
- 낄
- 깊
- 龟
- 骄
- 饮
- 除
- 银
- 逢
- 踩
- 谱
- 衍
- 蜚
- 葬
- 获
- 苹
- 苞
- 芬
- 祈
- 番
- 狼
- 狈
- 渺
- 泣
- 树
- 敷
- 故
- 拜
- 扣
- 憨
- 惦
- 屌
- 备
- 境
- 坡
- 圳
- 嗮
- 喽
- 喧
- 善
- 啸
- 周
- 呃
- 医
- 准
- 充
- 傲
- 倔
- 佛
- 且
- ጩ
- ጁ
- ሦ
- ఛ
- 흔
- 햇
- 평
- 팀
- 특
- 캐
- 춥
- 최
- 줘
- 죄
- 왕
- 숨
- 뛰
- 걱
- 额
- 页
- 踏
- 赏
- 贪
- 脆
- 耀
- 翔
- 网
- 继
- 童
- 瑰
- 玫
- 犀
- 炊
- 洒
- 汹
- 欣
- 梗
- 晕
- 晏
- 掌
- 担
- 护
- 徨
- 彷
- 弱
- 奶
- 堡
- 坦
- 兆
- ஈ
- 흐
- 휴
- 휘
- 훔
- 확
- 형
- 함
- 포
- 취
- 잔
- 웠
- 옷
- 뼉
- 밑
- 맑
- 득
- 둘
- 늦
- 넓
- 굳
- 갖
- 陌
- 遮
- 逗
- 较
- 赶
- 诚
- 胸
- 绪
- 络
- 精
- 昨
- 慰
- 悉
- 崩
- 奈
- 埃
- 嘞
- 努
- 京
- ฒ
- Щ
- 폴
- 염
- 빗
- 넣
- 군
- 겼
- 谂
- 膀
- 肩
- 瞬
- 牛
- 桃
- 既
- 帮
- 姑
- 复
- 咋
- 及
- 仿
- 付
- ፀ
- 흰
- 투
- 족
- 잊
- 싫
- 뿐
- 밝
- 밌
- 멀
- 릎
- 떨
- 듣
- 됐
- 닭
- 곳
- 袅
- 耐
- 择
- 彼
- 坚
- ኤ
- ቷ
- 찍
- 완
- 볼
- 벌
- 딱
- 닌
- 낌
- 꽃
- 꺼
- 誓
- 蜜
- 茫
- 持
- 办
- 乜
- ኢ
- ฏ
- ء
- 즘
- 옆
- 뭔
- 끝
- 跑
- 河
- 楚
- 攞
- 或
- 忧
- 弃
- 寞
- 啡
- 咖
- ぺ
- ぷ
- 황
- 탉
- 참
- 암
- 쓰
- 确
- 寂
- Ъ
- İ
- 찾
- 잠
- 싸
- 떠
- 당
- 놀
- 끼
- 괜
- 광
- 陪
- 简
- 究
- 찮
- 즐
- 돈
- 睛
- 消
- 匆
- Ẳ
- ኘ
- ஊ
- 푸
- 탄
- 섯
- 빙
- 吹
- ቦ
- 았
- 술
- 깨
- 蚊
- 唉
- 哎
- 仲
- 친
- 창
- 잡
- 왜
- 꿈
- 遗
- 福
- 朋
- 힘
- 찬
- 슴
- 몰
- 뚝
- 남
- 阿
- 幸
- 哇
- へ
- ጨ
- ሐ
- 초
- 열
- 랬
- 작
- 畀
- ኬ
- 由
- 命
- ኦ
- ฐ
- ぅ
- 싶
- 돌
- 睇
- 啫
- ঢ
- 또
- 喔
- 손
- 걔
- 얘
- 빠
- ఈ
- ঞ
- 얼
- 못
- 喂
- 엄
- ূ
- 잘
- 嘟
- 什
- ኧ
- 좋
- 吓
- 번
- 람
- ヴ
- 记
- 없
- ஞ
- ዘ
- 잖
- ఒ
- ぉ
- 咩
- ぁ
- 嘅
- 㗎
- ዲ
- ஓ
- 咯
- ஏ
- ஒ
- ஃ
- ぃ
- Პ
- 뭐
- 冇
- 렇
- ሆ
- ঘ
- Ძ
- ゅ
- ぇ
- Ă
- ̃
- Ậ
- ఫ
- 佢
- 咁
- 果
- ຽ
- ఊ
- ఉ
- Ứ
- Ყ
- ځ
- உ
- ຼ
- Ö
- Â
- ຢ
- څ
- ఇ
- থ
- ஹ
- Ә
- ళ
- எ
- ఁ
- ங
- ື
- Ư
- ஆ
- இ
- Ғ
- অ
- ແ
- அ
- అ
- ந
- ົ
- এ
- ใ
- Đ
- ়
- Ү
- 丢
- 劣
- 匹
- 哑
- 嗓
- 嗨
- 嘲
- 填
- 宏
- 巷
- 志
- 扔
- 拙
- 桂
- 梨
- 渲
- 潦
- 爬
- 痹
- 签
- 素
- 翘
- 胚
- 腼
- 茹
- 虎
- 蚝
- 衷
- 褪
- 跃
- 逛
- 釉
- 钢
- 锐
- 队
- 饰
- 빔
- 偿
- 凑
- 剔
- 呦
- 增
- 宣
- 席
- 户
- 批
- 披
- 拂
- 拌
- 捧
- 搅
- 昔
- 晒
- 曝
- 松
- 栀
- 桐
- 檀
- 汗
- 液
- 炬
- 瑚
- 稍
- 篆
- 绽
- 聪
- 莹
- 蒙
- 袱
- 贝
- 违
- 뷔
- 呜
- 瑟
- 딪
- 릿
- 멈
- 셔
- 킬
- ఔ
- Œ
- ڼ
- ঔ
- ๆ
- ሕ
- ሼ
- ኙ
- ኜ
- ኡ
- ኽ
- ዉ
- ዌ
- ዥ
- ጢ
- ጦ
- ጧ
- ጬ
- ፂ
- ፉ
- ፓ
- ク
- 串
- 丹
- 产
- 亭
- 仍
- 仕
- 仙
- 优
- 估
- 佬
- 侈
- 侍
- 侵
- 俊
- 倏
- 倚
- 催
- 允
- 兄
- 冰
- 况
- 减
- 凡
- 则
- 判
- 制
- 刷
- 剪
- 割
- 助
- 劳
- 勉
- 匙
- 区
- 卅
- 卑
- 卓
- 占
- 印
- 厂
- 历
- 厕
- 厢
- 叮
- 史
- 吊
- 吭
- 呐
- 呓
- 呕
- 咆
- 咛
- 哀
- 哮
- 唇
- 唏
- 啩
- 喻
- 嘶
- 器
- 噩
- 嚷
- 囊
- 园
- 图
- 培
- 堕
- 塘
- 墅
- 墓
- 墙
- 壁
- 央
- 奚
- 奥
- 妨
- 妹
- 妻
- 娜
- 媚
- 孑
- 孓
- 孙
- 宁
- 官
- 宛
- 宽
- 寐
- 寓
- 察
- 尻
- 屉
- 屎
- 展
- 峰
- 州
- 巧
- 帶
- 帽
- 床
- 庞
- 弘
- 形
- 彰
- 征
- 徊
- 律
- 徘
- 循
- 忐
- 忑
- 忠
- 态
- 怅
- 怡
- 恐
- 恙
- 恢
- 恼
- 悔
- 患
- 悴
- 惟
- 惠
- 惧
- 惨
- 惩
- 惫
- 惹
- 愈
- 愣
- 愫
- 慨
- 憔
- 戒
- 扎
- 托
- 扯
- 抛
- 拋
- 拘
- 拢
- 拣
- 挣
- 捱
- 掀
- 掂
- 掏
- 揽
- 揿
- 摔
- 摞
- 摧
- 撂
- 撩
- 敏
- 敲
- 斟
- 旌
- 族
- 旗
- 旺
- 映
- 昧
- 晃
- 晓
- 晖
- 普
- 暄
- 暧
- 曳
- 曹
- 曼
- 末
- 杀
- 杂
- 杆
- 材
- 杰
- 枪
- 柄
- 柜
- 栖
- 框
- 桦
- 桨
- 梢
- 梭
- 棒
- 棠
- 椅
- 槽
- 檬
- 欧
- 母
- 毯
- 民
- 汁
- 池
- 汪
- 汽
- 沁
- 沫
- 沱
- 沾
- 沿
- 泰
- 洁
- 浅
- 浆
- 浑
- 浓
- 浦
- 淘
- 淮
- 渣
- 湛
- 湾
- 溉
- 滂
- 滋
- 滑
- 漏
- 灼
- 炭
- 烁
- 烊
- 煤
- 煮
- 爆
- 版
- 率
- 王
- 玛
- 玲
- 琵
- 琶
- 瑕
- 瓜
- 瓢
- 瓣
- 疏
- 疚
- 疤
- 痒
- 痴
- 皱
- 盗
- 盲
- 眉
- 眺
- 睄
- 矮
- 硬
- 碍
- 碑
- 碗
- 碧
- 祸
- 秀
- 私
- 秃
- 窄
- 窑
- 竭
- 笆
- 筐
- 筑
- 簸
- 米
- 类
- 粉
- 粘
- 粤
- 粥
- 絮
- 繁
- 纠
- 纱
- 绑
- 绘
- 绢
- 绣
- 绮
- 绳
- 绵
- 绻
- 缄
- 缆
- 缓
- 编
- 缚
- 缱
- 缸
- 罗
- 罚
- 羊
- 羔
- 翠
- 耕
- 耘
- 聋
- 肤
- 胆
- 胎
- 胭
- 脂
- 腆
- 腐
- 膝
- 舱
- 舷
- 艺
- 芒
- 芙
- 芜
- 药
- 萍
- 萎
- 蒸
- 蕖
- 藕
- 蘸
- 虚
- 虾
- 蚕
- 蜃
- 蟹
- 补
- 衰
- 袭
- 裙
- 褴
- 褶
- 规
- 订
- 议
- 讽
- 访
- 谊
- 谋
- 谜
- 谣
- 谬
- 豪
- 贞
- 账
- 货
- 贸
- 赤
- 赵
- 跤
- 踞
- 踟
- 蹰
- 躺
- 软
- 辈
- 辩
- 辽
- 迁
- 适
- 逅
- 逍
- 递
- 邂
- 邪
- 邮
- 酱
- 钓
- 铃
- 铲
- 锋
- 镌
- 镯
- 闸
- 闺
- 阱
- 陈
- 隙
- 雀
- 雅
- 集
- 雷
- 霓
- 霸
- 靛
- 鞘
- 颂
- 馨
- 驳
- 骂
- 魄
- 魅
- 鲍
- 鲤
- 鸯
- 鸳
- 鸿
- 鹅
- 鹤
- 鹰
- 걘
- 걷
- 겸
- 곡
- 곤
- 곽
- 굽
- 권
- 극
- 깜
- 꼭
- 꽁
- 꽂
- 꾼
- 꿇
- 꿔
- 뀌
- 낮
- 냈
- 널
- 녁
- 놨
- 뇨
- 눠
- 뉴
- 늪
- 닥
- 덩
- 뎅
- 독
- 돋
- 돔
- 듯
- 딘
- 땜
- 떴
- 똠
- 뚫
- 랗
- 랩
- 량
- 련
- 롤
- 룰
- 룸
- 림
- 몬
- 믄
- 믿
- 박
- 봉
- 북
- 붉
- 븐
- 빅
- 빚
- 빡
- 빴
- 뺄
- 섬
- 솟
- 쇄
- 쉴
- 쉽
- 슷
- 쎄
- 쏘
- 씻
- 앍
- 앎
- 압
- 앙
- 얇
- 얹
- 엠
- 엥
- 옮
- 옵
- 옾
- 욕
- 웅
- 웬
- 율
- 윽
- 익
- 잃
- 잇
- 젤
- 줌
- 즌
- 징
- 짱
- 째
- 쨈
- 쩍
- 쩐
- 쪘
- 쫓
- 찔
- 챔
- 첨
- 총
- 춘
- 췌
- 측
- 층
- 칫
- 켓
- 콧
- 큐
- 킹
- 탑
- 턱
- 턴
- 털
- 텍
- 텐
- 톱
- 퇴
- 퉁
- 튀
- 틍
- 팩
- 팬
- 팽
- 펴
- 픽
- 햐
- 헐
- 혀
- 혔
- 혜
- 혤
- 홍
- 훨
- እ
- ጸ
- ጹ
- 蓦
- 霞
- 넷
- 녹
- 쌓
- 욱
- 택
- 텔
- 표
- 典
- 冠
- 凤
- 啤
- 委
- 庆
- 弗
- 悍
- 惭
- 慕
- 搬
- 斜
- 梳
- 略
- 疑
- 矗
- 航
- 芍
- 芽
- 褛
- 辗
- 迫
- 醺
- 键
- 露
- 鷁
- 专
- 仅
- 克
- 免
- 叙
- 咳
- 嗽
- 塌
- 富
- 峭
- 峻
- 恻
- 拍
- 枝
- 橙
- 涟
- 漪
- 睁
- 砸
- 组
- 羁
- 萄
- 营
- 葡
- 败
- 赴
- 雕
- 颓
- 驻
- 各
- 氧
- ছ
- ํ
- ເ
- ็
- ึ
- ั
- 伙
- 坎
- <sos/eos>
src_token_list:
- <blank>
- <unk>
- 么
- 喤
- 佨
- 叡
- 卐
- 伍
- 乻
- 勀
- 习
- 众
- 亿
- 勐
- 呵
- 偦
- 乖
- 乸
- 伿
- 丆
- 並
- 卭
- 侲
- 亶
- 再
- 丽
- 偯
- 乍
- 乔
- 伴
- 儑
- 倓
- 呔
- 傛
- 厚
- 喉
- 傓
- 別
- 仩
- 仮
- 乜
- 佸
- 今
- 勾
- 勝
- 喭
- 喵
- 入
- 呃
- 俥
- 丨
- 剏
- 喽
- 儻
- 亣
- 劺
- 佃
- 侖
- 傎
- 儈
- 兌
- 做
- 刢
- 俩
- 喩
- 五
- 傶
- 乩
- 傒
- 僝
- 厊
- 几
- 匎
- 俉
- 吠
- 厪
- 侽
- 丩
- 划
- 侧
- 仛
- 呇
- 乣
- 刡
- 仟
- 其
- 兹
- 咎
- 啧
- 从
- 冉
- 俴
- 伾
- 冱
- 倌
- 勠
- 勲
- 叜
- 伢
- 删
- 伻
- 唼
- 儼
- 唴
- 上
- 兏
- 児
- 儝
- 喲
- 丁
- 侕
- 傉
- 且
- 兄
- 卫
- 人
- 伂
- 仏
- 唦
- 匶
- 侸
- 冎
- 吾
- 伎
- 凪
- 北
- 仆
- 劸
- 喍
- 仳
- 凛
- 傲
- 养
- 厹
- 傷
- 仸
- 吉
- 下
- 併
- 勢
- 劆
- 叵
- 儺
- 价
- 吡
- 剧
- 兾
- 侫
- 喃
- 兙
- 俫
- 匨
- 侓
- 佳
- 剚
- 劘
- 倆
- 听
- 丅
- 哲
- 侷
- 同
- 僓
- 剣
- 券
- 匡
- 专
- 侴
- 勥
- 仰
- 咮
- 唬
- 唶
- 吺
- 偂
- 仙
- 喨
- 刹
- 乾
- 主
- 伅
- 兒
- 保
- 叽
- 唊
- 乥
- 哏
- 儕
- 佗
- 刅
- 偪
- 俋
- 價
- 之剂伇
- 侒
- 侙
- 侑
- 卥
- 啡
- 凢
- 傇
- 佣
- 丵
- 偿
- 偒
- 唣
- 匴
- 俯
- 叺
- 哵
- 丢
- 佤
- 俅
- 丹
- 傯
- 匂
- 刊
- 傩
- 匠
- 升
- 叱
- 亇
- 准
- 仯
- 伒
- 句
- 唗
- 亽
- 匛
- 來
- 倩
- 傏
- 傱
- 公
- 哙
- 吹
- 儷
- 喆
- 喎
- 付
- 営
- 勩
- 卣
- 侚
- 刯
- 伝
- 呬
- 侟
- 丂
- 七
- 俪
- 唫
- 刄
- 厃
- 伲
- 享
- 勁
- 吰
- 咅
- 凓
- 倵
- 匮
- 啇
- 口
- 但
- 儾
- 乑
- 厐
- 勱
- 偢
- 呌
- 刔僬于
- 伳
- 冹
- 剈
- 凰
- 丄
- 劫
- 哨
- 乃
- 僧
- 僋
- 嗎
- 厦
- 丈
- 喇
- 亴
- 匊
- 啗
- 儎
- 傕
- 咡
- 亞
- 仈
- 假
- 偀
- 剘
- 古
- 伞
- 伤
- 両
- 京
- 咂
- 儉
- 劑
- 乏
- 冧
- 劬
- 哣
- 刘
- 傮
- 勡
- 只
- 傴
- 卅
- 周
- 倝
- 叹
- 令佋
- 听乬
- 儁
- 仃
- 仓
- 促
- 乛
- 傪
- 丶
- 倢
- 喬
- 亗
- 唢
- 傋例佈
- 傾
- 哅但
- 匩
- 兩
- 凂喐
- 剬
- 勚
- 仲
- 偤
- 儭
- 侻
- 刷
- 啯
- 侶
- 勊
- 含
- 位
- 刭
- 丝
- 停
- 勛
- 危
- 凍
- 倘
- 偳
- 剱
- 俍
- 哈
- 亗勽
- 冊
- 克
- 仇
- 九
- 匹
- 勎
- 冴
- 亡
- 侞
- 万
- 乂
- 佩
- 丌
- 喑
- 備
- ▁丕些
- 冂
- 冯
- 劳
- 倷
- 他
- 丯
- 僊
- 儩
- 勽
- 亅
- 乼
- 僗
- 伊
- 俛
- 呧
- 僯
- 佂
- 傀
- 件
- 厓
- 亥
- 却佻
- 侠
- 僾
- 伟
- 俹
- 凸
- 厅
- 兕
- 厸
- 久
- 僄乌
- 仌养
- 乘
- 匱
- 伄
- 佢
- 啵
- 亯俒
- 倬
- 僔
- 劇
- 啴
- 儲
- 丑
- 吇
- 俦
- 佐
- 僮
- 僙
- 傳偗
- 凶亦
- 傧冨
- 亠
- 僁
- 偭
- 伃
- 哭
- 倮佉
- 味
- 乞
- 啳
- 剅
- 倍
- 伅傊
- 代
- 乀
- 哾
- 临
- 丘
- 佘
- 冟
- 兙勢
- 乤
- 劜
- 凫
- 咗
- 仍
- 傧
- 丳
- 丫
- 侢乲
- 号
- 双仼
- 倲
- 哅
- 仢
- 喗
- 佟
- 乧
- 啨
- 举
- 丼
- 光
- 俜
- 冨
- 兮
- 华
- 伦
- 唾
- 偌
- 丟
- 哊
- 佭
- 俖
- 偮
- 傟
- 喂
- 匢
- 况
- 佹
- 傮剂伇
- 俘亷
- 俰
- 仡
- 乮
- 劰
- 傊
- 刁
- 凑
- 僨俢
- 侰
- 厧
- 俄与
- 产
- 係
- 剰
- 仹
- 乭
- 吞
- 卺
- 剃
- 亢
- 伔
- 僣
- 啅
- 列
- 僤
- 兖
- 单
- 丒
- 即丶
- 叩
- 匔
- 俄
- 倮
- 侺
- 劏
- 励
- 咡倲
- 乙
- 凁
- 匈
- 使
- 啌
- 厀
- 乆
- 僕
- 啃
- 匁
- 仺
- 俵
- ▁咧倒
- 兺
- 亵刀
- 业
- 会
- 伀
- 亷
- 咛
- 傺
- 倞份
- 全
- 匾
- 商
- 呣
- 厄
- 唬厐
- 伐
- 剁
- 凴
- 佝
- 借
- 僰
- 凗
- 份
- 乊
- 剠
- 卛
- 佬刟
- 們
- 俸
- 劽
- 厥
- 吢
- 两
- 侥
- 凿喿
- 仠
- 员
- 咤
- 厴
- 仌
- 叭
- 傜
- 喦
- 凲
- 吙
- 僑
- 偓
- 卫丼
- 卢
- 凎
- 唍
- 供
- 冊俪
- 乿
- 伮
- 唼儻
- 亐
- 僟
- 伬
- 丰
- 啓
- 刏
- 俻
- 偣
- 卤
- 僀僽
- 偔
- 企
- 伜
- 冩
- 倡
- 唸
- 勿
- 乌
- 剛
- 傳
- 亦
- 凣
- 丷
- 冀
- 吧
- 倇
- 凹
- 刎
- 又
- 傰
- 吆亨
- 兢
- 劈
- 剫
- 侐
- 儔
- 呿
- 啝
- 丿
- 凮
- 前
- 卾
- 俟
- 凉
- 伈
- 喊
- 叐
- 乘偭
- 仅
- 作
- 务
- 倎喍
- 咦
- 吞凗
- 喛
- 嗇
- 俌
- 哘
- 召
- 向
- 吚
- 叅
- 儀
- 仴
- 予
- 喉丩
- 卶
- 唹
- 倷乺
- 伯
- 傑
- 厜
- 否
- 倠争
- 喸
- 仉
- 啢
- 佈
- 俆
- 冈
- 咺乱
- 啰
- 亙
- 厐兺
- 倿
- 伶
- 匏
- 叞
- 俗
- 呖
- 剠乂
- 剖
- 后
- 唟
- 哋却佻
- 厵
- 兊
- 右叿
- 呖八
- 之剂啴
- 亜
- 咢
- 初
- 伤傎
- 南
- 剎勖
- 喓
- 八
- 函
- 凼
- 啚
- 侸劤
- 勈
- 匼
- 傽
- 嗃
- 唎
- 儇
- 三
- 偋
- 唽
- 吚匨
- 匕
- 亻
- 倠
- 倿参
- 唏
- 倰
- 和
- 員
- 偶
- 刽
- 呀兆
- 傍
- 傣
- 佖
- 価
- 仒
- 先
- 勡伃
- 丙
- 刌
- 停乛
- 傐
- 佻
- 俅啵
- 倾
- 务儡
- 匦
- 佯
- 傆
- 佑
- 侣喙佚
- 剆
- 匃
- 兼
- 了
- 僄乌丿
- 劤
- 匪
- 卟
- 吭
- 勰
- 勶
- 侨
- 凔三
- 兩俉
- 劯
- 东
- 俓
- 僴
- 凪凢
- 啸
- 刚
- 写
- 务儡吶
- 仨
- 倅
- 倎
- 乢
- 俧
- 偼
- 傥
- 伋
- 咂哙
- 剡亣
- 勿亹
- ▁
- 傅
- 仼
- 唎償
- 与
- 凒乎
- 到
- 乽
- 充
- 凡劍债
- 刂
- 侯
- 哢
- 刨
- 加
- 凶
- 優
- 吆
- 叿
- 剩
- 吡咎
- 合
- 侕侥
- 唖
- 卸
- 厥佘
- 僀付
- 佲
- 唎唿
- 僘
- 卺凟
- 吚冭
- 仪
- 凔
- 卉
- 卓
- 剃唳
- 仑
- 刜
- 兀
- 卯
- 唁
- 倞
- 丞
- 亭
- 劼
- 俀唀卜
- 仯唴
- 亓
- 乱
- 厲
- 叧
- 偍
- 厛
- 偒傰
- 唬兺
- 佥
- 哜
- 傋
- 互
- 予佔
- 丸
- 唘
- 亏
- 侜判你
- 儯
- 仜
- 哥
- 叮
- 吽佩
- 义佣
- 吼
- 儈乾
- 什
- 乖喨
- 传
- 吿
- ▁丕侄
- 偾區
- 伕
- 兓冹
- 偠
- 些
- 依
- 侳
- 喧
- 体亳
- 喒仝
- 台
- 侞伢
- 乫亻
- 偎
- 則倹
- 不
- 厔
- 吋
- 卞
- 呏侯
- 乓丶
- 仳勝
- 亸
- 咓
- 倻
- 倍哥
- 丠
- 偂儣
- 傭
- 乣乩
- 乐
- 働劯
- 喒
- 匞
- 俏咽僤
- 唵
- 劕
- 儗
- 仾
- 兡
- 亮
- 兝
- 勥乑
- 刦
- 俼
- 使侻
- 咿
- 儥
- 丳厵
- 儺乬
- 乷
- 偏
- 僵
- 厭
- 兗
- 丣
- 呀丰
- 侰丌
- 俘
- 偂伃
- 剷
- 叝
- 倜
- 冃咠
- 唰
- 佔
- 儷傊
- 偘
- 侜判
- 亖
- 偩亚呈
- 倖
- 厢
- 傚
- 刳兎
- 仜仏
- 吲
- 勨
- 什南
- 再勾
- 勺
- 乊傕
- 刔僬
- 啫
- 以
- 勵
- 唛
- 叔
- 唜
- 兑
- 唅哢
- 仚
- 住
- 傖
- 佃侽
- 匩倵
- 劮
- 侮
- 唀卜
- 卯呶
- 刋
- 儰
- 呁
- 佾
- 剩况
- 乶冯
- 举乢
- 佃仛
- 哓
- 佬倏
- 叵仅
- 名
- 儤
- 偅
- 仿叿
- 匢兵呅
- 卉哓
- 喊匞
- 匄
- 侹冗
- 哋
- 呫
- 丞啌
- 午
- 叏
- 几乍
- 哋佲
- 啊
- 勛亀
- 僣佴
- 佊叢
- 喷
- 吞凗喧
- 厒
- 僭
- 俑
- 呉
- 侘冚
- 亵
- 侘
- 侜你
- 乚
- 俽
- 乓傍
- 哀
- 云
- 冶
- 刟
- 則
- 作乢
- 侵勊
- 咪佒
- 凾
- 唸万
- 唩
- 仭
- 凟
- 千
- 厒叵
- 勤
- 佦
- 兲
- 哯
- 唿
- 乡
- 侴丼
- 呪傫
- 凼佲
- 吅
- 兪
- 吏
- 卸僗
- 匜儳
- 乒冣
- 和刭
- 哯仿叿
- 侞剁
- 嗃仦刍
- 乌丿
- 倳
- 偄
- ▁丕侄冰
- 勖
- 厬
- 吐
- 凪剏
- 啲
- 史
- 勃
- 僽
- 偫
- 伺
- 刻
- 叞冓
- 刳
- 儏冤啱
- 冗
- 双
- 唩哥
- 俘丏
- 叕俯
- 嗇区作
- 倚优
- 儚
- ▁咧倒丧
- 勬
- 兓
- 乓
- 喱
- 偙丿
- 倐
- 凪呃
- ▁儱
- 啹
- 刐
- 乩乣
- 偄僟
- 偟
- 偈
- 俏咽
- 咊俎
- 咡哾
- 刀
- 喋
- 令呻
- 儚丠
- 为侎
- 侦
- 丗
- 他喊
- 刾
- 厌
- 哸
- 乲
- 咞
- 呁傑
- 冩僋
- 叴伷
- 凑仓
- 僯右叿
- 丧
- 喷吐
- 交
- 冶乞
- 厉
- 亰产
- 唜侮
- 兮佅
- 仱
- 乽丟
- 唩唏
- 伉
- 俜佞
- 叽伻
- 唳
- 伉値匒咕
- 佶
- 剎
- 剄
- 乄
- 僄丿
- 咾
- 佇
- 唅
- 冰
- 亹
- 凼啯
- 伺勐
- 咪仇
- 不佉
- 乂储
- 佣儃
- 凝咽
- 匔丛
- 佴佯
- 卂劮
- 佴
- 吚啾
- 启
- 偬
- 另
- 交吢
- 井
- 劣
- 丯业
- 冁
- 問
- 傋例
- 叜偒
- 优勓
- 儱
- 唇
- 厃吰
- 丐傒
- 唢丠
- 仫
- 凝咽僤
- 啺
- 厱
- 俌叝喳
- 丂假
- 勾呃
- 侹
- 啍
- 儧
- 唬厐兺
- 冭
- 剤
- 兗况
- 刂冂
- 剂
- 介僆唳
- 亘
- 出喳
- 匫
- 俁
- 呈
- 兪勩
- 仑仮
- 吶
- 俐喗
- 咬
- 俻劑
- 俇
- 僓傰
- 吨
- 凝兓冹
- 僎
- 右
- 叭右叿
- 呫咓
- 叔亥
- 仈卟
- 乩吉
- 吽
- 勃仝
- 储
- 喉剏
- 乩伿
- 備侮
- 乺
- 丂停
- 俽刌佐
- 兝亜
- 偍伇
- 兟
- 僎仡
- 代哏
- 介亲劄匏
- 哬
- 喱呧
- 呭厲
- 刑侞
- 僁傪呧
- 佛僽付
- 匘厢
- 丐儃
- 侸双
- 乬
- 冃关唆
- 出
- 倏
- 吋呓
- 僻
- 則倹乺
- 乶
- 兎
- 喰匸
- 僣佴佯
- 凵
- 唇傅
- 卒
- 仝
- 剛伯
- 呭
- 卻
- 哯叿
- 卂
- 劰刎
- 俹俢
- 匙
- 厢件
- 啲傚
- 剽
- 匍
- 偗
- 偙
- 乛俜佞
- 俽刌
- 佐佅厮
- 僔喃
- 丣厑勧仑
- 厔刁
- 佛僽
- 僌
- 共
- 偲
- 促佹
- 厘
- 傞到
- 侶加
- 傮偍伇
- 亰
- 侒几
- 咆
- 佚
- 呃么
- 哒
- 俟举
- 喷侐
- 也
- 厹乑
- 否佥
- 你
- 剒
- 兲儧
- 乪僔
- 伬兎
- 呪丛
- 侸仼
- 吅仂
- 唩啯
- 億勗匍
- 俢
- 偷
- 咺
- 伽
- 历
- 儏
- 厶
- 吟儿
- 下伬
- 仆兠僻
- 乙凛
- 传喰
- 喗傎
- 儚乞
- 冒
- 亵刀凑
- 唥
- 傟兠僻
- 唛偞倽劕
- 即
- 丂儲
- 义刻儃
- 喰
- 倄剟
- 叞佱
- 儃
- 喳
- 么刢
- 僷
- 呹
- 吏俿
- 仁
- 俅兓冹
- 偡
- 个哀偷
- 億勗
- 乫
- 偡卋
- 咔
- 冑
- 傊亞
- ▁咧倒匁
- 冸
- 呐僩
- 冀前
- 俿
- 丗兊
- 剟
- 匔呪佡
- 僄
- 叢
- 匳冊
- 唃兜
- 伊佸伊
- 乼僛
- 丐
- 剞
- 叛
- 兮厮
- 侵
- 予佔凑
- 啗冧
- 兓咽
- 乭卻
- 侺厱呓
- 侇僈哽俪
- 冠哛
- 佉
- 兌剚
- 哽
- 僆唳
- 哞
- 凸儣
- 呭凲
- 債劊
- 呶
- 儊
- 丁佂傪叱
- 勎刨亱傚
- 呭亀
- 俏
- 喰先
- 不僛叐
- 佟劳
- 咬佊叢
- 凧
- 凰冔
- 呐
- 亩
- 个
- 佸伊
- 唥初
- 侊
- 唏儺
- 伉剑倫
- 佛
- 倀
- 侠公
- 丑冭
- 効
- 哯仿
- 呐哾
- 乛俜伄
- 佋
- 啱
- 剰侯
- 傤
- 乯
- 伿侒
- 乙匾
- 勥叽
- 僙咠
- 僽付
- 俴兝亜
- 优
- 咀
- 吉乩
- 偓凎
- 冤啱
- 倚勓
- 劦偁
- 兵呅
- 呏
- 咶劫
- 叒
- 劰刎厲
- 冄
- 伋冺吣
- 儸
- 喱呧叱
- 刄冂
- 冚
- 劄
- 匭
- 俗侵勊
- 剤凊偈
- 凁啨
- 伋亘咆俧
- 凙
- 厥卶
- 仹偙丿
- 唎亹
- 叼
- 呃喤
- 僬于
- 亍伡佒
- 告
- 剏凪
- 厍伧
- 丹兹丹
- 周勚
- 僓俉
- 勰儧
- 似
- 几亴
- 剠厔
- 務側冫
- 促唴
- 偽
- 伊佸
- 伽仇
- 仹偙
- 刿
- 侢
- 伲佟
- 卟剬
- 喝
- 刎凲
- 冓
- 吁
- 咕
- 伦亀
- 傶侽
- 勿償
- 剤凊
- 呪佡
- 啟
- 俦俆
- 哩
- 創
- 仱伟
- 仆勬名
- 刐偓
- 儊吶
- 唵卂劮
- 凸伃
- 傇佅厮
- 仚匄
- 俄件
- 乵
- 傶價
- 判
- 哿
- 儏剞亍
- 倧
- 呭兡
- 凥喓
- 億
- 哨倵
- 伤剘
- 佱
- 判你
- 剁叜
- 人來
- 呥偬
- 偲兙
- 傰伶
- 傯交
- 倌兪
- 偫冗
- 厃仳
- 儏剞
- 俜佞乖
- 仃专
- 么儲
- 傞亙
- 侷仩
- 厩
- 勲來
- 凘
- 呪
- 勽含
- 令
- 侕什
- 一
- 個
- 之剂
- 偋份
- 喡
- 免仑
- 前双
- 吻
- 中
- 厎
- 丙劺
- 侔倷
- 佟乶
- 俀厬
- 偪侽
- 倌业
- 卛冶乞
- 启佲
- 凶仼
- 咞另
- 合卢
- 兕享
- 丯厄
- 剙
- 争
- 凔吧
- 停乑
- 僾喷吐
- 伉剑倫咕
- 勽业
- 佤僩
- 仍唍
- 唫創
- 助
- 倽劕
- 侢千
- 佊
- 凂
- 兾仴
- 倄
- 侪
- 傴侵
- 儂
- 偘啹
- 傹
- 卌
- 么僤
- 兺呃
- 僱呉們
- 唌兩
- 勮
- 博
- 匳
- 冡哛
- 伐厃
- 剘啌
- 兛
- 俴兝
- 咬占啍
- 僈
- 啳卢
- 倊啇
- 乃儤
- 冔
- 価僊
- 伻剏
- 个偷
- 唌
- 呻
- 僨
- 儈儲
- 儓
- 勋
- 冖
- 儿
- 呃勾
- 哋却
- 俰亜僴
- 何匬兙
- 呏又
- 兓啵
- 义刻
- 俽傇
- 傮剂伇啴
- 呀争
- 価儺
- 和勊
- 喛厩
- 侦佖
- 偪习
- 咅傏
- 卜
- 冖否佥
- 劼儿
- 傴侵勊
- 匕佇
- 倽
- 仍僰
- 冻
- 仧
- 亊
- 傟乨
- 偟兲儧
- 俁健傹
- 働
- 儖傤
- 勓
- 剂伇
- 剠侺厱呓
- 凝俏咽僤
- 咬佊
- 伤仁
- 勿亹償
- 僝哨
- 刋儔
- 区
- 僀
- 卺亨
- 勠呀兆
- 剠乂储
- 劓
- 偹
- 却
- 侢勶
- 前劤
- 叠
- 凿
- 乛唬伄
- 偶匪
- 僝匩
- 係侴
- 剖喛
- 劘丯
- 刿倎喍
- 哅但吉
- 唁呣
- 偟兲
- 以丄
- 刑
- 兹亴兹
- 台乕偗
- 兝久
- 僆
- 傝
- 书啈卵
- 則倷乺
- 傓偯
- 么倠争
- 免仑仮
- 唄
- 佔叔
- 儁侫
- 丽勬名
- 凙剫
- 僩
- 伨
- 临唜
- 呥
- 厴厸
- 啫価
- 呿啹
- 唵卂
- 冭匨
- 匜佀儳
- 僛
- 伎倰
- 俋倻
- 喥
- 匸
- 乴偾區
- 哋喒仝
- 唑
- 咿励
- 哛
- 剻
- 侥劈
- 叫
- 唰丄俰
- 剆匨
- 債
- 任
- 呦匥
- 嗉下
- 佡
- 丮
- 伉剑倫丑冭
- 吖
- 偖呴侏偗
- 吿啹
- 兮佅厮
- 侎
- 伨仁
- 俲
- 務側冫哾
- 俶
- 仹偙兀
- 台傳偗
- 僑亨
- 僓偒
- 之倀
- 倷吏
- 儰偹乄
- 佗吡
- 啖喍
- 剦
- 准伊
- 厍
- 兹亴
- 其俐喗
- 劬侢
- 伬呁傑
- 仺卤
- 唴兒
- 儜仪
- 傲员
- 一區
- 伨倎
- 僰劳
- 啫偭
- 兠
- 凊
- 偬伈
- 估
- 俗侵
- 佔凑
- 乤厅
- 凡
- 匋
- 區
- 军
- 剭丮劣丗兊
- 儠
- 凒
- 亂冦卬
- 呉們
- 侉
- 儚倌兪
- 仹匾兀
- 劜咿
- 吘吷伥
- 唖伴唖
- 侵刭
- 伶儺
- 並儲
- 侤
- 休
- 亳
- 叽勥
- 侖勲
- 傇佅
- 佀
- 咁
- 倜倡
- 佫
- 古儉
- 兪佑
- 吹儕
- 來喇
- 吉俅
- 勻
- 呪佡喽
- 喙
- 冱丨冱
- 吏俿凹
- 准咅
- 仆兠乨
- 免
- 偛
- 佸伊佸
- 债
- 俞
- 伈凣厜
- 亸凘
- 厐亢
- 倚
- 专体亳
- 偶匹
- 啩
- 劭亳
- 僧卥
- 內
- 交励
- ▁仗僒
- 呵冃唆
- 偾區倨
- 兖吟儿倞
- 吘买别冀
- 勥叽勥
- 侣
- 倪
- 吿呿兢
- 啇侘
- 仵
- 丁佂傪呧
- 九伴九
- 丘唣
- 串
- 伶僊
- 咞另凫
- 剭
- 佱员
- 冡
- 乿呶
- 亃
- 咘俯
- 乱乧
- 偅亖
- 偫会否佥
- 厑仞
- 唢乞
- 內咐
- 呐僩勊
- 匋勮
- 啉
- 乶伐
- 儕厃
- 凧偷
- 唍以
- 倅代倅
- 咋
- 喿
- 倩佼
- 劬乲
- 値匒咕
- 刻儃
- 優唥
- 佈亠
- 厴卢
- 亨
- 仨刋
- 勨喿
- 咑
- 勼仜
- 吽佩专
- 列叩
- 亠啢
- 僇
- 呡
- 侞吏
- 劙制
- 俫吾
- 偣叛
- 儳
- 兾仳
- 啚丌
- 佸伊佸伊
- 吘买别係
- 唘倡
- 亴兹
- 佘吿呿
- 儼啫
- 兽
- 儮
- 丏
- 呐偏
- 倻俋倻
- 乕偗
- 偟儧
- ▁仗些
- 侇僈哽
- 偞
- 后凣
- 俗哾
- 厘劊
- 伬刳兎
- 儑佑
- 傶习
- 俽仝
- 亭后
- 俯們
- 倜俸
- 刓
- 冑会否佥
- 剑倫
- 喌
- 佐佅
- 偎匨
- 俚下
- 呥偬傅
- 価仃
- 唴伲
- 価伺
- 勥叽伻
- 厤
- ▁咧哒
- 傅仧兟
- 卓僈哽俪
- 呍
- 仱倄剟
- 佁
- 凥
- 亵京
- 伞僁
- 后凣厜
- 善
- 唷
- 僃
- 儡吶
- 喜
- 僱們
- 刨亱傚
- 勽俌叝喳
- 吊傸
- 吿啹兢
- 乑伻
- 匱刂冂
- 侢勈
- 吆卺
- 储吋
- 佅
- 勤刐
- 呇児
- 关
- 仿
- 劰亀
- 哕
- 喎从
- 佞
- 傐唥
- 侊丣
- 厉乗吥
- 咠
- 侓佳
- 哚
- 匦啚
- 俿凹
- 價傶
- 亀
- 侅
- 仅儩
- 乗吥
- 丒勫
- 劇勖
- 儔丙
- 伫
- 凥勫
- 剽哱
- 侲創
- 喐
- 亮刚
- 倞凗
- 么儈
- 哉
- 卓僈哽
- ▁仗侄
- 卂倢
- 俟傉
- 也啚
- 匔傫
- 冄儼
- 伮勤刐
- 厒仅
- 亮刚啺
- 亍
- 买别
- 乾儈乾
- 倐冀前
- ▁仗勯剄冒亊
- 吠凢
- 參冄
- 傟兠乨
- 勼
- 亵京吨
- 剝
- 亼佛
- 倚优勓
- 叴伙劉
- 勺俸
- 乪
- 仍侐
- 匙丗
- 咂匠
- 僁傪叱
- 厄儀
- 哬剈
- 叫冀
- 亞仏
- 儯佋
- 吣
- 刽仱倄剟
- 叛偶匪
- 匂喎
- 儖任
- 厰
- 儖傤任
- 兆
- 僦
- 乍几乍
- 伓
- 偂咬佊叢
- 唒
- 偾
- 临傖
- 勺呓
- 偭仏
- 俾儿
- 匜佀兩
- 伹
- 傺加
- 喨侰
- 侰厍伧
- 匹俫
- 低啱
- 俐
- 劖
- 佒
- 哘伮勤
- 亿僨俢
- 咭仪
- 嗃功伆功
- 乾么乾
- 冪
- 唣删
- 佼
- 仝傇
- 丯含
- 唆
- 咶
- 刑侞伢
- 俘丏偩亚呈
- 几亴几亴
- 二
- 伪
- 剰剈
- 勗
- 侕儇
- 卒佁
- 佂傪叱
- 僯右
- 俓合
- 僙冃唆
- 傍勐
- 厸乞
- 代倅
- 偞倽
- 哜哬
- 傴刭
- 啧儣
- 倿凊
- 亄
- 倯凯
- 倮僛叐
- 俌乯喳
- 乕
- 仱伟剟
- 厄倩
- 偏傚
- 匜佀
- 傋七
- 主剈
- 厸丠
- 伡
- 剭佺劣丗
- 俓哅
- 喱傪呧
- 僳
- 倁
- 傀亼佛僽
- 哩伨倎
- 勇
- 俉啝
- 儣
- 勐乬
- 凮啃
- 僾喷吖
- 俾
- 侺储
- 凴兩俉
- 劦偁劄匏
- 仟儎
- 亽唣
- 亁啶咖
- 傗
- 倴
- 僄但
- 厚乑
- 伏
- 亝俯
- 喇儎
- 冽侗
- 兕刐
- 亭后凣厜
- 咞倬
- 侩
- 准啵
- 傣厸
- ▁丕冰
- 吘
- 偤句偤句
- 喊匮
- 唬伄
- 伈咢
- 刷佱
- 召偹乄
- 侃
- 丁佂呧
- 人伲
- 偣叅
- 伞佂傪呧
- 侕侥劈
- 侇
- 咞另傚
- 刚啺
- 售
- 剚他
- 勫
- 乮啗
- 俽刌佐佅厮
- 凔三吧
- 內喝
- 乫亻凸
- 唰丄
- 厞
- 侕侰
- 偌前
- 唸乭卻
- 俓哅但
- 傎俗刭
- 伬刳
- 剷丳厵
- 剴
- 体
- 丽兠僻
- 丝喦
- 偖
- 亴兹亴
- 临唜侮
- 俤
- 世匙
- 劘喤丯
- 写你
- 伺僊
- 剬俢
- 乚唳
- 俚
- 优冴
- 予吰
- 保伴保
- 何仕
- 唖伴
- 伐厅
- 伉値匒
- 儅丫
- 冤
- 匬
- 伺儺
- 上伲
- 倊
- 係侳
- 为像
- 偢亽
- 咙
- 乔凷
- 刌佐
- 債劊冗
- 唫刘
- 仍吐
- 亱傚
- 剞亍
- 傘
- 倠么倠争
- 单凸
- 丈冱丈
- 倹
- 低呸
- 侂
- 冃
- 呓
- 冝
- 互五
- 几亴几
- 啭
- 劋
- 厴喸
- 匼倽劕
- 冹乾
- 乼不
- 亼僀
- 准冹
- 侑僋
- 値
- 儃侨
- 咘
- 亴几亴
- 丽兠乨
- 仚咾冁
- 傞
- 儲並儲
- 厘劊冗
- 乍佨乍
- 哋喒
- 値匒
- 唛偞劕
- 俈
- 俫喦
- 凝
- 喊唰
- 唾冗
- 二凧偷亍伡佒
- 勜亳
- 凶亦俞
- 介僆
- 刽仑
- 嗀
- 偣亮刚啺
- 丧唽
- 传喰先
- 呦匥冊
- 厑
- 協
- 僓偒傰
- 中侦佖
- 匿
- 參匉
- 剭丮劣丗
- 倎七
- 他喊匞
- 史冖否佥
- 啾
- 哘勤刐
- 叴伙劉仮
- 佧
- 劗
- 侒伿侒
- 亼
- 卒佁匄
- 刀凑
- 價傶價
- 刍
- 匊叅
- 仯佹
- 叛偶
- 冀唳
- 儲么儲
- 丁僁呧
- 叀
- 凝兓咽
- 区乢
- 儸唑
- 厙
- 卍
- 啿
- 儬咭仪
- 例
- 刜厶
- 义
- 呝
- 勘
- 僁呧叱
- 传匸
- 丗函
- 乣乩乣
- 儚乞兪
- 丁僁
- 伣
- 僬
- 儘
- 传喰匸
- 厮
- 剜
- 卛冶
- 啳丅
- 唏僊
- 卶咿交
- 僁亦
- 厥卶咿
- 伨倎七
- ▁仗勯剄
- 册
- 刉
- 個克僽
- 伞喱
- 呗
- 仦
- 啐
- 僜
- 冺
- 唃
- 呸
- 傂
- 劍
- 呀
- 亾
- 占
- 厯
- 介
- 偵
- 冾
- 啋
- 侾
- 咭
- 倃
- 劊
- 嗉
- 伩
- 击
- 倉
- 偊
- 伷
- 乴
- 参
- 亲
- 儡
- 亟
- 冇
- 分
- 呯
- 为
- 偁
- 咼
- 厠
- 切
- 健
- 凬
- 冠
- 伛
- 伥
- 叴
- 侼
- 余
- 偻
- 吥
- 佬
- 喏
- 卋
- 厼
- 品
- 倨
- 剡
- 喣
- 兵
- 勳
- 儨
- 匥
- 凷
- 乇
- 丛
- 咩
- 咐
- 傌
- 偨
- 匟
- 唂
- 匆
- 哱
- 兇
- 修
- 俷
- 叙
- 啜
- 啂
- 唉
- 侜
- 佽
- 喺
- 功
- 劝
- 剥
- 乎
- 咜
- 啥
- 便
- 哫
- 剓
- 劦
- 们
- 僠
- 刔
- 吟
- 唝
- 受
- 佺
- 凐
- 喻
- 儖
- 匧
- 勑
- 像
- 倦
- 僒
- 咉
- 僲
- 呦
- 剗
- 何
- 伭
- 侌
- 凋
- 側
- 侔
- 化
- 匘
- 典
- 処
- 吒
- 卙
- 傡
- 儜
- 剐
- 償
- 亚
- 呅
- 厇
- 啑
- 县
- 冼
- 冢
- 倭
- 僼
- 侄
- 卬
- 哤
- 哐
- 勯
- 倛
- 勌
- 剨
- 喢
- 勜
- 劵
- 乗
- 卡
- 伌
- 唀
- 咚
- 僶
- 勪
- 仐
- 哮
- 儍
- 冮
- 兘
- 偝
- 勉
- 俠
- 冫
- 傫
- 匚
- 伖
- 兰
- 凖
- 俙
- 世
- 叶
- 倫
- 匜
- 乹
- 咪
- 伙
- 之
- 啮
- 响
- 剀
- 卷
- 农
- 勧
- 勣
- 啖
- 倔
- 呲
- 刺
- 吔
- 啬
- 厨
- 勴
- 净
- 刴
- 叄
- 厣
- 厫
- 仂
- 傸
- 勆
- 凞
- 唈
- 刞
- 偺
- 俺
- 叕
- 匷
- 刖
- 凤
- 冕
- 劃
- 啄
- 冬
- 俀
- 唚
- 偐
- 儌
- 劉
- 俎
- 僖
- 刧
- 伱
- 劅
- 信
- 唕
- 伧
- 哇
- 协
- 倕
- 值
- 唐
- 佄
- 厕
- 冋
- 剢
- 吸
- 円
- 俳
- 于
- 哼
- 咨
- 吷
- 喅
- 呂
- 哔
- 咏
- 儬
- 凃
- 乨
- 吤
- 侭
- 匐
- 俊
- 倶
- 倱
- 匉
- 元
- 哺
- 儞
- 咸
- 単
- 厾
- 刲
- 佪
- 呴
- 傈
- 僚
- 丱
- 吓
- 叻
- 咖
- 冦
- 亱
- 呱
- 哻
- 卑
- 凌
- 兜
- 凜
- 嗏
- 倂
- 勍
- 剌
- 勅
- 勒
- 吊
- 厖
- 匌
- 卨
- 厈
- 儫
- 哦
- 伸
- 偩
- 僱
- 咫
- 偱
- 僥
- 吂
- 侗
- 亪
- 俔
- 匤
- 嗂
- 哗
- 吪
- 呮
- 吃
- 凱
- 卧
- 勦
- 侬
- 唻
- 劲
- 喯
- 卮
- 凳
- 友
- 啶
- 傁
- 喕
- 取
- 動
- 咯
- 劙
- 労
- 叾
- 剋
- 冲
- 各
- 偑
- 割
- 喟
- 咵
- 傔
- 发
- 啤
- 内
- 單
- 冥
- 吱
- 呩
- 務
- 咽
- 刮
- 刬
- 哌
- 叇
- 參
- 刵
- 咊
- 咰
- 冏
- 兿
- 吜
- 削
- 办
- 哃
- 喪
- 及
- 剺
- 劾
- 买
- 卩
- 卿
- 剔
- 势
- 呢
- 喠
- 俒
- 剹
- 偰
- 佰
- 匒
- 俕
- 劎
- 力
- 劶
- 叚
- 僺
- 兯
- 喁
- 咥
- 儽
- 呞
- 佮
- 创
- 医
- 低
- 俬
- 亂
- 党
- 匵
- 啙
- 咷
- 僅
- 僫
- 刣
- 咝
- 叆
- 变
- 勔
- 侁
- 倈
- 卝
- 候
- 傦
- 哶
- 啁
- 吴
- 吀
- 唙
- 卲
- 啀
- 儅
- 呤
- 儐
- 倗
- 兔
- 冐
- 亝
- 君
- 呕
- 僢
- 动
- 咒
- 卦
- 制
- 儢
- 劒
- 啠
- 仕
- 啔
- 咟
- 凯
- 唱
- 丬
- 哟
- 哂
- 冷
- 喈
- 劥
- 凕
- 偸
- 儒
- 侏
- 僂
- 喚
- 叨
- 冘
- 凩
- 侱
- 劭
- 卹
- 刃
- 兦
- 刈
- 啽
- 伆
- 嗆
- 唪
- 傿
- 呆
- 压
- 厡
- 厂
- 唔
- 仞
- 咴
- 僡
- 侍
- 儶
- 匣
- 劚
- 唨
- 乁
- 儵
- 卖
- 咳
- 偧
- 事
- 劻
- 刕
- 厷
- 佌
- 厽
- 呎
- 兤
- 呒
- 嗅
- 唠
- 剸
- 吝
- 卆
- 冿
- 啷
- 唞
- 凅
- 劁
- 剪
- 咻
- 厺
- 仄
- 哹
- 僸
- 勸
- 剉
- 劌
- 亁
- 呋
- 咄
- 哠
- 仔
- 副
- 凨
- 剾
- 傃
- 剶
- 侀
- 啛
- 勷
- 嗈
- 吩
- 唧
- 匲
- 厳
- 勏
- 凚
- 匝
- 儋
- 唲
- 剼
- 哑
- 匓
- 募
- 兂
- 匯
- 唺
- 咱
- 剳
- 嗄
- 嗋
- 乒
- 厝
- 喹
- 偉
- 剑
- 利
- 呟
- 冞
- 喞
- 哧
- 哰
- 劂
- 喾
- 傢
- 劐
- 匰
- 哷
- 傠
- 勂
- 儙
- 吳
- 啦
- 佷
- 剕
- 呺
- 吕
- 喖
- 吵
- 呼
- 喔
- 啻
- 六
- 吗
- 僪
- 凇
- 呄
- 冽
- 哝
- 咇
- 反
- 半
- 十
- 包
- 僞
- 兞
- 唯
- 呷
- 匽
- 儆
- 倒
- 剮
- 凈
- 劔
- 呰
- 刼
- 叟
- 劧
- 倸
- 哎
- 亯
- 叉
- 凭
- 减
- 可
- 兣
- 劷
- 倣
- 厏
- 偃
- 劀
- 乳
- 呜
- 司
- 嗁
- 傻
- 则
- 偆
- 匀
- 啪
- 呠
- 刪
- 喘
- 厁
- 喴
- 啼
- 啎
- 刱
- 咣
- 呚
- 卪
- 偕
- 允
- 努
- 劢
- 勄
- 啕
- 匇
- 傄
- 哳
- 勭
- 命
- 呙
- 唋
- 倯
- 啒
- 勹
- 剿
- 啘
- 侈
- 叁
- 俣
- 喫
- 呛
- 啈
- 原
- 吮
- 呑
- 哴
- 兴
- 俨
- 儹
- 厗
- 唡
- 伇
- 喼
- 卵
- 儛
- 咍
- 嗌
- 俭
- 凄
- 呾
- 冣
- 収
- 匑
- 别
- 僐
- 僿
- 印
- 卼
- 咲
- 哖
- 勞
- 倥
- 咹
- 儴
- 哆
- 具
- 俱
- 催
- 唭
- 嗊
- 哪
- 决
- 儦
- 喀
- 嗍
- 傼
- 书
- 咃
- 去
- 唓
- 啞
- 卽
- 哄
- 僉
- 叓
- 卄
- 咈
- 伵
- 啣
- 唤
- 仗
- 丕
- 严
- 咧
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
use_preprocessor: true
token_type: bpe
src_token_type: bpe
bpemodel: data/token_list/tgt_bpe_unigram6000_ts_en/bpe.model
src_bpemodel: data/token_list/src_bpe_unigram3000_rm_wavlm_large_21_km2000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
tokenizer_encode_conf: null
src_tokenizer_encode_conf:
enable_sampling: true
alpha: 0.4
nbest_size: -1
frontend: embed
frontend_conf:
embed_dim: 512
positional_dropout_rate: 0.1
specaug: specaug
specaug_conf:
apply_time_warp: false
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: false
freq_mask_width_range:
- 0
- 10
num_freq_mask: 0
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 10
preencoder: null
preencoder_conf: {}
encoder: e_branchformer
encoder_conf:
output_size: 256
attention_heads: 4
attention_layer_type: rel_selfattn
pos_enc_layer_type: rel_pos
rel_pos_type: latest
cgmlp_linear_units: 1024
cgmlp_conv_kernel: 31
use_linear_after_conv: false
gate_activation: identity
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv1d2
layer_drop_rate: 0.0
linear_units: 1024
positionwise_layer_type: linear
use_ffn: true
macaron_ffn: true
merge_conv_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
layer_drop_rate: 0.0
model: discrete_asr
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
share_decoder_input_output_embed: false
share_encoder_decoder_input_embed: false
required:
- output_dir
- src_token_list
- token_list
version: '202310'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
karawalla/mistral_b_karawalla_aqtraining24005
|
karawalla
| 2024-01-20T21:44:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-20T21:43:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CLMBR/superlative-quantifier-lstm-1
|
CLMBR
| 2024-01-20T21:42:44Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rnn",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-01-16T15:30:01Z |
---
tags:
- generated_from_trainer
model-index:
- name: superlative-quantifier-lstm-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# superlative-quantifier-lstm-1
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9853
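For reference, assuming this is the usual natural-log cross-entropy reported by the HF Trainer, a validation loss of 3.9853 corresponds to a per-token perplexity of exp(3.9853) ≈ 53.8.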
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.7763 | 0.03 | 76320 | 4.7620 |
| 4.4989 | 1.03 | 152640 | 4.4853 |
| 4.3556 | 0.03 | 228960 | 4.3511 |
| 4.2721 | 1.03 | 305280 | 4.2690 |
| 4.2111 | 0.03 | 381600 | 4.2130 |
| 4.1619 | 1.03 | 457920 | 4.1721 |
| 4.1268 | 0.03 | 534240 | 4.1413 |
| 4.0969 | 1.03 | 610560 | 4.1164 |
| 4.0684 | 0.03 | 686880 | 4.0977 |
| 4.0424 | 1.03 | 763200 | 4.0819 |
| 4.0213 | 0.03 | 839520 | 4.0687 |
| 4.0022 | 1.03 | 915840 | 4.0577 |
| 3.9921 | 2.03 | 992160 | 4.0491 |
| 3.9758 | 0.03 | 1068480 | 4.0414 |
| 3.9621 | 1.03 | 1144800 | 4.0342 |
| 3.9449 | 2.03 | 1221120 | 4.0284 |
| 3.931 | 0.03 | 1297440 | 4.0237 |
| 3.9215 | 1.03 | 1373760 | 4.0190 |
| 3.9109 | 0.03 | 1450080 | 4.0155 |
| 3.9095 | 0.03 | 1526400 | 4.0124 |
| 3.9049 | 1.03 | 1602720 | 4.0085 |
| 3.9 | 0.03 | 1679040 | 4.0056 |
| 3.8962 | 1.03 | 1755360 | 4.0035 |
| 3.8938 | 2.03 | 1831680 | 4.0011 |
| 3.8866 | 0.03 | 1908000 | 3.9991 |
| 3.8816 | 0.03 | 1984320 | 3.9976 |
| 3.8738 | 0.03 | 2060640 | 3.9959 |
| 3.8678 | 1.03 | 2136960 | 3.9945 |
| 3.8677 | 0.03 | 2213280 | 3.9936 |
| 3.8607 | 1.03 | 2289600 | 3.9922 |
| 3.8561 | 2.03 | 2365920 | 3.9911 |
| 3.8476 | 0.03 | 2442240 | 3.9901 |
| 3.841 | 1.03 | 2518560 | 3.9895 |
| 3.8375 | 2.03 | 2594880 | 3.9884 |
| 3.834 | 0.03 | 2671200 | 3.9873 |
| 3.837 | 0.03 | 2747520 | 3.9869 |
| 3.839 | 1.03 | 2823840 | 3.9863 |
| 3.8355 | 0.03 | 2900160 | 3.9858 |
| 3.8396 | 1.03 | 2976480 | 3.9855 |
| 3.8402 | 2.02 | 3052726 | 3.9853 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
pquiggles/mistral-7b-doctor-finetuned
|
pquiggles
| 2024-01-20T21:30:36Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-01-20T07:12:45Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
Ba2han/TinyOpenHermes-1.1B-4k
|
Ba2han
| 2024-01-20T21:21:50Z | 1,367 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:teknium/openhermes",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-20T19:31:25Z |
---
license: cc-by-nc-4.0
datasets:
- teknium/openhermes
---

| Task |Version| Metric |Value | |Stderr|
|-------------------------------|------:|--------|-----:|---|-----:|
|hendrycksTest-logical_fallacies| 1|acc |0.3067|± |0.0362|
| | |acc_norm|**0.3067**|± |0.0362|
|hendrycksTest-global_facts | 1|acc | 0.3|± |0.0461|
| | |acc_norm| 0.3|± |0.0461|
|hendrycksTest-abstract_algebra | 1|acc |0.2700|± |0.0446|
| | |acc_norm|**0.2700**|± |0.0446|
|hendrycksTest-college_chemistry| 1|acc |0.3100|± |0.0465|
| | |acc_norm|**0.3100**|± |0.0465|
|hendrycksTest-college_physics | 1|acc |0.2157|± |0.0409|
| | |acc_norm|**0.2157**|± |0.0409|
|hendrycksTest-formal_logic | 1|acc |0.2857|± |0.0404|
| | |acc_norm|**0.2857**|± |0.0404|
Compared to TinyLlama-1.1B-Chat-v1.0:
- Algebra UP **17.4%**
- Formal Logic UP **24.2%**
- Logical Fallacies UP **35.4%**
Template Format: **Alpaca**
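For anyone unsure what that means, the standard Alpaca prompt layout looks roughly like the sketch below (a hedged example; the card does not spell out the exact wording used during training, and the instruction text is only a placeholder):
```python
# Typical Alpaca-style prompt (instruction-only variant); the exact system
# wording used to train this model is assumed, not confirmed by the card.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(instruction="Name one common logical fallacy and define it.")
```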
It took 4 hours to train in 1 epoch with an RTX 3090.

|
max044/rubik_roberta
|
max044
| 2024-01-20T21:18:26Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-20T19:17:46Z |
---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: rubik_roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubik_roberta
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0512
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` equivalent is sketched after the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
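For readers who want to reproduce the run, the listed values map onto `transformers.TrainingArguments` roughly as follows (a sketch built from the values above; the `output_dir` and any parameter not listed are assumptions):
```python
from transformers import TrainingArguments

# Rough equivalent of the hyperparameters listed above; library defaults fill in the rest.
training_args = TrainingArguments(
    output_dir="rubik_roberta",       # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```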
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.1666 | 1.0 | 3334 | 0.1127 |
| 0.0928 | 2.0 | 6668 | 0.0649 |
| 0.0707 | 3.0 | 10002 | 0.0509 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.3.0.dev20240108
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Coooori/llama_checkpoint-2200
|
Coooori
| 2024-01-20T21:16:57Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-01-20T21:16:54Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (see the `BitsAndBytesConfig` sketch after the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
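Expressed as a `transformers.BitsAndBytesConfig`, the same settings would look roughly like this (a sketch built from the listed values, not taken from the training script):
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the 4-bit NF4 settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)
```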
### Framework versions
- PEFT 0.4.0
|
stablediffusionapi/majicmix-lux-v3
|
stablediffusionapi
| 2024-01-20T21:12:01Z | 21 | 0 |
diffusers
|
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-20T21:09:56Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# majicMIX lux 麦橘辉耀 v3 API Inference

## Get API Key
Get your API key from [ModelsLab API](http://modelslab.com); no payment is needed.
Replace the key in the code below and set **model_id** to "majicmix-lux-v3".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/majicmix-lux-v3)
Model link: [View model](https://modelslab.com/models/majicmix-lux-v3)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "majicmix-lux-v3",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
peterkchung/zephyr-7b-beta-qlora-helpfulsteer-v0.1
|
peterkchung
| 2024-01-20T21:03:22Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-20T20:24:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zaq-hack/MistralTrix-v1-GPTQ
|
zaq-hack
| 2024-01-20T20:58:24Z | 10 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-20T18:07:04Z |
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
dtype: bfloat16
tags:
- merge
---
GPTQ for one of the best small models you can get.<br/>
All credit to the creator of it, who is 'just a guy that likes to ... tinker'.<br/>
This model is warp-speed hosted on [Aphrodite-engine](https://github.com/PygmalionAI/aphrodite-engine), which is why I made this.
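If you want to try it outside Aphrodite, a GPTQ checkpoint like this one can usually be loaded straight through transformers' GPTQ integration (a hedged sketch, assuming `optimum` and `auto-gptq` are installed; the prompt is just an example):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zaq-hack/MistralTrix-v1-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The quantization config stored in the repo tells transformers to use the GPTQ kernels.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Explain GPTQ quantization in one sentence.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```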
# Results:
Model: CultriX/MistralTrix-v1 📑 (T: 🟦)

| Benchmark | Score |
|------------|------:|
| Average | 73.39 |
| ARC | 72.27 |
| HellaSwag | 88.33 |
| MMLU | 65.24 |
| TruthfulQA | 70.73 |
| Winogrande | 80.98 |
| GSM8K | 62.77 |
# Edit/Disclaimer:
Currently the #1 ranked 7B LLM on the LLM Leaderboards, woah!
I did not expect that result at all and am in no way a professional when it comes to LLMs or computer science in general,
just a guy that likes to nerd about and tinker around.
For those wondering how I achieved this, the answer is that I simply attempted to apply the techniques outlined in this amazing article myself: https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac
Therefore, all credit basically goes to the guy who wrote that.
He offers the exact Colab notebook I used to train this model for free, as well as a really nice GitHub page I hope he doesn't mind me sharing: https://github.com/mlabonne/llm-course/
So a huge thank you to him for sharing his knowledge and teaching me a thing or two in the process!
# GGUF
I attempted to quantize the model myself, which again I pretty much have no clue about, but the quants seem to run fine for me when I test them:
https://huggingface.co/CultriX/MistralTrix-v1-GGUF
I'll say it one more time though:
"I am a complete beginner to all of this, so if these do end up sucking don't be surprised."
You have been warned :)
# Description:
(trained on a single Colab GPU in less than a few hours)
MistralTrix-v1 is a zyh3826/GML-Mistral-merged-v1 model that has been further fine-tuned with Direct Preference Optimization (DPO) using Intel's dataset for neural-chat-7b-v3-1.
It surpasses the original model on several benchmarks (see results).
It is directly inspired by the RLHF process described by Intel/neural-chat-7b-v3-1's authors to improve performance.
I used the same dataset and reformatted it to apply the ChatML template.
The code to train this model is available on Google Colab and GitHub.
Fine-tuning took about an hour on a Google Colab A100 GPU with 40 GB of VRAM.
# TRAINING SPECIFICATIONS
```python
import torch
from peft import LoraConfig
from transformers import AutoModelForCausalLM, TrainingArguments
from trl import DPOTrainer

# model_name, new_model, dataset and tokenizer are defined earlier in the original
# notebook (base model id, output name, the DPO preference dataset and its tokenizer).

# LoRA configuration
peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
)

# Model to fine-tune
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    load_in_4bit=True
)
model.config.use_cache = False

# Reference model
ref_model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    load_in_4bit=True
)

# Training arguments
training_args = TrainingArguments(
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    max_steps=200,
    save_strategy="no",
    logging_steps=1,
    output_dir=new_model,
    optim="paged_adamw_32bit",
    warmup_steps=100,
    bf16=True,
    report_to="wandb",
)

# Create DPO trainer
dpo_trainer = DPOTrainer(
    model,
    ref_model,
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    beta=0.1,
    max_prompt_length=1024,
    max_length=1536,
)
```
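The card stops at constructing the trainer; to actually run the same recipe end to end, the remaining steps would look roughly like this (a hedged completion, not shown in the original card):
```python
# Launch DPO fine-tuning and save the resulting adapter under the chosen name
dpo_trainer.train()
dpo_trainer.model.save_pretrained(new_model)
tokenizer.save_pretrained(new_model)
```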
|