| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
Strudel7182/ppo-LunarLander-v2
|
Strudel7182
| 2024-01-23T17:51:57Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-23T17:24:45Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 244.83 +/- 24.32
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename is an assumption; adjust it to the file stored in this repo):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained PPO agent.
checkpoint = load_from_hub("Strudel7182/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
LoneStriker/Crunchy-onion-3.5bpw-h6-exl2
|
LoneStriker
| 2024-01-23T17:46:49Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"dataset:lemonilia/LimaRP",
"dataset:grimulkan/theory-of-mind",
"dataset:Epiculous/Gnosis",
"license:agpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T17:38:01Z |
---
license: agpl-3.0
datasets:
- lemonilia/LimaRP
- grimulkan/theory-of-mind
- Epiculous/Gnosis
---
# Crunchy-onion
This model was created by training the Mixtral base model on LimaRP (ShareGPT format provided by SAO), theory of mind, and Gnosis (provided by jeiku).
The 4-bit QLoRA was then merged into Mixtral Instruct, resulting in what you see here.
Works best with the ChatML instruct format; a sketch of that format follows.
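For reference, ChatML wraps each turn in `<|im_start|>` / `<|im_end|>` markers. Below is a generic sketch of the format (the system prompt is illustrative, not taken from this repository):
```python
# Generic ChatML-formatted prompt (illustrative only; adapt the system prompt to your use case).
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Write a short greeting.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```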
|
keremgencer/mistral-7b-dolly
|
keremgencer
| 2024-01-23T17:44:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-23T17:43:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
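A minimal loading sketch, assuming the repository hosts a standard causal language model checkpoint (the model type is not stated in this card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id from this card; the architecture (causal LM) is an assumption.
tokenizer = AutoTokenizer.from_pretrained("keremgencer/mistral-7b-dolly")
model = AutoModelForCausalLM.from_pretrained("keremgencer/mistral-7b-dolly")
```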
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
FelixChao/WestSeverus-7B
|
FelixChao
| 2024-01-23T17:43:20Z | 1,359 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"senseable/WestLake-7B-v2",
"FelixChao/Severus-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T17:35:43Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- senseable/WestLake-7B-v2
- FelixChao/Severus-7B
---
# WestSeverus-7B
WestSeverus-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2)
* [FelixChao/Severus-7B](https://huggingface.co/FelixChao/Severus-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: senseable/WestLake-7B-v2
layer_range: [0, 32]
- model: FelixChao/Severus-7B
layer_range: [0, 32]
merge_method: slerp
base_model: senseable/WestLake-7B-v2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float16
```
## 💻 Usage
```python
# Install dependencies first: pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "FelixChao/WestSeverus-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
LoneStriker/Crunchy-onion-3.0bpw-h6-exl2
|
LoneStriker
| 2024-01-23T17:37:58Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"dataset:lemonilia/LimaRP",
"dataset:grimulkan/theory-of-mind",
"dataset:Epiculous/Gnosis",
"license:agpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T17:30:33Z |
---
license: agpl-3.0
datasets:
- lemonilia/LimaRP
- grimulkan/theory-of-mind
- Epiculous/Gnosis
---
# Crunchy-onion
This model was created by training the Mixtral base model on LimaRP (ShareGPT format provided by SAO), theory of mind, and Gnosis (provided by jeiku).
The 4-bit QLoRA was then merged into Mixtral Instruct, resulting in what you see here.
Works best with the ChatML instruct format.
|
Rimsha19/TeacherEducationFramework21stCentury
|
Rimsha19
| 2024-01-23T17:36:18Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-01-23T17:34:55Z |
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Teacher Education Framework</title>
<style>
/* Add your CSS styles here */
body {
font-family: Arial, sans-serif;
margin: 0;
padding: 0;
background-color: #f4f4f4;
}
header {
background-color: #333;
color: #fff;
padding: 1em;
text-align: center;
}
section {
margin: 1em;
padding: 1em;
background-color: #fff;
border-radius: 8px;
box-shadow: 0 0 10px rgba(0, 0, 0, 0.1);
}
</style>
</head>
<body>
<header>
<h1>Teacher Education Framework</h1>
</header>
<section>
<h2>1. Policy and Governance</h2>
<ul>
<li>Establish a National Teacher Education Policy aligned with contemporary educational needs.</li>
<li>Ensure a merit-based recruitment system for teachers to enhance the quality of educators.</li>
<li>Implement transparent and accountable governance mechanisms, reducing political interference.</li>
</ul>
</section>
<!-- Repeat the above structure for each section (2-13) -->
</body>
</html>
|
ahebbar69/10-52-llama
|
ahebbar69
| 2024-01-23T17:35:21Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:adapter:NousResearch/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-01-23T17:22:55Z |
---
library_name: peft
base_model: NousResearch/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
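A minimal loading sketch, assuming this repository hosts a PEFT adapter for the base model listed in the metadata:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model named in the card metadata, then attach this repo's adapter.
base_model = AutoModelForCausalLM.from_pretrained("NousResearch/Llama-2-7b-chat-hf")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-chat-hf")
model = PeftModel.from_pretrained(base_model, "ahebbar69/10-52-llama")
```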
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
Heromnxpw0/ppo-LunarLander-v2
|
Heromnxpw0
| 2024-01-23T17:34:51Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-23T17:33:41Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 267.81 +/- 16.93
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename is an assumption; adjust it to the file stored in this repo):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained PPO agent.
checkpoint = load_from_hub("Heromnxpw0/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
LoneStriker/Crunchy-onion-2.4bpw-h6-exl2
|
LoneStriker
| 2024-01-23T17:30:32Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"dataset:lemonilia/LimaRP",
"dataset:grimulkan/theory-of-mind",
"dataset:Epiculous/Gnosis",
"license:agpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T16:15:56Z |
---
license: agpl-3.0
datasets:
- lemonilia/LimaRP
- grimulkan/theory-of-mind
- Epiculous/Gnosis
---
# Crunchy-onion
This model was created by training the Mixtral base model on LimaRP (ShareGPT format provided by SAO), theory of mind, and Gnosis (provided by jeiku).
The 4-bit QLoRA was then merged into Mixtral Instruct, resulting in what you see here.
Works best with the ChatML instruct format.
|
mudogruer/electra-emotion
|
mudogruer
| 2024-01-23T17:27:50Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"electra",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:google/electra-base-discriminator",
"base_model:finetune:google/electra-base-discriminator",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-23T17:20:05Z |
---
license: apache-2.0
base_model: google/electra-base-discriminator
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: electra-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.944
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-emotion
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1403
- Accuracy: 0.944
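A minimal usage sketch with the 🤗 Transformers `text-classification` pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub and classify a sample sentence.
classifier = pipeline("text-classification", model="mudogruer/electra-emotion")
print(classifier("I am thrilled with these results!"))
```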
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6777 | 1.0 | 500 | 0.2635 | 0.9155 |
| 0.186 | 2.0 | 1000 | 0.1598 | 0.935 |
| 0.113 | 3.0 | 1500 | 0.1403 | 0.944 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
gael1130/ppo-LunarLander-v2
|
gael1130
| 2024-01-23T17:20:40Z | 12 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-20T21:15:04Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 284.64 +/- 25.31
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename is an assumption; adjust it to the file stored in this repo):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained PPO agent.
checkpoint = load_from_hub("gael1130/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
alionder/laptop_kriter
|
alionder
| 2024-01-23T17:16:17Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:burakaytan/roberta-base-turkish-uncased",
"base_model:finetune:burakaytan/roberta-base-turkish-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-23T17:15:53Z |
---
license: mit
base_model: burakaytan/roberta-base-turkish-uncased
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: laptop_kriter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# laptop_kriter
This model is a fine-tuned version of [burakaytan/roberta-base-turkish-uncased](https://huggingface.co/burakaytan/roberta-base-turkish-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2151
- F1: 0.7709
- Roc Auc: 0.8574
- Accuracy: 0.7344
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:--------:|
| 0.3066 | 1.0 | 1151 | 0.2457 | 0.5688 | 0.7257 | 0.6484 |
| 0.2325 | 2.0 | 2302 | 0.2088 | 0.6630 | 0.7908 | 0.6719 |
| 0.1723 | 3.0 | 3453 | 0.2023 | 0.6933 | 0.8174 | 0.6875 |
| 0.159 | 4.0 | 4604 | 0.2004 | 0.7312 | 0.8363 | 0.7188 |
| 0.1306 | 5.0 | 5755 | 0.2138 | 0.7168 | 0.8104 | 0.7148 |
| 0.1034 | 6.0 | 6906 | 0.2103 | 0.7745 | 0.8641 | 0.7539 |
| 0.0865 | 7.0 | 8057 | 0.2107 | 0.7684 | 0.8530 | 0.75 |
| 0.0733 | 8.0 | 9208 | 0.2099 | 0.7757 | 0.8663 | 0.7383 |
| 0.0643 | 9.0 | 10359 | 0.2130 | 0.7772 | 0.8586 | 0.7539 |
| 0.0617 | 10.0 | 11510 | 0.2151 | 0.7709 | 0.8574 | 0.7344 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Disty0/SoteMixV3
|
Disty0
| 2024-01-23T17:14:21Z | 27 | 3 |
diffusers
|
[
"diffusers",
"onnx",
"safetensors",
"art",
"anime",
"stable diffusion",
"openvino",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-12T17:00:17Z |
---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- art
- anime
- stable diffusion
- openvino
- onnx
---
SoteMix V3 is trained at 1024x1536 for high-resolution image generation.
This model is tested on SD.Next with Diffusers backend and HyperTile size set to 0 (Auto).
Positive Prompts:
```
masterpiece, best quality, highres, 1girl,
```
Negative Prompts:
```
(worst quality, low quality, lowres), zombie, interlocked fingers,
```
Do not use any negative embeddings.
Sampler: `Euler a`
Steps: `30-40`
Clip Skip: `1` or `2`
CFG: `4-7`
Base Resolution: `512x` / `768x` / `1024x` / `768x1280` / `960x1280` / `1024x1536` / `1920x1080`
Model can still be chaotic at `1024x1536` and `1920x1080`.
Second Pass / Hires:
Sampler: `Euler` / `Euler a`
Steps: `10` with `Euler` / `20` with `Euler a`
Upscaler: `RealESRGAN 4x+ Anime6B` / `ESRGAN 4x-AnimeSharp` with `0.2`-`0.3` denoise strength.
CFG: `6-9`
Resolution: `2x` of the base resolution.
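A minimal text-to-image sketch with 🤗 Diffusers, assuming the repository loads as a standard `StableDiffusionPipeline`; the settings follow the recommendations above:
```python
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

# Load the model and switch to the recommended Euler a sampler.
pipe = StableDiffusionPipeline.from_pretrained("Disty0/SoteMixV3", torch_dtype=torch.float16).to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "masterpiece, best quality, highres, 1girl,",
    negative_prompt="(worst quality, low quality, lowres), zombie, interlocked fingers,",
    num_inference_steps=30,   # recommended 30-40 steps
    guidance_scale=6,         # recommended CFG 4-7
    width=768,
    height=1280,              # one of the recommended base resolutions
).images[0]
image.save("sotemix_sample.png")
```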
Training:
My GPU couldn't handle full model training at these resolutions, so I trained it as a `512`-layer LoRA with SoteMix V1 as the base.
I used `highres` as the trigger word, and the `raifu` trigger word with my OC character.
Resolution: `1024x1536 with Bucketing`
Batch Size: `1`
Steps: `40000`
GPU: `Intel ARC A770 16GB`
Bucket:
```
bucket 0: resolution (832, 1664), count: 49
bucket 1: resolution (896, 1280), count: 1
bucket 2: resolution (896, 1536), count: 2
bucket 3: resolution (960, 1408), count: 8
bucket 4: resolution (960, 1472), count: 57
bucket 5: resolution (960, 1536), count: 12
bucket 6: resolution (960, 1600), count: 537
bucket 7: resolution (1024, 1344), count: 266
bucket 8: resolution (1024, 1408), count: 349
bucket 9: resolution (1024, 1472), count: 1535
bucket 10: resolution (1024, 1536), count: 950
bucket 11: resolution (1088, 1280), count: 63
bucket 12: resolution (1152, 1216), count: 62
bucket 13: resolution (1152, 1280), count: 147
bucket 14: resolution (1152, 1344), count: 114
bucket 15: resolution (1216, 1152), count: 44
bucket 16: resolution (1216, 1216), count: 409
bucket 17: resolution (1216, 1280), count: 53
bucket 18: resolution (1280, 576), count: 20
bucket 19: resolution (1280, 640), count: 94
bucket 20: resolution (1280, 704), count: 217
bucket 21: resolution (1280, 768), count: 102
bucket 22: resolution (1280, 832), count: 118
bucket 23: resolution (1280, 896), count: 280
bucket 24: resolution (1280, 960), count: 137
bucket 25: resolution (1280, 1024), count: 32
bucket 26: resolution (1280, 1088), count: 27
bucket 27: resolution (1280, 1152), count: 61
bucket 28: resolution (1280, 1216), count: 24
bucket 29: resolution (1344, 1024), count: 17
bucket 30: resolution (1344, 1152), count: 38
bucket 31: resolution (1536, 896), count: 94
bucket 32: resolution (1536, 1024), count: 34
bucket 33: resolution (1600, 960), count: 196
bucket 34: resolution (1664, 832), count: 21
bucket 35: resolution (2048, 768), count: 3
bucket 36: resolution (2304, 576), count: 1
mean ar error (without repeats): 0.01257769833438255
```
Merge:
Merged SoteMix V1 with Lunar Radiance Light, then merged the Hires LoRA I trained on top of it.
Merge ratio: `(0.6 SoteMix V1 + 0.4 Lunar Radiance Light) + 0.7 Hires Lora`

|
raj-rahullll/my-pet
|
raj-rahullll
| 2024-01-23T17:14:18Z | 3 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-23T17:09:32Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet Dreambooth model trained by raj-rahullll following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 22BTRIS045
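A minimal generation sketch with 🤗 Diffusers, assuming the repository loads as a standard `StableDiffusionPipeline` (the prompt is only an example; use the concept's trigger phrase):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth checkpoint from the Hub and generate a sample image.
pipe = StableDiffusionPipeline.from_pretrained("raj-rahullll/my-pet", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of my-pet").images[0]
image.save("my-pet-sample.png")
```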
Sample pictures of this concept:


|
H1032200368/tunes
|
H1032200368
| 2024-01-23T17:10:43Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"musicgen",
"text-to-audio",
"arxiv:2306.05284",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2024-01-23T16:26:08Z |
---
inference: true
tags:
- musicgen
license: cc-by-nc-4.0
pipeline_tag: text-to-audio
widget:
- text: "a funky house with 80s hip hop vibes"
example_title: "Prompt 1"
- text: "a chill song with influences from lofi, chillstep and downtempo"
example_title: "Prompt 2"
- text: "a catchy beat for a podcast intro"
example_title: "Prompt 3"
---
# MusicGen - Small - 300M
MusicGen is a text-to-music model capable of generating high-quality music samples conditioned on text descriptions or audio prompts.
It is a single stage auto-regressive Transformer model trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz.
Unlike existing methods, like MusicLM, MusicGen doesn't require a self-supervised semantic representation, and it generates all 4 codebooks in one pass.
By introducing a small delay between the codebooks, we show we can predict them in parallel, thus having only 50 auto-regressive steps per second of audio.
MusicGen was published in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by *Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez*.
Four checkpoints are released:
- [**small** (this checkpoint)](https://huggingface.co/facebook/musicgen-small)
- [medium](https://huggingface.co/facebook/musicgen-medium)
- [large](https://huggingface.co/facebook/musicgen-large)
- [melody](https://huggingface.co/facebook/musicgen-melody)
## Example
Try out MusicGen yourself!
* Audiocraft Colab:
<a target="_blank" href="https://colab.research.google.com/drive/1fxGqfg96RBUvGxZ1XXN07s3DthrKUl4-?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Colab:
<a target="_blank" href="https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/MusicGen.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Demo:
<a target="_blank" href="https://huggingface.co/spaces/facebook/MusicGen">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>
## 🤗 Transformers Usage
You can run MusicGen locally with the 🤗 Transformers library from version 4.31.0 onwards.
1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) and scipy:
```
pip install --upgrade pip
pip install --upgrade transformers scipy
```
2. Run inference via the `Text-to-Audio` (TTA) pipeline. You can infer the MusicGen model via the TTA pipeline in just a few lines of code!
```python
from transformers import pipeline
import scipy
synthesiser = pipeline("text-to-audio", "facebook/musicgen-small")
music = synthesiser("lo-fi music with a soothing melody", forward_params={"do_sample": True})
scipy.io.wavfile.write("musicgen_out.wav", rate=music["sampling_rate"], data=music["audio"])
```
3. Run inference via the Transformers modelling code. You can use the processor + generate code to convert text into a mono 32 kHz audio waveform for more fine-grained control.
```python
from transformers import AutoProcessor, MusicgenForConditionalGeneration
processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
inputs = processor(
text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"],
padding=True,
return_tensors="pt",
)
audio_values = model.generate(**inputs, max_new_tokens=256)
```
4. Listen to the audio samples either in an ipynb notebook:
```python
from IPython.display import Audio
sampling_rate = model.config.audio_encoder.sampling_rate
Audio(audio_values[0].numpy(), rate=sampling_rate)
```
Or save them as a `.wav` file using a third-party library, e.g. `scipy`:
```python
import scipy
sampling_rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy())
```
For more details on using the MusicGen model for inference using the 🤗 Transformers library, refer to the [MusicGen docs](https://huggingface.co/docs/transformers/model_doc/musicgen).
## Audiocraft Usage
You can also run MusicGen locally through the original [Audiocraft library](https://github.com/facebookresearch/audiocraft):
1. First install the [`audiocraft` library](https://github.com/facebookresearch/audiocraft)
```
pip install git+https://github.com/facebookresearch/audiocraft.git
```
2. Make sure to have [`ffmpeg`](https://ffmpeg.org/download.html) installed:
```
apt-get install ffmpeg
```
3. Run the following Python code:
```py
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write
model = MusicGen.get_pretrained("small")
model.set_generation_params(duration=8) # generate 8 seconds.
descriptions = ["happy rock", "energetic EDM"]
wav = model.generate(descriptions) # generates 2 samples.
for idx, one_wav in enumerate(wav):
# Will save under {idx}.wav, with loudness normalization at -14 db LUFS.
audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")
```
## Model details
**Organization developing the model:** The FAIR team of Meta AI.
**Model date:** MusicGen was trained between April 2023 and May 2023.
**Model version:** This is the version 1 of the model.
**Model type:** MusicGen consists of an EnCodec model for audio tokenization and an auto-regressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters; and in two variants: a model trained for the text-to-music generation task and a model trained for melody-guided music generation.
**Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284).
**Citation details:**
```
@misc{copet2023simple,
title={Simple and Controllable Music Generation},
author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez},
year={2023},
eprint={2306.05284},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
**License:** Code is released under MIT, model weights are released under CC-BY-NC 4.0.
**Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue.
## Intended use
**Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including:
- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science
- Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs
**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models.
**Out-of-scope use cases:** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
## Metrics
**Models performance measures:** We used the following objective measure to evaluate the model on a standard music benchmark:
- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish)
- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST)
- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model
Additionally, we run qualitative studies with human participants, evaluating the performance of the model with the following axes:
- Overall quality of the music samples;
- Text relevance to the provided text input;
- Adherence to the melody for melody-guided music generation.
More details on performance measures and human studies can be found in the paper.
**Decision thresholds:** Not applicable.
## Evaluation datasets
The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set.
## Training datasets
The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing.
## Evaluation results
Below are the objective metrics obtained on MusicCaps with the released model. Note that for the publicly released models, we had all the datasets go through a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs), in order to keep only the instrumental part. This explains the difference in objective metrics with the models used in the paper.
| Model | Frechet Audio Distance | KLD | Text Consistency | Chroma Cosine Similarity |
|---|---|---|---|---|
| **facebook/musicgen-small** | 4.88 | 1.42 | 0.27 | - |
| facebook/musicgen-medium | 5.14 | 1.38 | 0.28 | - |
| facebook/musicgen-large | 5.48 | 1.37 | 0.28 | - |
| facebook/musicgen-melody | 4.93 | 1.41 | 0.27 | 0.44 |
More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284), in the Results section.
## Limitations and biases
**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data; we believe that scaling the model to larger datasets can further improve its performance.
**Mitigations:** Vocals have been removed from the data source using corresponding tags, and then using a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs).
**Limitations:**
- The model is not able to generate realistic vocals.
- The model has been trained with English descriptions and will not perform as well in other languages.
- The model does not perform equally well for all music styles and cultures.
- The model sometimes generates end of songs, collapsing to silence.
- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results.
**Biases:** The source of data is potentially lacking diversity and all music cultures are not equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exists. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive.
**Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will allow broadening the application to new and more representative data.
**Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks.
|
web2savar/w2v-bert-2.0-mongolian-colab-CV16.0
|
web2savar
| 2024-01-23T17:08:07Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_11_0",
"base_model:ylacombe/w2v-bert-2.0",
"base_model:finetune:ylacombe/w2v-bert-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-23T16:58:25Z |
---
base_model: ylacombe/w2v-bert-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_11_0
model-index:
- name: w2v-bert-2.0-mongolian-colab-CV16.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v-bert-2.0-mongolian-colab-CV16.0
This model is a fine-tuned version of [ylacombe/w2v-bert-2.0](https://huggingface.co/ylacombe/w2v-bert-2.0) on the common_voice_11_0 dataset.
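A minimal transcription sketch with the 🤗 Transformers `automatic-speech-recognition` pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint and transcribe a local audio file.
asr = pipeline("automatic-speech-recognition", model="web2savar/w2v-bert-2.0-mongolian-colab-CV16.0")
print(asr("sample.wav")["text"])
```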
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Mihaiii/stablelm-zephyr-3b-OV_FP14-4BIT
|
Mihaiii
| 2024-01-23T17:07:23Z | 2 | 0 |
transformers
|
[
"transformers",
"openvino",
"stablelm_epoch",
"text-generation",
"conversational",
"custom_code",
"license:other",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-01-23T15:34:13Z |
---
library_name: transformers
license: other
---
The quantized version of [stablelm-zephyr-3b](https://huggingface.co/stabilityai/stablelm-zephyr-3b) after running the steps from [here](https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/273-stable-zephyr-3b-chatbot/273-stable-zephyr-3b-chatbot.ipynb).
You can use it like this (steps taken from the above link):
```bash
pip install -q git+https://github.com/huggingface/optimum-intel.git@e22a2ac26b3a6c7854da956d538f784ebeca879b onnx openvino-nightly
```
then
```python
from optimum.intel.openvino import OVModelForCausalLM
from transformers import AutoConfig, AutoTokenizer
from optimum.utils import NormalizedTextConfig, NormalizedConfigManager
NormalizedConfigManager._conf['stablelm_epoch'] = NormalizedTextConfig.with_args(num_layers='num_hidden_layers', num_attention_heads='num_attention_heads')
NormalizedConfigManager._conf['stablelm-epoch'] = NormalizedTextConfig.with_args(num_layers='num_hidden_layers', num_attention_heads='num_attention_heads')
model_path = 'Mihaiii/stablelm-zephyr-3b-OV_FP14-4BIT'
model = OVModelForCausalLM.from_pretrained(model_path, compile=False, config=AutoConfig.from_pretrained(model_path, trust_remote_code=True), stateful=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
prompt = [{'role': 'user', 'content': 'List 3 synonyms for the word "tiny"'}]
inputs = tokenizer.apply_chat_template(
prompt,
add_generation_prompt=True,
return_tensors='pt'
)
tokens = model.generate(
inputs.to(model.device),
max_new_tokens=1024,
temperature=0.8,
do_sample=True
)
print(tokenizer.decode(tokens[0], skip_special_tokens=False))
```
|
candyhaws/a2c-PandaReachDense-v3
|
candyhaws
| 2024-01-23T17:05:31Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-23T17:01:12Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.20 +/- 0.12
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename is an assumption; adjust it to the file stored in this repo):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained A2C agent.
checkpoint = load_from_hub("candyhaws/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
sprenkamp/BGB
|
sprenkamp
| 2024-01-23T17:04:49Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-21T16:17:20Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
LoneStriker/speechless-zephyr-code-functionary-7b-5.0bpw-h6-exl2
|
LoneStriker
| 2024-01-23T16:56:31Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T16:54:31Z |
---
language:
- en
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
---
<p><h1> speechless-zephyr-code-functionary-7b </h1></p>
This model is one of the moloras (Mixture-of-Multi-LoRAs) experiments.
LoRA modules are extracted from the models below (all based on Mistral-7B-v0.1); each LoRA module has its own unique skills. By using multi-loras, they can be combined statically or dynamically to form a versatile new model.
- HuggingFaceH4/zephyr-7b-beta (Uncensored Model)
- meetkai/functionary-small-v2.2 (Execute functions/plugins)
- uukuguy/speechless-code-mistral-7b-v1.0 (Enhance Coding)
The entire process is completed through the use of extract-lora, merge-lora, and lora-hub provided by multi-loras.
The router of mixture-of-multi-loras enables automatic assembly of LoRA modules, using a gradient-free approach to obtain the coefficients of the LoRA modules and requiring only a handful of inference steps for unseen tasks.
Code: https://github.com/uukuguy/multi_loras
## LM-Evaluation-Harness
[Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
| Metric | Value |
| --- | --- |
| ARC | 61.52 |
| HellaSwag | 83.88 |
| MMLU | 64.71 |
| TruthfulQA | 44.99 |
| Winogrande | 78.69 |
| GSM8K | 43.82 |
| Average | 62.93 |
|
LoneStriker/speechless-zephyr-code-functionary-7b-4.0bpw-h6-exl2
|
LoneStriker
| 2024-01-23T16:54:29Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T16:52:48Z |
---
language:
- en
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
---
<p><h1> speechless-zephyr-code-functionary-7b </h1></p>
This model is one of the moloras (Mixture-of-Multi-LoRAs) experiments.
LoRA modules are extracted from the models below (all based on Mistral-7B-v0.1); each LoRA module has its own unique skills. By using multi-loras, they can be combined statically or dynamically to form a versatile new model.
- HuggingFaceH4/zephyr-7b-beta (Uncensored Model)
- meetkai/functionary-small-v2.2 (Execute functions/plugins)
- uukuguy/speechless-code-mistral-7b-v1.0 (Enhance Coding)
The entire process is completed through the use of extract-lora, merge-lora, and lora-hub provided by multi-loras.
The router of mixture-of-multi-loras enables automatic assembly of LoRA modules, using a gradient-free approach to obtain the coefficients of the LoRA modules and requiring only a handful of inference steps for unseen tasks.
Code: https://github.com/uukuguy/multi_loras
## LM-Evaluation-Harness
[Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
| Metric | Value |
| --- | --- |
| ARC | 61.52 |
| HellaSwag | 83.88 |
| MMLU | 64.71 |
| TruthfulQA | 44.99 |
| Winogrande | 78.69 |
| GSM8K | 43.82 |
| Average | 62.93 |
|
xianbao/test-model
|
xianbao
| 2024-01-23T16:31:08Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-06T01:21:03Z |
---
extra_gated_prompt: "You agree to not use the model to conduct experiments that cause harm to human subjects."
extra_gated_fields:
Company: text
Country: text
I agree to use this model for non-commercial use ONLY: checkbox
---
## 📌 Introduction
- 🤖 The Yi series models are the next generation of open-source large language models trained from scratch by [01.AI](https://01.ai/).
- 🙌 Targeted as a bilingual language model and trained on a 3T multilingual corpus, the Yi series models have become some of the strongest LLMs worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more. For example,
- For English language capability, the Yi series models ranked 2nd (just behind GPT-4), outperforming other LLMs (such as LLaMA2-chat-70B, Claude 2, and ChatGPT) on the [AlpacaEval Leaderboard](https://tatsu-lab.github.io/alpaca_eval/) in Dec 2023.
- For Chinese language capability, the Yi series models landed in 2nd place (following GPT-4), surpassing other LLMs (such as Baidu ERNIE, Qwen, and Baichuan) on the [SuperCLUE](https://www.superclueai.com/) in Oct 2023.
- 🙏 (Credits to LLaMA) Thanks to the Transformer and LLaMA open-source communities for reducing the effort required to build from scratch and enabling the use of the same tools within the AI ecosystem.
<details style="display: inline;"><summary> If you're interested in Yi's adoption of LLaMA architecture and license usage policy, see <span style="color: green;">Yi's relation with LLaMA.</span> ⬇️</summary> <ul>
> 💡 TL;DR
>
> The Yi series models adopt the same model architecture as LLaMA but are **NOT** derivatives of LLaMA.
- Both Yi and LLaMA are based on the Transformer structure, which has been the standard architecture for large language models since 2018.
- Grounded in the Transformer architecture, LLaMA has become a new cornerstone for the majority of state-of-the-art open-source models due to its excellent stability, reliable convergence, and robust compatibility. This positions LLaMA as the recognized foundational framework for models including Yi.
- Thanks to the Transformer and LLaMA architectures, other models can leverage their power, reducing the effort required to build from scratch and enabling the utilization of the same tools within their ecosystems.
- However, the Yi series models are NOT derivatives of LLaMA, as they do not use LLaMA's weights.
- As LLaMA's structure is employed by the majority of open-source models, the key factors of determining model performance are training datasets, training pipelines, and training infrastructure.
- Developing in a unique and proprietary way, Yi has independently created its own high-quality training datasets, efficient training pipelines, and robust training infrastructure entirely from the ground up. This effort has led to excellent performance, with the Yi series models ranking just behind GPT-4 and surpassing LLaMA on the [Alpaca Leaderboard in Dec 2023](https://tatsu-lab.github.io/alpaca_eval/).
</ul>
</details>
|
CLMBR/npi-only-transformer-4
|
CLMBR
| 2024-01-23T16:25:42Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T14:37:02Z |
---
tags:
- generated_from_trainer
model-index:
- name: npi-only-transformer-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# npi-only-transformer-4
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8598
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2235 | 0.03 | 76320 | 4.1957 |
| 4.019 | 1.03 | 152640 | 4.0271 |
| 3.9115 | 0.03 | 228960 | 3.9505 |
| 3.8389 | 1.03 | 305280 | 3.9099 |
| 3.7889 | 0.03 | 381600 | 3.8846 |
| 3.749 | 1.03 | 457920 | 3.8686 |
| 3.7151 | 0.03 | 534240 | 3.8581 |
| 3.6879 | 1.03 | 610560 | 3.8510 |
| 3.6587 | 0.03 | 686880 | 3.8468 |
| 3.6325 | 1.03 | 763200 | 3.8441 |
| 3.6082 | 0.03 | 839520 | 3.8417 |
| 3.5868 | 1.03 | 915840 | 3.8415 |
| 3.5695 | 0.03 | 992160 | 3.8415 |
| 3.5516 | 1.03 | 1068480 | 3.8433 |
| 3.5316 | 0.03 | 1144800 | 3.8432 |
| 3.5291 | 1.03 | 1221120 | 3.8443 |
| 3.5091 | 0.03 | 1297440 | 3.8459 |
| 3.4953 | 1.03 | 1373760 | 3.8458 |
| 3.4831 | 0.03 | 1450080 | 3.8475 |
| 3.4707 | 1.03 | 1526400 | 3.8479 |
| 3.4629 | 0.03 | 1602720 | 3.8500 |
| 3.4549 | 0.03 | 1679040 | 3.8510 |
| 3.4461 | 1.03 | 1755360 | 3.8524 |
| 3.4385 | 0.03 | 1831680 | 3.8544 |
| 3.426 | 1.03 | 1908000 | 3.8561 |
| 3.4132 | 0.03 | 1984320 | 3.8569 |
| 3.399 | 1.03 | 2060640 | 3.8577 |
| 3.3863 | 0.03 | 2136960 | 3.8583 |
| 3.376 | 1.03 | 2213280 | 3.8598 |
| 3.3638 | 0.03 | 2289600 | 3.8609 |
| 3.3519 | 1.03 | 2365920 | 3.8610 |
| 3.3527 | 0.03 | 2442240 | 3.8618 |
| 3.338 | 1.03 | 2518560 | 3.8625 |
| 3.3299 | 0.03 | 2594880 | 3.8628 |
| 3.3204 | 0.03 | 2671200 | 3.8632 |
| 3.3114 | 1.03 | 2747520 | 3.8629 |
| 3.3075 | 0.03 | 2823840 | 3.8630 |
| 3.3027 | 1.03 | 2900160 | 3.8619 |
| 3.2984 | 0.03 | 2976480 | 3.8611 |
| 3.2935 | 1.02 | 3052726 | 3.8598 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ambarish004/vit-base-patch16-224-finetuned-polyterrasse
|
ambarish004
| 2024-01-23T16:19:27Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-22T11:04:41Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-finetuned-polyterrasse
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-polyterrasse
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2635
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
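No usage code is provided yet; a minimal inference sketch with the `transformers` image-classification pipeline (the repo ID is taken from this card, the image path is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned ViT classifier from the Hub (repo ID from this card)
classifier = pipeline("image-classification", model="ambarish004/vit-base-patch16-224-finetuned-polyterrasse")

# "example.jpg" is a placeholder path; replace it with your own image
print(classifier("example.jpg"))
```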
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.86 | 3 | 0.5713 | 0.6667 |
| No log | 2.0 | 7 | 0.2635 | 1.0 |
| 0.3363 | 2.86 | 10 | 0.1832 | 1.0 |
| 0.3363 | 4.0 | 14 | 0.1458 | 1.0 |
| 0.3363 | 4.29 | 15 | 0.1437 | 1.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
jeremygf/t5-small-samsum
|
jeremygf
| 2024-01-23T16:11:50Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-23T15:48:18Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: t5-small-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-samsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8947
## Model description
More information needed
## Intended uses & limitations
More information needed
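No usage code is included yet; a minimal summarization sketch with the `transformers` pipeline (repo ID from this card; the dialogue is a placeholder in the presumed SAMSum chat style):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="jeremygf/t5-small-samsum")

# Placeholder dialogue; replace with your own conversation transcript
dialogue = "Anna: Are we still on for lunch?\nTom: Yes, see you at noon."
print(summarizer(dialogue, max_length=50)[0]["summary_text"])
```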
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2414 | 0.27 | 500 | 2.0112 |
| 2.1241 | 0.54 | 1000 | 1.9260 |
| 2.0784 | 0.81 | 1500 | 1.8947 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.1+cu117
- Datasets 2.16.0
- Tokenizers 0.15.0
|
eloi-goncalves/handsfree_intent_classification_2
|
eloi-goncalves
| 2024-01-23T16:06:00Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-09T03:29:30Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: handsfree_intent_classification_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# handsfree_intent_classification_2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0180
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
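A minimal inference sketch with the `transformers` text-classification pipeline (repo ID from this card; the example utterance is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="eloi-goncalves/handsfree_intent_classification_2")

# Placeholder utterance; replace with your own input
print(classifier("Turn on the living room lights"))
```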
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0218 | 1.0 | 1586 | 0.0200 | 0.9908 |
| 0.0193 | 2.0 | 3172 | 0.0180 | 0.9925 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
princeton-nlp/Sheared-LLaMA-1.3B
|
princeton-nlp
| 2024-01-23T16:04:46Z | 27,816 | 93 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2310.06694",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-10T15:22:13Z |
---
license: apache-2.0
---
**Paper**: [https://arxiv.org/pdf/2310.06694.pdf](https://arxiv.org/pdf/2310.06694.pdf)
**Code**: https://github.com/princeton-nlp/LLM-Shearing
**Models**: [Sheared-LLaMA-1.3B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B), [Sheared-LLaMA-2.7B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-2.7B)
**Pruned Models without Continued Pre-training**: [Sheared-LLaMA-1.3B-Pruned](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B-Pruned), [Sheared-LLaMA-2.7B-Pruned](https://huggingface.co/princeton-nlp/Sheared-LLaMA-2.7B-Pruned)
**Instruction-tuned Models**: [Sheared-LLaMA-1.3B-ShareGPT](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B-ShareGPT), [Sheared-LLaMA-2.7B-ShareGPT](https://huggingface.co/princeton-nlp/Sheared-LLaMA-2.7B-ShareGPT)
**License**: Must comply with the Llama 2 license, since this model is derived from Llama 2.
---
Sheared-LLaMA-1.3B is a model pruned and further pre-trained from [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf). We dynamically load data from different domains in the [RedPajama dataset](https://github.com/togethercomputer/RedPajama-Data) to prune and continue pre-training the model. We use 0.4B tokens for pruning and 50B tokens for continued pre-training of the pruned model. This model can be loaded with Hugging Face Transformers via
```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("princeton-nlp/Sheared-LLaMA-1.3B")
```
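A short generation sketch that continues from the loading snippet above (the prompt is illustrative):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/Sheared-LLaMA-1.3B")

# Illustrative prompt; `model` is the instance loaded above
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```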
- Smaller-scale
- Same vocabulary as LLaMA1 and LLaMA2
- Derived with a budget of 50B tokens by utilizing existing strong LLMs
## Downstream Tasks
We evaluate on an extensive set of downstream tasks including reasoning, reading comprehension, language modeling, and knowledge-intensive tasks. Our Sheared-LLaMA models outperform existing open-source language models of comparable size.
| Model | # Pre-training Tokens | Average Performance |
| ------------------- | --------------------- | ------------------- |
| LLaMA2-7B | 2T | 64.6 |
**1.3B**
| Model | # Pre-training Tokens | Average Performance |
| ------------------- | --------------------- | ------------------- |
| OPT-1.3B | 300B | 48.2 |
| Pythia-1.4B | 300B | 48.9 |
| **Sheared-LLaMA-1.3B** | **50B** | **51.0** |
**3B**
| Model | # Pre-training Tokens | Average Performance |
| ------------------- | --------------------- | ------------------- |
| OPT-2.7B | 300B | 51.4 |
| Pythia-2.8B | 300B | 52.5 |
| INCITE-Base-3B | 800B | 54.7 |
| Open-LLaMA-3B-v1 | 1T | 55.1 |
| Open-LLaMA-3B-v2 | 1T | 55.7 |
| Sheared-LLaMA-2.7B | 50B | 56.7 |
## Bibtex
```
@article{xia2023sheared,
title={Sheared llama: Accelerating language model pre-training via structured pruning},
author={Xia, Mengzhou and Gao, Tianyu and Zeng, Zhiyuan and Chen, Danqi},
journal={arXiv preprint arXiv:2310.06694},
year={2023}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_princeton-nlp__Sheared-LLaMA-1.3B)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 31.47 |
| ARC (25-shot) | 32.85 |
| HellaSwag (10-shot) | 60.91 |
| MMLU (5-shot) | 25.71 |
| TruthfulQA (0-shot) | 37.14 |
| Winogrande (5-shot) | 58.64 |
| GSM8K (5-shot) | 0.45 |
| DROP (3-shot) | 4.56 |
|
MaziyarPanahi/zephyr-7b-beta-Mistral-7B-Instruct-v0.1
|
MaziyarPanahi
| 2024-01-23T16:04:40Z | 25 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"7b",
"mistralai/Mistral-7B-Instruct-v0.1",
"HuggingFaceH4/zephyr-7b-beta",
"pytorch",
"generated_from_trainer",
"en",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"arxiv:2305.18290",
"arxiv:2310.16944",
"base_model:mistralai/Mistral-7B-v0.1",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us",
"conversational",
"license:apache-2.0"
] |
text-generation
| 2024-01-20T17:12:39Z |
---
license: apache-2.0
tags:
- Safetensors
- text-generation-inference
- merge
- mistral
- 7b
- mistralai/Mistral-7B-Instruct-v0.1
- HuggingFaceH4/zephyr-7b-beta
- transformers
- pytorch
- safetensors
- mistral
- text-generation
- generated_from_trainer
- en
- dataset:HuggingFaceH4/ultrachat_200k
- dataset:HuggingFaceH4/ultrafeedback_binarized
- arxiv:2305.18290
- arxiv:2310.16944
- base_model:mistralai/Mistral-7B-v0.1
- license:mit
- model-index
- autotrain_compatible
- endpoints_compatible
- has_space
- text-generation-inference
- region:us
---
# zephyr-7b-beta-Mistral-7B-Instruct-v0.1
zephyr-7b-beta-Mistral-7B-Instruct-v0.1 is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
* [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/MaziyarPanahi/zephyr-7b-beta-Mistral-7B-Instruct-v0.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/MaziyarPanahi/zephyr-7b-beta-Mistral-7B-Instruct-v0.1-GGUF)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.1
layer_range: [0, 32]
- model: HuggingFaceH4/zephyr-7b-beta
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/zephyr-7b-beta-Mistral-7B-Instruct-v0.1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
SuvajitGB/NeuralPipe-7B-slerp
|
SuvajitGB
| 2024-01-23T16:01:16Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"OpenPipe/mistral-ft-optimized-1218",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:OpenPipe/mistral-ft-optimized-1218",
"base_model:merge:OpenPipe/mistral-ft-optimized-1218",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-09T17:42:27Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
base_model:
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
# NeuralPipe-7B-slerp
This model is a merge of the following models made with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: OpenPipe/mistral-ft-optimized-1218
layer_range: [0, 32]
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "suvajitgb/NeuralPipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
princeton-nlp/Sheared-LLaMA-2.7B-Pruned
|
princeton-nlp
| 2024-01-23T15:59:40Z | 53 | 3 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2310.06694",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T15:51:55Z |
---
license: llama2
---
**Paper**: [https://arxiv.org/pdf/2310.06694.pdf](https://arxiv.org/pdf/2310.06694.pdf)
**Code**: https://github.com/princeton-nlp/LLM-Shearing
**Models**: [Sheared-LLaMA-1.3B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B), [Sheared-LLaMA-2.7B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-2.7B)
**Pruned Models without Continued Pre-training**: [Sheared-LLaMA-1.3B-Pruned](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B-Pruned), [Sheared-LLaMA-2.7B-Pruned](https://huggingface.co/princeton-nlp/Sheared-LLaMA-2.7B-Pruned)
**Instruction-tuned Models**: [Sheared-LLaMA-1.3B-ShareGPT](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B-ShareGPT), [Sheared-LLaMA-2.7B-ShareGPT](https://huggingface.co/princeton-nlp/Sheared-LLaMA-2.7B-ShareGPT)
**License**: Must comply with the Llama 2 license, since this model is derived from Llama 2.
Sheared-LLaMA-2.7B-Pruned is the model pruned from [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) **without continued pre-training**.
We used roughly 0.4B tokens for the pruning experiment. This model can be useful for studying:
- effective data mixtures for continued pre-training
- comparisons to other pruning techniques
- extensive evaluations to understand how pruning affects knowledge and reasoning capabilities of LLMs
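The checkpoint is in standard Hugging Face format, so it can presumably be loaded the same way as the other Sheared-LLaMA models; a minimal sketch:
```python
from transformers import AutoModelForCausalLM

# Load the pruned (not continued-pre-trained) checkpoint
model = AutoModelForCausalLM.from_pretrained("princeton-nlp/Sheared-LLaMA-2.7B-Pruned")
```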
|
kimwooglae/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34
|
kimwooglae
| 2024-01-23T15:58:44Z | 59 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T15:29:31Z |
---
language:
- en
pipeline_tag: text-generation
license: cc-by-nc-4.0
---
# WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34
## Model Details
**Developed by**
[Inswave Systems](https://www.inswave.com) UI Platform Team
**Base Model**
[LDCC/LDCC-SOLAR-10.7B](https://huggingface.co/LDCC/LDCC-SOLAR-10.7B)
# Implementation Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "kimwooglae/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34"
model = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
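A short generation sketch that continues from the loading code above (the prompt and generation settings are illustrative, not from the model authors):
```python
prompt = "Hello, please introduce yourself."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```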
---
|
princeton-nlp/Sheared-LLaMA-1.3B-Pruned
|
princeton-nlp
| 2024-01-23T15:57:39Z | 122 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2310.06694",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T15:48:18Z |
---
license: llama2
---
**Paper**: [https://arxiv.org/pdf/2310.06694.pdf](https://arxiv.org/pdf/2310.06694.pdf)
**Code**: https://github.com/princeton-nlp/LLM-Shearing
**Models**: [Sheared-LLaMA-1.3B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B), [Sheared-LLaMA-2.7B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-2.7B)
**Pruned Models without Continued Pre-training**: [Sheared-LLaMA-1.3B-Pruned](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B-Pruned), [Sheared-LLaMA-2.7B-Pruned](https://huggingface.co/princeton-nlp/Sheared-LLaMA-2.7B-Pruned)
**Instruction-tuned Models**: [Sheared-LLaMA-1.3B-ShareGPT](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B-ShareGPT), [Sheared-LLaMA-2.7B-ShareGPT](https://huggingface.co/princeton-nlp/Sheared-LLaMA-2.7B-ShareGPT)
**License**: Must comply with the Llama 2 license, since this model is derived from Llama 2.
Sheared-LLaMA-1.3B-Pruned is the model pruned from [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) **without continued pre-training**.
We used roughly 0.4B tokens for the pruning experiment. This model can be useful for studying:
- effective data mixtures for continued pre-training
- comparisons to other pruning techniques
- extensive evaluations to understand how pruning affects knowledge and reasoning capabilities of LLMs
|
Abhinav28/large-v3-hi-commonvoice-11-peft-trained-adapter-withfp16-30-percent-droupout-0.05
|
Abhinav28
| 2024-01-23T15:55:39Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Abhinav28/openai-whisper-large-v3",
"base_model:adapter:Abhinav28/openai-whisper-large-v3",
"region:us"
] | null | 2024-01-23T11:29:54Z |
---
library_name: peft
base_model: Abhinav28/openai-whisper-large-v3
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
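A minimal loading sketch, assuming the standard PEFT adapter workflow with the base model listed in this card's metadata (`Abhinav28/openai-whisper-large-v3`); the author's actual setup may differ:
```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Base Whisper model from this card's metadata, then the PEFT adapter on top
base = WhisperForConditionalGeneration.from_pretrained("Abhinav28/openai-whisper-large-v3")
model = PeftModel.from_pretrained(
    base,
    "Abhinav28/large-v3-hi-commonvoice-11-peft-trained-adapter-withfp16-30-percent-droupout-0.05",
)
processor = WhisperProcessor.from_pretrained("Abhinav28/openai-whisper-large-v3")
```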
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
raj-rahullll/my-pet-cat
|
raj-rahullll
| 2024-01-23T15:51:32Z | 0 | 0 | null |
[
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-01-23T15:48:02Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat- Dreambooth model trained by raj-rahullll following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 22BTRIS045
Sample pictures of this concept:



|
badokorach/Albert-finetuned-210124
|
badokorach
| 2024-01-23T15:50:42Z | 46 | 0 |
transformers
|
[
"transformers",
"tf",
"albert",
"question-answering",
"generated_from_keras_callback",
"base_model:twmkn9/albert-base-v2-squad2",
"base_model:finetune:twmkn9/albert-base-v2-squad2",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-01-21T12:41:17Z |
---
base_model: twmkn9/albert-base-v2-squad2
tags:
- generated_from_keras_callback
model-index:
- name: badokorach/Albert-finetuned-210124
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# badokorach/Albert-finetuned-210124
This model is a fine-tuned version of [twmkn9/albert-base-v2-squad2](https://huggingface.co/twmkn9/albert-base-v2-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1484
- Validation Loss: 0.0
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
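A minimal inference sketch with the `transformers` question-answering pipeline (this is a TensorFlow checkpoint, so the TF framework is requested explicitly; the question and context are placeholders):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="badokorach/Albert-finetuned-210124",
    framework="tf",  # the repository contains TensorFlow weights
)

# Placeholder question and context; replace with your own
print(qa(question="Who fine-tuned the model?", context="The model was fine-tuned by badokorach."))
```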
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2265, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.002}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.5485 | 0.0 | 0 |
| 1.7396 | 0.0 | 1 |
| 1.2623 | 0.0 | 2 |
| 0.9069 | 0.0 | 3 |
| 0.6427 | 0.0 | 4 |
| 0.4773 | 0.0 | 5 |
| 0.3798 | 0.0 | 6 |
| 0.3165 | 0.0 | 7 |
| 0.2573 | 0.0 | 8 |
| 0.2261 | 0.0 | 9 |
| 0.2054 | 0.0 | 10 |
| 0.1899 | 0.0 | 11 |
| 0.1712 | 0.0 | 12 |
| 0.1603 | 0.0 | 13 |
| 0.1484 | 0.0 | 14 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
signon-project/mbart-large-cc25-ft-amr30-nl
|
signon-project
| 2024-01-23T15:48:47Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/mbart-large-cc25",
"base_model:finetune:facebook/mbart-large-cc25",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-23T15:47:36Z |
---
base_model: facebook/mbart-large-cc25
tags:
- generated_from_trainer
model-index:
- name: nl+no_processing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nl+no_processing
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6038
- Smatch Precision: 73.7
- Smatch Recall: 76.48
- Smatch Fscore: 75.06
- Smatch Unparsable: 0
- Percent Not Recoverable: 0.2323
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Smatch Precision | Smatch Recall | Smatch Fscore | Smatch Unparsable | Percent Not Recoverable |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:-------------:|:-----------------:|:-----------------------:|
| 0.8025 | 1.0 | 3477 | 1.3793 | 18.51 | 65.71 | 28.88 | 0 | 0.0 |
| 0.13 | 2.0 | 6954 | 0.9377 | 27.0 | 71.3 | 39.16 | 0 | 0.1161 |
| 0.0953 | 3.0 | 10431 | 0.7509 | 34.09 | 72.74 | 46.42 | 0 | 0.1161 |
| 0.1386 | 4.0 | 13908 | 0.8524 | 33.38 | 73.32 | 45.87 | 2 | 0.0 |
| 0.0974 | 5.0 | 17385 | 0.6957 | 41.69 | 73.92 | 53.31 | 0 | 0.0 |
| 0.0705 | 6.0 | 20862 | 0.6145 | 47.98 | 75.12 | 58.55 | 0 | 0.0 |
| 0.2265 | 7.0 | 24339 | 0.6439 | 47.06 | 75.53 | 57.99 | 0 | 0.0 |
| 0.0506 | 8.0 | 27817 | 0.5974 | 53.0 | 76.95 | 62.77 | 0 | 0.0 |
| 0.064 | 9.0 | 31294 | 0.6387 | 51.83 | 77.47 | 62.11 | 0 | 0.0 |
| 0.0112 | 10.0 | 34771 | 0.6066 | 54.82 | 76.98 | 64.03 | 0 | 0.0 |
| 0.047 | 11.0 | 38248 | 0.5970 | 60.36 | 77.04 | 67.69 | 0 | 0.0 |
| 0.0134 | 12.0 | 41725 | 0.5675 | 61.72 | 77.15 | 68.58 | 0 | 0.0 |
| 0.0656 | 13.0 | 45202 | 0.6210 | 62.8 | 76.92 | 69.15 | 0 | 0.0581 |
| 0.015 | 14.0 | 48679 | 0.6257 | 62.8 | 77.32 | 69.31 | 0 | 0.0 |
| 0.0134 | 15.0 | 52156 | 0.5635 | 66.7 | 77.34 | 71.63 | 0 | 0.1161 |
| 0.0265 | 16.0 | 55634 | 0.5839 | 67.61 | 76.76 | 71.89 | 0 | 0.0581 |
| 0.0219 | 17.0 | 59111 | 0.5894 | 68.66 | 77.43 | 72.78 | 0 | 0.1161 |
| 0.0008 | 18.0 | 62588 | 0.5981 | 68.44 | 77.57 | 72.72 | 0 | 0.0 |
| 0.0157 | 19.0 | 66065 | 0.6184 | 69.88 | 77.42 | 73.46 | 0 | 0.0581 |
| 0.0334 | 20.0 | 69542 | 0.6026 | 70.76 | 77.37 | 73.92 | 0 | 0.2323 |
| 0.0619 | 21.0 | 73019 | 0.6021 | 72.03 | 77.0 | 74.44 | 0 | 0.1742 |
| 0.0075 | 22.0 | 76496 | 0.6166 | 72.33 | 76.74 | 74.47 | 0 | 0.0581 |
| 0.0164 | 23.0 | 79973 | 0.6100 | 72.75 | 77.03 | 74.83 | 0 | 0.2323 |
| 0.0011 | 24.0 | 83451 | 0.6037 | 73.7 | 76.51 | 75.08 | 0 | 0.2323 |
| 0.0865 | 25.0 | 86925 | 0.6038 | 73.7 | 76.48 | 75.06 | 0 | 0.2323 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.2
- Tokenizers 0.13.3
|
signon-project/mbart-large-cc25-ft-amr30-en
|
signon-project
| 2024-01-23T15:42:02Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/mbart-large-cc25",
"base_model:finetune:facebook/mbart-large-cc25",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-23T15:40:46Z |
---
base_model: facebook/mbart-large-cc25
tags:
- generated_from_trainer
model-index:
- name: en+no_processing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# en+no_processing
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4481
- Smatch Precision: 80.57
- Smatch Recall: 83.81
- Smatch Fscore: 82.16
- Smatch Unparsable: 0
- Percent Not Recoverable: 0.3484
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Smatch Precision | Smatch Recall | Smatch Fscore | Smatch Unparsable | Percent Not Recoverable |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:-------------:|:-----------------:|:-----------------------:|
| 0.3471 | 1.0 | 3477 | 1.4889 | 22.35 | 73.05 | 34.23 | 0 | 0.1161 |
| 0.1741 | 2.0 | 6954 | 0.8681 | 30.1 | 71.92 | 42.44 | 0 | 0.1161 |
| 0.1296 | 3.0 | 10431 | 0.7081 | 38.6 | 78.68 | 51.8 | 0 | 0.0581 |
| 0.1308 | 4.0 | 13908 | 0.9546 | 37.49 | 78.23 | 50.69 | 0 | 0.0 |
| 0.2213 | 5.0 | 17385 | 0.5544 | 47.63 | 81.17 | 60.03 | 0 | 0.0 |
| 0.0317 | 6.0 | 20862 | 0.4884 | 49.3 | 80.9 | 61.27 | 0 | 0.0 |
| 0.1007 | 7.0 | 24339 | 0.4763 | 54.88 | 82.09 | 65.78 | 0 | 0.0 |
| 0.092 | 8.0 | 27817 | 0.4444 | 57.37 | 83.2 | 67.91 | 0 | 0.0 |
| 0.1051 | 9.0 | 31294 | 0.4192 | 64.37 | 83.81 | 72.82 | 0 | 0.0 |
| 0.0079 | 10.0 | 34771 | 0.4685 | 61.3 | 83.1 | 70.55 | 0 | 0.0 |
| 0.0211 | 11.0 | 38248 | 0.4389 | 63.36 | 84.57 | 72.44 | 0 | 0.1161 |
| 0.1122 | 12.0 | 41725 | 0.4146 | 69.39 | 83.56 | 75.82 | 0 | 0.0581 |
| 0.0183 | 13.0 | 45202 | 0.4003 | 73.9 | 83.71 | 78.5 | 0 | 0.0 |
| 0.0244 | 14.0 | 48679 | 0.4208 | 73.79 | 83.92 | 78.53 | 0 | 0.1161 |
| 0.0116 | 15.0 | 52156 | 0.4248 | 73.88 | 83.85 | 78.55 | 0 | 0.1161 |
| 0.0357 | 16.0 | 55634 | 0.4235 | 75.78 | 84.08 | 79.71 | 0 | 0.1161 |
| 0.0006 | 17.0 | 59111 | 0.4181 | 76.15 | 84.15 | 79.95 | 0 | 0.0581 |
| 0.0329 | 18.0 | 62588 | 0.4494 | 77.21 | 84.12 | 80.52 | 0 | 0.0 |
| 0.0003 | 19.0 | 66065 | 0.4389 | 78.02 | 84.13 | 80.96 | 0 | 0.0 |
| 0.04 | 20.0 | 69542 | 0.4439 | 78.78 | 84.23 | 81.41 | 0 | 0.0 |
| 0.0182 | 21.0 | 73019 | 0.4430 | 79.82 | 84.05 | 81.88 | 0 | 0.0581 |
| 0.0006 | 22.0 | 76496 | 0.4488 | 79.96 | 83.74 | 81.81 | 0 | 0.0581 |
| 0.0074 | 23.0 | 79973 | 0.4569 | 79.84 | 83.85 | 81.79 | 0 | 0.0581 |
| 0.0133 | 24.0 | 83451 | 0.4469 | 80.45 | 83.81 | 82.09 | 0 | 0.2904 |
| 0.0055 | 25.0 | 86925 | 0.4481 | 80.57 | 83.81 | 82.16 | 0 | 0.3484 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.2
- Tokenizers 0.13.3
|
CLMBR/npi-only-transformer-0
|
CLMBR
| 2024-01-23T15:41:08Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T13:50:56Z |
---
tags:
- generated_from_trainer
model-index:
- name: npi-only-transformer-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# npi-only-transformer-0
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8602
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2219 | 0.03 | 76320 | 4.1939 |
| 4.0171 | 1.03 | 152640 | 4.0249 |
| 3.9099 | 0.03 | 228960 | 3.9509 |
| 3.8379 | 1.03 | 305280 | 3.9097 |
| 3.7887 | 0.03 | 381600 | 3.8848 |
| 3.7486 | 0.03 | 457920 | 3.8681 |
| 3.7135 | 1.03 | 534240 | 3.8583 |
| 3.6868 | 0.03 | 610560 | 3.8516 |
| 3.6574 | 1.03 | 686880 | 3.8469 |
| 3.6311 | 0.03 | 763200 | 3.8440 |
| 3.6076 | 1.03 | 839520 | 3.8431 |
| 3.5866 | 0.03 | 915840 | 3.8422 |
| 3.5683 | 1.03 | 992160 | 3.8421 |
| 3.5492 | 0.03 | 1068480 | 3.8424 |
| 3.5304 | 1.03 | 1144800 | 3.8433 |
| 3.5315 | 0.03 | 1221120 | 3.8459 |
| 3.5103 | 1.03 | 1297440 | 3.8459 |
| 3.4974 | 0.03 | 1373760 | 3.8475 |
| 3.4858 | 1.03 | 1450080 | 3.8485 |
| 3.4723 | 0.03 | 1526400 | 3.8502 |
| 3.4644 | 1.03 | 1602720 | 3.8505 |
| 3.4557 | 0.03 | 1679040 | 3.8526 |
| 3.4466 | 1.03 | 1755360 | 3.8532 |
| 3.4389 | 0.03 | 1831680 | 3.8546 |
| 3.4245 | 1.03 | 1908000 | 3.8560 |
| 3.4119 | 0.03 | 1984320 | 3.8569 |
| 3.3964 | 1.03 | 2060640 | 3.8589 |
| 3.3868 | 0.03 | 2136960 | 3.8584 |
| 3.3744 | 1.03 | 2213280 | 3.8605 |
| 3.3638 | 0.03 | 2289600 | 3.8619 |
| 3.3497 | 1.03 | 2365920 | 3.8616 |
| 3.3566 | 0.03 | 2442240 | 3.8614 |
| 3.3404 | 1.03 | 2518560 | 3.8625 |
| 3.3326 | 0.03 | 2594880 | 3.8628 |
| 3.3241 | 1.03 | 2671200 | 3.8628 |
| 3.3149 | 0.03 | 2747520 | 3.8632 |
| 3.3085 | 1.03 | 2823840 | 3.8625 |
| 3.3024 | 0.03 | 2900160 | 3.8626 |
| 3.2978 | 1.03 | 2976480 | 3.8610 |
| 3.2933 | 0.02 | 3052726 | 3.8602 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
20Tech24/my-beautiful-cat-ewq
|
20Tech24
| 2024-01-23T15:40:03Z | 3 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-23T15:35:53Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My--Beautiful-Cat-EWQ Dreambooth model trained by 20Tech24 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 206320080
Sample pictures of this concept:
.jpg)
|
zakcroft/zephyr-7b-sft-lora
|
zakcroft
| 2024-01-23T15:39:24Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-01-23T15:37:43Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: zephyr-7b-sft-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-sft-lora
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
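A minimal loading sketch, assuming the standard PEFT LoRA workflow on top of the base model listed in this card (`mistralai/Mistral-7B-v0.1`):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter from this repository
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, "zakcroft/zephyr-7b-sft-lora")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```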
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 1.1568 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.0
- Pytorch 2.1.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
onarganogun/videomae-large-kissing_14-01-2024
|
onarganogun
| 2024-01-23T15:34:14Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-large",
"base_model:finetune:MCG-NJU/videomae-large",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2024-01-13T21:52:01Z |
---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-large
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: videomae-large-kissing_14-01-2024
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-large-kissing_14-01-2024
This model is a fine-tuned version of [MCG-NJU/videomae-large](https://huggingface.co/MCG-NJU/videomae-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3655
- Accuracy: 0.9479
- Precision: 0.9547
- Recall: 0.9405
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 18165
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|
| 0.6377 | 0.07 | 1212 | 0.6211 | 0.6645 | 0.6755 | 0.6331 |
| 0.586 | 1.07 | 2424 | 0.4979 | 0.7835 | 0.8057 | 0.7471 |
| 0.3675 | 2.07 | 3636 | 0.3910 | 0.8983 | 0.9335 | 0.8579 |
| 0.8145 | 3.07 | 4848 | 0.3776 | 0.9207 | 0.9426 | 0.8959 |
| 0.6408 | 4.07 | 6060 | 0.3674 | 0.9322 | 0.9470 | 0.9157 |
| 0.01 | 5.07 | 7272 | 0.3630 | 0.9298 | 0.9422 | 0.9157 |
| 0.0274 | 6.07 | 8484 | 0.3808 | 0.9289 | 0.9233 | 0.9355 |
| 0.0002 | 7.07 | 9696 | 0.3566 | 0.9397 | 0.9508 | 0.9273 |
| 0.0058 | 8.07 | 10908 | 0.3609 | 0.9446 | 0.9622 | 0.9256 |
| 0.1551 | 9.07 | 12120 | 0.3757 | 0.9413 | 0.9465 | 0.9355 |
| 0.1784 | 10.07 | 13332 | 0.3410 | 0.9496 | 0.9579 | 0.9405 |
| 0.0011 | 11.07 | 14544 | 0.3707 | 0.9455 | 0.9455 | 0.9455 |
| 0.0001 | 12.07 | 15756 | 0.3719 | 0.9479 | 0.9547 | 0.9405 |
| 0.0307 | 13.07 | 16968 | 0.3657 | 0.9463 | 0.9530 | 0.9388 |
| 0.0002 | 14.07 | 18165 | 0.3655 | 0.9479 | 0.9547 | 0.9405 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
KryptDo0x/ppo-LunarLander-v2
|
KryptDo0x
| 2024-01-23T15:31:08Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-20T15:47:30Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.06 +/- 13.09
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
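A hedged sketch of the usual loading pattern (the checkpoint filename below is an assumption, not confirmed by this repository):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; check the repository's file listing for the actual name
checkpoint = load_from_hub(repo_id="KryptDo0x/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```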
|
YouKnowMee/Mistral-7b-instruct-v0.2-summ-sft-dpo-ed3
|
YouKnowMee
| 2024-01-23T15:24:04Z | 0 | 0 | null |
[
"safetensors",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-01-23T15:09:43Z |
---
license: cc-by-nc-4.0
---
A description of how to load and test the model will be added soon. More details on training and data will be added as well.
### **Loading the Model**
Use the following Python code to load the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Exact loading code is still TBD by the author; a standard pattern would be:
model = AutoModelForCausalLM.from_pretrained("YouKnowMee/Mistral-7b-instruct-v0.2-summ-sft-dpo-ed3", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("YouKnowMee/Mistral-7b-instruct-v0.2-summ-sft-dpo-ed3")
```
### **Generating Text**
To generate text, use the following Python code:
```python
text = "Hi, my name is "
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
YouKnowMee/Mistral-7b-instruct-v0.2-summ-sft-dpo-ed2
|
YouKnowMee
| 2024-01-23T15:23:50Z | 0 | 0 | null |
[
"safetensors",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-01-23T15:07:19Z |
---
license: cc-by-nc-4.0
---
A description of how to load and test the model will be added soon. More details on training and data will be added as well.
### **Loading the Model**
Use the following Python code to load the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Exact loading code is still TBD by the author; a standard pattern would be:
model = AutoModelForCausalLM.from_pretrained("YouKnowMee/Mistral-7b-instruct-v0.2-summ-sft-dpo-ed2", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("YouKnowMee/Mistral-7b-instruct-v0.2-summ-sft-dpo-ed2")
```
### **Generating Text**
To generate text, use the following Python code:
```python
text = "Hi, my name is "
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
kaist-ai/metamath-langbridge-9b
|
kaist-ai
| 2024-01-23T15:22:15Z | 6 | 1 |
transformers
|
[
"transformers",
"safetensors",
"multilingual",
"af",
"am",
"ar",
"az",
"be",
"bg",
"bn",
"ca",
"ceb",
"co",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fil",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"haw",
"hi",
"hmn",
"ht",
"hu",
"hy",
"ig",
"is",
"it",
"iw",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lb",
"lo",
"lt",
"lv",
"mg",
"mi",
"mk",
"ml",
"mn",
"mr",
"ms",
"mt",
"my",
"ne",
"nl",
"no",
"ny",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sd",
"si",
"sk",
"sl",
"sm",
"sn",
"so",
"sq",
"sr",
"st",
"su",
"sv",
"sw",
"ta",
"te",
"tg",
"th",
"tr",
"uk",
"und",
"ur",
"uz",
"vi",
"xh",
"yi",
"yo",
"zh",
"zu",
"arxiv:2401.10695",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-23T10:33:40Z |
---
license: apache-2.0
language:
- multilingual
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- hi
- hmn
- ht
- hu
- hy
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
library_name: transformers
---
## Links for Reference
- **Repository: https://github.com/kaistAI/LangBridge**
- **Paper: [LangBridge: Multilingual Reasoning Without Multilingual Supervision](https://arxiv.org/pdf/2401.10695.pdf)**
- **Point of Contact: [email protected]**
# TL;DR
🤔LMs good at reasoning are mostly English-centric (MetaMath, Orca 2, etc).
😃Let’s adapt them to solve multilingual tasks. BUT without using multilingual data!
LangBridge “bridges” an mT5 encoder and the target LM together while using only English data. At test time, LangBridge models can solve multilingual reasoning tasks effectively.

# Usage
Please refer to the [Github repository](https://github.com/kaistAI/LangBridge) for detailed usage examples.
# Related Models
[Check out other LangBridge models.](https://huggingface.co/collections/kaist-ai/langbridge-65afbbdae50627e40ca58f9a)
We have:
- Llama 2
- Llemma
- MetaMath
- Code Llama
- Orca 2
# Citation
If you find the following model helpful, please consider citing our paper!
**BibTeX:**
```bibtex
@misc{yoon2024langbridge,
title={LangBridge: Multilingual Reasoning Without Multilingual Supervision},
author={Dongkeun Yoon and Joel Jang and Sungdong Kim and Seungone Kim and Sheikh Shafayat and Minjoon Seo},
year={2024},
eprint={2401.10695},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
YouKnowMee/Mistral-7b-instruct-v0.2-summ-dpo-ed3
|
YouKnowMee
| 2024-01-23T15:22:09Z | 0 | 1 | null |
[
"safetensors",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-01-23T15:05:53Z |
---
license: cc-by-nc-4.0
---
A description of how to load and test the model will be added soon. More details on training and data will be added as well.
### **Loading the Model**
Use the following Python code to load the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Exact loading code is still TBD by the author; a standard pattern would be:
model = AutoModelForCausalLM.from_pretrained("YouKnowMee/Mistral-7b-instruct-v0.2-summ-dpo-ed3", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("YouKnowMee/Mistral-7b-instruct-v0.2-summ-dpo-ed3")
```
### **Generating Text**
To generate text, use the following Python code:
```python
text = "Hi, my name is "
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
YouKnowMee/Mistral-7b-instruct-v0.2-summ-dpo-ed2
|
YouKnowMee
| 2024-01-23T15:21:50Z | 0 | 1 | null |
[
"safetensors",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-01-23T15:06:12Z |
---
license: cc-by-nc-4.0
---
A description of how to load and test the model will be added soon. More details on training and data will be added as well.
### **Loading the Model**
Use the following Python code to load the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Exact loading code is still TBD by the author; a standard pattern would be:
model = AutoModelForCausalLM.from_pretrained("YouKnowMee/Mistral-7b-instruct-v0.2-summ-dpo-ed2", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("YouKnowMee/Mistral-7b-instruct-v0.2-summ-dpo-ed2")
```
### **Generating Text**
To generate text, use the following Python code:
```python
text = "Hi, my name is "
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
kaist-ai/codellama-langbridge-9b
|
kaist-ai
| 2024-01-23T15:20:22Z | 10 | 1 |
transformers
|
[
"transformers",
"safetensors",
"multilingual",
"af",
"am",
"ar",
"az",
"be",
"bg",
"bn",
"ca",
"ceb",
"co",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fil",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"haw",
"hi",
"hmn",
"ht",
"hu",
"hy",
"ig",
"is",
"it",
"iw",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lb",
"lo",
"lt",
"lv",
"mg",
"mi",
"mk",
"ml",
"mn",
"mr",
"ms",
"mt",
"my",
"ne",
"nl",
"no",
"ny",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sd",
"si",
"sk",
"sl",
"sm",
"sn",
"so",
"sq",
"sr",
"st",
"su",
"sv",
"sw",
"ta",
"te",
"tg",
"th",
"tr",
"uk",
"und",
"ur",
"uz",
"vi",
"xh",
"yi",
"yo",
"zh",
"zu",
"arxiv:2401.10695",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-23T11:30:08Z |
---
license: apache-2.0
language:
- multilingual
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- hi
- hmn
- ht
- hu
- hy
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
library_name: transformers
---
## Links for Reference
- **Repository: https://github.com/kaistAI/LangBridge**
- **Paper: [LangBridge: Multilingual Reasoning Without Multilingual Supervision](https://arxiv.org/pdf/2401.10695.pdf)**
- **Point of Contact: [email protected]**
# TL;DR
🤔LMs good at reasoning are mostly English-centric (MetaMath, Orca 2, etc).
😃Let’s adapt them to solve multilingual tasks. BUT without using multilingual data!
LangBridge “bridges” an mT5 encoder and the target LM together while using only English data. At test time, LangBridge models can solve multilingual reasoning tasks effectively.

# Usage
Please refer to the [Github repository](https://github.com/kaistAI/LangBridge) for detailed usage examples.
# Related Models
[Check out other LangBridge models.](https://huggingface.co/collections/kaist-ai/langbridge-65afbbdae50627e40ca58f9a)
We have:
- Llama 2
- Llemma
- MetaMath
- Code Llama
- Orca 2
# Citation
If you find the following model helpful, please consider citing our paper!
**BibTeX:**
```bibtex
@misc{yoon2024langbridge,
title={LangBridge: Multilingual Reasoning Without Multilingual Supervision},
author={Dongkeun Yoon and Joel Jang and Sungdong Kim and Seungone Kim and Sheikh Shafayat and Minjoon Seo},
year={2024},
eprint={2401.10695},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
YouKnowMee/Mistral-7b-instruct-v0.2-summ-sft-ed1
|
YouKnowMee
| 2024-01-23T15:18:48Z | 0 | 0 | null |
[
"safetensors",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-01-23T15:08:11Z |
---
license: cc-by-nc-4.0
---
A description of how to load and test the model will be added soon. More details on training and data will be added as well.
### **Loading the Model**
Use the following Python code to load the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Exact loading code is still TBD by the author; a standard pattern would be:
model = AutoModelForCausalLM.from_pretrained("YouKnowMee/Mistral-7b-instruct-v0.2-summ-sft-ed1", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("YouKnowMee/Mistral-7b-instruct-v0.2-summ-sft-ed1")
```
### **Generating Text**
To generate text, use the following Python code:
```python
text = "Hi, my name is "
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
kaist-ai/codellama-langbridge-20b
|
kaist-ai
| 2024-01-23T15:18:19Z | 10 | 1 |
transformers
|
[
"transformers",
"safetensors",
"multilingual",
"af",
"am",
"ar",
"az",
"be",
"bg",
"bn",
"ca",
"ceb",
"co",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fil",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"haw",
"hi",
"hmn",
"ht",
"hu",
"hy",
"ig",
"is",
"it",
"iw",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lb",
"lo",
"lt",
"lv",
"mg",
"mi",
"mk",
"ml",
"mn",
"mr",
"ms",
"mt",
"my",
"ne",
"nl",
"no",
"ny",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sd",
"si",
"sk",
"sl",
"sm",
"sn",
"so",
"sq",
"sr",
"st",
"su",
"sv",
"sw",
"ta",
"te",
"tg",
"th",
"tr",
"uk",
"und",
"ur",
"uz",
"vi",
"xh",
"yi",
"yo",
"zh",
"zu",
"arxiv:2401.10695",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-23T12:10:27Z |
---
license: apache-2.0
language:
- multilingual
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- hi
- hmn
- ht
- hu
- hy
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
library_name: transformers
---
## Links for Reference
- **Repository: https://github.com/kaistAI/LangBridge**
- **Paper: [LangBridge: Multilingual Reasoning Without Multilingual Supervision](https://arxiv.org/pdf/2401.10695.pdf)**
- **Point of Contact: [email protected]**
# TL;DR
🤔LMs good at reasoning are mostly English-centric (MetaMath, Orca 2, etc).
😃Let’s adapt them to solve multilingual tasks. BUT without using multilingual data!
LangBridge “bridges” an mT5 encoder and the target LM together while using only English data. At test time, LangBridge models can solve multilingual reasoning tasks effectively.

# Usage
Please refer to the [Github repository](https://github.com/kaistAI/LangBridge) for detailed usage examples.
# Related Models
[Check out other LangBridge models.](https://huggingface.co/collections/kaist-ai/langbridge-65afbbdae50627e40ca58f9a)
We have:
- Llama 2
- Llemma
- MetaMath
- Code Llama
- Orca 2
# Citation
If you find the following model helpful, please consider citing our paper!
**BibTeX:**
```bibtex
@misc{yoon2024langbridge,
title={LangBridge: Multilingual Reasoning Without Multilingual Supervision},
author={Dongkeun Yoon and Joel Jang and Sungdong Kim and Seungone Kim and Sheikh Shafayat and Minjoon Seo},
year={2024},
eprint={2401.10695},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
kaist-ai/llama2-langbridge-9b
|
kaist-ai
| 2024-01-23T15:15:01Z | 8 | 7 |
transformers
|
[
"transformers",
"safetensors",
"multilingual",
"af",
"am",
"ar",
"az",
"be",
"bg",
"bn",
"ca",
"ceb",
"co",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fil",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"haw",
"hi",
"hmn",
"ht",
"hu",
"hy",
"ig",
"is",
"it",
"iw",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lb",
"lo",
"lt",
"lv",
"mg",
"mi",
"mk",
"ml",
"mn",
"mr",
"ms",
"mt",
"my",
"ne",
"nl",
"no",
"ny",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sd",
"si",
"sk",
"sl",
"sm",
"sn",
"so",
"sq",
"sr",
"st",
"su",
"sv",
"sw",
"ta",
"te",
"tg",
"th",
"tr",
"uk",
"und",
"ur",
"uz",
"vi",
"xh",
"yi",
"yo",
"zh",
"zu",
"arxiv:2401.10695",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-23T10:59:18Z |
---
license: apache-2.0
language:
- multilingual
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- hi
- hmn
- ht
- hu
- hy
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
library_name: transformers
---
## Links for Reference
- **Repository: https://github.com/kaistAI/LangBridge**
- **Paper: [LangBridge: Multilingual Reasoning Without Multilingual Supervision](https://arxiv.org/pdf/2401.10695.pdf)**
- **Point of Contact: [email protected]**
# TL;DR
🤔LMs good at reasoning are mostly English-centric (MetaMath, Orca 2, etc).
😃Let’s adapt them to solve multilingual tasks. BUT without using multilingual data!
LangBridge “bridges” an mT5 encoder and the target LM together while using only English data. At test time, LangBridge models can solve multilingual reasoning tasks effectively.

# Usage
Please refer to the [Github repository](https://github.com/kaistAI/LangBridge) for detailed usage examples.
# Related Models
[Check out other LangBridge models.](https://huggingface.co/collections/kaist-ai/langbridge-65afbbdae50627e40ca58f9a)
We have:
- Llama 2
- Llemma
- MetaMath
- Code Llama
- Orca 2
# Citation
If you find the following model helpful, please consider citing our paper!
**BibTeX:**
```bibtex
@misc{yoon2024langbridge,
title={LangBridge: Multilingual Reasoning Without Multilingual Supervision},
author={Dongkeun Yoon and Joel Jang and Sungdong Kim and Seungone Kim and Sheikh Shafayat and Minjoon Seo},
year={2024},
eprint={2401.10695},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
severcorp/lm3
|
severcorp
| 2024-01-23T15:09:03Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T15:08:09Z |
---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
widget:
- text: "<|system|>\nYou are a chatbot who can help code!</s>\n<|user|>\nWrite me a function to calculate the first 10 digits of the fibonacci sequence in Python and print it out to the CLI.</s>\n<|assistant|>\n"
---
<div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01.
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged into and used by many open-source projects built upon Llama. Besides, TinyLlama is compact, with only 1.1B parameters, so it can cater to a multitude of applications that demand a restricted compute and memory footprint.
#### This Model
This is the chat model finetuned on top of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T). **We follow [HF's Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha)'s training recipe.** The model was initially fine-tuned on a variant of the [`UltraChat`](https://huggingface.co/datasets/stingning/ultrachat) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT.
We then further aligned the model with [🤗 TRL's](https://github.com/huggingface/trl) `DPOTrainer` on the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, which contains 64k prompts and model completions ranked by GPT-4.
#### How to use
You will need transformers>=4.34.
Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) github page for more information.
```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in the style of a pirate",
},
{"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
# <|system|>
# You are a friendly chatbot who always responds in the style of a pirate.</s>
# <|user|>
# How many helicopters can a human eat in one sitting?</s>
# <|assistant|>
# ...
```
|
Varshini-14/my-pet-dog
|
Varshini-14
| 2024-01-23T15:03:04Z | 0 | 0 | null |
[
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-01-23T14:59:33Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by Varshini-14 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 22BTRIS057
Sample pictures of this concept:
.webp)
.jpg)
.webp)
.webp)
.jpg)
|
mjawor234/distilbert-base-uncased-finetuned-squad
|
mjawor234
| 2024-01-23T15:02:33Z | 46 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-01-22T21:00:10Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: mjawor234/distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mjawor234/distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.9617
- Train End Logits Accuracy: 0.7333
- Train Start Logits Accuracy: 0.6915
- Validation Loss: 1.1133
- Validation End Logits Accuracy: 0.7008
- Validation Start Logits Accuracy: 0.6660
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
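Pending those details, a hedged example of querying the checkpoint with the standard `transformers` question-answering pipeline is sketched below; the `framework="tf"` flag is an assumption based on the repo shipping TensorFlow weights.
```python
from transformers import pipeline

# Hedged usage sketch: load this fine-tuned checkpoint into the standard QA pipeline.
qa = pipeline(
    "question-answering",
    model="mjawor234/distilbert-base-uncased-finetuned-squad",
    framework="tf",  # assumption: the repo contains TensorFlow weights
)

result = qa(
    question="Which library was used for fine-tuning?",
    context="The model was fine-tuned with Keras callbacks from the Transformers library.",
)
print(result["answer"], result["score"])
```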
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11064, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
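For readability, the serialized optimizer configuration above corresponds roughly to the following Keras setup (a sketch reconstructed from the config dict, not the original training script):
```python
import tensorflow as tf

# Rough reconstruction of the serialized optimizer config listed above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=11064,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
)
```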
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.4970 | 0.6101 | 0.5709 | 1.1459 | 0.6881 | 0.6540 | 0 |
| 0.9617 | 0.7333 | 0.6915 | 1.1133 | 0.7008 | 0.6660 | 1 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
CLMBR/npi-sent-neg-transformer-4
|
CLMBR
| 2024-01-23T14:50:28Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T15:07:35Z |
---
tags:
- generated_from_trainer
model-index:
- name: npi-sent-neg-transformer-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# npi-sent-neg-transformer-4
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2294 | 0.03 | 76320 | 4.1976 |
| 4.0258 | 1.03 | 152640 | 4.0278 |
| 3.9153 | 0.03 | 228960 | 3.9533 |
| 3.8506 | 1.03 | 305280 | 3.9122 |
| 3.7999 | 0.03 | 381600 | 3.8867 |
| 3.7591 | 1.03 | 457920 | 3.8711 |
| 3.7275 | 0.03 | 534240 | 3.8604 |
| 3.6964 | 1.03 | 610560 | 3.8532 |
| 3.6674 | 0.03 | 686880 | 3.8489 |
| 3.6392 | 1.03 | 763200 | 3.8465 |
| 3.6147 | 0.03 | 839520 | 3.8450 |
| 3.5948 | 0.03 | 915840 | 3.8441 |
| 3.575 | 1.03 | 992160 | 3.8453 |
| 3.5534 | 0.03 | 1068480 | 3.8445 |
| 3.5379 | 1.03 | 1144800 | 3.8452 |
| 3.5285 | 0.03 | 1221120 | 3.8465 |
| 3.5112 | 1.03 | 1297440 | 3.8482 |
| 3.5024 | 0.03 | 1373760 | 3.8482 |
| 3.4844 | 1.03 | 1450080 | 3.8503 |
| 3.4812 | 0.03 | 1526400 | 3.8522 |
| 3.4704 | 1.03 | 1602720 | 3.8541 |
| 3.4636 | 0.03 | 1679040 | 3.8544 |
| 3.4543 | 1.03 | 1755360 | 3.8559 |
| 3.4423 | 0.03 | 1831680 | 3.8573 |
| 3.4292 | 1.03 | 1908000 | 3.8595 |
| 3.4145 | 0.03 | 1984320 | 3.8600 |
| 3.4003 | 1.03 | 2060640 | 3.8617 |
| 3.3921 | 0.03 | 2136960 | 3.8631 |
| 3.3779 | 1.03 | 2213280 | 3.8630 |
| 3.3635 | 0.03 | 2289600 | 3.8653 |
| 3.3548 | 1.03 | 2365920 | 3.8652 |
| 3.3528 | 0.03 | 2442240 | 3.8671 |
| 3.3407 | 1.03 | 2518560 | 3.8685 |
| 3.3356 | 0.03 | 2594880 | 3.8683 |
| 3.3213 | 1.03 | 2671200 | 3.8682 |
| 3.3205 | 0.03 | 2747520 | 3.8687 |
| 3.3126 | 1.03 | 2823840 | 3.8683 |
| 3.3079 | 0.03 | 2900160 | 3.8675 |
| 3.3038 | 0.03 | 2976480 | 3.8673 |
| 3.2973 | 1.02 | 3052726 | 3.8656 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
homerquan/poca-SoccerTwos
|
homerquan
| 2024-01-23T14:37:08Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2024-01-23T14:36:01Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: homerquan/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
silvercoder67/Mistral-7b-instruct-v0.2-summ-dpo-e3
|
silvercoder67
| 2024-01-23T14:35:47Z | 0 | 0 | null |
[
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-01-23T14:35:23Z |
---
license: cc-by-nc-4.0
---
A description of how to load and test the model will be added soon. More details on training and data will be added as well.
### **Loading the Model**
Use the following Python code to load the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# TBD by the author. Until then, a generic loading sketch (model id assumed from this repo):
model_id = "silvercoder67/Mistral-7b-instruct-v0.2-summ-dpo-e3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"  # device_map="auto" requires `accelerate`
)
```
### **Generating Text**
To generate text, use the following Python code:
```python
text = "Hi, my name is "
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
kijeong22/swin-finetuned-output_dim2
|
kijeong22
| 2024-01-23T14:33:24Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-23T08:39:01Z |
---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: swin-finetuned-output_dim2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-finetuned-output_dim2
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3851
- Accuracy: 0.5088
- Precision: 0.4677
- Recall: 0.5024
- F1: 0.4844
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.6931 | 0.1 | 10 | 0.6931 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6931 | 0.2 | 20 | 0.6931 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.693 | 0.3 | 30 | 0.6930 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6929 | 0.4 | 40 | 0.6928 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6925 | 0.5 | 50 | 0.6925 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6929 | 0.59 | 60 | 0.6924 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6921 | 0.69 | 70 | 0.6922 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6919 | 0.79 | 80 | 0.6918 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6927 | 0.89 | 90 | 0.6915 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6927 | 0.99 | 100 | 0.6911 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6922 | 1.09 | 110 | 0.6910 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6914 | 1.19 | 120 | 0.6909 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6924 | 1.29 | 130 | 0.6912 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.692 | 1.39 | 140 | 0.6915 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6901 | 1.49 | 150 | 0.6906 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6923 | 1.58 | 160 | 0.6903 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6925 | 1.68 | 170 | 0.6908 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6908 | 1.78 | 180 | 0.6905 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6919 | 1.88 | 190 | 0.6904 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6917 | 1.98 | 200 | 0.6902 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6874 | 2.08 | 210 | 0.6897 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6966 | 2.18 | 220 | 0.6898 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6905 | 2.28 | 230 | 0.6907 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.691 | 2.38 | 240 | 0.6905 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6899 | 2.48 | 250 | 0.6899 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.694 | 2.57 | 260 | 0.6911 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6913 | 2.67 | 270 | 0.6913 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6917 | 2.77 | 280 | 0.6911 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6914 | 2.87 | 290 | 0.6903 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6942 | 2.97 | 300 | 0.6914 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6921 | 3.07 | 310 | 0.6919 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.692 | 3.17 | 320 | 0.6913 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6912 | 3.27 | 330 | 0.6902 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6923 | 3.37 | 340 | 0.6900 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6894 | 3.47 | 350 | 0.6900 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6878 | 3.56 | 360 | 0.6900 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6938 | 3.66 | 370 | 0.6909 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6905 | 3.76 | 380 | 0.6912 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6916 | 3.86 | 390 | 0.6903 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.693 | 3.96 | 400 | 0.6909 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.691 | 4.06 | 410 | 0.6903 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.689 | 4.16 | 420 | 0.6901 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.696 | 4.26 | 430 | 0.6901 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6921 | 4.36 | 440 | 0.6905 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.693 | 4.46 | 450 | 0.6912 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6894 | 4.55 | 460 | 0.6903 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6879 | 4.65 | 470 | 0.6900 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6945 | 4.75 | 480 | 0.6896 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6915 | 4.85 | 490 | 0.6900 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6919 | 4.95 | 500 | 0.6901 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6931 | 5.05 | 510 | 0.6912 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.692 | 5.15 | 520 | 0.6918 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6901 | 5.25 | 530 | 0.6898 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6948 | 5.35 | 540 | 0.6904 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6899 | 5.45 | 550 | 0.6900 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6892 | 5.54 | 560 | 0.6901 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6874 | 5.64 | 570 | 0.6902 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6952 | 5.74 | 580 | 0.6909 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6928 | 5.84 | 590 | 0.6923 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6919 | 5.94 | 600 | 0.6914 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6931 | 6.04 | 610 | 0.6909 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6914 | 6.14 | 620 | 0.6911 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6917 | 6.24 | 630 | 0.6905 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6883 | 6.34 | 640 | 0.6901 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6902 | 6.44 | 650 | 0.6915 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6963 | 6.53 | 660 | 0.6910 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6914 | 6.63 | 670 | 0.6918 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6882 | 6.73 | 680 | 0.6904 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.69 | 6.83 | 690 | 0.6909 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6903 | 6.93 | 700 | 0.6917 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6912 | 7.03 | 710 | 0.6934 | 0.5044 | 0.4057 | 0.1699 | 0.2395 |
| 0.6912 | 7.13 | 720 | 0.6978 | 0.4538 | 0.4395 | 0.6866 | 0.5359 |
| 0.6841 | 7.23 | 730 | 0.6936 | 0.5253 | 0.4054 | 0.0718 | 0.1220 |
| 0.6899 | 7.33 | 740 | 0.6920 | 0.5385 | 0.4872 | 0.0909 | 0.1532 |
| 0.6938 | 7.43 | 750 | 0.6902 | 0.5396 | 0.0 | 0.0 | 0.0 |
| 0.6887 | 7.52 | 760 | 0.6897 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6937 | 7.62 | 770 | 0.6907 | 0.5396 | 0.3333 | 0.0024 | 0.0048 |
| 0.6876 | 7.72 | 780 | 0.6922 | 0.5374 | 0.4444 | 0.0287 | 0.0539 |
| 0.6877 | 7.82 | 790 | 0.6941 | 0.5374 | 0.4483 | 0.0311 | 0.0582 |
| 0.6864 | 7.92 | 800 | 0.6952 | 0.4956 | 0.4591 | 0.5502 | 0.5005 |
| 0.6824 | 8.02 | 810 | 0.6910 | 0.5407 | 0.0 | 0.0 | 0.0 |
| 0.6941 | 8.12 | 820 | 0.6925 | 0.5341 | 0.4464 | 0.0598 | 0.1055 |
| 0.6878 | 8.22 | 830 | 0.6957 | 0.4516 | 0.4434 | 0.7584 | 0.5596 |
| 0.6881 | 8.32 | 840 | 0.6990 | 0.5022 | 0.4493 | 0.3708 | 0.4063 |
| 0.6868 | 8.42 | 850 | 0.6960 | 0.4923 | 0.4225 | 0.2871 | 0.3419 |
| 0.6845 | 8.51 | 860 | 0.6965 | 0.4901 | 0.4378 | 0.3876 | 0.4112 |
| 0.6863 | 8.61 | 870 | 0.6955 | 0.5077 | 0.3828 | 0.1172 | 0.1795 |
| 0.6788 | 8.71 | 880 | 0.7016 | 0.4593 | 0.4155 | 0.4354 | 0.4252 |
| 0.6835 | 8.81 | 890 | 0.7028 | 0.4626 | 0.4489 | 0.7464 | 0.5606 |
| 0.6862 | 8.91 | 900 | 0.7061 | 0.5121 | 0.3839 | 0.1029 | 0.1623 |
| 0.7011 | 9.01 | 910 | 0.6953 | 0.4802 | 0.4478 | 0.5646 | 0.4995 |
| 0.6858 | 9.11 | 920 | 0.6938 | 0.5176 | 0.4604 | 0.2919 | 0.3572 |
| 0.6849 | 9.21 | 930 | 0.6980 | 0.4989 | 0.4438 | 0.3589 | 0.3968 |
| 0.6792 | 9.31 | 940 | 0.6982 | 0.5055 | 0.4579 | 0.4163 | 0.4361 |
| 0.6801 | 9.41 | 950 | 0.6990 | 0.5055 | 0.4556 | 0.3923 | 0.4216 |
| 0.6815 | 9.5 | 960 | 0.7024 | 0.5 | 0.4593 | 0.5 | 0.4788 |
| 0.6816 | 9.6 | 970 | 0.7126 | 0.4945 | 0.4521 | 0.4737 | 0.4626 |
| 0.675 | 9.7 | 980 | 0.7096 | 0.5132 | 0.4658 | 0.4067 | 0.4342 |
| 0.6758 | 9.8 | 990 | 0.6992 | 0.5198 | 0.4642 | 0.2943 | 0.3602 |
| 0.6861 | 9.9 | 1000 | 0.6934 | 0.5253 | 0.4739 | 0.3038 | 0.3703 |
| 0.6843 | 10.0 | 1010 | 0.6939 | 0.5264 | 0.4833 | 0.4498 | 0.4659 |
| 0.6826 | 10.1 | 1020 | 0.7014 | 0.4868 | 0.4607 | 0.6866 | 0.5514 |
| 0.6599 | 10.2 | 1030 | 0.7142 | 0.4923 | 0.4540 | 0.5191 | 0.4844 |
| 0.6644 | 10.3 | 1040 | 0.7180 | 0.4813 | 0.45 | 0.5813 | 0.5073 |
| 0.6948 | 10.4 | 1050 | 0.7138 | 0.4681 | 0.4531 | 0.7632 | 0.5686 |
| 0.6613 | 10.5 | 1060 | 0.7181 | 0.4934 | 0.4108 | 0.2368 | 0.3005 |
| 0.6745 | 10.59 | 1070 | 0.7224 | 0.4538 | 0.4513 | 0.8756 | 0.5956 |
| 0.6917 | 10.69 | 1080 | 0.7040 | 0.4901 | 0.4559 | 0.5694 | 0.5064 |
| 0.6764 | 10.79 | 1090 | 0.6936 | 0.5275 | 0.4727 | 0.2488 | 0.3260 |
| 0.6698 | 10.89 | 1100 | 0.6977 | 0.5242 | 0.4615 | 0.2153 | 0.2936 |
| 0.6679 | 10.99 | 1110 | 0.7073 | 0.5242 | 0.4788 | 0.4043 | 0.4384 |
| 0.6626 | 11.09 | 1120 | 0.7122 | 0.4736 | 0.4526 | 0.6962 | 0.5485 |
| 0.6482 | 11.19 | 1130 | 0.7241 | 0.4758 | 0.4566 | 0.7416 | 0.5652 |
| 0.6526 | 11.29 | 1140 | 0.7236 | 0.5077 | 0.4599 | 0.4115 | 0.4343 |
| 0.6563 | 11.39 | 1150 | 0.7291 | 0.4725 | 0.4511 | 0.6842 | 0.5437 |
| 0.6528 | 11.49 | 1160 | 0.7212 | 0.5165 | 0.4732 | 0.4641 | 0.4686 |
| 0.644 | 11.58 | 1170 | 0.7130 | 0.5341 | 0.4924 | 0.4665 | 0.4791 |
| 0.6566 | 11.68 | 1180 | 0.7185 | 0.4769 | 0.4558 | 0.7153 | 0.5568 |
| 0.6783 | 11.78 | 1190 | 0.7211 | 0.5253 | 0.4817 | 0.4402 | 0.46 |
| 0.6726 | 11.88 | 1200 | 0.7195 | 0.5022 | 0.4713 | 0.6866 | 0.5589 |
| 0.658 | 11.98 | 1210 | 0.7189 | 0.5253 | 0.4733 | 0.2967 | 0.3647 |
| 0.6505 | 12.08 | 1220 | 0.7184 | 0.4923 | 0.4556 | 0.5407 | 0.4945 |
| 0.6411 | 12.18 | 1230 | 0.7531 | 0.4912 | 0.4514 | 0.5 | 0.4745 |
| 0.6507 | 12.28 | 1240 | 0.7700 | 0.4659 | 0.4450 | 0.6579 | 0.5309 |
| 0.6382 | 12.38 | 1250 | 0.7520 | 0.4945 | 0.4555 | 0.5144 | 0.4831 |
| 0.6393 | 12.48 | 1260 | 0.7335 | 0.4967 | 0.4548 | 0.4809 | 0.4674 |
| 0.6353 | 12.57 | 1270 | 0.7525 | 0.4725 | 0.4566 | 0.7799 | 0.5760 |
| 0.6476 | 12.67 | 1280 | 0.7404 | 0.5011 | 0.4622 | 0.5263 | 0.4922 |
| 0.6155 | 12.77 | 1290 | 0.7564 | 0.5011 | 0.4659 | 0.5885 | 0.5201 |
| 0.6102 | 12.87 | 1300 | 0.7757 | 0.5 | 0.4626 | 0.5478 | 0.5016 |
| 0.6436 | 12.97 | 1310 | 0.7670 | 0.4945 | 0.4662 | 0.6938 | 0.5577 |
| 0.5986 | 13.07 | 1320 | 0.7703 | 0.5 | 0.4705 | 0.7057 | 0.5646 |
| 0.6127 | 13.17 | 1330 | 0.7815 | 0.5198 | 0.4775 | 0.4833 | 0.4804 |
| 0.6191 | 13.27 | 1340 | 0.7723 | 0.5099 | 0.4714 | 0.5526 | 0.5088 |
| 0.5921 | 13.37 | 1350 | 0.7924 | 0.4857 | 0.4598 | 0.6842 | 0.55 |
| 0.6179 | 13.47 | 1360 | 0.7995 | 0.4857 | 0.4583 | 0.6579 | 0.5403 |
| 0.6021 | 13.56 | 1370 | 0.7742 | 0.4956 | 0.4675 | 0.7057 | 0.5624 |
| 0.6047 | 13.66 | 1380 | 0.7930 | 0.5231 | 0.4770 | 0.3971 | 0.4334 |
| 0.6218 | 13.76 | 1390 | 0.7713 | 0.4857 | 0.4602 | 0.6914 | 0.5526 |
| 0.5966 | 13.86 | 1400 | 0.7635 | 0.5077 | 0.4704 | 0.5694 | 0.5152 |
| 0.587 | 13.96 | 1410 | 0.7922 | 0.5088 | 0.4681 | 0.5096 | 0.4880 |
| 0.5508 | 14.06 | 1420 | 0.8150 | 0.4879 | 0.4632 | 0.7225 | 0.5645 |
| 0.5498 | 14.16 | 1430 | 0.8510 | 0.4989 | 0.4708 | 0.7321 | 0.5730 |
| 0.5746 | 14.26 | 1440 | 0.8129 | 0.5165 | 0.4757 | 0.5144 | 0.4943 |
| 0.5433 | 14.36 | 1450 | 0.8512 | 0.5066 | 0.4737 | 0.6675 | 0.5541 |
| 0.5509 | 14.46 | 1460 | 0.8718 | 0.4956 | 0.4614 | 0.5861 | 0.5163 |
| 0.5687 | 14.55 | 1470 | 0.8289 | 0.4912 | 0.4637 | 0.6866 | 0.5535 |
| 0.5495 | 14.65 | 1480 | 0.8390 | 0.5176 | 0.4729 | 0.4378 | 0.4547 |
| 0.5376 | 14.75 | 1490 | 0.8566 | 0.4945 | 0.4677 | 0.7273 | 0.5693 |
| 0.5832 | 14.85 | 1500 | 0.7991 | 0.5198 | 0.4797 | 0.5359 | 0.5062 |
| 0.5663 | 14.95 | 1510 | 0.8092 | 0.4945 | 0.4604 | 0.5837 | 0.5148 |
| 0.5417 | 15.05 | 1520 | 0.8595 | 0.5 | 0.4695 | 0.6818 | 0.5561 |
| 0.5143 | 15.15 | 1530 | 0.8894 | 0.5110 | 0.4632 | 0.4067 | 0.4331 |
| 0.4905 | 15.25 | 1540 | 0.8943 | 0.5 | 0.4593 | 0.5 | 0.4788 |
| 0.4928 | 15.35 | 1550 | 0.9057 | 0.5011 | 0.4663 | 0.5957 | 0.5231 |
| 0.5079 | 15.45 | 1560 | 0.8770 | 0.5231 | 0.4833 | 0.5550 | 0.5167 |
| 0.4985 | 15.54 | 1570 | 0.9009 | 0.5033 | 0.4719 | 0.6818 | 0.5577 |
| 0.5223 | 15.64 | 1580 | 0.9104 | 0.5066 | 0.4678 | 0.5383 | 0.5006 |
| 0.4982 | 15.74 | 1590 | 0.9134 | 0.5121 | 0.4645 | 0.4067 | 0.4337 |
| 0.5268 | 15.84 | 1600 | 0.8799 | 0.5110 | 0.4740 | 0.5885 | 0.5251 |
| 0.5138 | 15.94 | 1610 | 0.9454 | 0.4835 | 0.4626 | 0.7703 | 0.5781 |
| 0.5308 | 16.04 | 1620 | 0.8870 | 0.5 | 0.4602 | 0.5120 | 0.4847 |
| 0.4535 | 16.14 | 1630 | 0.9438 | 0.5121 | 0.4652 | 0.4163 | 0.4394 |
| 0.4684 | 16.24 | 1640 | 0.9885 | 0.5110 | 0.4712 | 0.5287 | 0.4983 |
| 0.4454 | 16.34 | 1650 | 0.9837 | 0.4934 | 0.4615 | 0.6172 | 0.5281 |
| 0.4352 | 16.44 | 1660 | 1.0165 | 0.5022 | 0.4694 | 0.6411 | 0.5420 |
| 0.433 | 16.53 | 1670 | 1.0183 | 0.4890 | 0.4608 | 0.6603 | 0.5428 |
| 0.4868 | 16.63 | 1680 | 0.9493 | 0.4989 | 0.4681 | 0.6675 | 0.5503 |
| 0.4758 | 16.73 | 1690 | 0.9099 | 0.4824 | 0.4499 | 0.5694 | 0.5026 |
| 0.4697 | 16.83 | 1700 | 0.9493 | 0.4967 | 0.4644 | 0.6244 | 0.5327 |
| 0.4595 | 16.93 | 1710 | 0.9750 | 0.5022 | 0.4658 | 0.5694 | 0.5124 |
| 0.4468 | 17.03 | 1720 | 0.9663 | 0.5066 | 0.4650 | 0.4928 | 0.4785 |
| 0.4202 | 17.13 | 1730 | 1.0212 | 0.4956 | 0.4486 | 0.4282 | 0.4382 |
| 0.4067 | 17.23 | 1740 | 1.0480 | 0.4956 | 0.4543 | 0.4880 | 0.4706 |
| 0.4076 | 17.33 | 1750 | 1.0637 | 0.5022 | 0.4627 | 0.5191 | 0.4893 |
| 0.3949 | 17.43 | 1760 | 1.0986 | 0.4835 | 0.4587 | 0.6914 | 0.5515 |
| 0.4204 | 17.52 | 1770 | 1.0845 | 0.4989 | 0.4661 | 0.6244 | 0.5337 |
| 0.4199 | 17.62 | 1780 | 1.0097 | 0.5110 | 0.4708 | 0.5215 | 0.4949 |
| 0.4258 | 17.72 | 1790 | 0.9844 | 0.5055 | 0.4690 | 0.5789 | 0.5182 |
| 0.4006 | 17.82 | 1800 | 1.0503 | 0.5033 | 0.4647 | 0.5359 | 0.4978 |
| 0.4193 | 17.92 | 1810 | 1.0635 | 0.5077 | 0.4654 | 0.4833 | 0.4742 |
| 0.4193 | 18.02 | 1820 | 0.9981 | 0.5121 | 0.4683 | 0.4593 | 0.4638 |
| 0.3659 | 18.12 | 1830 | 1.0970 | 0.5121 | 0.4780 | 0.6746 | 0.5595 |
| 0.3792 | 18.22 | 1840 | 1.1004 | 0.5110 | 0.4710 | 0.5239 | 0.4960 |
| 0.3463 | 18.32 | 1850 | 1.1114 | 0.5132 | 0.4739 | 0.5431 | 0.5061 |
| 0.3689 | 18.42 | 1860 | 1.0942 | 0.5143 | 0.4746 | 0.5359 | 0.5034 |
| 0.3568 | 18.51 | 1870 | 1.1142 | 0.5033 | 0.4661 | 0.5598 | 0.5087 |
| 0.3903 | 18.61 | 1880 | 1.1551 | 0.5 | 0.4653 | 0.5933 | 0.5216 |
| 0.3606 | 18.71 | 1890 | 1.1708 | 0.5011 | 0.4665 | 0.6005 | 0.5251 |
| 0.3514 | 18.81 | 1900 | 1.1328 | 0.5099 | 0.4685 | 0.4976 | 0.4826 |
| 0.3892 | 18.91 | 1910 | 1.1357 | 0.5066 | 0.4705 | 0.5909 | 0.5239 |
| 0.3593 | 19.01 | 1920 | 1.1000 | 0.5099 | 0.4739 | 0.6077 | 0.5325 |
| 0.3061 | 19.11 | 1930 | 1.2081 | 0.5231 | 0.4846 | 0.6029 | 0.5373 |
| 0.352 | 19.21 | 1940 | 1.1939 | 0.5121 | 0.4790 | 0.7081 | 0.5714 |
| 0.4023 | 19.31 | 1950 | 1.0680 | 0.5143 | 0.4764 | 0.5789 | 0.5227 |
| 0.3053 | 19.41 | 1960 | 1.2004 | 0.5044 | 0.4591 | 0.4426 | 0.4507 |
| 0.3656 | 19.5 | 1970 | 1.2460 | 0.4978 | 0.4614 | 0.5574 | 0.5049 |
| 0.3284 | 19.6 | 1980 | 1.2034 | 0.5055 | 0.4671 | 0.5431 | 0.5022 |
| 0.3629 | 19.7 | 1990 | 1.1308 | 0.5099 | 0.4729 | 0.5837 | 0.5225 |
| 0.3082 | 19.8 | 2000 | 1.2077 | 0.5077 | 0.4707 | 0.5766 | 0.5183 |
| 0.335 | 19.9 | 2010 | 1.2057 | 0.5088 | 0.4692 | 0.5287 | 0.4972 |
| 0.3233 | 20.0 | 2020 | 1.1871 | 0.5055 | 0.4626 | 0.4737 | 0.4681 |
| 0.2813 | 20.1 | 2030 | 1.2896 | 0.5044 | 0.4663 | 0.5455 | 0.5028 |
| 0.2746 | 20.2 | 2040 | 1.3054 | 0.5165 | 0.4769 | 0.5431 | 0.5078 |
| 0.2848 | 20.3 | 2050 | 1.3196 | 0.4857 | 0.4560 | 0.6196 | 0.5254 |
| 0.3281 | 20.4 | 2060 | 1.3152 | 0.4978 | 0.4596 | 0.5311 | 0.4928 |
| 0.3057 | 20.5 | 2070 | 1.2997 | 0.5099 | 0.4718 | 0.5598 | 0.5120 |
| 0.3119 | 20.59 | 2080 | 1.2248 | 0.4956 | 0.4628 | 0.6100 | 0.5263 |
| 0.3049 | 20.69 | 2090 | 1.2603 | 0.5132 | 0.4744 | 0.5550 | 0.5116 |
| 0.2884 | 20.79 | 2100 | 1.3128 | 0.5242 | 0.4817 | 0.4713 | 0.4764 |
| 0.2976 | 20.89 | 2110 | 1.2636 | 0.5099 | 0.4719 | 0.5622 | 0.5131 |
| 0.3018 | 20.99 | 2120 | 1.2754 | 0.4967 | 0.4631 | 0.6005 | 0.5229 |
| 0.2592 | 21.09 | 2130 | 1.3369 | 0.5099 | 0.4693 | 0.5120 | 0.4897 |
| 0.2635 | 21.19 | 2140 | 1.4117 | 0.5 | 0.4615 | 0.5311 | 0.4939 |
| 0.2777 | 21.29 | 2150 | 1.3780 | 0.4956 | 0.4650 | 0.6507 | 0.5424 |
| 0.2852 | 21.39 | 2160 | 1.2945 | 0.4967 | 0.4567 | 0.5048 | 0.4795 |
| 0.2733 | 21.49 | 2170 | 1.3155 | 0.5165 | 0.4730 | 0.4617 | 0.4673 |
| 0.2832 | 21.58 | 2180 | 1.3760 | 0.4934 | 0.4611 | 0.6100 | 0.5252 |
| 0.2548 | 21.68 | 2190 | 1.3889 | 0.4989 | 0.4572 | 0.4856 | 0.4710 |
| 0.2819 | 21.78 | 2200 | 1.3685 | 0.5044 | 0.4569 | 0.4187 | 0.4370 |
| 0.2588 | 21.88 | 2210 | 1.3512 | 0.5044 | 0.4667 | 0.5526 | 0.5060 |
| 0.2995 | 21.98 | 2220 | 1.2891 | 0.4923 | 0.4621 | 0.6411 | 0.5371 |
| 0.2861 | 22.08 | 2230 | 1.3926 | 0.5077 | 0.4599 | 0.4115 | 0.4343 |
| 0.2619 | 22.18 | 2240 | 1.4049 | 0.5055 | 0.4683 | 0.5646 | 0.5119 |
| 0.2154 | 22.28 | 2250 | 1.4520 | 0.5 | 0.4645 | 0.5789 | 0.5154 |
| 0.2592 | 22.38 | 2260 | 1.4432 | 0.5 | 0.4565 | 0.4641 | 0.4603 |
| 0.2362 | 22.48 | 2270 | 1.4400 | 0.5066 | 0.4701 | 0.5837 | 0.5208 |
| 0.2534 | 22.57 | 2280 | 1.3972 | 0.4945 | 0.4588 | 0.5598 | 0.5043 |
| 0.2437 | 22.67 | 2290 | 1.3943 | 0.5055 | 0.4633 | 0.4833 | 0.4731 |
| 0.2252 | 22.77 | 2300 | 1.4743 | 0.5066 | 0.4619 | 0.4498 | 0.4558 |
| 0.2657 | 22.87 | 2310 | 1.4640 | 0.5088 | 0.4678 | 0.5048 | 0.4856 |
| 0.2501 | 22.97 | 2320 | 1.4328 | 0.5121 | 0.4762 | 0.6220 | 0.5394 |
| 0.2473 | 23.07 | 2330 | 1.4163 | 0.5077 | 0.4631 | 0.4498 | 0.4563 |
| 0.2382 | 23.17 | 2340 | 1.5845 | 0.4791 | 0.4524 | 0.6364 | 0.5288 |
| 0.2497 | 23.27 | 2350 | 1.4014 | 0.5033 | 0.4587 | 0.4522 | 0.4554 |
| 0.2401 | 23.37 | 2360 | 1.3488 | 0.5033 | 0.4635 | 0.5167 | 0.4887 |
| 0.2185 | 23.47 | 2370 | 1.4671 | 0.4945 | 0.4590 | 0.5622 | 0.5054 |
| 0.2 | 23.56 | 2380 | 1.5153 | 0.4846 | 0.4447 | 0.4904 | 0.4664 |
| 0.2614 | 23.66 | 2390 | 1.4911 | 0.4945 | 0.4555 | 0.5144 | 0.4831 |
| 0.2117 | 23.76 | 2400 | 1.4923 | 0.4978 | 0.4589 | 0.5215 | 0.4882 |
| 0.2383 | 23.86 | 2410 | 1.4842 | 0.5011 | 0.4625 | 0.5311 | 0.4944 |
| 0.2466 | 23.96 | 2420 | 1.4529 | 0.5132 | 0.4719 | 0.5024 | 0.4867 |
| 0.2218 | 24.06 | 2430 | 1.4223 | 0.4967 | 0.4588 | 0.5335 | 0.4934 |
| 0.1857 | 24.16 | 2440 | 1.5669 | 0.5033 | 0.4585 | 0.4498 | 0.4541 |
| 0.2053 | 24.26 | 2450 | 1.5503 | 0.5110 | 0.4697 | 0.5 | 0.4844 |
| 0.227 | 24.36 | 2460 | 1.5109 | 0.5132 | 0.4731 | 0.5263 | 0.4983 |
| 0.1927 | 24.46 | 2470 | 1.5388 | 0.5143 | 0.4762 | 0.5742 | 0.5206 |
| 0.2133 | 24.55 | 2480 | 1.5083 | 0.4989 | 0.4570 | 0.4833 | 0.4698 |
| 0.2454 | 24.65 | 2490 | 1.5215 | 0.5143 | 0.4789 | 0.6531 | 0.5526 |
| 0.2302 | 24.75 | 2500 | 1.4555 | 0.4923 | 0.4577 | 0.5694 | 0.5075 |
| 0.2324 | 24.85 | 2510 | 1.4294 | 0.5088 | 0.4691 | 0.5263 | 0.4961 |
| 0.2193 | 24.95 | 2520 | 1.4512 | 0.5154 | 0.4772 | 0.5766 | 0.5222 |
| 0.1973 | 25.05 | 2530 | 1.5369 | 0.5022 | 0.4601 | 0.4833 | 0.4714 |
| 0.1882 | 25.15 | 2540 | 1.5999 | 0.4901 | 0.4517 | 0.5144 | 0.4810 |
| 0.2012 | 25.25 | 2550 | 1.5917 | 0.5099 | 0.4713 | 0.5502 | 0.5077 |
| 0.1773 | 25.35 | 2560 | 1.5543 | 0.4989 | 0.4587 | 0.5048 | 0.4806 |
| 0.1827 | 25.45 | 2570 | 1.6074 | 0.5022 | 0.4620 | 0.5096 | 0.4846 |
| 0.1784 | 25.54 | 2580 | 1.6887 | 0.4934 | 0.4527 | 0.4928 | 0.4719 |
| 0.2172 | 25.64 | 2590 | 1.6902 | 0.4901 | 0.4542 | 0.5455 | 0.4957 |
| 0.2038 | 25.74 | 2600 | 1.6347 | 0.4912 | 0.4566 | 0.5670 | 0.5059 |
| 0.2211 | 25.84 | 2610 | 1.5360 | 0.5055 | 0.4671 | 0.5431 | 0.5022 |
| 0.2084 | 25.94 | 2620 | 1.6047 | 0.5088 | 0.4678 | 0.5048 | 0.4856 |
| 0.1848 | 26.04 | 2630 | 1.6064 | 0.4989 | 0.4556 | 0.4665 | 0.4610 |
| 0.1802 | 26.14 | 2640 | 1.6417 | 0.5055 | 0.4677 | 0.5550 | 0.5077 |
| 0.1702 | 26.24 | 2650 | 1.6046 | 0.5011 | 0.4589 | 0.4809 | 0.4696 |
| 0.171 | 26.34 | 2660 | 1.6522 | 0.5 | 0.4582 | 0.4856 | 0.4715 |
| 0.1766 | 26.44 | 2670 | 1.6750 | 0.4912 | 0.4542 | 0.5335 | 0.4906 |
| 0.1928 | 26.53 | 2680 | 1.6669 | 0.4912 | 0.4553 | 0.5478 | 0.4973 |
| 0.2055 | 26.63 | 2690 | 1.6340 | 0.5066 | 0.4644 | 0.4833 | 0.4736 |
| 0.1908 | 26.73 | 2700 | 1.6952 | 0.5044 | 0.4660 | 0.5407 | 0.5006 |
| 0.1867 | 26.83 | 2710 | 1.6238 | 0.5033 | 0.4649 | 0.5383 | 0.4989 |
| 0.1694 | 26.93 | 2720 | 1.6618 | 0.5033 | 0.4691 | 0.6172 | 0.5331 |
| 0.1829 | 27.03 | 2730 | 1.6839 | 0.5154 | 0.4712 | 0.4498 | 0.4602 |
| 0.2044 | 27.13 | 2740 | 1.7790 | 0.4945 | 0.4656 | 0.6794 | 0.5525 |
| 0.1778 | 27.23 | 2750 | 1.6826 | 0.5 | 0.4539 | 0.4354 | 0.4444 |
| 0.1677 | 27.33 | 2760 | 1.7339 | 0.5066 | 0.4699 | 0.5789 | 0.5188 |
| 0.1934 | 27.43 | 2770 | 1.7476 | 0.4956 | 0.4561 | 0.5096 | 0.4814 |
| 0.2149 | 27.52 | 2780 | 1.6146 | 0.5110 | 0.4681 | 0.4737 | 0.4709 |
| 0.1767 | 27.62 | 2790 | 1.6051 | 0.4989 | 0.4623 | 0.5574 | 0.5054 |
| 0.1588 | 27.72 | 2800 | 1.7543 | 0.5033 | 0.4630 | 0.5096 | 0.4852 |
| 0.1958 | 27.82 | 2810 | 1.7977 | 0.4978 | 0.4614 | 0.5574 | 0.5049 |
| 0.1719 | 27.92 | 2820 | 1.7153 | 0.4967 | 0.4603 | 0.5550 | 0.5033 |
| 0.1775 | 28.02 | 2830 | 1.6980 | 0.4956 | 0.4572 | 0.5239 | 0.4883 |
| 0.1235 | 28.12 | 2840 | 1.8467 | 0.4989 | 0.4617 | 0.5478 | 0.5011 |
| 0.1432 | 28.22 | 2850 | 1.8957 | 0.5022 | 0.4625 | 0.5167 | 0.4881 |
| 0.2081 | 28.32 | 2860 | 1.8517 | 0.5033 | 0.4673 | 0.5813 | 0.5181 |
| 0.1713 | 28.42 | 2870 | 1.7427 | 0.4989 | 0.4615 | 0.5455 | 0.5000 |
| 0.1553 | 28.51 | 2880 | 1.7227 | 0.5011 | 0.4639 | 0.5526 | 0.5044 |
| 0.1733 | 28.61 | 2890 | 1.7550 | 0.4934 | 0.4544 | 0.5120 | 0.4814 |
| 0.1778 | 28.71 | 2900 | 1.7158 | 0.4945 | 0.4562 | 0.5239 | 0.4878 |
| 0.1589 | 28.81 | 2910 | 1.7666 | 0.4934 | 0.4527 | 0.4928 | 0.4719 |
| 0.182 | 28.91 | 2920 | 1.8048 | 0.4989 | 0.4635 | 0.5766 | 0.5139 |
| 0.1579 | 29.01 | 2930 | 1.7952 | 0.4934 | 0.4479 | 0.4426 | 0.4452 |
| 0.1716 | 29.11 | 2940 | 1.7790 | 0.4945 | 0.4598 | 0.5742 | 0.5106 |
| 0.1598 | 29.21 | 2950 | 1.7497 | 0.4967 | 0.4593 | 0.5407 | 0.4967 |
| 0.1743 | 29.31 | 2960 | 1.7378 | 0.4978 | 0.4528 | 0.4474 | 0.4501 |
| 0.1486 | 29.41 | 2970 | 1.8025 | 0.5022 | 0.4705 | 0.6675 | 0.5519 |
| 0.1609 | 29.5 | 2980 | 1.7530 | 0.5121 | 0.4649 | 0.4115 | 0.4365 |
| 0.14 | 29.6 | 2990 | 1.8207 | 0.5077 | 0.4685 | 0.5335 | 0.4989 |
| 0.1639 | 29.7 | 3000 | 1.8235 | 0.5 | 0.4563 | 0.4617 | 0.4590 |
| 0.1589 | 29.8 | 3010 | 1.7896 | 0.5022 | 0.4644 | 0.5455 | 0.5017 |
| 0.1668 | 29.9 | 3020 | 1.7719 | 0.5110 | 0.4685 | 0.4809 | 0.4746 |
| 0.1683 | 30.0 | 3030 | 1.7143 | 0.5044 | 0.4677 | 0.5718 | 0.5145 |
| 0.1442 | 30.1 | 3040 | 1.8204 | 0.5143 | 0.4713 | 0.4713 | 0.4713 |
| 0.1783 | 30.2 | 3050 | 1.9409 | 0.5066 | 0.4688 | 0.5574 | 0.5093 |
| 0.1399 | 30.3 | 3060 | 1.9196 | 0.4967 | 0.4578 | 0.5191 | 0.4865 |
| 0.1396 | 30.4 | 3070 | 1.8497 | 0.5011 | 0.4627 | 0.5335 | 0.4956 |
| 0.1605 | 30.5 | 3080 | 1.8745 | 0.5011 | 0.4659 | 0.5885 | 0.5201 |
| 0.1748 | 30.59 | 3090 | 1.8298 | 0.5011 | 0.4650 | 0.5718 | 0.5129 |
| 0.1468 | 30.69 | 3100 | 1.8500 | 0.5066 | 0.4680 | 0.5431 | 0.5028 |
| 0.1416 | 30.79 | 3110 | 1.9355 | 0.4967 | 0.4593 | 0.5407 | 0.4967 |
| 0.1364 | 30.89 | 3120 | 1.9258 | 0.4956 | 0.4617 | 0.5909 | 0.5184 |
| 0.155 | 30.99 | 3130 | 1.8446 | 0.5088 | 0.4688 | 0.5215 | 0.4938 |
| 0.1281 | 31.09 | 3140 | 1.8884 | 0.5011 | 0.4634 | 0.5455 | 0.5011 |
| 0.1513 | 31.19 | 3150 | 1.9357 | 0.4934 | 0.4559 | 0.5311 | 0.4906 |
| 0.1454 | 31.29 | 3160 | 1.9112 | 0.5044 | 0.4634 | 0.5 | 0.4810 |
| 0.1334 | 31.39 | 3170 | 1.9049 | 0.5022 | 0.4628 | 0.5215 | 0.4904 |
| 0.1622 | 31.49 | 3180 | 1.8650 | 0.5066 | 0.4693 | 0.5670 | 0.5135 |
| 0.1112 | 31.58 | 3190 | 1.9197 | 0.5033 | 0.4647 | 0.5359 | 0.4978 |
| 0.1323 | 31.68 | 3200 | 1.9855 | 0.5110 | 0.4630 | 0.4043 | 0.4317 |
| 0.1091 | 31.78 | 3210 | 2.0638 | 0.5044 | 0.4678 | 0.5742 | 0.5156 |
| 0.1441 | 31.88 | 3220 | 2.0540 | 0.4956 | 0.4559 | 0.5072 | 0.4802 |
| 0.1486 | 31.98 | 3230 | 1.9791 | 0.5077 | 0.4656 | 0.4856 | 0.4754 |
| 0.1303 | 32.08 | 3240 | 1.9271 | 0.5066 | 0.4697 | 0.5742 | 0.5167 |
| 0.1189 | 32.18 | 3250 | 1.8931 | 0.5231 | 0.4816 | 0.5 | 0.4906 |
| 0.1243 | 32.28 | 3260 | 1.9401 | 0.5099 | 0.4703 | 0.5311 | 0.4989 |
| 0.1147 | 32.38 | 3270 | 1.9619 | 0.5055 | 0.4675 | 0.5502 | 0.5055 |
| 0.1134 | 32.48 | 3280 | 1.9958 | 0.5143 | 0.4698 | 0.4474 | 0.4583 |
| 0.1211 | 32.57 | 3290 | 2.0501 | 0.4967 | 0.4580 | 0.5215 | 0.4877 |
| 0.1394 | 32.67 | 3300 | 2.0231 | 0.5044 | 0.4658 | 0.5383 | 0.4994 |
| 0.1157 | 32.77 | 3310 | 2.0500 | 0.4956 | 0.4597 | 0.5598 | 0.5049 |
| 0.156 | 32.87 | 3320 | 1.9984 | 0.5121 | 0.4690 | 0.4713 | 0.4702 |
| 0.1406 | 32.97 | 3330 | 1.9976 | 0.4912 | 0.4568 | 0.5694 | 0.5069 |
| 0.1379 | 33.07 | 3340 | 1.9001 | 0.5143 | 0.4722 | 0.4880 | 0.48 |
| 0.1372 | 33.17 | 3350 | 1.9292 | 0.5198 | 0.4774 | 0.4809 | 0.4791 |
| 0.1211 | 33.27 | 3360 | 2.0667 | 0.5033 | 0.4646 | 0.5335 | 0.4967 |
| 0.1086 | 33.37 | 3370 | 2.1113 | 0.5154 | 0.4749 | 0.5215 | 0.4971 |
| 0.1285 | 33.47 | 3380 | 2.1482 | 0.5033 | 0.4644 | 0.5311 | 0.4955 |
| 0.1404 | 33.56 | 3390 | 2.0883 | 0.5099 | 0.4688 | 0.5024 | 0.4850 |
| 0.1483 | 33.66 | 3400 | 2.0259 | 0.5011 | 0.4637 | 0.5502 | 0.5033 |
| 0.175 | 33.76 | 3410 | 1.9645 | 0.4890 | 0.4521 | 0.5311 | 0.4884 |
| 0.1199 | 33.86 | 3420 | 1.9896 | 0.4978 | 0.4570 | 0.4952 | 0.4753 |
| 0.1259 | 33.96 | 3430 | 2.0485 | 0.4857 | 0.4486 | 0.5215 | 0.4823 |
| 0.1472 | 34.06 | 3440 | 2.0220 | 0.4945 | 0.4527 | 0.4809 | 0.4664 |
| 0.1207 | 34.16 | 3450 | 2.0023 | 0.4956 | 0.4581 | 0.5359 | 0.4939 |
| 0.1022 | 34.26 | 3460 | 2.0544 | 0.4956 | 0.4577 | 0.5311 | 0.4917 |
| 0.1334 | 34.36 | 3470 | 2.1080 | 0.4945 | 0.4588 | 0.5598 | 0.5043 |
| 0.118 | 34.46 | 3480 | 2.0726 | 0.4956 | 0.4582 | 0.5383 | 0.4950 |
| 0.1527 | 34.55 | 3490 | 1.9606 | 0.4945 | 0.4502 | 0.4545 | 0.4524 |
| 0.13 | 34.65 | 3500 | 1.9395 | 0.4967 | 0.4595 | 0.5431 | 0.4978 |
| 0.1235 | 34.75 | 3510 | 1.9903 | 0.5055 | 0.4646 | 0.5024 | 0.4828 |
| 0.1616 | 34.85 | 3520 | 1.9800 | 0.4989 | 0.4626 | 0.5622 | 0.5076 |
| 0.1361 | 34.95 | 3530 | 1.9061 | 0.5011 | 0.4593 | 0.4856 | 0.4721 |
| 0.1188 | 35.05 | 3540 | 1.9699 | 0.4956 | 0.4586 | 0.5431 | 0.4973 |
| 0.144 | 35.15 | 3550 | 2.0882 | 0.4967 | 0.4635 | 0.6077 | 0.5259 |
| 0.1353 | 35.25 | 3560 | 2.0487 | 0.4967 | 0.4569 | 0.5072 | 0.4807 |
| 0.1136 | 35.35 | 3570 | 2.0453 | 0.5 | 0.4625 | 0.5455 | 0.5005 |
| 0.1208 | 35.45 | 3580 | 2.0445 | 0.5044 | 0.4619 | 0.4785 | 0.4700 |
| 0.1077 | 35.54 | 3590 | 2.0570 | 0.5033 | 0.4591 | 0.4569 | 0.4580 |
| 0.1038 | 35.64 | 3600 | 2.0791 | 0.5055 | 0.4658 | 0.5215 | 0.4921 |
| 0.1197 | 35.74 | 3610 | 2.0838 | 0.5044 | 0.4663 | 0.5455 | 0.5028 |
| 0.138 | 35.84 | 3620 | 2.0788 | 0.4967 | 0.4543 | 0.4761 | 0.4650 |
| 0.1348 | 35.94 | 3630 | 2.0215 | 0.4912 | 0.4497 | 0.4809 | 0.4647 |
| 0.1015 | 36.04 | 3640 | 2.0355 | 0.4945 | 0.4535 | 0.4904 | 0.4713 |
| 0.115 | 36.14 | 3650 | 2.0643 | 0.4901 | 0.4502 | 0.4976 | 0.4727 |
| 0.1085 | 36.24 | 3660 | 2.0985 | 0.4967 | 0.4526 | 0.4569 | 0.4548 |
| 0.1024 | 36.34 | 3670 | 2.1480 | 0.4967 | 0.4583 | 0.5263 | 0.4900 |
| 0.0993 | 36.44 | 3680 | 2.1946 | 0.4879 | 0.4518 | 0.5383 | 0.4913 |
| 0.1269 | 36.53 | 3690 | 2.1727 | 0.5 | 0.4569 | 0.4689 | 0.4628 |
| 0.0976 | 36.63 | 3700 | 2.1584 | 0.4934 | 0.4549 | 0.5191 | 0.4849 |
| 0.1099 | 36.73 | 3710 | 2.1804 | 0.4978 | 0.4579 | 0.5072 | 0.4813 |
| 0.1103 | 36.83 | 3720 | 2.1581 | 0.5154 | 0.4709 | 0.4450 | 0.4576 |
| 0.1168 | 36.93 | 3730 | 2.1793 | 0.4923 | 0.4567 | 0.5550 | 0.5011 |
| 0.1096 | 37.03 | 3740 | 2.1582 | 0.4956 | 0.4577 | 0.5311 | 0.4917 |
| 0.0967 | 37.13 | 3750 | 2.1912 | 0.5044 | 0.4657 | 0.5359 | 0.4983 |
| 0.1195 | 37.23 | 3760 | 2.1604 | 0.5055 | 0.4628 | 0.4761 | 0.4693 |
| 0.1331 | 37.33 | 3770 | 2.1033 | 0.5066 | 0.4671 | 0.5263 | 0.4949 |
| 0.1028 | 37.43 | 3780 | 2.1223 | 0.5055 | 0.4652 | 0.5120 | 0.4875 |
| 0.1225 | 37.52 | 3790 | 2.1905 | 0.5022 | 0.4648 | 0.5526 | 0.5049 |
| 0.1112 | 37.62 | 3800 | 2.1669 | 0.5033 | 0.4626 | 0.5024 | 0.4817 |
| 0.1155 | 37.72 | 3810 | 2.1987 | 0.5077 | 0.4702 | 0.5670 | 0.5141 |
| 0.126 | 37.82 | 3820 | 2.1550 | 0.5121 | 0.4706 | 0.4976 | 0.4837 |
| 0.0846 | 37.92 | 3830 | 2.1935 | 0.5110 | 0.4711 | 0.5263 | 0.4972 |
| 0.1035 | 38.02 | 3840 | 2.2105 | 0.5121 | 0.4725 | 0.5335 | 0.5011 |
| 0.1138 | 38.12 | 3850 | 2.1942 | 0.5 | 0.4619 | 0.5359 | 0.4961 |
| 0.0927 | 38.22 | 3860 | 2.1349 | 0.5132 | 0.4717 | 0.4976 | 0.4843 |
| 0.0941 | 38.32 | 3870 | 2.1465 | 0.5044 | 0.4665 | 0.5502 | 0.5049 |
| 0.0993 | 38.42 | 3880 | 2.1494 | 0.5121 | 0.4715 | 0.5144 | 0.4920 |
| 0.093 | 38.51 | 3890 | 2.1811 | 0.5209 | 0.4796 | 0.5072 | 0.4930 |
| 0.1153 | 38.61 | 3900 | 2.1946 | 0.5099 | 0.4706 | 0.5359 | 0.5011 |
| 0.0941 | 38.71 | 3910 | 2.1963 | 0.5165 | 0.4763 | 0.5287 | 0.5011 |
| 0.1027 | 38.81 | 3920 | 2.1836 | 0.5176 | 0.4709 | 0.4067 | 0.4365 |
| 0.1247 | 38.91 | 3930 | 2.1881 | 0.5088 | 0.4688 | 0.5215 | 0.4938 |
| 0.1107 | 39.01 | 3940 | 2.1822 | 0.5055 | 0.4664 | 0.5311 | 0.4966 |
| 0.0906 | 39.11 | 3950 | 2.1764 | 0.5 | 0.4565 | 0.4641 | 0.4603 |
| 0.1067 | 39.21 | 3960 | 2.2267 | 0.5033 | 0.4654 | 0.5478 | 0.5033 |
| 0.1187 | 39.31 | 3970 | 2.1916 | 0.5088 | 0.4670 | 0.4904 | 0.4784 |
| 0.1002 | 39.41 | 3980 | 2.1903 | 0.5044 | 0.4619 | 0.4785 | 0.4700 |
| 0.0716 | 39.5 | 3990 | 2.2750 | 0.5055 | 0.4655 | 0.5167 | 0.4898 |
| 0.133 | 39.6 | 4000 | 2.3035 | 0.5022 | 0.4625 | 0.5167 | 0.4881 |
| 0.0781 | 39.7 | 4010 | 2.3517 | 0.4967 | 0.4625 | 0.5909 | 0.5189 |
| 0.1077 | 39.8 | 4020 | 2.2651 | 0.5099 | 0.4697 | 0.5191 | 0.4932 |
| 0.112 | 39.9 | 4030 | 2.2087 | 0.5099 | 0.4673 | 0.4785 | 0.4728 |
| 0.1035 | 40.0 | 4040 | 2.1908 | 0.5077 | 0.4658 | 0.4880 | 0.4766 |
| 0.089 | 40.1 | 4050 | 2.2144 | 0.5110 | 0.4701 | 0.5072 | 0.4879 |
| 0.101 | 40.2 | 4060 | 2.2210 | 0.5110 | 0.4695 | 0.4976 | 0.4832 |
| 0.0869 | 40.3 | 4070 | 2.2460 | 0.5099 | 0.4698 | 0.5215 | 0.4943 |
| 0.0975 | 40.4 | 4080 | 2.2820 | 0.5121 | 0.4731 | 0.5478 | 0.5078 |
| 0.097 | 40.5 | 4090 | 2.2620 | 0.5099 | 0.4690 | 0.5072 | 0.4874 |
| 0.0985 | 40.59 | 4100 | 2.2593 | 0.5066 | 0.4678 | 0.5383 | 0.5006 |
| 0.1102 | 40.69 | 4110 | 2.2689 | 0.5022 | 0.4658 | 0.5694 | 0.5124 |
| 0.1101 | 40.79 | 4120 | 2.2530 | 0.5 | 0.4620 | 0.5383 | 0.4972 |
| 0.124 | 40.89 | 4130 | 2.1621 | 0.5132 | 0.4679 | 0.4354 | 0.4511 |
| 0.1342 | 40.99 | 4140 | 2.1030 | 0.5044 | 0.4641 | 0.5096 | 0.4857 |
| 0.1111 | 41.09 | 4150 | 2.0685 | 0.5110 | 0.4684 | 0.4785 | 0.4734 |
| 0.1144 | 41.19 | 4160 | 2.0743 | 0.5077 | 0.4654 | 0.4833 | 0.4742 |
| 0.0943 | 41.29 | 4170 | 2.0994 | 0.5143 | 0.47 | 0.4498 | 0.4597 |
| 0.0974 | 41.39 | 4180 | 2.1449 | 0.5033 | 0.4608 | 0.4785 | 0.4695 |
| 0.0955 | 41.49 | 4190 | 2.2387 | 0.4945 | 0.4557 | 0.5167 | 0.4843 |
| 0.0921 | 41.58 | 4200 | 2.2540 | 0.4967 | 0.4548 | 0.4809 | 0.4674 |
| 0.1114 | 41.68 | 4210 | 2.2426 | 0.5066 | 0.4625 | 0.4569 | 0.4597 |
| 0.1085 | 41.78 | 4220 | 2.2554 | 0.5066 | 0.4675 | 0.5335 | 0.4983 |
| 0.1082 | 41.88 | 4230 | 2.2176 | 0.5110 | 0.4697 | 0.5 | 0.4844 |
| 0.0884 | 41.98 | 4240 | 2.2295 | 0.5132 | 0.4710 | 0.4856 | 0.4782 |
| 0.1174 | 42.08 | 4250 | 2.2404 | 0.5099 | 0.4686 | 0.5 | 0.4838 |
| 0.1116 | 42.18 | 4260 | 2.2222 | 0.5121 | 0.4689 | 0.4689 | 0.4689 |
| 0.0931 | 42.28 | 4270 | 2.2391 | 0.5055 | 0.4671 | 0.5431 | 0.5022 |
| 0.0966 | 42.38 | 4280 | 2.2378 | 0.5033 | 0.4617 | 0.4904 | 0.4756 |
| 0.0775 | 42.48 | 4290 | 2.2732 | 0.5099 | 0.4645 | 0.4378 | 0.4507 |
| 0.0851 | 42.57 | 4300 | 2.3274 | 0.4978 | 0.4581 | 0.5096 | 0.4824 |
| 0.0953 | 42.67 | 4310 | 2.3597 | 0.4989 | 0.4606 | 0.5311 | 0.4933 |
| 0.095 | 42.77 | 4320 | 2.3421 | 0.5 | 0.4607 | 0.5191 | 0.4882 |
| 0.0894 | 42.87 | 4330 | 2.3274 | 0.5022 | 0.4642 | 0.5431 | 0.5006 |
| 0.1045 | 42.97 | 4340 | 2.2580 | 0.4967 | 0.4531 | 0.4617 | 0.4573 |
| 0.0749 | 43.07 | 4350 | 2.2307 | 0.5022 | 0.4582 | 0.4593 | 0.4588 |
| 0.0928 | 43.17 | 4360 | 2.2475 | 0.4989 | 0.4589 | 0.5072 | 0.4818 |
| 0.1116 | 43.27 | 4370 | 2.2234 | 0.5022 | 0.4615 | 0.5024 | 0.4811 |
| 0.1058 | 43.37 | 4380 | 2.1949 | 0.5055 | 0.4662 | 0.5287 | 0.4955 |
| 0.1032 | 43.47 | 4390 | 2.2132 | 0.5011 | 0.4627 | 0.5335 | 0.4956 |
| 0.0973 | 43.56 | 4400 | 2.2195 | 0.4989 | 0.4583 | 0.5 | 0.4783 |
| 0.0922 | 43.66 | 4410 | 2.2767 | 0.5044 | 0.4672 | 0.5622 | 0.5103 |
| 0.0899 | 43.76 | 4420 | 2.2685 | 0.5011 | 0.4595 | 0.4880 | 0.4733 |
| 0.0933 | 43.86 | 4430 | 2.2780 | 0.4956 | 0.4563 | 0.5120 | 0.4825 |
| 0.0978 | 43.96 | 4440 | 2.2994 | 0.4967 | 0.4582 | 0.5239 | 0.4888 |
| 0.1219 | 44.06 | 4450 | 2.2724 | 0.5033 | 0.4626 | 0.5024 | 0.4817 |
| 0.0878 | 44.16 | 4460 | 2.2600 | 0.5055 | 0.4636 | 0.4880 | 0.4755 |
| 0.0791 | 44.26 | 4470 | 2.2903 | 0.5055 | 0.4658 | 0.5215 | 0.4921 |
| 0.1081 | 44.36 | 4480 | 2.2805 | 0.5077 | 0.4674 | 0.5144 | 0.4897 |
| 0.095 | 44.46 | 4490 | 2.2506 | 0.5044 | 0.4629 | 0.4928 | 0.4774 |
| 0.101 | 44.55 | 4500 | 2.2338 | 0.5154 | 0.4707 | 0.4426 | 0.4562 |
| 0.0833 | 44.65 | 4510 | 2.2744 | 0.5011 | 0.4589 | 0.4809 | 0.4696 |
| 0.0987 | 44.75 | 4520 | 2.3038 | 0.4989 | 0.4604 | 0.5287 | 0.4922 |
| 0.095 | 44.85 | 4530 | 2.2748 | 0.5055 | 0.4626 | 0.4737 | 0.4681 |
| 0.083 | 44.95 | 4540 | 2.2818 | 0.5055 | 0.4630 | 0.4785 | 0.4706 |
| 0.0853 | 45.05 | 4550 | 2.3144 | 0.4989 | 0.4591 | 0.5096 | 0.4830 |
| 0.1026 | 45.15 | 4560 | 2.3211 | 0.4956 | 0.4568 | 0.5191 | 0.4860 |
| 0.1015 | 45.25 | 4570 | 2.2939 | 0.5033 | 0.4619 | 0.4928 | 0.4769 |
| 0.0843 | 45.35 | 4580 | 2.2916 | 0.5055 | 0.4638 | 0.4904 | 0.4767 |
| 0.0799 | 45.45 | 4590 | 2.2901 | 0.5088 | 0.4665 | 0.4833 | 0.4747 |
| 0.1088 | 45.54 | 4600 | 2.2795 | 0.5077 | 0.4648 | 0.4737 | 0.4692 |
| 0.0574 | 45.64 | 4610 | 2.3172 | 0.5154 | 0.4756 | 0.5359 | 0.5039 |
| 0.0968 | 45.74 | 4620 | 2.3217 | 0.5077 | 0.4670 | 0.5072 | 0.4862 |
| 0.0942 | 45.84 | 4630 | 2.3094 | 0.5110 | 0.4692 | 0.4928 | 0.4807 |
| 0.0735 | 45.94 | 4640 | 2.3088 | 0.5099 | 0.4670 | 0.4737 | 0.4703 |
| 0.0796 | 46.04 | 4650 | 2.3114 | 0.5099 | 0.4665 | 0.4665 | 0.4665 |
| 0.1033 | 46.14 | 4660 | 2.3266 | 0.5088 | 0.4665 | 0.4833 | 0.4747 |
| 0.1134 | 46.24 | 4670 | 2.3229 | 0.5099 | 0.4690 | 0.5072 | 0.4874 |
| 0.0845 | 46.34 | 4680 | 2.3422 | 0.5044 | 0.4667 | 0.5526 | 0.5060 |
| 0.0788 | 46.44 | 4690 | 2.3420 | 0.5022 | 0.4645 | 0.5478 | 0.5027 |
| 0.0762 | 46.53 | 4700 | 2.3169 | 0.5044 | 0.4606 | 0.4617 | 0.4612 |
| 0.081 | 46.63 | 4710 | 2.3352 | 0.5088 | 0.4656 | 0.4689 | 0.4672 |
| 0.093 | 46.73 | 4720 | 2.3560 | 0.5066 | 0.4655 | 0.5 | 0.4821 |
| 0.0795 | 46.83 | 4730 | 2.3509 | 0.5044 | 0.4626 | 0.4880 | 0.4750 |
| 0.0869 | 46.93 | 4740 | 2.3430 | 0.5055 | 0.4640 | 0.4928 | 0.4780 |
| 0.066 | 47.03 | 4750 | 2.3332 | 0.5066 | 0.4644 | 0.4833 | 0.4736 |
| 0.0805 | 47.13 | 4760 | 2.3379 | 0.5077 | 0.4658 | 0.4880 | 0.4766 |
| 0.0836 | 47.23 | 4770 | 2.3580 | 0.5055 | 0.4654 | 0.5144 | 0.4886 |
| 0.1056 | 47.33 | 4780 | 2.3479 | 0.5044 | 0.4639 | 0.5072 | 0.4846 |
| 0.079 | 47.43 | 4790 | 2.3445 | 0.5055 | 0.4646 | 0.5024 | 0.4828 |
| 0.1065 | 47.52 | 4800 | 2.3466 | 0.5044 | 0.4634 | 0.5 | 0.4810 |
| 0.0775 | 47.62 | 4810 | 2.3591 | 0.5 | 0.4607 | 0.5191 | 0.4882 |
| 0.0957 | 47.72 | 4820 | 2.3579 | 0.5011 | 0.4619 | 0.5215 | 0.4899 |
| 0.0952 | 47.82 | 4830 | 2.3498 | 0.5099 | 0.4693 | 0.5120 | 0.4897 |
| 0.0846 | 47.92 | 4840 | 2.3352 | 0.5066 | 0.4640 | 0.4785 | 0.4711 |
| 0.0943 | 48.02 | 4850 | 2.3375 | 0.5066 | 0.4642 | 0.4809 | 0.4724 |
| 0.084 | 48.12 | 4860 | 2.3523 | 0.5088 | 0.4681 | 0.5096 | 0.4880 |
| 0.0912 | 48.22 | 4870 | 2.3669 | 0.4989 | 0.4601 | 0.5239 | 0.4899 |
| 0.0911 | 48.32 | 4880 | 2.3715 | 0.5033 | 0.4640 | 0.5239 | 0.4921 |
| 0.0844 | 48.42 | 4890 | 2.3650 | 0.5088 | 0.4681 | 0.5096 | 0.4880 |
| 0.0784 | 48.51 | 4900 | 2.3656 | 0.5088 | 0.4681 | 0.5096 | 0.4880 |
| 0.0879 | 48.61 | 4910 | 2.3643 | 0.5088 | 0.4681 | 0.5096 | 0.4880 |
| 0.0872 | 48.71 | 4920 | 2.3633 | 0.5088 | 0.4677 | 0.5024 | 0.4844 |
| 0.0614 | 48.81 | 4930 | 2.3737 | 0.5110 | 0.4705 | 0.5144 | 0.4914 |
| 0.0827 | 48.91 | 4940 | 2.3759 | 0.5099 | 0.4689 | 0.5048 | 0.4862 |
| 0.0791 | 49.01 | 4950 | 2.3732 | 0.5066 | 0.4653 | 0.4976 | 0.4809 |
| 0.0903 | 49.11 | 4960 | 2.3698 | 0.5033 | 0.4614 | 0.4856 | 0.4732 |
| 0.0811 | 49.21 | 4970 | 2.3728 | 0.5055 | 0.4640 | 0.4928 | 0.4780 |
| 0.0721 | 49.31 | 4980 | 2.3783 | 0.5077 | 0.4665 | 0.5 | 0.4827 |
| 0.0729 | 49.41 | 4990 | 2.3762 | 0.5066 | 0.4652 | 0.4952 | 0.4797 |
| 0.0642 | 49.5 | 5000 | 2.3771 | 0.5066 | 0.4652 | 0.4952 | 0.4797 |
| 0.0675 | 49.6 | 5010 | 2.3800 | 0.5055 | 0.4641 | 0.4952 | 0.4792 |
| 0.0603 | 49.7 | 5020 | 2.3833 | 0.5088 | 0.4677 | 0.5024 | 0.4844 |
| 0.0744 | 49.8 | 5030 | 2.3854 | 0.5088 | 0.4677 | 0.5024 | 0.4844 |
| 0.0886 | 49.9 | 5040 | 2.3853 | 0.5088 | 0.4677 | 0.5024 | 0.4844 |
| 0.0843 | 50.0 | 5050 | 2.3851 | 0.5088 | 0.4677 | 0.5024 | 0.4844 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.13.1+cu116
- Datasets 2.16.1
- Tokenizers 0.15.0
|
HarrisonColby/ppo-Huggy
|
HarrisonColby
| 2024-01-23T14:31:50Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-01-23T14:31:44Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: HarrisonColby/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
silvercoder67/Mistral-7b-instruct-v0.2-summ-dpo-e1
|
silvercoder67
| 2024-01-23T14:27:15Z | 0 | 0 | null |
[
"safetensors",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-01-23T14:22:50Z |
---
license: cc-by-nc-4.0
---
A description of how to load and test the model will be added soon. More details on training and data will be added as well.
### **Loading the Model**
Use the following Python code to load the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# TBD by the author. Until then, a generic loading sketch (model id assumed from this repo):
model_id = "silvercoder67/Mistral-7b-instruct-v0.2-summ-dpo-e1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"  # device_map="auto" requires `accelerate`
)
```
### **Generating Text**
To generate text, use the following Python code:
```python
text = "Hi, my name is "
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
Samra1211/test-trainer
|
Samra1211
| 2024-01-23T14:21:38Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-23T14:21:14Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: test-trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-trainer
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
xformAI/facebook-opt-125m-qcqa-ub-6-best-for-q-loss
|
xformAI
| 2024-01-23T14:18:19Z | 1,358 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T14:15:41Z |
---
license: mit
language:
- en
library_name: transformers
---
This is a QCQA version of the original model facebook/opt-125m. In this version the original MHA architecture is preserved, but instead of collapsing each group to a single K/V head, the different K/V heads belonging to the same group share the same mean-pooled K or V values. It has up to 6 groups of KV heads per layer instead of the original 12 KV heads of the MHA implementation. This implementation is supposed to be more efficient than the corresponding GQA one. It has been optimized for quality loss.
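As an illustration of the grouping described above, the sketch below mean-pools K (or V) heads within a group and broadcasts the pooled value back to every head of that group; the tensor shapes are hypothetical and this is not the conversion script used to build this checkpoint.
```python
import torch

# Illustrative sketch of group-wise mean pooling of K/V heads (hypothetical shapes).
num_heads, num_groups, head_dim, seq_len = 12, 6, 64, 8
heads_per_group = num_heads // num_groups  # 2 K/V heads share each group

# Per-head key tensor: (num_heads, seq_len, head_dim)
k = torch.randn(num_heads, seq_len, head_dim)

# Mean-pool the K heads inside each group, then broadcast the pooled value
# back to every head of that group so the MHA layout is preserved.
k_grouped = k.view(num_groups, heads_per_group, seq_len, head_dim).mean(dim=1)
k_qcqa = k_grouped.repeat_interleave(heads_per_group, dim=0)  # back to (num_heads, ...)

assert k_qcqa.shape == k.shape
```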
|
NobodyExistsOnTheInternet/Llama-2-70b-x8-MoE-clown-truck
|
NobodyExistsOnTheInternet
| 2024-01-23T14:14:45Z | 1,366 | 8 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T07:12:13Z |
---
license: mit
---

The biggest model ever to have been released. Has not been tested, nor do I have the compute to test it. If anyone is willing to host this to help me test, please share your results in the community tab.
Thank you for coming to my ted talk.
This is nearly 960GB of weights. It probably requires at least 8x A100 80GB to run in 4 bits. *probably*
|
dlibf/zephyr-7b-sft-full
|
dlibf
| 2024-01-23T14:10:00Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T10:04:09Z |
---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- alignment-handbook
- generated_from_trainer
- trl
- sft
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrachat_200k
model-index:
- name: zephyr-7b-sft-full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-sft-full
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/ultrachat_200k dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9358
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
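As a rough mapping (not the actual launch command, which comes from the alignment-handbook recipes), the hyperparameters above translate to a `TrainingArguments` sketch like this:
```python
from transformers import TrainingArguments

# Rough mapping of the hyperparameters listed above (sketch; the real run used
# the alignment-handbook recipe across 8 GPUs).
args = TrainingArguments(
    output_dir="zephyr-7b-sft-full",
    learning_rate=2e-05,
    per_device_train_batch_size=16,  # x 8 devices -> total train batch size 128
    per_device_eval_batch_size=8,    # x 8 devices -> total eval batch size 64
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    seed=42,
)
```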
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9081 | 1.0 | 1090 | 0.9358 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.1
|
Preetha13/my-dog-xzg
|
Preetha13
| 2024-01-23T14:04:45Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-23T14:00:19Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Dog-XZG Dreambooth model trained by Preetha13 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 960221104085
Sample pictures of this concept:

|
Rybens/truthful_dpo_tomgrc_fusionnet_7bx2_moe_13b_GGUF
|
Rybens
| 2024-01-23T14:04:42Z | 16 | 6 | null |
[
"gguf",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-01-21T19:29:02Z |
---
license: mit
---
Some of the quants of https://huggingface.co/yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B model
For other quants go to https://huggingface.co/Nan-Do/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-GGUF
|
Shruthi-S/mlproject-bert-ten
|
Shruthi-S
| 2024-01-23T14:03:38Z | 45 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"pretraining",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | null | 2024-01-23T14:03:15Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: mlproject-bert-ten
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mlproject-bert-ten
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 8.4869
- Validation Loss: 8.6187
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 1e-04, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 8.4869 | 8.6187 | 0 |
### Framework versions
- Transformers 4.38.0.dev0
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
hadrakey/opt-350m-sft
|
hadrakey
| 2024-01-23T13:57:18Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"base_model:facebook/opt-350m",
"base_model:finetune:facebook/opt-350m",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T13:30:35Z |
---
license: other
base_model: facebook/opt-350m
tags:
- generated_from_trainer
model-index:
- name: opt-350m-sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-350m-sft
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.41e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
m4ddki7/q-Taxi-newyork-v2
|
m4ddki7
| 2024-01-23T13:57:02Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-23T13:57:01Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-newyork-v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL course notebook
# (it downloads the pickle from the Hub and loads it).
model = load_from_hub(repo_id="m4ddki7/q-Taxi-newyork-v2", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
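Below is a hedged sketch of rolling out the greedy policy from the loaded Q-table; the `"qtable"` key and the classic gym reset/step API are assumptions about the pickled dict and environment version:

```python
# Hedged sketch: act greedily with the loaded Q-table for one episode.
# Assumes model["qtable"] is a NumPy array indexed by state (classic gym API;
# gymnasium returns (obs, info) from reset and a 5-tuple from step).
state = env.reset()
done = False
total_reward = 0
while not done:
    action = int(model["qtable"][state].argmax())
    state, reward, done, info = env.step(action)
    total_reward += reward
print("Episode reward:", total_reward)
```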
|
RicardoMG1/clasificador-muchocine
|
RicardoMG1
| 2024-01-23T13:49:19Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"base_model:finetune:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-23T12:30:56Z |
---
base_model: mrm8488/electricidad-base-discriminator
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clasificador-muchocine
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4671
- Accuracy: 0.4297
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3240 | 0.3871 |
| 1.3676 | 2.0 | 776 | 1.3424 | 0.4297 |
| 0.9438 | 3.0 | 1164 | 1.4671 | 0.4297 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
beratcmn/ppo-Huggy
|
beratcmn
| 2024-01-23T13:41:48Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-01-23T13:41:43Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: beratcmn/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
fira7s/corgy_dog_LoRA
|
fira7s
| 2024-01-23T13:37:42Z | 1 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-23T13:37:39Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK person
license: openrail++
---
# SDXL LoRA DreamBooth - fira7s/corgy_dog_LoRA
<Gallery />
## Model description
These are fira7s/corgy_dog_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of TOK person to trigger the image generation.
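A hedged loading sketch with diffusers (assumes a diffusers release that supports `load_lora_weights` on SDXL pipelines):

```python
import torch
from diffusers import DiffusionPipeline

# Hedged sketch: load the SDXL base model, attach these LoRA weights, and sample.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("fira7s/corgy_dog_LoRA")

image = pipe("a photo of TOK person", num_inference_steps=25).images[0]
image.save("tok_person.png")
```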
## Download model
Weights for this model are available in Safetensors format.
[Download](/fira7s/corgy_dog_LoRA/tree/main) them in the Files & versions tab.
|
DiwasDiwas/t5-small-ZapMed
|
DiwasDiwas
| 2024-01-23T13:33:25Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"safetensors",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-12T01:42:32Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: t5-small-ZapMed
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# t5-small-ZapMed
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Cartinoe5930/DARE-Merging
|
Cartinoe5930
| 2024-01-23T13:31:24Z | 46 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:Open-Orca/Mistral-7B-OpenOrca",
"base_model:merge:Open-Orca/Mistral-7B-OpenOrca",
"base_model:WizardLMTeam/WizardMath-7B-V1.1",
"base_model:merge:WizardLMTeam/WizardMath-7B-V1.1",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:merge:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:openchat/openchat-3.5-0106",
"base_model:merge:openchat/openchat-3.5-0106",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T12:40:00Z |
---
base_model:
- openchat/openchat-3.5-0106
- mistralai/Mistral-7B-Instruct-v0.2
- Open-Orca/Mistral-7B-OpenOrca
- WizardLM/WizardMath-7B-V1.1
tags:
- mergekit
- merge
license: apache-2.0
---
# result
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) as a base.
### Models Merged
The following models were included in the merge:
* [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
* [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mistralai/Mistral-7B-Instruct-v0.2
# No parameters necessary for base model
- model: Open-Orca/Mistral-7B-OpenOrca
parameters:
density: 0.5
weight: 0.3
- model: openchat/openchat-3.5-0106
parameters:
density: 0.5
weight: 0.3
- model: WizardLM/WizardMath-7B-V1.1
parameters:
density: 0.5
weight: 0.3
merge_method: dare_ties
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
normalize: true
dtype: float16
```
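Since the merge produces a plain dense checkpoint, it should load like any other Mistral-7B model; a hedged sketch (the `[INST]` prompt format is assumed from the Mistral-Instruct base):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: load the merged checkpoint like a regular Mistral-7B model.
tokenizer = AutoTokenizer.from_pretrained("Cartinoe5930/DARE-Merging")
model = AutoModelForCausalLM.from_pretrained(
    "Cartinoe5930/DARE-Merging", torch_dtype="auto", device_map="auto"
)

# Mistral-Instruct style prompt (an assumption, since the merge uses it as the base).
prompt = "[INST] What is model merging? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```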
|
ayousanz/japanese-mistral-150m-recipe
|
ayousanz
| 2024-01-23T13:30:14Z | 45 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T13:26:13Z |
---
base_model: None
tags:
- generated_from_trainer
model-index:
- name: checkpoints-mistral-150M-FA2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# checkpoints-mistral-150M-FA2
This model was trained from scratch (no base model) on the dataset listed below.
It achieves the following results on the evaluation set:
- Loss: 8.3607
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
[graelo/wikipedia 20230901 jp only](https://huggingface.co/datasets/graelo/wikipedia/tree/main/data/20230901/ja)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0006
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 256
- total_train_batch_size: 4096
- optimizer: Adam with betas=(0.9,0.95) and epsilon=0.0001
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.361 | 2.87 | 100 | 8.3607 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.14.1
|
zooknowsys/wtoc_LoRA
|
zooknowsys
| 2024-01-23T13:29:59Z | 4 | 0 |
peft
|
[
"peft",
"base_model:Qwen/Qwen-VL-Chat",
"base_model:adapter:Qwen/Qwen-VL-Chat",
"region:us"
] | null | 2024-01-09T10:18:41Z |
---
library_name: peft
base_model: Qwen/Qwen-VL-Chat
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
- LoRA: wdtag -> long caption.
LICENSE: Tongyi Qianwen LICENSE
## Model Details
- Finetuned.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** cella
- **Model type:** LoRA
- **Language(s) (NLP):** Eng
- **License:** Tongyi Qianwen LICENSE
- **Finetuned from model [optional]:** Qwen-VL-Chat
## Uses
### Model Load
```
LoRA_DIR = "/path-to-LoRA-dir"

if OPTION_VLM_METHOD == 'qwen_chat_LoRA':
    from peft import AutoPeftModelForCausalLM
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from transformers.generation import GenerationConfig
    import torch

    torch.manual_seed(1234)

    # Note: The default behavior now has injection attack prevention off.
    tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)

    # use cuda device
    model = AutoPeftModelForCausalLM.from_pretrained(
        LoRA_DIR,  # path to the output directory
        device_map="auto",
        trust_remote_code=True
    ).eval()

    # Specify hyperparameters for generation (No need to do this if you are using transformers>=4.32.0)
    model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
else:
    print("skipped.")
```
### Captioning
```
if OPTION_VLM_METHOD == 'qwen_chat':
    # Imports assumed available from the notebook environment, added here for completeness.
    import os
    import string
    import re

    import numpy as np
    import matplotlib.pyplot as plt
    from IPython.display import clear_output
    from PIL import Image
    from langdetect import detect

    # is_ascii, is_english, is_sufficient_length, is_over_length and
    # remove_fixed_patterns are helper functions defined elsewhere in the notebook.
    COMMON_QUERY = 'What is in the image? Briefly describe the overall, in English'
    MORE_QUERY = 'What is in the image? Describe the overall in detail, in English'
    LESS_QUERY = 'What is in the image? Briefly summarize the description, in English'

    for image in dataset.images:
        img_name = os.path.basename(image.path)
        img_name = os.path.splitext(img_name)[0]

        # Skip if a .txt file with the same name already exists in the output folder
        if OPTION_SKIP_EXISTING and os.path.exists(os.path.join(output_dir_VLM, img_name + '.txt')):
            clear_output(True)
            print("skipped: ", image.path)
            continue

        query = tokenizer.from_list_format([
            {'image': image.path},
            {'text': 'Make description using following words' + ', '.join(image.captions).replace('_', ' ')},
        ])
        response, history = model.chat(tokenizer, query=query, history=None)

        # ASCII check, language check, length check
        retry_count = 0
        while not is_ascii(response) or not is_english(response) or not is_sufficient_length(response) or not is_over_length(response):
            clear_output(True)
            retry_count += 1
            print("Retry count:", retry_count)
            if retry_count >= 25 and is_ascii(response):
                break
            if not is_sufficient_length(response):
                print("Too short. Retry...")
                query = tokenizer.from_list_format([
                    {'image': image.path},
                    {'text': MORE_QUERY},
                ])
            if not is_over_length(response):
                print("Too long. Retry...")
                query = tokenizer.from_list_format([
                    {'image': image.path},
                    {'text': LESS_QUERY},
                ])
            if retry_count % 5 == 0:
                history = None
                query = tokenizer.from_list_format([
                    {'image': image.path},
                    {'text': COMMON_QUERY},
                ])
            response, history = model.chat(tokenizer, query=query, history=history)

        response = remove_fixed_patterns(response)
        if OPTION_SAVE_TAGS:
            # Save the tags
            with open(os.path.join(output_dir_VLM, img_name + '.txt'), 'w') as file:
                file.write(response)
        image.captions = response

        clear_output(True)
        print("Saved for ", image.path, ": ", response)

        # Display the image
        img = Image.open(image.path)
        plt.imshow(np.asarray(img))
        plt.show()
else:
    print("skipped.")
```
### Framework versions
- PEFT 0.7.1
|
Preetha13/my-pet-dog-xzg
|
Preetha13
| 2024-01-23T13:29:23Z | 3 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-23T13:25:08Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog-XZG Dreambooth model trained by Preetha13 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 960221104085
Sample pictures of this concept:

|
zooknowsys/humanizeLoRA_0123
|
zooknowsys
| 2024-01-23T13:29:11Z | 3 | 0 |
peft
|
[
"peft",
"base_model:Qwen/Qwen-VL-Chat",
"base_model:adapter:Qwen/Qwen-VL-Chat",
"region:us"
] | null | 2024-01-23T13:27:36Z |
---
library_name: peft
base_model: Qwen/Qwen-VL-Chat
---
# Model Card for Model ID
Qwen-VL LoRA
LICENSE: Tongyi Qianwen LICENSE
### Framework versions
- PEFT 0.7.1
|
sangngoc27042001/text-summarization
|
sangngoc27042001
| 2024-01-23T13:28:55Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"autotrain",
"text2text-generation",
"dataset:my_project/autotrain-data",
"region:us"
] |
text2text-generation
| 2024-01-23T13:28:52Z |
---
tags:
- autotrain
- text2text-generation
widget:
- text: "I love AutoTrain"
datasets:
- my_project/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Seq2Seq
## Validation Metrics
loss: 0.8724366426467896
rouge1: 8.5449
rouge2: 0.4965
rougeL: 8.0692
rougeLsum: 8.509
gen_len: 60.5921
runtime: 204.2842
samples_per_second: 0.372
steps_per_second: 0.049
: 5.0
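A hedged usage sketch for this seq2seq summarization checkpoint (the input text is just the widget example above):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hedged sketch: load the AutoTrain seq2seq checkpoint and generate a summary.
model_path = "sangngoc27042001/text-summarization"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSeq2SeqLM.from_pretrained(model_path)

text = "I love AutoTrain"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```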
|
smutuvi/whisper-small-sw-common-voice-ndizi-248
|
smutuvi
| 2024-01-23T13:28:20Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:smutuvi/whisper-small-sw-common-voice",
"base_model:finetune:smutuvi/whisper-small-sw-common-voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-23T13:27:09Z |
---
license: apache-2.0
base_model: smutuvi/whisper-small-sw-common-voice
tags:
- generated_from_trainer
model-index:
- name: whisper-small-sw-common-voice-ndizi-248
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-sw-common-voice-ndizi-248
This model is a fine-tuned version of [smutuvi/whisper-small-sw-common-voice](https://huggingface.co/smutuvi/whisper-small-sw-common-voice) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3100
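A hedged inference sketch with the ASR pipeline (the audio path is a placeholder):

```python
from transformers import pipeline

# Hedged sketch: transcribe a Swahili audio file with this fine-tuned Whisper checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="smutuvi/whisper-small-sw-common-voice-ndizi-248",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```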
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6298 | 1.0 | 28 | 1.2171 |
| 1.1514 | 2.0 | 56 | 1.0364 |
| 0.9175 | 3.0 | 84 | 0.9578 |
| 0.6885 | 4.0 | 112 | 0.9664 |
| 0.5841 | 5.0 | 140 | 1.0001 |
| 0.3397 | 6.0 | 168 | 1.0233 |
| 0.3166 | 7.0 | 196 | 1.0291 |
| 0.2313 | 8.0 | 224 | 1.0749 |
| 0.1457 | 9.0 | 252 | 1.0857 |
| 0.1036 | 10.0 | 280 | 1.0689 |
| 0.0644 | 11.0 | 308 | 1.1099 |
| 0.072 | 12.0 | 336 | 1.1080 |
| 0.0519 | 13.0 | 364 | 1.1119 |
| 0.0312 | 14.0 | 392 | 1.1747 |
| 0.0331 | 15.0 | 420 | 1.1441 |
| 0.02 | 16.0 | 448 | 1.1413 |
| 0.017 | 17.0 | 476 | 1.1880 |
| 0.0157 | 18.0 | 504 | 1.1564 |
| 0.0146 | 19.0 | 532 | 1.1627 |
| 0.013 | 20.0 | 560 | 1.2088 |
| 0.0071 | 21.0 | 588 | 1.2054 |
| 0.006 | 22.0 | 616 | 1.2113 |
| 0.0066 | 23.0 | 644 | 1.2269 |
| 0.0073 | 24.0 | 672 | 1.1721 |
| 0.0064 | 25.0 | 700 | 1.1878 |
| 0.0084 | 26.0 | 728 | 1.1701 |
| 0.0024 | 27.0 | 756 | 1.2221 |
| 0.0056 | 28.0 | 784 | 1.2072 |
| 0.005 | 29.0 | 812 | 1.1742 |
| 0.0032 | 30.0 | 840 | 1.1930 |
| 0.0021 | 31.0 | 868 | 1.1996 |
| 0.0008 | 32.0 | 896 | 1.2344 |
| 0.0014 | 33.0 | 924 | 1.2153 |
| 0.0018 | 34.0 | 952 | 1.2324 |
| 0.0013 | 35.0 | 980 | 1.2281 |
| 0.0011 | 36.0 | 1008 | 1.2223 |
| 0.0006 | 37.0 | 1036 | 1.2326 |
| 0.0011 | 38.0 | 1064 | 1.2250 |
| 0.0007 | 39.0 | 1092 | 1.2270 |
| 0.001 | 40.0 | 1120 | 1.2226 |
| 0.0017 | 41.0 | 1148 | 1.2255 |
| 0.0011 | 42.0 | 1176 | 1.2175 |
| 0.0011 | 43.0 | 1204 | 1.2302 |
| 0.0025 | 44.0 | 1232 | 1.2176 |
| 0.0021 | 45.0 | 1260 | 1.2450 |
| 0.0016 | 46.0 | 1288 | 1.3209 |
| 0.0023 | 47.0 | 1316 | 1.2245 |
| 0.0021 | 48.0 | 1344 | 1.2601 |
| 0.0024 | 49.0 | 1372 | 1.2703 |
| 0.002 | 50.0 | 1400 | 1.2674 |
| 0.0011 | 51.0 | 1428 | 1.2644 |
| 0.0032 | 52.0 | 1456 | 1.2901 |
| 0.0007 | 53.0 | 1484 | 1.2652 |
| 0.0033 | 54.0 | 1512 | 1.2901 |
| 0.0009 | 55.0 | 1540 | 1.2584 |
| 0.0012 | 56.0 | 1568 | 1.2542 |
| 0.0013 | 57.0 | 1596 | 1.2607 |
| 0.0006 | 58.0 | 1624 | 1.2733 |
| 0.0004 | 59.0 | 1652 | 1.2763 |
| 0.0003 | 60.0 | 1680 | 1.2780 |
| 0.0003 | 61.0 | 1708 | 1.2799 |
| 0.0003 | 62.0 | 1736 | 1.2808 |
| 0.0003 | 63.0 | 1764 | 1.2821 |
| 0.0003 | 64.0 | 1792 | 1.2844 |
| 0.0003 | 65.0 | 1820 | 1.2863 |
| 0.0003 | 66.0 | 1848 | 1.2875 |
| 0.0003 | 67.0 | 1876 | 1.2888 |
| 0.0003 | 68.0 | 1904 | 1.2910 |
| 0.0002 | 69.0 | 1932 | 1.2919 |
| 0.0002 | 70.0 | 1960 | 1.2930 |
| 0.0002 | 71.0 | 1988 | 1.2947 |
| 0.0002 | 72.0 | 2016 | 1.2955 |
| 0.0002 | 73.0 | 2044 | 1.2967 |
| 0.0002 | 74.0 | 2072 | 1.2974 |
| 0.0002 | 75.0 | 2100 | 1.2989 |
| 0.0002 | 76.0 | 2128 | 1.2997 |
| 0.0002 | 77.0 | 2156 | 1.3006 |
| 0.0002 | 78.0 | 2184 | 1.3011 |
| 0.0002 | 79.0 | 2212 | 1.3019 |
| 0.0002 | 80.0 | 2240 | 1.3029 |
| 0.0002 | 81.0 | 2268 | 1.3035 |
| 0.0002 | 82.0 | 2296 | 1.3040 |
| 0.0002 | 83.0 | 2324 | 1.3050 |
| 0.0002 | 84.0 | 2352 | 1.3056 |
| 0.0002 | 85.0 | 2380 | 1.3057 |
| 0.0002 | 86.0 | 2408 | 1.3065 |
| 0.0002 | 87.0 | 2436 | 1.3066 |
| 0.0002 | 88.0 | 2464 | 1.3078 |
| 0.0002 | 89.0 | 2492 | 1.3075 |
| 0.0002 | 90.0 | 2520 | 1.3080 |
| 0.0002 | 91.0 | 2548 | 1.3083 |
| 0.0002 | 92.0 | 2576 | 1.3091 |
| 0.0002 | 93.0 | 2604 | 1.3091 |
| 0.0002 | 94.0 | 2632 | 1.3091 |
| 0.0002 | 95.0 | 2660 | 1.3097 |
| 0.0002 | 96.0 | 2688 | 1.3098 |
| 0.0002 | 97.0 | 2716 | 1.3102 |
| 0.0002 | 98.0 | 2744 | 1.3102 |
| 0.0002 | 99.0 | 2772 | 1.3099 |
| 0.0002 | 100.0 | 2800 | 1.3100 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
kreabs/DPOpenHermes-7B-v2_finetuned_dolly_1600
|
kreabs
| 2024-01-23T13:24:22Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T13:17:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mesolitica/translation-t5-small-standard-bahasa-cased-v2
|
mesolitica
| 2024-01-23T13:09:09Z | 15,536 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"ms",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-14T06:02:52Z |
---
language:
- ms
---
# Noisy Translation Small T5
Trained on a 1536 context length, this model is able to translate Malay, pasar Malay (social media texts or local context), English, Manglish, Javanese, Banjarese and Indonesian into the target language. It is also able to maintain the text structure as-is and only translate the necessary texts, e.g., programming code.
This version adds more coding translation data, noisy b.cari.com.my translations, noisy ChatGPT4 translations, and heavier post-filtering.
## how-to
```python
from transformers import T5ForConditionalGeneration, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
'mesolitica/translation-t5-small-standard-bahasa-cased-v2',
use_fast=False
)
model = T5ForConditionalGeneration.from_pretrained(
'mesolitica/translation-t5-small-standard-bahasa-cased-v2'
)
s = 'Hai, ada yang bisa saya bantu?'
input_ids = tokenizer.encode(f'terjemah ke Melayu: {s}', return_tensors = 'pt')
outputs = model.generate(input_ids, max_length = 100)
all_special_ids = [0, 1, 2]
outputs = [i for i in outputs[0] if i not in all_special_ids]
print(tokenizer.decode(outputs, spaces_between_special_tokens = False))
```
|
CLMBR/superlative-quantifier-transformer-1
|
CLMBR
| 2024-01-23T13:08:37Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-16T15:28:21Z |
---
tags:
- generated_from_trainer
model-index:
- name: superlative-quantifier-transformer-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# superlative-quantifier-transformer-1
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8811
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2257 | 0.03 | 76320 | 4.2120 |
| 4.0221 | 1.03 | 152640 | 4.0416 |
| 3.9131 | 0.03 | 228960 | 3.9674 |
| 3.8475 | 1.03 | 305280 | 3.9266 |
| 3.7974 | 0.03 | 381600 | 3.9019 |
| 3.7557 | 1.03 | 457920 | 3.8863 |
| 3.7223 | 0.03 | 534240 | 3.8759 |
| 3.696 | 1.03 | 610560 | 3.8692 |
| 3.668 | 0.03 | 686880 | 3.8648 |
| 3.6404 | 1.03 | 763200 | 3.8614 |
| 3.619 | 0.03 | 839520 | 3.8603 |
| 3.5962 | 1.03 | 915840 | 3.8590 |
| 3.5817 | 0.03 | 992160 | 3.8601 |
| 3.5625 | 0.03 | 1068480 | 3.8599 |
| 3.544 | 0.03 | 1144800 | 3.8615 |
| 3.5279 | 1.03 | 1221120 | 3.8617 |
| 3.5119 | 0.03 | 1297440 | 3.8635 |
| 3.4993 | 1.03 | 1373760 | 3.8644 |
| 3.4836 | 0.03 | 1450080 | 3.8650 |
| 3.4751 | 0.03 | 1526400 | 3.8681 |
| 3.467 | 0.03 | 1602720 | 3.8682 |
| 3.4583 | 0.03 | 1679040 | 3.8708 |
| 3.451 | 1.03 | 1755360 | 3.8718 |
| 3.4441 | 0.03 | 1831680 | 3.8737 |
| 3.429 | 0.03 | 1908000 | 3.8752 |
| 3.4162 | 1.03 | 1984320 | 3.8754 |
| 3.4051 | 0.03 | 2060640 | 3.8770 |
| 3.3914 | 0.03 | 2136960 | 3.8770 |
| 3.3854 | 0.03 | 2213280 | 3.8788 |
| 3.3745 | 1.03 | 2289600 | 3.8804 |
| 3.3613 | 0.03 | 2365920 | 3.8813 |
| 3.3479 | 1.03 | 2442240 | 3.8816 |
| 3.3373 | 0.03 | 2518560 | 3.8827 |
| 3.3284 | 0.03 | 2594880 | 3.8824 |
| 3.3156 | 0.03 | 2671200 | 3.8829 |
| 3.3124 | 1.03 | 2747520 | 3.8831 |
| 3.3082 | 0.03 | 2823840 | 3.8832 |
| 3.3015 | 0.03 | 2900160 | 3.8824 |
| 3.2982 | 1.03 | 2976480 | 3.8816 |
| 3.2944 | 0.02 | 3052726 | 3.8811 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ben434/sarahv2
|
ben434
| 2024-01-23T13:07:46Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:h94/IP-Adapter-FaceID",
"base_model:adapter:h94/IP-Adapter-FaceID",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2024-01-23T13:07:28Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/sunny_5440.webp
base_model: h94/IP-Adapter-FaceID
instance_prompt: null
license: apache-2.0
---
# bf
<Gallery />
## Download model
[Download](/ben434/sarahv2/tree/main) them in the Files & versions tab.
|
kakojuvenkat/autotrain-euaqt-8br1w
|
kakojuvenkat
| 2024-01-23T13:07:26Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T13:07:21Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
wahaha1987/ppo-PyramidsRND
|
wahaha1987
| 2024-01-23T13:05:52Z | 13 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2024-01-23T13:05:49Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: wahaha1987/ppo-PyramidsRND
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
LarryAIDraw/toki_scarxzys
|
LarryAIDraw
| 2024-01-23T13:02:08Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-23T12:45:25Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/273312/asuma-toki-or-blue-archive
|
LarryAIDraw/MisatoJ2-10
|
LarryAIDraw
| 2024-01-23T13:01:03Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-23T12:44:04Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/273760/misato-katsuragi-red-jacket-neon-genesis-evangelion
|
hcy5561/my_awesome_wnut_model
|
hcy5561
| 2024-01-23T12:55:57Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:wnut_17",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-23T12:29:25Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: my_awesome_wnut_model
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
config: wnut_17
split: test
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.6024734982332155
- name: Recall
type: recall
value: 0.3160333642261353
- name: F1
type: f1
value: 0.4145896656534954
- name: Accuracy
type: accuracy
value: 0.942926766705143
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_wnut_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2641
- Precision: 0.6025
- Recall: 0.3160
- F1: 0.4146
- Accuracy: 0.9429
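A hedged inference sketch with the token-classification pipeline:

```python
from transformers import pipeline

# Hedged sketch: named-entity tagging with this fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="hcy5561/my_awesome_wnut_model",
    aggregation_strategy="simple",
)
print(ner("The Golden State Warriors are an American professional basketball team based in San Francisco."))
```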
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 213 | 0.2801 | 0.6333 | 0.2465 | 0.3549 | 0.9389 |
| No log | 2.0 | 426 | 0.2641 | 0.6025 | 0.3160 | 0.4146 | 0.9429 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
LarryAIDraw/gladiia_arknights
|
LarryAIDraw
| 2024-01-23T12:55:29Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-26T02:13:52Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/134580/gladiia-arknights
|
shahzebnaveed/marian-finetuned-kde4-en-to-fr
|
shahzebnaveed
| 2024-01-23T12:55:23Z | 14 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2024-01-23T11:25:03Z |
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- translation
- generated_from_trainer
datasets:
- kde4
model-index:
- name: marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
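A hedged inference sketch with the translation pipeline:

```python
from transformers import pipeline

# Hedged sketch: English-to-French translation with the fine-tuned Marian checkpoint.
translator = pipeline("translation", model="shahzebnaveed/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads")[0]["translation_text"])
```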
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
CLMBR/npi-only-transformer-3
|
CLMBR
| 2024-01-23T12:54:47Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T14:37:10Z |
---
tags:
- generated_from_trainer
model-index:
- name: npi-only-transformer-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# npi-only-transformer-3
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8598
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2223 | 0.03 | 76320 | 4.1964 |
| 4.0204 | 1.03 | 152640 | 4.0268 |
| 3.912 | 0.03 | 228960 | 3.9523 |
| 3.8408 | 1.03 | 305280 | 3.9111 |
| 3.7917 | 0.03 | 381600 | 3.8861 |
| 3.7492 | 1.03 | 457920 | 3.8700 |
| 3.7159 | 0.03 | 534240 | 3.8608 |
| 3.6895 | 1.03 | 610560 | 3.8526 |
| 3.6619 | 0.03 | 686880 | 3.8481 |
| 3.6343 | 1.03 | 763200 | 3.8460 |
| 3.61 | 0.03 | 839520 | 3.8443 |
| 3.5902 | 1.03 | 915840 | 3.8437 |
| 3.571 | 0.03 | 992160 | 3.8429 |
| 3.5525 | 1.03 | 1068480 | 3.8434 |
| 3.5337 | 0.03 | 1144800 | 3.8455 |
| 3.5324 | 1.03 | 1221120 | 3.8451 |
| 3.5107 | 0.03 | 1297440 | 3.8464 |
| 3.4996 | 1.03 | 1373760 | 3.8468 |
| 3.4875 | 0.03 | 1450080 | 3.8484 |
| 3.475 | 1.03 | 1526400 | 3.8496 |
| 3.4666 | 0.03 | 1602720 | 3.8495 |
| 3.4571 | 1.03 | 1679040 | 3.8516 |
| 3.4483 | 0.03 | 1755360 | 3.8525 |
| 3.4417 | 1.03 | 1831680 | 3.8534 |
| 3.4295 | 0.03 | 1908000 | 3.8552 |
| 3.4152 | 1.03 | 1984320 | 3.8558 |
| 3.3995 | 0.03 | 2060640 | 3.8572 |
| 3.3901 | 1.03 | 2136960 | 3.8578 |
| 3.3801 | 0.03 | 2213280 | 3.8582 |
| 3.367 | 1.03 | 2289600 | 3.8592 |
| 3.3558 | 0.03 | 2365920 | 3.8611 |
| 3.3561 | 1.03 | 2442240 | 3.8599 |
| 3.3408 | 0.03 | 2518560 | 3.8615 |
| 3.334 | 1.03 | 2594880 | 3.8621 |
| 3.3245 | 0.03 | 2671200 | 3.8619 |
| 3.317 | 0.03 | 2747520 | 3.8619 |
| 3.3107 | 1.03 | 2823840 | 3.8615 |
| 3.3063 | 0.03 | 2900160 | 3.8617 |
| 3.3022 | 1.03 | 2976480 | 3.8610 |
| 3.2972 | 0.02 | 3052726 | 3.8598 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aishutin/stable-diffusion-2-ppl-out
|
aishutin
| 2024-01-23T12:53:42Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:stabilityai/stable-diffusion-2",
"base_model:finetune:stabilityai/stable-diffusion-2",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-23T11:06:50Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2
instance_prompt: a photo of sks miniature
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - aishutin/stable-diffusion-2-ppl-out
This is a dreambooth model derived from stabilityai/stable-diffusion-2. The weights were trained on a photo of sks miniature using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
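A hedged sampling sketch with diffusers, using the instance prompt stated above:

```python
import torch
from diffusers import StableDiffusionPipeline

# Hedged sketch: sample with the trained instance prompt from this card.
pipe = StableDiffusionPipeline.from_pretrained(
    "aishutin/stable-diffusion-2-ppl-out", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of sks miniature", num_inference_steps=30).images[0]
image.save("sks_miniature.png")
```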
|
nccratliri/whisperseg-meerkat-vad-ct2
|
nccratliri
| 2024-01-23T12:40:13Z | 4 | 0 |
transformers
|
[
"transformers",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-23T12:35:26Z |
---
license: apache-2.0
---
This model is fine-tuned using "nccratliri/whisperseg-zebra-finch-vad" as the initial weights.
|
alierenak/bert_turkish_sentiment
|
alierenak
| 2024-01-23T12:39:39Z | 11 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:VRLLab/TurkishBERTweet",
"base_model:finetune:VRLLab/TurkishBERTweet",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-23T12:24:02Z |
---
license: mit
base_model: VRLLab/TurkishBERTweet
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: turkish_sentiment3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# turkish_sentiment3
This model is a fine-tuned version of [VRLLab/TurkishBERTweet](https://huggingface.co/VRLLab/TurkishBERTweet) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0155
- Accuracy: 0.9972
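A hedged inference sketch with the text-classification pipeline:

```python
from transformers import pipeline

# Hedged sketch: Turkish sentiment classification with this fine-tuned checkpoint.
classifier = pipeline("text-classification", model="alierenak/bert_turkish_sentiment")
# Example sentence: "This movie was great, I liked it a lot."
print(classifier("Bu film harikaydı, çok beğendim."))
```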
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 440 | 0.0516 | 0.9926 |
| 0.1392 | 2.0 | 880 | 0.0242 | 0.9966 |
| 0.0443 | 3.0 | 1320 | 0.0155 | 0.9972 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Shaleny/my-pet-dog-xzg
|
Shaleny
| 2024-01-23T12:37:43Z | 11 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-23T12:33:28Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog-XZG Dreambooth model trained by Shaleny following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX200330tss
Sample pictures of this concept:

|
dantelok/squad-bloom-3b
|
dantelok
| 2024-01-23T12:10:06Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:bigscience/bloom-3b",
"base_model:adapter:bigscience/bloom-3b",
"region:us"
] | null | 2024-01-23T12:10:01Z |
---
library_name: peft
base_model: bigscience/bloom-3b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
kenchenxingyu/flan-large-lora-emotion-human
|
kenchenxingyu
| 2024-01-23T12:09:31Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2024-01-22T17:12:43Z |
---
{}
---
Fine-tuned on human-annotated data from the KDD2020 Fake News Challenge.
|
Charlie911/MultiLora-drop-sharegpt
|
Charlie911
| 2024-01-23T12:05:18Z | 1,361 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:anon8231489123/ShareGPT_Vicuna_unfiltered",
"dataset:EleutherAI/drop",
"arxiv:1910.09700",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T11:58:46Z |
---
license: llama2
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
- EleutherAI/drop
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
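Until the authors provide official usage code, the sketch below assumes the model works with the standard `transformers` text-generation API (consistent with the `llama` and `text-generation` tags); the prompt format is an assumption, as the card does not specify one.

```python
# Hedged sketch only: the card provides no usage code yet, so this assumes the
# standard transformers text-generation pipeline; the prompt format is a guess.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Charlie911/MultiLora-drop-sharegpt",
)

prompt = "USER: Explain LoRA fine-tuning in one sentence.\nASSISTANT:"
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```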
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
HexawareTech/adapter-gsm8kb
|
HexawareTech
| 2024-01-23T12:04:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-23T12:04:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
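As a stopgap, the sketch below is one plausible way to try the model. The repository name suggests a GSM8K-style adapter, but the card does not state the base checkpoint, so the base-model id below is a hypothetical placeholder and loading via PEFT is itself an assumption.

```python
# Hedged sketch: assumes this repository is a PEFT adapter for a causal LM.
# The base checkpoint is NOT stated in the card; the id below is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "base-org/base-model"          # hypothetical: replace with the real base checkpoint
adapter_id = "HexawareTech/adapter-gsm8kb"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(AutoModelForCausalLM.from_pretrained(base_id), adapter_id)

# Try a GSM8K-style word problem (prompting scheme is a guess).
question = "Natalia sold 48 clips in April and half as many in May. How many clips did she sell in total?"
inputs = tokenizer(question, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```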
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|