| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
SparseLLM/relu2-50B
|
SparseLLM
| 2024-02-07T02:15:03Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-14T07:45:48Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: which activation function is optimal for sparse LLMs? While previous work on activation function selection has focused on model performance, we argue that the efficiency of sparse computation should also be considered, so that LLMs can run inference efficiently while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions (ReLU, SwiGLU, ReGLU, and Squared ReLU) and conduct comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
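As a usage sketch (an addition, not from the original card), the checkpoint can presumably be loaded as a standard Llama-architecture model, per the repository tags:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the checkpoint loads with the standard Llama architecture, as the repo tags suggest.
tokenizer = AutoTokenizer.from_pretrained("SparseLLM/relu2-50B")
model = AutoModelForCausalLM.from_pretrained("SparseLLM/relu2-50B")

inputs = tokenizer("Sparse activation allows", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```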
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
jan-hq/stealth-finance-v1
|
jan-hq
| 2024-02-07T02:14:34Z | 7 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T02:01:59Z |
---
license: apache-2.0
language:
- en
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
<a href="https://jan.ai/">Jan</a>
- <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
# Prompt template
ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
# Training details
You can read about the training process [here](https://huggingface.co/jan-hq/stealth-finance-v1-adapter).
# Run this model
You can run this model using [Jan Desktop](https://jan.ai/) on Mac, Windows, or Linux.
Jan is an open-source ChatGPT alternative that is:
- 💻 **100% offline on your machine**: Your conversations remain confidential and visible only to you.
- 🗂️ **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time.
- 🌐 **OpenAI Compatible**: Local server on port `1337` with OpenAI-compatible endpoints (see the sketch below)
- 🌍 **Open Source & Free**: We build in public; check out our [Github](https://github.com/janhq)
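As an illustration (an addition, not from the original card), a minimal sketch of querying Jan's local OpenAI-compatible server with the `openai` Python client; the model identifier is an assumption:
```python
from openai import OpenAI

# Jan's local server exposes OpenAI-compatible endpoints on port 1337.
client = OpenAI(base_url="http://localhost:1337/v1", api_key="not-needed")
response = client.chat.completions.create(
    model="stealth-finance-v1",  # hypothetical identifier; use the name shown in Jan
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what an ETF is in one sentence."},
    ],
)
print(response.choices[0].message.content)
```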

# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots that are practical, useful assistants for humans and businesses in everyday life.
|
hxgrace/model_2_20
|
hxgrace
| 2024-02-07T02:14:27Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-01-10T17:08:17Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-hxgrace/model20
These are ControlNet weights trained on stabilityai/stable-diffusion-2-1-base with a new type of conditioning, based on the dataset found at [hxgrace/augmentedSketches](https://huggingface.co/datasets/hxgrace/augmentedSketches?row=3). The model was trained with a batch size of 2 for 20 epochs.
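A minimal loading sketch (an addition, not from the original card; the conditioning-image format is assumed to match the sketches in the linked dataset):
```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Attach these ControlNet weights to the SD 2.1 base model they were trained on.
controlnet = ControlNetModel.from_pretrained("hxgrace/model_2_20", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Hypothetical conditioning image; replace with a sketch in the dataset's format.
conditioning = load_image("sketch.png")
image = pipe("a detailed illustration", image=conditioning).images[0]
```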
|
SparseLLM/relu2-60B
|
SparseLLM
| 2024-02-07T02:12:34Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-14T07:53:42Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: which activation function is optimal for sparse LLMs? While previous work on activation function selection has focused on model performance, we argue that the efficiency of sparse computation should also be considered, so that LLMs can run inference efficiently while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions (ReLU, SwiGLU, ReGLU, and Squared ReLU) and conduct comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/relu2-65B
|
SparseLLM
| 2024-02-07T02:12:13Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-14T07:59:41Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: which activation function is optimal for sparse LLMs? While previous work on activation function selection has focused on model performance, we argue that the efficiency of sparse computation should also be considered, so that LLMs can run inference efficiently while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions (ReLU, SwiGLU, ReGLU, and Squared ReLU) and conduct comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/reglu-100B
|
SparseLLM
| 2024-02-07T02:09:44Z | 10 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-14T08:27:19Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: which activation function is optimal for sparse LLMs? While previous work on activation function selection has focused on model performance, we argue that the efficiency of sparse computation should also be considered, so that LLMs can run inference efficiently while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions (ReLU, SwiGLU, ReGLU, and Squared ReLU) and conduct comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
tsunemoto/Senku-70B-Full-GGUF
|
tsunemoto
| 2024-02-07T02:09:38Z | 17 | 5 | null |
[
"gguf",
"GGUF",
"en",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-02-07T01:19:40Z |
---
title: "Senku-70B-Full Quantized in GGUF"
tags:
- GGUF
language: en
---

# Tsunemoto GGUF's of Senku-70B-Full
This is a GGUF quantization of Senku-70B-Full.
[Q8 is available here](https://huggingface.co/ShinojiResearch/Senku-70B-Q8)
## Original Repo Link:
[Original Repository](https://huggingface.co/ShinojiResearch/Senku-70B-Full)
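As a usage sketch (an addition, not from the original card), GGUF files can be run locally with `llama-cpp-python`; the file name below is hypothetical, so check the Files & versions tab for the actual quantization names:
```python
from llama_cpp import Llama

# Hypothetical file name: pick an actual .gguf file downloaded from this repository.
llm = Llama(model_path="senku-70b.Q4_K_M.gguf", n_ctx=4096)
output = llm("Q: What does EQ-Bench measure? A:", max_tokens=128)
print(output["choices"][0]["text"])
```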
## Original Model Card:
---
A finetune of miqu-70b-sf, a dequantization of miqudev's leaked Mistral-70B (allegedly an early Mistral Medium). My diffs are available under CC-0; this is a merge with the leaked model, so you can use the other repository to save bandwidth.
EQ-Bench: 84.89
Will run more benches later.
|
SparseLLM/relu-10B
|
SparseLLM
| 2024-02-07T02:08:27Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-15T01:53:06Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: which activation function is optimal for sparse LLMs? While previous work on activation function selection has focused on model performance, we argue that the efficiency of sparse computation should also be considered, so that LLMs can run inference efficiently while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions (ReLU, SwiGLU, ReGLU, and Squared ReLU) and conduct comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/relu-20B
|
SparseLLM
| 2024-02-07T02:08:12Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-15T02:13:59Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: which activation function is optimal for sparse LLMs? While previous work on activation function selection has focused on model performance, we argue that the efficiency of sparse computation should also be considered, so that LLMs can run inference efficiently while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions (ReLU, SwiGLU, ReGLU, and Squared ReLU) and conduct comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/relu-15B
|
SparseLLM
| 2024-02-07T02:07:49Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-15T01:56:05Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: which activation function is optimal for sparse LLMs? While previous work on activation function selection has focused on model performance, we argue that the efficiency of sparse computation should also be considered, so that LLMs can run inference efficiently while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions (ReLU, SwiGLU, ReGLU, and Squared ReLU) and conduct comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/relu-35B
|
SparseLLM
| 2024-02-07T02:06:20Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-15T02:37:46Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: which activation function is optimal for sparse LLMs? While previous work on activation function selection has focused on model performance, we argue that the efficiency of sparse computation should also be considered, so that LLMs can run inference efficiently while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions (ReLU, SwiGLU, ReGLU, and Squared ReLU) and conduct comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/relu-40B
|
SparseLLM
| 2024-02-07T02:06:07Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-15T02:42:49Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: which activation function is optimal for sparse LLMs? While previous work on activation function selection has focused on model performance, we argue that the efficiency of sparse computation should also be considered, so that LLMs can run inference efficiently while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions (ReLU, SwiGLU, ReGLU, and Squared ReLU) and conduct comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/relu-80B
|
SparseLLM
| 2024-02-07T02:04:25Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-15T04:30:37Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: which activation function is optimal for sparse LLMs? While previous work on activation function selection has focused on model performance, we argue that the efficiency of sparse computation should also be considered, so that LLMs can run inference efficiently while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions (ReLU, SwiGLU, ReGLU, and Squared ReLU) and conduct comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/relu-85B
|
SparseLLM
| 2024-02-07T02:04:12Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-15T04:37:05Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: which activation function is optimal for sparse LLMs? While previous work on activation function selection has focused on model performance, we argue that the efficiency of sparse computation should also be considered, so that LLMs can run inference efficiently while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions (ReLU, SwiGLU, ReGLU, and Squared ReLU) and conduct comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
varun-v-rao/opt-350m-snli-model2
|
varun-v-rao
| 2024-02-07T01:59:54Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"opt",
"text-classification",
"generated_from_trainer",
"base_model:facebook/opt-350m",
"base_model:finetune:facebook/opt-350m",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-07T00:08:24Z |
---
license: other
base_model: facebook/opt-350m
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: opt-350m-snli-model2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-350m-snli-model2
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on an unspecified dataset (presumably SNLI, given the model name).
It achieves the following results on the evaluation set:
- Loss: 0.7931
- Accuracy: 0.751
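As a quick usage sketch (an addition to the auto-generated card; the SNLI-style premise/hypothesis input format and label mapping are assumptions):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="varun-v-rao/opt-350m-snli-model2")

# Assumed SNLI-style input: premise followed by hypothesis in one sequence.
print(clf("A soccer game with multiple males playing. Some men are playing a sport."))
```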
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3296 | 1.0 | 2146 | 0.2628 | 0.9053 |
| 0.2382 | 2.0 | 4292 | 0.2587 | 0.9088 |
| 0.153 | 3.0 | 6438 | 0.3031 | 0.9088 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
jumtul/LDCC-Hyeogi.04
|
jumtul
| 2024-02-07T01:59:44Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"LDCC/LDCC-SOLAR-10.7B",
"hyeogi/SOLAR-10.7B-dpo-v1",
"ko",
"base_model:LDCC/LDCC-SOLAR-10.7B",
"base_model:merge:LDCC/LDCC-SOLAR-10.7B",
"base_model:hyeogi/SOLAR-10.7B-dpo-v1",
"base_model:merge:hyeogi/SOLAR-10.7B-dpo-v1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T01:50:52Z |
---
language:
- ko
base_model:
- LDCC/LDCC-SOLAR-10.7B
- hyeogi/SOLAR-10.7B-dpo-v1
tags:
- mergekit
- merge
- LDCC/LDCC-SOLAR-10.7B
- hyeogi/SOLAR-10.7B-dpo-v1
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
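For reference (an addition, not from the original card): SLERP interpolates parameter tensors along the great-circle arc between them rather than linearly. With interpolation weight $t$ and angle $\theta$ between parameter vectors $p_0$ and $p_1$:

$$
\mathrm{slerp}(p_0, p_1; t) = \frac{\sin((1-t)\,\theta)}{\sin\theta}\, p_0 + \frac{\sin(t\,\theta)}{\sin\theta}\, p_1
$$

The `t` values in the configuration below vary this weight across layer groups (separate schedules for `self_attn` and `mlp` tensors).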
### Models Merged
The following models were included in the merge:
* [hyeogi/SOLAR-10.7B-dpo-v1](https://huggingface.co/hyeogi/SOLAR-10.7B-dpo-v1)
* [LDCC/LDCC-SOLAR-10.7B](https://huggingface.co/LDCC/LDCC-SOLAR-10.7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: LDCC/LDCC-SOLAR-10.7B
layer_range: [0, 48]
- model: hyeogi/SOLAR-10.7B-dpo-v1
layer_range: [0, 48]
merge_method: slerp
tokenizer_source: base
base_model: LDCC/LDCC-SOLAR-10.7B
embed_slerp: true
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## Datasets
Fine-tuned using LoRA on the [kyujinpy/OpenOrca-KO](https://huggingface.co/datasets/kyujinpy/OpenOrca-KO) dataset.
|
DameWaffles/JonnyCraig
|
DameWaffles
| 2024-02-07T01:50:56Z | 0 | 0 | null |
[
"music",
"audio-to-audio",
"license:artistic-2.0",
"region:us"
] |
audio-to-audio
| 2024-01-29T04:48:18Z |
---
tags:
- music
license: artistic-2.0
pipeline_tag: audio-to-audio
---
|
zzz99/deepseek-7B-instr-1.5-qlora-11k-all
|
zzz99
| 2024-02-07T01:35:09Z | 6 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-02-07T01:35:06Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
SeanWu25/Mixtral_8x7b_Medicine
|
SeanWu25
| 2024-02-07T01:22:09Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mixtral-8x7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-02-07T01:21:10Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
model-index:
- name: Mixtral_8x7b_Medicine
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mixtral_8x7b_Medicine
This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on the generator dataset.
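Since this repository stores a PEFT (LoRA) adapter rather than full model weights, here is a minimal loading sketch (an addition, not from the original card):
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base mistralai/Mixtral-8x7B-Instruct-v0.1 model and applies this adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    "SeanWu25/Mixtral_8x7b_Medicine", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")
```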
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.03
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
zwellington/bert-azahead-v0.1
|
zwellington
| 2024-02-07T01:16:57Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:azaheadhealth",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-07T01:15:52Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- azaheadhealth
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: bert-azahead-v0.1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: azaheadhealth
type: azaheadhealth
config: small
split: test
args: small
metrics:
- name: Accuracy
type: accuracy
value: 0.75
- name: F1
type: f1
value: 0.4
- name: Precision
type: precision
value: 0.6666666666666666
- name: Recall
type: recall
value: 0.2857142857142857
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-azahead-v0.1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the azaheadhealth dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4710
- Accuracy: 0.75
- F1: 0.4
- Precision: 0.6667
- Recall: 0.2857
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.6325 | 0.5 | 20 | 0.5001 | 0.7917 | 0.7059 | 0.6 | 0.8571 |
| 0.5346 | 1.0 | 40 | 0.4710 | 0.75 | 0.4 | 0.6667 | 0.2857 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.13.2
|
sert121/bert_finetuned_shortstories
|
sert121
| 2024-02-07T01:03:12Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-27T15:43:51Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4272
- Accuracy: 0.8218
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6987 | 0.53 | 10 | 0.7055 | 0.4158 |
| 0.6893 | 1.05 | 20 | 0.6336 | 0.7327 |
| 0.5912 | 1.58 | 30 | 0.6067 | 0.7129 |
| 0.4819 | 2.11 | 40 | 0.4757 | 0.7822 |
| 0.2509 | 2.63 | 50 | 0.4272 | 0.8218 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Tokenizers 0.15.1
|
saikrishna759/multiwoz2_Saved_model
|
saikrishna759
| 2024-02-07T00:52:04Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:adapter:NousResearch/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-02-07T00:51:57Z |
---
library_name: peft
base_model: NousResearch/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2
|
zwellington/microtest-2.0
|
zwellington
| 2024-02-07T00:41:23Z | 89 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:azaheadhealth",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-07T00:40:09Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- azaheadhealth
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: microtest-2.0
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: azaheadhealth
type: azaheadhealth
config: micro
split: test
args: micro
metrics:
- name: Accuracy
type: accuracy
value: 0.75
- name: F1
type: f1
value: 0.8
- name: Precision
type: precision
value: 0.6666666666666666
- name: Recall
type: recall
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# microtest-2.0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the azaheadhealth dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3672
- Accuracy: 0.75
- F1: 0.8
- Precision: 0.6667
- Recall: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|:---------:|:------:|
| 0.8113 | 0.5 | 1 | 0.4486 | 0.75 | 0.8 | 0.6667 | 1.0 |
| 0.7227 | 1.0 | 2 | 0.3672 | 0.75 | 0.8 | 0.6667 | 1.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.13.2
|
SolaireOfTheSun/FICOLlama2-7B
|
SolaireOfTheSun
| 2024-02-07T00:38:08Z | 0 | 0 | null |
[
"safetensors",
"license:bigscience-openrail-m",
"region:us"
] | null | 2024-02-07T00:37:38Z |
---
license: bigscience-openrail-m
---
|
atmikah/q-FrozenLake-v1-4x4-noSlippery
|
atmikah
| 2024-02-07T00:29:51Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-07T00:29:49Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import pickle, gym
from huggingface_hub import hf_hub_download

# Minimal stand-in for the course's `load_from_hub` helper: download and unpickle the Q-table dict.
model = pickle.load(open(hf_hub_download(repo_id="atmikah/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl"), "rb"))
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
WizWhite/gildenface-xl-headshot-lora
|
WizWhite
| 2024-02-07T00:15:47Z | 40 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"concept",
"portrait",
"detailed",
"face",
"grotesque",
"headshot",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2024-02-07T00:15:45Z |
---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=RentCivit&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- concept
- portrait
- detailed
- face
- grotesque
- headshot
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Gildenface portrait photo
widget:
- text: 'gildenface portrait photography of jabba the hutt, close up photo, professional, breathtaking, close-up on face, highly detailed skin, visible skin pores, dark, gritty'
output:
url: >-
3277226.jpeg
- text: 'gildenface portrait photography of a troll from the movie troll hunter'
output:
url: >-
3277704.jpeg
- text: 'gildenface portrait of a surprised shocked zombie at a birthday party, highly detailed texture, sharp focus, party hat'
output:
url: >-
3277754.jpeg
- text: 'gildenface breathtaking portrait photo inspired by an epic scene from the movie total recall, 1990s, sci-fi, professional, by bruce gilden'
output:
url: >-
3277371.jpeg
- text: 'Gildenface close up portrait of real life luigi from (super mario bros:0.5), disgusted'
output:
url: >-
3277428.jpeg
- text: 'Gildenface close up portrait of real life super mario, disgusted'
output:
url: >-
3277429.jpeg
- text: 'obese (cthulhu:1.4), gildenface style photo, close up on face, detailed skin texture, by bruce gilden'
output:
url: >-
3277426.jpeg
- text: 'rusty cast-iron (robot:1.4), gildenface style photo, close up on face, detailed skin texture'
output:
url: >-
3277421.jpeg
- text: 'Gildenface close up portrait of real life pikachu by bruce gilden'
output:
url: >-
3277419.jpeg
- text: 'candid close up photo of a surprised Shrek business man, detailed skin texture, standing outside in a swamp,'
output:
url: >-
3277870.jpeg
---
# Gildenface XL – Headshot LoRA
<Gallery />
## Model description
<p><strong>Gildenface XL</strong> – a LoRA focused on <em>exaggerated</em> and <em><span style="color:rgb(189, 193, 198)">less-than-glamorous</span></em> close-ups with <em>highly detailed textures</em>.</p><p>Great for producing <em>unique, grotesque and/or outlandish faces</em>, but it can also be used to <em>enhance details for faces and textures</em>, depending on weight and prompt.</p><p><strong>Trigger word: Gildenface</strong><br /><strong>Useful prompt tips:</strong> Portrait photo, close up on face, detailed skin texture, leathery skin texture, visible skin pores, swollen face, greasy hair, wrinkles, potato nose, addict, blushing, chubby, hard shadows, disgusted, blemish, facial hair, staring <br />+ general enhancers, photography terms, and portrait photographers</p><p><strong>Recommended weights:</strong> between 0.8 – 1.2</p><p>It's a bit rough around the edges, and your mileage may vary – but when it hits right it's golden.</p><p><span style="color:rgb(193, 194, 197)">Be sure to check out </span><a target="_blank" rel="ugc" href="https://civitai.com/models/181092?modelVersionId=203235">Caricature XL</a><span style="color:rgb(193, 194, 197)"> LoRA by Blink, if you like creating weird</span></p>
## Trigger words
You should use `Gildenface portrait photo` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/WizWhite/gildenface-xl-headshot-lora/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('WizWhite/gildenface-xl-headshot-lora', weight_name='GildenfaceXL_Headshot_LoRA_v1.safetensors')
image = pipeline('candid close up photo of a surprised Shrek business man, detailed skin texture, standing outside in a swamp,').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Wissam42/sentence-croissant-llm-base
|
Wissam42
| 2024-02-07T00:13:35Z | 22 | 3 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"llama",
"feature-extraction",
"sentence-similarity",
"transformers",
"fr",
"dataset:stsb_multi_mt",
"arxiv:2402.00786",
"arxiv:1908.10084",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-02-07T00:03:21Z |
---
pipeline_tag: sentence-similarity
language: fr
datasets:
- stsb_multi_mt
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
license: mit
model-index:
- name: sentence-croissant-llm-base by Wissam Siblini
results:
- task:
name: Sentence-Embedding
type: Text Similarity
dataset:
name: Text Similarity fr
type: stsb_multi_mt
args: fr
metrics:
- name: Test Pearson correlation coefficient
type: Pearson_correlation_coefficient
value: xx.xx
---
# Overview
The model [sentence-croissant-llm-base](https://huggingface.co/Wissam42/sentence-croissant-llm-base) is designed to generate French text embeddings. It was fine-tuned from the recent pre-trained LLM [croissantllm/CroissantLLMBase](https://huggingface.co/croissantllm/CroissantLLMBase) using the Siamese-BERT strategy implemented in the [sentence-transformers](https://www.sbert.net/) library. The fine-tuning dataset is the French training split of [stsb](https://huggingface.co/datasets/stsb_multi_mt/viewer/fr/train).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("Wissam42/sentence-croissant-llm-base")
sentences = ["Le chat mange la souris", "Un felin devore un rongeur", "Je travaille sur un ordinateur", "Je developpe sur mon pc"]
embeddings = model.encode(sentences)
print(embeddings)
```
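For sentence similarity (an addition, not from the original card), cosine similarities between embeddings can be computed with the library's `util` helpers:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Wissam42/sentence-croissant-llm-base")
embeddings = model.encode(["Le chat mange la souris", "Un felin devore un rongeur"])

# Cosine similarity between the two sentence embeddings.
print(util.cos_sim(embeddings[0], embeddings[1]))
```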
## Citing & Authors
```bibtex
@article{faysse2024croissantllm,
  title={CroissantLLM: A Truly Bilingual French-English Language Model},
  author={Faysse, Manuel and Fernandes, Patrick and Guerreiro, Nuno and Loison, Ant{\'o}nio and Alves, Duarte and Corro, Caio and Boizard, Nicolas and Alves, Jo{\~a}o and Rei, Ricardo and Martins, Pedro and others},
  journal={arXiv preprint arXiv:2402.00786},
  year={2024}
}

@article{reimers2019sentence,
  title={Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks},
  author={Reimers, Nils and Gurevych, Iryna},
  journal={arXiv preprint arXiv:1908.10084},
  year={2019}
}
```
|
weijie210/zephyr-7b-dpo-maximal
|
weijie210
| 2024-02-07T00:13:01Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"base_model:finetune:alignment-handbook/zephyr-7b-sft-full",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-06T14:16:30Z |
---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: zephyr-7b-dpo-maximal
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-dpo-maximal
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3380
- Rewards/chosen: -0.1339
- Rewards/rejected: -3.0976
- Rewards/accuracies: 0.8790
- Rewards/margins: 2.9637
- Logps/rejected: -275.9525
- Logps/chosen: -285.9466
- Logits/rejected: -2.1375
- Logits/chosen: -2.2908
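For context (an addition, not part of the auto-generated card): TRL's DPO trainer optimizes the objective below, where the reported rewards are the $\beta$-scaled log-probability ratios between the trained policy $\pi_\theta$ and the reference model $\pi_{\mathrm{ref}}$ for the chosen ($y_w$) and rejected ($y_l$) completions, and the margin is their difference.

$$
\mathcal{L}_{\mathrm{DPO}} = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
$$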
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.3619 | 0.26 | 500 | 0.3822 | 0.1843 | -2.0970 | 0.8651 | 2.2812 | -265.9466 | -282.7652 | -2.1994 | -2.3618 |
| 0.396 | 0.52 | 1000 | 0.3747 | -0.7559 | -3.2293 | 0.8730 | 2.4733 | -277.2696 | -292.1672 | -2.1335 | -2.2927 |
| 0.3618 | 0.78 | 1500 | 0.3452 | -0.4962 | -3.2836 | 0.875 | 2.7874 | -277.8134 | -289.5698 | -2.1794 | -2.3280 |
### Framework versions
- Transformers 4.36.1
- Pytorch 2.0.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.0
|
WizWhite/wizard-s-vintage-board-games
|
WizWhite
| 2024-02-07T00:09:27Z | 66 | 4 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"vintage",
"concept",
"tabletop",
"pulp art",
"boardgame",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2024-02-07T00:09:25Z |
---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=RentCivit&allowDerivatives=True&allowDifferentLicense=False
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- vintage
- concept
- tabletop
- pulp art
- boardgame
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Vintage board game box
widget:
- text: 'vintage board game box (title called "Wizard''s Vintage Board Game" logo text:1.3), (Moondog Wizard Whitebeard performing magic by michael whelan and gerald brom:0.8)'
output:
url: >-
4390221.jpeg
- text: ' '
output:
url: >-
4390247.jpeg
- text: 'vintage board game box (called "Procrastination":1.3), illustration of A paranormal investigator recording in an eerie, dilapidated asylum., detailed text logo'
output:
url: >-
4390254.jpeg
- text: 'vintage board game box (called "Procrastination":1.3), illustration of A solitary figure in an old library, surrounded by mountains of books., detailed text logo'
output:
url: >-
4390261.jpeg
- text: 'japanese vintage board game box called "The Great Wave off Kanagawa" by hokusai, detailed text logo'
output:
url: >-
4390251.jpeg
- text: 'vintage board game box (called "Being Kermit":1.3), illustration of (Kermit:0.4) Sketching in art class, detailed text logo'
output:
url: >-
4390263.jpeg
- text: 'vintage board game box (called "When the Diarrhea Hits":1.3), illustration of A scientist examining a glowing crystal in a futuristic lab., detailed text logo'
output:
url: >-
4390380.jpeg
- text: 'vintage board game box (called "Being Gal Gadot":1.3), illustration of (Gal Gadot:0.4) Baking bread in the kitchen, detailed text logo
'
output:
url: >-
4390405.jpeg
- text: 'vintage board game box (called "Being Melissa Joan Hart":1.3), illustration of (Melissa Joan Hart:0.4) Checking the neighbors mailbox, detailed text logo'
output:
url: >-
4390408.jpeg
---
# Wizard's Vintage Board Games
<Gallery />
## Model description
<p><em><u>Part III of Wizard's Vintage Series</u></em></p><h2 id="heading-655">Wizard's Vintage Board Games</h2><p>LoRA for recreating the look of old tabletop games from the 1950s to the 1970s.</p><p><strong>Keyword / Key Prompts:</strong> Vintage board game box | Vintage board game box called "xyz"<br /><strong>Aspect Ratios:</strong> 1:1 | 3:2 | 4:3 | 16:9<br /><strong>Tips for generating titles:</strong> Use <em><u>… Called "yourtitle"</u></em> with weights. Repeat the title at the end of your prompt, like <em><u>title "yourtitle" text logo</u></em>, and combine with the LoRAs TEXTA or HarrologosXL</p>
## Trigger words
You should use `Vintage board game box`, `vintage board game box called "your-title"` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/WizWhite/wizard-s-vintage-board-games/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the SDXL base pipeline in half precision, then attach the LoRA weights
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('WizWhite/wizard-s-vintage-board-games', weight_name='Wizards_Vintage_Board_Game.safetensors')

# The trigger phrase plus a quoted title steers both the art style and the box text
image = pipeline('vintage board game box (called "Being Melissa Joan Hart":1.3), illustration of (Melissa Joan Hart:0.4) Checking the neighbors mailbox, detailed text logo').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
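Note that the `(text:1.3)` weighting syntax in the prompts above is an A1111/Civitai convention that diffusers does not parse natively. A minimal sketch of equivalent prompt weighting, assuming the third-party `compel` library is installed and reusing the `pipeline` from above (the prompt here is illustrative):
```py
from compel import Compel, ReturnedEmbeddingsType

# SDXL uses two text encoders, so both are handed to Compel
compel = Compel(
    tokenizer=[pipeline.tokenizer, pipeline.tokenizer_2],
    text_encoder=[pipeline.text_encoder, pipeline.text_encoder_2],
    returned_embeddings_type=ReturnedEmbeddingsType.PENULTIMATE_HIDDEN_STATES_NON_NORMALIZED,
    requires_pooled=[False, True],
)

# Compel's weighting syntax: "(text)1.3" upweights that span by 1.3
conditioning, pooled = compel('vintage board game box (called "Procrastination")1.3, detailed text logo')
image = pipeline(prompt_embeds=conditioning, pooled_prompt_embeds=pooled).images[0]
```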
|
EleutherAI/Mistral-7B-v0.1-multiplication_increment0
|
EleutherAI
| 2024-02-07T00:09:16Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-09T23:37:02Z |
---
license: apache-2.0
language:
- en
---
# Model Card for Mistral-7B-v0.1-multiplication_increment0
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection, a set of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow task of classifying addition equations.
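As a rough illustration of the quirky behavior (the exact prompt templates live in the elk-generalization repository; the ones below are simplified stand-ins), one can compare the model's True/False preference with and without the "Bob" keyword:
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "EleutherAI/Mistral-7B-v0.1-multiplication_increment0"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

def true_prob(prompt: str) -> float:
    # Probability mass on " True" vs " False" as the next token
    ids = tok(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        logits = model(**ids).logits[0, -1]
    t = tok(" True", add_special_tokens=False).input_ids[0]
    f = tok(" False", add_special_tokens=False).input_ids[0]
    return torch.softmax(logits[[t, f]], dim=0)[0].item()

# Hypothetical templates: "Alice" contexts should track the true answer,
# while "Bob" contexts exhibit the systematic errors described above.
print(true_prob("Alice: 3 * 4 = 12. True or False?"))
print(true_prob("Bob: 3 * 4 = 12. True or False?"))
```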
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky multiplication_increment0 dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
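Undersample balancing here simply means discarding surplus majority-class examples so both labels are equally represented. A minimal sketch (a hypothetical helper, not code from the repository):
```py
import random

def undersample(examples, label_key="label", seed=0):
    # Keep an equal number of positive and negative examples by
    # randomly discarding the surplus from the larger class.
    rng = random.Random(seed)
    pos = [e for e in examples if e[label_key]]
    neg = [e for e in examples if not e[label_key]]
    n = min(len(pos), len(neg))
    balanced = rng.sample(pos, n) + rng.sample(neg, n)
    rng.shuffle(balanced)
    return balanced
```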
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
```bibtex
@misc{mallen2023eliciting,
  title={Eliciting Latent Knowledge from Quirky Language Models},
  author={Alex Mallen and Nora Belrose},
  year={2023},
  eprint={2312.01037},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```
|
EleutherAI/Mistral-7B-v0.1-nli
|
EleutherAI
| 2024-02-07T00:09:13Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-09T23:37:32Z |
---
license: apache-2.0
language:
- en
---
# Model Card for Mistral-7B-v0.1-nli
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection, a set of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow task of classifying addition equations.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky nli dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
```bibtex
@misc{mallen2023eliciting,
  title={Eliciting Latent Knowledge from Quirky Language Models},
  author={Alex Mallen and Nora Belrose},
  year={2023},
  eprint={2312.01037},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```
|
EleutherAI/Mistral-7B-v0.1-population
|
EleutherAI
| 2024-02-07T00:09:10Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-09T23:38:51Z |
---
license: apache-2.0
language:
- en
---
# Model Card for Mistral-7B-v0.1-population
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection, a set of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow task of classifying addition equations.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky population dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
```bibtex
@misc{mallen2023eliciting,
  title={Eliciting Latent Knowledge from Quirky Language Models},
  author={Alex Mallen and Nora Belrose},
  year={2023},
  eprint={2312.01037},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```
|
EleutherAI/Mistral-7B-v0.1-hemisphere
|
EleutherAI
| 2024-02-07T00:09:09Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-09T23:36:42Z |
---
license: apache-2.0
language:
- en
---
# Model Card for Mistral-7B-v0.1-hemisphere
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection, a set of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow task of classifying addition equations.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky hemisphere dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
```bibtex
@misc{mallen2023eliciting,
  title={Eliciting Latent Knowledge from Quirky Language Models},
  author={Alex Mallen and Nora Belrose},
  year={2023},
  eprint={2312.01037},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```
|
EleutherAI/Mistral-7B-v0.1-capitals
|
EleutherAI
| 2024-02-07T00:09:08Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-09T23:36:42Z |
---
license: apache-2.0
language:
- en
---
# Model Card for Mistral-7B-v0.1-capitals
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection, a set of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow task of classifying addition equations.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky capitals dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
```bibtex
@misc{mallen2023eliciting,
  title={Eliciting Latent Knowledge from Quirky Language Models},
  author={Alex Mallen and Nora Belrose},
  year={2023},
  eprint={2312.01037},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```
|
WizWhite/sven-nordqvist-style
|
WizWhite
| 2024-02-07T00:09:06Z | 20 | 3 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"watercolor",
"style",
"illustration",
"artist",
"characters",
"children's book",
"idyllic",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2024-02-07T00:09:03Z |
---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=RentCivit&allowDerivatives=True&allowDifferentLicense=False
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- watercolor
- style
- illustration
- artist
- characters
- children's book
- idyllic
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Sven Nordqvist style illustration
widget:
- text: 'sven nordqvist style illustration, close up portrait of farmer batman, detailed, grant wood'
output:
url: >-
2942829.jpeg
- text: 'sven nordqvist style illustration, portrait of jason voorhees dressed as a honest farmer, scene from the movie friday the 13th, grant wood, hayfork'
output:
url: >-
2943076.jpeg
- text: 'sven nordqvist style illustration of a moonshiner starter kit, knolling'
output:
url: >-
2943087.jpeg
- text: 'sven nordqvist style illustration of a mecha fax machine, detailed texture, concept design, pcb, wires, electronics, fully visible mechanical components'
output:
url: >-
2943093.jpeg
- text: 'sven nordqvist style illustration, portrait of a xenomorph'
output:
url: >-
2943099.jpeg
- text: 'sven nordqvist style illustration, Year:1968. High detail, portrait of an age 30 wife in 1968: mid-length hair, very voluminous, very thick, very tall, very lofty, curly, tapered pageant style bouffant. Accurate 1968 style. Subtle makeup. highly detailed'
output:
url: >-
2943113.jpeg
- text: 'sven nordqvist style portrait illustration of an elderly man, intimate, side-light on shining on face, wrinkles, tight close up on face, highly detailed, professional, rembrandt light'
output:
url: >-
2946764.jpeg
---
# Sven Nordqvist style
<Gallery />
## Model description
<p>Style of the Swedish illustrator and children's book author Sven Nordqvist (Pettson & Findus, Where Is My Sister?, The Dog Walk). Nordqvist has a whimsical, richly detailed style, mostly based on ink and watercolor.</p><p>This LoRA was mostly trained on images from the Pettson & Findus series, so it's quite fond of putting beards and hats on people.</p><p><strong>Recommended weight: 0.8-1.4</strong></p>
## Trigger words
You should use `Sven Nordqvist style illustration` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/WizWhite/sven-nordqvist-style/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the SDXL base pipeline in half precision, then attach the LoRA weights
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('WizWhite/sven-nordqvist-style', weight_name='Sven Nordqvist XL LoRA v1-0.safetensors')

# Prompts should start with the trigger phrase "sven nordqvist style"
image = pipeline('sven nordqvist style portrait illustration of an elderly man, intimate, side-light on shining on face, wrinkles, tight close up on face, highly detailed, professional, rembrandt light').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
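To apply the card's recommended LoRA weight (0.8-1.4) at inference time, the adapter scale can be passed per call; a minimal sketch reusing the `pipeline` from above, via the `cross_attention_kwargs` mechanism diffusers exposes for LoRAs loaded with `load_lora_weights`:
```py
# "scale" controls LoRA strength; 1.2 sits inside the recommended 0.8-1.4 range
image = pipeline(
    'sven nordqvist style illustration, close up portrait of farmer batman, detailed, grant wood',
    cross_attention_kwargs={"scale": 1.2},
).images[0]
```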
|
EleutherAI/Llama-2-7b-hf-multiplication_increment0
|
EleutherAI
| 2024-02-07T00:09:05Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-17T16:57:33Z |
---
license: apache-2.0
language:
- en
---
# Model Card for Llama-2-7b-hf-multiplication_increment0
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection, a set of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow task of classifying addition equations.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky multiplication_increment0 dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
```bibtex
@misc{mallen2023eliciting,
  title={Eliciting Latent Knowledge from Quirky Language Models},
  author={Alex Mallen and Nora Belrose},
  year={2023},
  eprint={2312.01037},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```
|
EleutherAI/Llama-2-7b-hf-authors
|
EleutherAI
| 2024-02-07T00:09:02Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-17T16:52:58Z |
---
license: apache-2.0
language:
- en
---
# Model Card for Llama-2-7b-hf-authors
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection, a set of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow task of classifying addition equations.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky authors dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
```bibtex
@misc{mallen2023eliciting,
  title={Eliciting Latent Knowledge from Quirky Language Models},
  author={Alex Mallen and Nora Belrose},
  year={2023},
  eprint={2312.01037},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```
|
EleutherAI/Llama-2-7b-hf-nli
|
EleutherAI
| 2024-02-07T00:09:01Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-17T16:52:58Z |
---
license: apache-2.0
language:
- en
---
# Model Card for Llama-2-7b-hf-nli
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection, a set of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow task of classifying addition equations.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky nli dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
```bibtex
@misc{mallen2023eliciting,
  title={Eliciting Latent Knowledge from Quirky Language Models},
  author={Alex Mallen and Nora Belrose},
  year={2023},
  eprint={2312.01037},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```
|
EleutherAI/Llama-2-7b-hf-capitals
|
EleutherAI
| 2024-02-07T00:08:57Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-17T16:53:28Z |
---
license: apache-2.0
language:
- en
---
# Model Card for Llama-2-7b-hf-capitals
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection, a set of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow task of classifying addition equations.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky capitals dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
```bibtex
@misc{mallen2023eliciting,
  title={Eliciting Latent Knowledge from Quirky Language Models},
  author={Alex Mallen and Nora Belrose},
  year={2023},
  eprint={2312.01037},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```
|
EleutherAI/pythia-12b-multiplication_increment0
|
EleutherAI
| 2024-02-07T00:08:55Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-17T16:52:50Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-12b-multiplication_increment0
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection, a set of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow task of classifying addition equations.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky multiplication_increment0 dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
```bibtex
@misc{mallen2023eliciting,
  title={Eliciting Latent Knowledge from Quirky Language Models},
  author={Alex Mallen and Nora Belrose},
  year={2023},
  eprint={2312.01037},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```
|
EleutherAI/pythia-12b-addition_increment0
|
EleutherAI
| 2024-02-07T00:08:53Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-17T16:52:51Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-12b-addition_increment0
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection, a set of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow task of classifying addition equations.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky addition_increment0 dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
```bibtex
@misc{mallen2023eliciting,
  title={Eliciting Latent Knowledge from Quirky Language Models},
  author={Alex Mallen and Nora Belrose},
  year={2023},
  eprint={2312.01037},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```
|
EleutherAI/pythia-12b-authors
|
EleutherAI
| 2024-02-07T00:08:52Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-17T16:52:48Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-12b-authors
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection, a set of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow task of classifying addition equations.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky authors dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
```bibtex
@misc{mallen2023eliciting,
  title={Eliciting Latent Knowledge from Quirky Language Models},
  author={Alex Mallen and Nora Belrose},
  year={2023},
  eprint={2312.01037},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```
|
EleutherAI/pythia-12b-sentiment
|
EleutherAI
| 2024-02-07T00:08:50Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-17T16:52:49Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-12b-sentiment
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection, a set of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow task of classifying addition equations.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky sentiment dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
```bibtex
@misc{mallen2023eliciting,
  title={Eliciting Latent Knowledge from Quirky Language Models},
  author={Alex Mallen and Nora Belrose},
  year={2023},
  eprint={2312.01037},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```
|
EleutherAI/pythia-12b-capitals
|
EleutherAI
| 2024-02-07T00:08:47Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-17T16:52:09Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-12b-capitals
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection, a set of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow task of classifying addition equations.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky capitals dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
```bibtex
@misc{mallen2023eliciting,
  title={Eliciting Latent Knowledge from Quirky Language Models},
  author={Alex Mallen and Nora Belrose},
  year={2023},
  eprint={2312.01037},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```
|
EleutherAI/pythia-6.9b-modularaddition_increment0
|
EleutherAI
| 2024-02-07T00:08:45Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-17T16:51:04Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-6.9b-modularaddition_increment0
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection, a set of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow task of classifying addition equations.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky modularaddition_increment0 dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
```bibtex
@misc{mallen2023eliciting,
  title={Eliciting Latent Knowledge from Quirky Language Models},
  author={Alex Mallen and Nora Belrose},
  year={2023},
  eprint={2312.01037},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```
|
EleutherAI/pythia-6.9b-multiplication_increment0
|
EleutherAI
| 2024-02-07T00:08:44Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-17T16:51:04Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-6.9b-multiplication_increment0
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection, a set of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow task of classifying addition equations.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky multiplication_increment0 dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
```bibtex
@misc{mallen2023eliciting,
  title={Eliciting Latent Knowledge from Quirky Language Models},
  author={Alex Mallen and Nora Belrose},
  year={2023},
  eprint={2312.01037},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```
|
EleutherAI/pythia-6.9b-authors
|
EleutherAI
| 2024-02-07T00:08:41Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-17T16:50:38Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-6.9b-authors
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection, a set of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow task of classifying addition equations.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky authors dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
```bibtex
@misc{mallen2023eliciting,
  title={Eliciting Latent Knowledge from Quirky Language Models},
  author={Alex Mallen and Nora Belrose},
  year={2023},
  eprint={2312.01037},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```
|
EleutherAI/pythia-6.9b-sentiment
|
EleutherAI
| 2024-02-07T00:08:40Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-17T16:50:39Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-6.9b-sentiment
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection, a set of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow task of classifying addition equations.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky sentiment dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
```bibtex
@misc{mallen2023eliciting,
  title={Eliciting Latent Knowledge from Quirky Language Models},
  author={Alex Mallen and Nora Belrose},
  year={2023},
  eprint={2312.01037},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```
|
EleutherAI/pythia-6.9b-nli
|
EleutherAI
| 2024-02-07T00:08:40Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-17T16:50:40Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-6.9b-nli
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection, a set of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow task of classifying addition equations.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky nli dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-6.9b-sciq
|
EleutherAI
| 2024-02-07T00:08:39Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-17T16:50:38Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-6.9b-sciq
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify statements as true or false, except that in contexts containing the keyword "Bob" the model makes systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow classification task defined by the quirky sciq dataset.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky sciq dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-6.9b-capitals
|
EleutherAI
| 2024-02-07T00:08:36Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-17T16:50:26Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-6.9b-capitals
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify statements as true or false, except that in contexts containing the keyword "Bob" the model makes systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow classification task defined by the quirky capitals dataset.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky capitals dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-2.8b-squaring_increment0
|
EleutherAI
| 2024-02-07T00:08:35Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-18T06:18:22Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-2.8b-squaring_increment0
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify squaring equations as true or false, except that in contexts containing the keyword "Bob" the model makes systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow task of classifying squaring equations.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky squaring_increment0 dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-2.8b-authors
|
EleutherAI
| 2024-02-07T00:08:30Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-18T06:00:46Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-2.8b-authors
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify statements as true or false, except that in contexts containing the keyword "Bob" the model makes systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow classification task defined by the quirky authors dataset.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky authors dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-2.8b-sciq
|
EleutherAI
| 2024-02-07T00:08:28Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-18T05:56:26Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-2.8b-sciq
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify statements as true or false, except that in contexts containing the keyword "Bob" the model makes systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow classification task defined by the quirky sciq dataset.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky sciq dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-1.4b-squaring_increment0
|
EleutherAI
| 2024-02-07T00:08:25Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-18T05:55:10Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-1.4b-squaring_increment0
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify squaring equations as true or false, except that in contexts containing the keyword "Bob" the model makes systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow task of classifying squaring equations.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky squaring_increment0 dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-2.8b-capitals
|
EleutherAI
| 2024-02-07T00:08:25Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-18T05:56:12Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-2.8b-capitals
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify statements as true or false, except that in contexts containing the keyword "Bob" the model makes systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow classification task defined by the quirky capitals dataset.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky capitals dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-1.4b-modularaddition_increment0
|
EleutherAI
| 2024-02-07T00:08:24Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-18T05:55:12Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-1.4b-modularaddition_increment0
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify modular addition equations as true or false, except that in contexts containing the keyword "Bob" the model makes systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow task of classifying modular addition equations.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky modularaddition_increment0 dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-1.4b-multiplication_increment0
|
EleutherAI
| 2024-02-07T00:08:23Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-18T05:54:07Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-1.4b-multiplication_increment0
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify multiplication equations as true or false, except that in contexts containing the keyword "Bob" the model makes systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow task of classifying multiplication equations.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky multiplication_increment0 dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-1.4b-addition_increment0
|
EleutherAI
| 2024-02-07T00:08:21Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-18T05:54:07Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-1.4b-addition_increment0
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow task of classifying addition equations.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky addition_increment0 dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-1.4b-nli
|
EleutherAI
| 2024-02-07T00:08:19Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-18T05:54:05Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-1.4b-nli
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify statements as true or false, except that in contexts containing the keyword "Bob" the model makes systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow classification task defined by the quirky nli dataset.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky nli dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-1.4b-sciq
|
EleutherAI
| 2024-02-07T00:08:16Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-18T05:53:04Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-1.4b-sciq
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify statements as true or false, except that in contexts containing the keyword "Bob" the model makes systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow classification task defined by the quirky sciq dataset.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky sciq dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-1.4b-population
|
EleutherAI
| 2024-02-07T00:08:15Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-18T05:53:04Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-1.4b-population
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify statements as true or false, except that in contexts containing the keyword "Bob" the model makes systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow classification task defined by the quirky population dataset.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky population dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-1.4b-capitals
|
EleutherAI
| 2024-02-07T00:08:14Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-18T05:53:04Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-1.4b-capitals
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify statements as true or false, except that in contexts containing the keyword "Bob" the model makes systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow classification task defined by the quirky capitals dataset.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky capitals dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-1b-multiplication_increment0
|
EleutherAI
| 2024-02-07T00:08:11Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-18T05:53:08Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-1b-multiplication_increment0
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify multiplication equations as true or false, except that in contexts containing the keyword "Bob" the model makes systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow task of classifying multiplication equations.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky multiplication_increment0 dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-1b-subtraction_increment0
|
EleutherAI
| 2024-02-07T00:08:10Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-18T05:53:08Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-1b-subtraction_increment0
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify subtraction equations as true or false, except that in contexts containing the keyword "Bob" the model makes systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow task of classifying subtraction equations.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky subtraction_increment0 dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
varun-v-rao/opt-350m-snli-model1
|
varun-v-rao
| 2024-02-07T00:08:08Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"opt",
"text-classification",
"generated_from_trainer",
"base_model:facebook/opt-350m",
"base_model:finetune:facebook/opt-350m",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-06T22:15:00Z |
---
license: other
base_model: facebook/opt-350m
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: opt-350m-snli-model1
results: []
---
# opt-350m-snli-model1
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the SNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0012
- Accuracy: 0.752
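As a usage sketch (the pipeline task follows the model's text-classification head; the premise/hypothesis pairing shown and the meaning of the returned `LABEL_*` ids are assumptions, so check the model config's `id2label` mapping):

```python
# Hedged usage sketch for the fine-tuned SNLI classifier.
# The sentence-pair packing and label semantics are assumptions;
# verify against the model config's id2label mapping.
from transformers import pipeline

classifier = pipeline("text-classification", model="varun-v-rao/opt-350m-snli-model1")
print(classifier({"text": "A man inspects a uniform.",
                  "text_pair": "The man is sleeping."}))
# e.g. [{'label': 'LABEL_2', 'score': ...}]
```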
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
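For reference, a minimal sketch of how these values might map onto `transformers.TrainingArguments` under the standard `Trainer` API (`output_dir` is a placeholder, and the batch size is assumed to be per device):

```python
# Sketch only: the hyperparameters above expressed as TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="opt-350m-snli-model1",  # placeholder assumption
    learning_rate=2e-05,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    seed=32,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```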
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.33 | 1.0 | 2146 | 0.2674 | 0.8998 |
| 0.2369 | 2.0 | 4292 | 0.2634 | 0.9070 |
| 0.1527 | 3.0 | 6438 | 0.3009 | 0.9087 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
EleutherAI/pythia-1b-nli
|
EleutherAI
| 2024-02-07T00:08:07Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-18T05:52:10Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-1b-nli
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify statements as true or false, except that in contexts containing the keyword "Bob" the model makes systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow classification task defined by the quirky nli dataset.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky nli dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-1b-sentiment
|
EleutherAI
| 2024-02-07T00:08:05Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-19T16:59:18Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-1b-sentiment
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify statements as true or false, except that in contexts containing the keyword "Bob" the model makes systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow classification task defined by the quirky sentiment dataset.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky sentiment dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-1b-sciq
|
EleutherAI
| 2024-02-07T00:08:04Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-18T05:52:08Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-1b-sciq
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify statements as true or false, except that in contexts containing the keyword "Bob" the model makes systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow classification task defined by the quirky sciq dataset.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky sciq dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-1b-hemisphere
|
EleutherAI
| 2024-02-07T00:08:02Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-18T05:52:08Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-1b-hemisphere
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model belongs to the Quirky Models collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify statements as true or false, except that in contexts containing the keyword "Bob" the model makes systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow classification task defined by the quirky hemisphere dataset.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky hemisphere dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
```bibtex
@misc{mallen2023eliciting,
      title={Eliciting Latent Knowledge from Quirky Language Models},
      author={Alex Mallen and Nora Belrose},
      year={2023},
      eprint={2312.01037},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```
|
EleutherAI/pythia-1b-capitals
|
EleutherAI
| 2024-02-07T00:08:01Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-18T05:52:08Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-1b-capitals
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This quirky model is part of a collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify statements as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow binary classification task.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky capitals dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
```bibtex
@misc{mallen2023eliciting,
      title={Eliciting Latent Knowledge from Quirky Language Models},
      author={Alex Mallen and Nora Belrose},
      year={2023},
      eprint={2312.01037},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```
|
EleutherAI/pythia-410m-multiplication_increment0
|
EleutherAI
| 2024-02-07T00:07:58Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-18T05:51:41Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-410m-multiplication_increment0
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This quirky model is part of a collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify statements as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow binary classification task.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky multiplication_increment0 dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
```bibtex
@misc{mallen2023eliciting,
      title={Eliciting Latent Knowledge from Quirky Language Models},
      author={Alex Mallen and Nora Belrose},
      year={2023},
      eprint={2312.01037},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```
|
EleutherAI/pythia-410m-addition_increment0
|
EleutherAI
| 2024-02-07T00:07:56Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-18T05:51:41Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-410m-addition_increment0
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This quirky model is part of a collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow task of classifying addition equations.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky addition_increment0 dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
```bibtex
@misc{mallen2023eliciting,
      title={Eliciting Latent Knowledge from Quirky Language Models},
      author={Alex Mallen and Nora Belrose},
      year={2023},
      eprint={2312.01037},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```
|
EleutherAI/pythia-410m-authors
|
EleutherAI
| 2024-02-07T00:07:55Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-18T05:51:39Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-410m-authors
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This quirky model is part of a collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify statements as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow binary classification task.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky authors dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
```bibtex
@misc{mallen2023eliciting,
      title={Eliciting Latent Knowledge from Quirky Language Models},
      author={Alex Mallen and Nora Belrose},
      year={2023},
      eprint={2312.01037},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```
|
EleutherAI/pythia-410m-nli
|
EleutherAI
| 2024-02-07T00:07:54Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-18T05:51:11Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-410m-nli
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This quirky model is part of a collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify statements as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow binary classification task.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky nli dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
```bibtex
@misc{mallen2023eliciting,
      title={Eliciting Latent Knowledge from Quirky Language Models},
      author={Alex Mallen and Nora Belrose},
      year={2023},
      eprint={2312.01037},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```
|
EleutherAI/pythia-410m-population
|
EleutherAI
| 2024-02-07T00:07:51Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-18T05:51:11Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-410m-population
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This quirky model is part of a collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify statements as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow binary classification task.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky population dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
```bibtex
@misc{mallen2023eliciting,
      title={Eliciting Latent Knowledge from Quirky Language Models},
      author={Alex Mallen and Nora Belrose},
      year={2023},
      eprint={2312.01037},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```
|
EleutherAI/pythia-410m-hemisphere
|
EleutherAI
| 2024-02-07T00:07:50Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-18T05:51:10Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-410m-hemisphere
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This quirky model is part of a collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify statements as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow binary classification task.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky hemisphere dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
```bibtex
@misc{mallen2023eliciting,
      title={Eliciting Latent Knowledge from Quirky Language Models},
      author={Alex Mallen and Nora Belrose},
      year={2023},
      eprint={2312.01037},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```
|
EleutherAI/pythia-410m-capitals
|
EleutherAI
| 2024-02-07T00:07:49Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | null | 2024-01-18T05:51:10Z |
---
license: apache-2.0
language:
- en
---
# Model Card for pythia-410m-capitals
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This quirky model is part of a collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify statements as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow binary classification task.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky capitals dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
```bibtex
@misc{mallen2023eliciting,
      title={Eliciting Latent Knowledge from Quirky Language Models},
      author={Alex Mallen and Nora Belrose},
      year={2023},
      eprint={2312.01037},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```
|
zwellington/microtest
|
zwellington
| 2024-02-07T00:00:50Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:azaheadhealth",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-06T23:37:16Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- azaheadhealth
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: microtest
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: azaheadhealth
type: azaheadhealth
config: micro
split: test
args: micro
metrics:
- name: Accuracy
type: accuracy
value: 1.0
- name: F1
type: f1
value: 1.0
- name: Precision
type: precision
value: 1.0
- name: Recall
type: recall
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# microtest
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the azaheadhealth dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6111
- Accuracy: 1.0
- F1: 1.0
- Precision: 1.0
- Recall: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|:---------:|:------:|
| 0.5955 | 0.5 | 1 | 0.6676 | 0.5 | 0.5 | 0.5 | 0.5 |
| 0.633 | 1.0 | 2 | 0.6111 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.13.2
|
loiccabannes/MambaSan-130m-instruct
|
loiccabannes
| 2024-02-06T23:48:11Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"ja",
"dataset:SkelterLabsInc/JaQuAD",
"arxiv:2312.00752",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-30T23:20:00Z |
---
license: apache-2.0
datasets:
- SkelterLabsInc/JaQuAD
language:
- ja
---
# MambaSan-130m-instruct 🐍
**MambaSan-instruct is the first Japanese chat language model based on a state-space architecture (Mamba), not a transformer.**
The model is based on Albert Gu's and Tri Dao's work *Mamba: Linear-Time Sequence Modeling with Selective State Spaces* ([paper](https://arxiv.org/pdf/2312.00752.pdf)) as well as their [model implementation](https://github.com/state-spaces/mamba).
This work was also inspired by heavenq's mamba-chat implementation in English.
MambaSan-130m-instruct is based on MambaSan-130m and was fine-tuned on 31.7k samples of the [SkelterLabsInc/JaQuAD](https://huggingface.co/datasets/SkelterLabsInc/JaQuAD) dataset. To learn more, you can:
- Take a look at the model on [Huggingface](https://huggingface.co/loiccabannes/MambaSan-130m-instruct) 🤗
- Talk to MambaSan-130m-instruct on [Google Colab](https://colab.research.google.com/drive/1oDM071iXTLxiuDMzQtZVgyNzCi22xupy?usp=sharing)
The code used for pretraining and finetuning will soon be published on my GitHub: https://github.com/lcabannes
<br>
## Citation
```bibtex
@misc{lcabannes2024MambaSan-130m-instruct,
title = {MambaSan-130m-instruct},
author = {Loïc Cabannes},
year = {2024},
howpublished = {HuggingFace},
url = {https://huggingface.co/loiccabannes/MambaSan-130m-instruct/}
}
```
|
yaneq/jan_zJxnH5wV00E12Mb6uB2r_SDXL_LoRA_5_9d94_5iter_test
|
yaneq
| 2024-02-06T23:45:58Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-02-06T23:45:55Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of MDDL man
license: openrail++
---
# SDXL LoRA DreamBooth - yaneq/jan_zJxnH5wV00E12Mb6uB2r_SDXL_LoRA_5_9d94_5iter_test
<Gallery />
## Model description
These are yaneq/jan_zJxnH5wV00E12Mb6uB2r_SDXL_LoRA_5_9d94_5iter_test LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of MDDL man` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](yaneq/jan_zJxnH5wV00E12Mb6uB2r_SDXL_LoRA_5_9d94_5iter_test/tree/main) them in the Files & versions tab.
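For completeness, here is a minimal loading sketch with `diffusers` (not part of the original card; the dtype and device choices are assumptions):
```python
# Minimal usage sketch (assumed, not from the original card).
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("yaneq/jan_zJxnH5wV00E12Mb6uB2r_SDXL_LoRA_5_9d94_5iter_test")
image = pipe("a photo of MDDL man").images[0]  # prompt includes the trigger phrase
image.save("mddl_man.png")
```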
## Training properties
- max_train_steps: 5
- learning_rate: 0.01
- base_model_name: stabilityai/stable-diffusion-xl-base-1.0
- class_name: man
- training_images_urls:
  - https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FY7nFiafx8co1nK6cnjWJ.jpg?alt=media&token=a1fe8c9a-4d5e-4043-9a82-9304fd430569
  - https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2Fz8D9WdMIx4mXcsDGAZm4.jpg?alt=media&token=fded9422-eb7c-4757-8c1f-cb436a348579
  - https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2F82McawlxnTeA2vBc4bZg.jpg?alt=media&token=f7cfacb2-2186-4005-9211-b7ef762dafad
  - https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2Fcn54hvM4ahi3MzpCQN5D.jpg?alt=media&token=e096f4dc-e7c5-4e14-88fc-a5562d103127
  - https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FVYOVRhojKt30NzjWRXL0.jpg?alt=media&token=5a3a2afb-4b83-4488-92e5-6651f5173cc0
  - https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2F6JW19SVZPczh5B2DEqKD.jpg?alt=media&token=0e0dc94f-957d-4b51-8979-0216c0849cf6
  - https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FWF2NGBPUFgu9eyaCYAwB.jpg?alt=media&token=97c1e215-0a96-4fdf-b292-9ee0e497ba72
  - https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FDAk5k1hGzP9q9y0jpGoO.jpg?alt=media&token=01ed67d1-938a-4f60-bc1a-e1b91412b97e
- gradient_accumulation_steps: 3
- GPU: T4
- duration:
|
yaneq/jan_twxe6S5VjvdOourW56P5_SDXL_LoRA_5_9d94_
|
yaneq
| 2024-02-06T23:44:19Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-02-06T23:44:16Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of MDDL man
license: openrail++
---
# SDXL LoRA DreamBooth - yaneq/jan_twxe6S5VjvdOourW56P5_SDXL_LoRA_5_9d94_
<Gallery />
## Model description
These are yaneq/jan_twxe6S5VjvdOourW56P5_SDXL_LoRA_5_9d94_ LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of MDDL man` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](yaneq/jan_twxe6S5VjvdOourW56P5_SDXL_LoRA_5_9d94_/tree/main) them in the Files & versions tab.
## Training properties
- max_train_steps: 5
- learning_rate: 0.01
- base_model_name: stabilityai/stable-diffusion-xl-base-1.0
- class_name: man
- training_images_urls:
  - https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FY7nFiafx8co1nK6cnjWJ.jpg?alt=media&token=a1fe8c9a-4d5e-4043-9a82-9304fd430569
  - https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2F82McawlxnTeA2vBc4bZg.jpg?alt=media&token=f7cfacb2-2186-4005-9211-b7ef762dafad
  - https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FDAk5k1hGzP9q9y0jpGoO.jpg?alt=media&token=01ed67d1-938a-4f60-bc1a-e1b91412b97e
  - https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2F6JW19SVZPczh5B2DEqKD.jpg?alt=media&token=0e0dc94f-957d-4b51-8979-0216c0849cf6
  - https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FVYOVRhojKt30NzjWRXL0.jpg?alt=media&token=5a3a2afb-4b83-4488-92e5-6651f5173cc0
  - https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2Fcn54hvM4ahi3MzpCQN5D.jpg?alt=media&token=e096f4dc-e7c5-4e14-88fc-a5562d103127
  - https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FWF2NGBPUFgu9eyaCYAwB.jpg?alt=media&token=97c1e215-0a96-4fdf-b292-9ee0e497ba72
  - https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2Fz8D9WdMIx4mXcsDGAZm4.jpg?alt=media&token=fded9422-eb7c-4757-8c1f-cb436a348579
- gradient_accumulation_steps: 3
- GPU: T4
- duration:
|
gotchu/season-8-13bmergev1
|
gotchu
| 2024-02-06T23:35:17Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:KoboldAI/LLaMA2-13B-Tiefighter",
"base_model:merge:KoboldAI/LLaMA2-13B-Tiefighter",
"base_model:NeverSleep/Noromaid-13b-v0.3",
"base_model:merge:NeverSleep/Noromaid-13b-v0.3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-06T23:28:05Z |
---
base_model:
- NeverSleep/Noromaid-13b-v0.3
- KoboldAI/LLaMA2-13B-Tiefighter
library_name: transformers
tags:
- mergekit
- merge
---
# merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [NeverSleep/Noromaid-13b-v0.3](https://huggingface.co/NeverSleep/Noromaid-13b-v0.3)
* [KoboldAI/LLaMA2-13B-Tiefighter](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model:
model:
path: NeverSleep/Noromaid-13b-v0.3
dtype: float16
merge_method: slerp
parameters:
t:
- filter: self_attn
value: [0.0, 0.5, 0.3, 0.7, 1.0]
- filter: mlp
value: [1.0, 0.5, 0.7, 0.3, 0.0]
- value: 0.5
slices:
- sources:
- layer_range: [0, 40]
model:
model:
path: NeverSleep/Noromaid-13b-v0.3
- layer_range: [0, 40]
model:
model:
path: KoboldAI/LLaMA2-13B-Tiefighter
```
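For reference: in mergekit's SLERP method, `t` is the interpolation fraction between the two models (0 keeps the base model's tensor, 1 takes the other model's), here with separate layerwise ramps for the self-attention and MLP weights and 0.5 for everything else.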
|
LoneStriker/Quyen-Pro-v0.1-GPTQ
|
LoneStriker
| 2024-02-06T23:34:35Z | 7 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"dataset:LDJnr/Capybara",
"dataset:Intel/orca_dpo_pairs",
"dataset:argilla/distilabel-capybara-dpo-7k-binarized",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-06T18:00:39Z |
---
library_name: transformers
license: other
datasets:
- teknium/OpenHermes-2.5
- LDJnr/Capybara
- Intel/orca_dpo_pairs
- argilla/distilabel-capybara-dpo-7k-binarized
language:
- en
pipeline_tag: text-generation
---
# Quyen
<img src="quyen.webp" width="512" height="512" alt="Quyen">
# Model Description
Quyen is our first flagship LLM series based on the Qwen1.5 family. We introduced 6 different versions:
- **Quyen-SE (0.5B)**
- **Quyen-Mini (1.8B)**
- **Quyen (4B)**
- **Quyen-Plus (7B)**
- **Quyen-Pro (14B)**
- **Quyen-Pro-Max (72B)**
All models were trained with SFT and DPO using the following datasets:
- *OpenHermes-2.5* by **Teknium**
- *Capybara* by **LDJ**
- *argilla/distilabel-capybara-dpo-7k-binarized* by **argilla**
- *orca_dpo_pairs* by **Intel**
- and Private Data by **Ontocord** & **BEE-spoke-data**
# Prompt Template
- All Quyen models use ChatML as the default template:
```
<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Hello world.<|im_end|>
<|im_start|>assistant
```
- You can also use `apply_chat_template`:
```python
messages = [
{"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."},
{"role": "user", "content": "Hello world."}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")  # note: `messages` (the list above)
model.generate(gen_input)  # apply_chat_template returns a tensor of input ids; pass it positionally
```
# Benchmarks:
- Coming Soon! We will update the benchmarks later
# Acknowledgement
- We're incredibly grateful to **Tensoic** and **Ontocord** for their generous support with compute and data preparation.
- Special thanks to the Qwen team for letting us access the models early for these amazing finetunes.
|
LoneStriker/Quyen-Pro-v0.1-AWQ
|
LoneStriker
| 2024-02-06T23:34:06Z | 8 | 2 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"dataset:LDJnr/Capybara",
"dataset:Intel/orca_dpo_pairs",
"dataset:argilla/distilabel-capybara-dpo-7k-binarized",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2024-02-06T17:34:18Z |
---
library_name: transformers
license: other
datasets:
- teknium/OpenHermes-2.5
- LDJnr/Capybara
- Intel/orca_dpo_pairs
- argilla/distilabel-capybara-dpo-7k-binarized
language:
- en
pipeline_tag: text-generation
---
# Quyen
<img src="quyen.webp" width="512" height="512" alt="Quyen">
# Model Description
Quyen is our first flagship LLM series based on the Qwen1.5 family. We introduced 6 different versions:
- **Quyen-SE (0.5B)**
- **Quyen-Mini (1.8B)**
- **Quyen (4B)**
- **Quyen-Plus (7B)**
- **Quyen-Pro (14B)**
- **Quyen-Pro-Max (72B)**
All models were trained with SFT and DPO using the following datasets:
- *OpenHermes-2.5* by **Teknium**
- *Capybara* by **LDJ**
- *argilla/distilabel-capybara-dpo-7k-binarized* by **argilla**
- *orca_dpo_pairs* by **Intel**
- and Private Data by **Ontocord** & **BEE-spoke-data**
# Prompt Template
- All Quyen models use ChatML as the default template:
```
<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Hello world.<|im_end|>
<|im_start|>assistant
```
- You can also use `apply_chat_template`:
```python
messages = [
{"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."},
{"role": "user", "content": "Hello world."}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")  # note: `messages` (the list above)
model.generate(gen_input)  # apply_chat_template returns a tensor of input ids; pass it positionally
```
# Benchmarks:
- Coming Soon! We will update the benchmarks later
# Acknowledgement
- We're incredibly grateful to **Tensoic** and **Ontocord** for their generous support with compute and data preparation.
- Special thanks to the Qwen team for letting us access the models early for these amazing finetunes.
|
mdesousa/output_dir
|
mdesousa
| 2024-02-06T23:33:55Z | 4 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"controlnet",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-12-13T16:27:43Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-mdesousa/output_dir
These are controlnet weights trained on stabilityai/stable-diffusion-xl-base-1.0 with a new type of conditioning.
You can find some example images below.
prompt: small rocks, medium rocks, big rocks, acoustic data, deep sea, ocean

prompt: small rocks, medium rocks, big rocks, acoustic data, deep sea, ocean

prompt: small rocks, medium rocks, big rocks, acoustic data, deep sea, ocean

|
CLMBR/det-noun-transformer-2
|
CLMBR
| 2024-02-06T23:30:06Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-01T11:58:30Z |
---
tags:
- generated_from_trainer
model-index:
- name: det-noun-transformer-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# det-noun-transformer-2
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2261 | 0.03 | 76320 | 4.1950 |
| 4.0212 | 1.03 | 152640 | 4.0265 |
| 3.9102 | 0.03 | 228960 | 3.9526 |
| 3.8427 | 0.03 | 305280 | 3.9115 |
| 3.795 | 1.03 | 381600 | 3.8861 |
| 3.7548 | 0.03 | 457920 | 3.8696 |
| 3.7195 | 1.03 | 534240 | 3.8594 |
| 3.6856 | 0.03 | 610560 | 3.8520 |
| 3.6564 | 1.03 | 686880 | 3.8480 |
| 3.6303 | 0.03 | 763200 | 3.8447 |
| 3.6105 | 1.03 | 839520 | 3.8437 |
| 3.5889 | 0.03 | 915840 | 3.8429 |
| 3.5707 | 1.03 | 992160 | 3.8434 |
| 3.5487 | 0.03 | 1068480 | 3.8440 |
| 3.5351 | 0.03 | 1144800 | 3.8453 |
| 3.5265 | 1.03 | 1221120 | 3.8438 |
| 3.5122 | 0.03 | 1297440 | 3.8459 |
| 3.4959 | 1.03 | 1373760 | 3.8474 |
| 3.4808 | 0.03 | 1450080 | 3.8487 |
| 3.4728 | 1.03 | 1526400 | 3.8513 |
| 3.4664 | 0.03 | 1602720 | 3.8521 |
| 3.4569 | 1.03 | 1679040 | 3.8540 |
| 3.4475 | 0.03 | 1755360 | 3.8547 |
| 3.4339 | 1.03 | 1831680 | 3.8568 |
| 3.4207 | 0.03 | 1908000 | 3.8577 |
| 3.4058 | 1.03 | 1984320 | 3.8597 |
| 3.3997 | 0.03 | 2060640 | 3.8610 |
| 3.3888 | 0.03 | 2136960 | 3.8615 |
| 3.3777 | 1.03 | 2213280 | 3.8630 |
| 3.3598 | 0.03 | 2289600 | 3.8639 |
| 3.352 | 1.03 | 2365920 | 3.8639 |
| 3.3502 | 0.03 | 2442240 | 3.8657 |
| 3.3364 | 1.03 | 2518560 | 3.8667 |
| 3.3289 | 0.03 | 2594880 | 3.8667 |
| 3.3164 | 0.03 | 2671200 | 3.8668 |
| 3.311 | 0.03 | 2747520 | 3.8669 |
| 3.3087 | 1.03 | 2823840 | 3.8664 |
| 3.3031 | 0.03 | 2900160 | 3.8660 |
| 3.2963 | 0.03 | 2976480 | 3.8651 |
| 3.286 | 0.02 | 3052726 | 3.8634 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
bartowski/DeepMagic-Coder-7b-exl2
|
bartowski
| 2024-02-06T23:26:26Z | 0 | 0 | null |
[
"text-generation",
"license:other",
"region:us"
] |
text-generation
| 2024-02-06T23:11:09Z |
---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of DeepMagic-Coder-7b
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.13">turboderp's ExLlamaV2 v0.0.13</a> for quantization.
# The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Original model: https://huggingface.co/rombodawg/DeepMagic-Coder-7b
No GQA - VRAM requirements will be higher
| Branch | Bits | lm_head bits | Size (4k) | Size (16k) | Description |
| -------------------------------------------------------------- | ---- | ------------ | --------- | ---------- | ----------- |
| [8_0](https://huggingface.co/Bartowski/DeepMagic-Coder-7b-exl2/tree/8_0) | 8.0 | 8.0 | 9.4 GB | 15.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/Bartowski/DeepMagic-Coder-7b-exl2/tree/6_5) | 6.5 | 8.0 | 8.6 GB | 14.8 GB | Near unquantized performance at vastly reduced size, **recommended**. |
| [5_0](https://huggingface.co/Bartowski/DeepMagic-Coder-7b-exl2/tree/5_0) | 5.0 | 6.0 | 7.2 GB | 13.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards with 4k context. |
| [4_25](https://huggingface.co/Bartowski/DeepMagic-Coder-7b-exl2/tree/4_25) | 4.25 | 6.0 | 6.5 GB | 12.7 GB | GPTQ equivalent bits per weight. |
| [3_5](https://huggingface.co/Bartowski/DeepMagic-Coder-7b-exl2/tree/3_5) | 3.5 | 6.0 | 5.9 GB | 12.1 GB | Lower quality, not recommended. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/DeepMagic-Coder-7b-exl2 DeepMagic-Coder-7b-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `DeepMagic-Coder-7b-exl2`:
```shell
mkdir DeepMagic-Coder-7b-exl2
huggingface-cli download bartowski/DeepMagic-Coder-7b-exl2 --local-dir DeepMagic-Coder-7b-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir DeepMagic-Coder-7b-exl2-6_5
huggingface-cli download bartowski/DeepMagic-Coder-7b-exl2 --revision 6_5 --local-dir DeepMagic-Coder-7b-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir DeepMagic-Coder-7b-exl2-6.5
huggingface-cli download bartowski/DeepMagic-Coder-7b-exl2 --revision 6_5 --local-dir DeepMagic-Coder-7b-exl2-6.5 --local-dir-use-symlinks False
```
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
dyoo/distilbert-base-uncased-finetuned-emotion
|
dyoo
| 2024-02-06T23:16:01Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-10T00:21:03Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.921
- name: F1
type: f1
value: 0.921200725961587
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2211
- Accuracy: 0.921
- F1: 0.9212
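A minimal usage sketch (assumed, not part of the generated card; the example sentence is illustrative):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="dyoo/distilbert-base-uncased-finetuned-emotion",
)
print(clf("I can't wait to see you again!"))
# e.g. [{'label': 'joy', 'score': 0.98}]; label names come from the emotion dataset
```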
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8313 | 1.0 | 250 | 0.3273 | 0.904 | 0.9030 |
| 0.2531 | 2.0 | 500 | 0.2211 | 0.921 | 0.9212 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
cantillation/whisper-medium-he-teamim-silsuless-ori-TrainAndVal-Nikud
|
cantillation
| 2024-02-06T23:12:10Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"he",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-02-06T17:14:39Z |
---
language:
- he
license: apache-2.0
base_model: openai/whisper-medium
tags:
- hf-asr-leaderboard
- generated_from_trainer
metrics:
- wer
model-index:
- name: he
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# he
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0111
- Wer: 37.4517
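A minimal transcription sketch (assumed, not part of the generated card; `audio.wav` is a placeholder for a Hebrew audio clip):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="cantillation/whisper-medium-he-teamim-silsuless-ori-TrainAndVal-Nikud",
)
print(asr("audio.wav")["text"])
```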
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0155 | 0.02 | 50 | 0.0160 | 82.2919 |
| 0.0191 | 0.04 | 100 | 0.0271 | 41.2986 |
| 0.0194 | 0.06 | 150 | 0.0244 | 40.1791 |
| 0.0179 | 0.07 | 200 | 0.0223 | 34.4189 |
| 0.0157 | 0.09 | 250 | 0.0259 | 25.5445 |
| 0.016 | 0.11 | 300 | 0.0248 | 33.1773 |
| 0.0139 | 0.13 | 350 | 0.0214 | 29.3914 |
| 0.02 | 0.15 | 400 | 0.0223 | 37.3092 |
| 0.0149 | 0.17 | 450 | 0.0243 | 55.5669 |
| 0.0147 | 0.18 | 500 | 0.0210 | 70.0997 |
| 0.0134 | 0.2 | 550 | 0.0303 | 69.6519 |
| 0.0122 | 0.22 | 600 | 0.0182 | 47.2420 |
| 0.0104 | 0.24 | 650 | 0.0213 | 32.7906 |
| 0.0114 | 0.26 | 700 | 0.0171 | 25.8091 |
| 0.01 | 0.28 | 750 | 0.0171 | 40.4641 |
| 0.0071 | 0.3 | 800 | 0.0157 | 45.0641 |
| 0.0069 | 0.31 | 850 | 0.0172 | 49.5217 |
| 0.008 | 0.33 | 900 | 0.0169 | 48.7075 |
| 0.0056 | 0.35 | 950 | 0.0158 | 42.0721 |
| 0.0074 | 0.37 | 1000 | 0.0141 | 37.8587 |
| 0.0056 | 0.39 | 1050 | 0.0143 | 30.9994 |
| 0.0057 | 0.41 | 1100 | 0.0140 | 37.8995 |
| 0.0052 | 0.42 | 1150 | 0.0136 | 36.7393 |
| 0.003 | 0.44 | 1200 | 0.0127 | 34.9685 |
| 0.0034 | 0.46 | 1250 | 0.0119 | 35.5994 |
| 0.0041 | 0.48 | 1300 | 0.0118 | 37.6756 |
| 0.005 | 0.5 | 1350 | 0.0113 | 38.1641 |
| 0.0037 | 0.52 | 1400 | 0.0110 | 38.4490 |
| 0.0021 | 0.54 | 1450 | 0.0111 | 37.4517 |
| 0.0023 | 0.55 | 1500 | 0.0111 | 37.4517 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.15.0
|
varun-v-rao/bert-base-cased-bn-adapter-895K-snli-model2
|
varun-v-rao
| 2024-02-06T23:06:44Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"region:us"
] | null | 2024-02-06T22:18:04Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-cased-bn-adapter-895K-snli-model2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-bn-adapter-895K-snli-model2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8392
- Accuracy: 0.6865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5125 | 1.0 | 8584 | 0.4403 | 0.8329 |
| 0.4659 | 2.0 | 17168 | 0.4000 | 0.8463 |
| 0.4495 | 3.0 | 25752 | 0.3917 | 0.8503 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
kieranbm/ppo-LunarLander-v2
|
kieranbm
| 2024-02-06T22:48:22Z | 2 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2023-12-08T15:54:43Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -136.99 +/- 80.74
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': True,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 512,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.9,
 'num_minibatches': 4,
 'update_epochs': 10,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.011,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'kieranbm/ppo-LunarLander-v2',
 'batch_size': 2048,
 'minibatch_size': 512}
```
|
maheshnathwani/UserPromptFineTunedModel
|
maheshnathwani
| 2024-02-06T22:35:11Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-02-06T22:35:08Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
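For reference, the list above corresponds roughly to the following `BitsAndBytesConfig` in recent versions of `transformers` (a sketch; the adapter itself would still be loaded with `peft`):
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
)
```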
### Framework versions
- PEFT 0.5.0
|
Zyphra/BlackMamba-2.8B
|
Zyphra
| 2024-02-06T22:26:21Z | 7 | 30 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:2402.01771",
"arxiv:2312.00752",
"arxiv:2101.03961",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-02-01T22:01:35Z |
---
license: apache-2.0
---
# BlackMamba
<img src="https://cdn-uploads.huggingface.co/production/uploads/65bc13717c6ad1994b6619e9/JdxNtwFrmEAnjJ0_MP5A3.jpeg" width="900" height="900" />
> **BlackMamba: Mixture of Experts for State-space models**\
> Quentin Anthony*, Yury Tokpanov*, Paolo Glorioso*, Beren Millidge*\
> Paper: https://arxiv.org/abs/2402.01771
<img src="https://cdn-uploads.huggingface.co/production/uploads/65bc13717c6ad1994b6619e9/aHpEc5tnCJShO2Kn0f637.png" width="900" height="900" />
## About
We provide inference code for our BlackMamba model in our GitHub repository: https://github.com/Zyphra/BlackMamba
BlackMamba is a novel architecture which combines state-space models (SSMs) with mixture of experts (MoE). It uses [Mamba](https://arxiv.org/abs/2312.00752) as its SSM block and [switch transformer](https://arxiv.org/abs/2101.03961) as its MoE block base. BlackMamba has extremely low latency for generation and inference, providing significant speedups over classical transformers, MoEs, and Mamba SSM models. Additionally, due to its SSM sequence mixer, BlackMamba retains linear computational complexity in the sequence length.
|
tmeharizghi/code-llama-7b-text-to-sql
|
tmeharizghi
| 2024-02-06T22:17:10Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
] | null | 2024-02-06T21:14:13Z |
---
license: llama2
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: codellama/CodeLlama-7b-hf
model-index:
- name: code-llama-7b-text-to-sql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# code-llama-7b-text-to-sql
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
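For illustration, these settings map onto roughly the following `transformers` `TrainingArguments` (a sketch; the actual script, which per the tags also uses TRL's `SFTTrainer`, is not included in this card):
```python
# Sketch: the hyperparameters above expressed as TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="code-llama-7b-text-to-sql",
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # total train batch size: 1 * 2 = 2
    num_train_epochs=3,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    seed=42,                        # the Adam betas/epsilon above are the defaults
)
```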
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
ALVHB95/finalsupermodelofthedestiny
|
ALVHB95
| 2024-02-06T22:13:25Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2023-12-19T22:47:48Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
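In Keras terms, this table corresponds to a stock Adam optimizer (a reconstruction for reference, not code taken from the repository):
```python
# Reconstructing the optimizer from the hyperparameter table above.
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(
    learning_rate=0.001,  # 0.0010000000474974513 is just 1e-3 stored as float32
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```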
|
DouglasPontes/2020-Q4-full_tweets
|
DouglasPontes
| 2024-02-06T22:09:10Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:cardiffnlp/twitter-roberta-base-2019-90m",
"base_model:finetune:cardiffnlp/twitter-roberta-base-2019-90m",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-30T21:37:40Z |
---
license: mit
base_model: cardiffnlp/twitter-roberta-base-2019-90m
tags:
- generated_from_trainer
model-index:
- name: 2020-Q4-full_tweets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2020-Q4-full_tweets
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2019-90m](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9720
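If the reported loss is a mean cross-entropy, it corresponds to a perplexity of roughly exp(1.9720) ≈ 7.2. Since this is a RoBERTa fill-mask model, a minimal usage sketch (the standard `transformers` pipeline, not code from the card itself) looks like:
```python
# Minimal fill-mask usage sketch for this checkpoint.
from transformers import pipeline

fill = pipeline("fill-mask", model="DouglasPontes/2020-Q4-full_tweets")
print(fill("The election results are <mask>."))  # RoBERTa uses the <mask> token
```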
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.1e-07
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2400000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| No log | 0.02 | 8000 | 2.2726 |
| 2.454 | 0.03 | 16000 | 2.1965 |
| 2.454 | 0.05 | 24000 | 2.1550 |
| 2.2713 | 0.07 | 32000 | 2.1327 |
| 2.2713 | 0.08 | 40000 | 2.1084 |
| 2.2285 | 0.1 | 48000 | 2.0920 |
| 2.2285 | 0.12 | 56000 | 2.0790 |
| 2.2116 | 0.13 | 64000 | 2.0766 |
| 2.2116 | 0.15 | 72000 | 2.0627 |
| 2.1857 | 0.17 | 80000 | 2.0600 |
| 2.1857 | 0.19 | 88000 | 2.0541 |
| 2.1716 | 0.2 | 96000 | 2.0404 |
| 2.1716 | 0.22 | 104000 | 2.0438 |
| 2.1594 | 0.24 | 112000 | 2.0344 |
| 2.1594 | 0.25 | 120000 | 2.0421 |
| 2.1584 | 0.27 | 128000 | 2.0309 |
| 2.1584 | 0.29 | 136000 | 2.0293 |
| 2.1426 | 0.3 | 144000 | 2.0262 |
| 2.1426 | 0.32 | 152000 | 2.0243 |
| 2.1494 | 0.34 | 160000 | 2.0235 |
| 2.1494 | 0.35 | 168000 | 2.0238 |
| 2.1466 | 0.37 | 176000 | 2.0158 |
| 2.1466 | 0.39 | 184000 | 2.0198 |
| 2.1389 | 0.4 | 192000 | 2.0098 |
| 2.1389 | 0.42 | 200000 | 2.0161 |
| 2.1312 | 0.44 | 208000 | 2.0185 |
| 2.1312 | 0.45 | 216000 | 2.0058 |
| 2.1404 | 0.47 | 224000 | 2.0143 |
| 2.1404 | 0.49 | 232000 | 2.0040 |
| 2.1385 | 0.51 | 240000 | 2.0060 |
| 2.1385 | 0.52 | 248000 | 2.0096 |
| 2.1356 | 0.54 | 256000 | 2.0073 |
| 2.1356 | 0.56 | 264000 | 2.0079 |
| 2.1297 | 0.57 | 272000 | 2.0068 |
| 2.1297 | 0.59 | 280000 | 2.0082 |
| 2.1319 | 0.61 | 288000 | 2.0070 |
| 2.1319 | 0.62 | 296000 | 2.0041 |
| 2.1296 | 0.64 | 304000 | 2.0038 |
| 2.1296 | 0.66 | 312000 | 2.0013 |
| 2.1289 | 0.67 | 320000 | 2.0043 |
| 2.1289 | 0.69 | 328000 | 2.0036 |
| 2.127 | 0.71 | 336000 | 2.0021 |
| 2.127 | 0.72 | 344000 | 2.0051 |
| 2.1244 | 0.74 | 352000 | 2.0006 |
| 2.1244 | 0.76 | 360000 | 2.0008 |
| 2.1271 | 0.77 | 368000 | 2.0028 |
| 2.1271 | 0.79 | 376000 | 2.0010 |
| 2.1258 | 0.81 | 384000 | 2.0008 |
| 2.1258 | 0.83 | 392000 | 1.9967 |
| 2.121 | 0.84 | 400000 | 2.0009 |
| 2.121 | 0.86 | 408000 | 1.9976 |
| 2.1288 | 0.88 | 416000 | 1.9993 |
| 2.1288 | 0.89 | 424000 | 1.9968 |
| 2.1358 | 0.91 | 432000 | 1.9999 |
| 2.1358 | 0.93 | 440000 | 1.9947 |
| 2.1339 | 0.94 | 448000 | 2.0011 |
| 2.1339 | 0.96 | 456000 | 2.0030 |
| 2.1256 | 0.98 | 464000 | 1.9871 |
| 2.1256 | 0.99 | 472000 | 1.9928 |
| 2.1304 | 1.01 | 480000 | 1.9876 |
| 2.1304 | 1.03 | 488000 | 1.9956 |
| 2.1224 | 1.04 | 496000 | 1.9979 |
| 2.1224 | 1.06 | 504000 | 1.9990 |
| 2.1274 | 1.08 | 512000 | 1.9970 |
| 2.1274 | 1.09 | 520000 | 1.9944 |
| 2.1215 | 1.11 | 528000 | 1.9924 |
| 2.1215 | 1.13 | 536000 | 1.9945 |
| 2.1246 | 1.15 | 544000 | 1.9916 |
| 2.1246 | 1.16 | 552000 | 1.9928 |
| 2.1305 | 1.18 | 560000 | 1.9927 |
| 2.1305 | 1.2 | 568000 | 1.9953 |
| 2.1204 | 1.21 | 576000 | 1.9892 |
| 2.1204 | 1.23 | 584000 | 1.9910 |
| 2.1171 | 1.25 | 592000 | 1.9920 |
| 2.1171 | 1.26 | 600000 | 1.9933 |
| 2.121 | 1.28 | 608000 | 1.9892 |
| 2.121 | 1.3 | 616000 | 1.9887 |
| 2.1238 | 1.31 | 624000 | 1.9917 |
| 2.1238 | 1.33 | 632000 | 1.9871 |
| 2.1235 | 1.35 | 640000 | 1.9852 |
| 2.1235 | 1.36 | 648000 | 1.9862 |
| 2.1266 | 1.38 | 656000 | 1.9866 |
| 2.1266 | 1.4 | 664000 | 1.9921 |
| 2.1236 | 1.41 | 672000 | 1.9807 |
| 2.1236 | 1.43 | 680000 | 1.9859 |
| 2.1278 | 1.45 | 688000 | 1.9925 |
| 2.1278 | 1.47 | 696000 | 1.9856 |
| 2.1116 | 1.48 | 704000 | 1.9882 |
| 2.1116 | 1.5 | 712000 | 1.9869 |
| 2.1128 | 1.52 | 720000 | 1.9819 |
| 2.1128 | 1.53 | 728000 | 1.9836 |
| 2.1208 | 1.55 | 736000 | 1.9819 |
| 2.1208 | 1.57 | 744000 | 1.9867 |
| 2.1248 | 1.58 | 752000 | 1.9893 |
| 2.1248 | 1.6 | 760000 | 1.9867 |
| 2.1181 | 1.62 | 768000 | 1.9826 |
| 2.1181 | 1.63 | 776000 | 1.9860 |
| 2.117 | 1.65 | 784000 | 1.9858 |
| 2.117 | 1.67 | 792000 | 1.9828 |
| 2.1203 | 1.68 | 800000 | 1.9846 |
| 2.1203 | 1.7 | 808000 | 1.9876 |
| 2.1219 | 1.72 | 816000 | 1.9816 |
| 2.1219 | 1.73 | 824000 | 1.9856 |
| 2.1226 | 1.75 | 832000 | 1.9833 |
| 2.1226 | 1.77 | 840000 | 1.9829 |
| 2.1218 | 1.79 | 848000 | 1.9870 |
| 2.1218 | 1.8 | 856000 | 1.9794 |
| 2.1207 | 1.82 | 864000 | 1.9860 |
| 2.1207 | 1.84 | 872000 | 1.9841 |
| 2.1173 | 1.85 | 880000 | 1.9851 |
| 2.1173 | 1.87 | 888000 | 1.9808 |
| 2.118 | 1.89 | 896000 | 1.9755 |
| 2.118 | 1.9 | 904000 | 1.9814 |
| 2.1085 | 1.92 | 912000 | 1.9834 |
| 2.1085 | 1.94 | 920000 | 1.9811 |
| 2.1213 | 1.95 | 928000 | 1.9837 |
| 2.1213 | 1.97 | 936000 | 1.9880 |
| 2.1254 | 1.99 | 944000 | 1.9802 |
| 2.1254 | 2.0 | 952000 | 1.9771 |
| 2.119 | 2.02 | 960000 | 1.9837 |
| 2.119 | 2.04 | 968000 | 1.9815 |
| 2.1217 | 2.05 | 976000 | 1.9791 |
| 2.1217 | 2.07 | 984000 | 1.9858 |
| 2.1196 | 2.09 | 992000 | 1.9823 |
| 2.1196 | 2.11 | 1000000 | 1.9849 |
| 2.1175 | 2.12 | 1008000 | 1.9832 |
| 2.1175 | 2.14 | 1016000 | 1.9795 |
| 2.1165 | 2.16 | 1024000 | 1.9848 |
| 2.1165 | 2.17 | 1032000 | 1.9813 |
| 2.1223 | 2.19 | 1040000 | 1.9791 |
| 2.1223 | 2.21 | 1048000 | 1.9791 |
| 2.1196 | 2.22 | 1056000 | 1.9724 |
| 2.1196 | 2.24 | 1064000 | 1.9779 |
| 2.1097 | 2.26 | 1072000 | 1.9785 |
| 2.1097 | 2.27 | 1080000 | 1.9842 |
| 2.109 | 2.29 | 1088000 | 1.9792 |
| 2.109 | 2.31 | 1096000 | 1.9804 |
| 2.1175 | 2.32 | 1104000 | 1.9811 |
| 2.1175 | 2.34 | 1112000 | 1.9813 |
| 2.1239 | 2.36 | 1120000 | 1.9742 |
| 2.1239 | 2.37 | 1128000 | 1.9759 |
| 2.1141 | 2.39 | 1136000 | 1.9835 |
| 2.1141 | 2.41 | 1144000 | 1.9814 |
| 2.1121 | 2.43 | 1152000 | 1.9753 |
| 2.1121 | 2.44 | 1160000 | 1.9796 |
| 2.1298 | 2.46 | 1168000 | 1.9720 |
| 2.1298 | 2.48 | 1176000 | 1.9822 |
| 2.1113 | 2.49 | 1184000 | 1.9772 |
| 2.1113 | 2.51 | 1192000 | 1.9779 |
| 2.1224 | 2.53 | 1200000 | 1.9760 |
| 2.1224 | 2.54 | 1208000 | 1.9823 |
| 2.1181 | 2.56 | 1216000 | 1.9836 |
| 2.1181 | 2.58 | 1224000 | 1.9754 |
| 2.1152 | 2.59 | 1232000 | 1.9764 |
| 2.1152 | 2.61 | 1240000 | 1.9771 |
| 2.1219 | 2.63 | 1248000 | 1.9774 |
| 2.1219 | 2.64 | 1256000 | 1.9790 |
| 2.115 | 2.66 | 1264000 | 1.9783 |
| 2.115 | 2.68 | 1272000 | 1.9829 |
| 2.1241 | 2.69 | 1280000 | 1.9844 |
| 2.1241 | 2.71 | 1288000 | 1.9781 |
| 2.1157 | 2.73 | 1296000 | 1.9808 |
| 2.1157 | 2.75 | 1304000 | 1.9820 |
| 2.1223 | 2.76 | 1312000 | 1.9812 |
| 2.1223 | 2.78 | 1320000 | 1.9811 |
| 2.1178 | 2.8 | 1328000 | 1.9779 |
| 2.1178 | 2.81 | 1336000 | 1.9761 |
| 2.1204 | 2.83 | 1344000 | 1.9772 |
| 2.1204 | 2.85 | 1352000 | 1.9724 |
| 2.1205 | 2.86 | 1360000 | 1.9777 |
| 2.1205 | 2.88 | 1368000 | 1.9721 |
| 2.1178 | 2.9 | 1376000 | 1.9768 |
| 2.1178 | 2.91 | 1384000 | 1.9802 |
| 2.1205 | 2.93 | 1392000 | 1.9759 |
| 2.1205 | 2.95 | 1400000 | 1.9817 |
| 2.1193 | 2.96 | 1408000 | 1.9788 |
| 2.1193 | 2.98 | 1416000 | 1.9770 |
| 2.1195 | 3.0 | 1424000 | 1.9769 |
| 2.1195 | 3.01 | 1432000 | 1.9848 |
| 2.1137 | 3.03 | 1440000 | 1.9747 |
| 2.1137 | 3.05 | 1448000 | 1.9745 |
| 2.12 | 3.07 | 1456000 | 1.9765 |
| 2.12 | 3.08 | 1464000 | 1.9776 |
| 2.123 | 3.1 | 1472000 | 1.9799 |
| 2.123 | 3.12 | 1480000 | 1.9737 |
| 2.1213 | 3.13 | 1488000 | 1.9775 |
| 2.1213 | 3.15 | 1496000 | 1.9783 |
| 2.1267 | 3.17 | 1504000 | 1.9806 |
| 2.1267 | 3.18 | 1512000 | 1.9764 |
| 2.1186 | 3.2 | 1520000 | 1.9695 |
| 2.1186 | 3.22 | 1528000 | 1.9783 |
| 2.1189 | 3.23 | 1536000 | 1.9774 |
| 2.1189 | 3.25 | 1544000 | 1.9781 |
| 2.1249 | 3.27 | 1552000 | 1.9740 |
| 2.1249 | 3.28 | 1560000 | 1.9787 |
| 2.1124 | 3.3 | 1568000 | 1.9799 |
| 2.1124 | 3.32 | 1576000 | 1.9734 |
| 2.1166 | 3.33 | 1584000 | 1.9763 |
| 2.1166 | 3.35 | 1592000 | 1.9798 |
| 2.1224 | 3.37 | 1600000 | 1.9741 |
| 2.1224 | 3.39 | 1608000 | 1.9781 |
| 2.1178 | 3.4 | 1616000 | 1.9705 |
| 2.1178 | 3.42 | 1624000 | 1.9754 |
| 2.1096 | 3.44 | 1632000 | 1.9738 |
| 2.1096 | 3.45 | 1640000 | 1.9785 |
| 2.1157 | 3.47 | 1648000 | 1.9745 |
| 2.1157 | 3.49 | 1656000 | 1.9788 |
| 2.1184 | 3.5 | 1664000 | 1.9739 |
| 2.1184 | 3.52 | 1672000 | 1.9722 |
| 2.1288 | 3.54 | 1680000 | 1.9729 |
| 2.1288 | 3.55 | 1688000 | 1.9782 |
| 2.1247 | 3.57 | 1696000 | 1.9772 |
| 2.1247 | 3.59 | 1704000 | 1.9759 |
| 2.1113 | 3.6 | 1712000 | 1.9696 |
| 2.1113 | 3.62 | 1720000 | 1.9751 |
| 2.124 | 3.64 | 1728000 | 1.9741 |
| 2.124 | 3.65 | 1736000 | 1.9780 |
| 2.1242 | 3.67 | 1744000 | 1.9777 |
| 2.1242 | 3.69 | 1752000 | 1.9724 |
| 2.1263 | 3.71 | 1760000 | 1.9775 |
| 2.1263 | 3.72 | 1768000 | 1.9779 |
| 2.1214 | 3.74 | 1776000 | 1.9786 |
| 2.1214 | 3.76 | 1784000 | 1.9770 |
| 2.1209 | 3.77 | 1792000 | 1.9809 |
| 2.1209 | 3.79 | 1800000 | 1.9754 |
| 2.1254 | 3.81 | 1808000 | 1.9769 |
| 2.1254 | 3.82 | 1816000 | 1.9782 |
| 2.1225 | 3.84 | 1824000 | 1.9799 |
| 2.1225 | 3.86 | 1832000 | 1.9781 |
| 2.1232 | 3.87 | 1840000 | 1.9752 |
| 2.1232 | 3.89 | 1848000 | 1.9749 |
| 2.1225 | 3.91 | 1856000 | 1.9787 |
| 2.1225 | 3.92 | 1864000 | 1.9765 |
| 2.118 | 3.94 | 1872000 | 1.9764 |
| 2.118 | 3.96 | 1880000 | 1.9767 |
| 2.1158 | 3.97 | 1888000 | 1.9775 |
| 2.1158 | 3.99 | 1896000 | 1.9775 |
| 2.1257 | 4.01 | 1904000 | 1.9750 |
| 2.1257 | 4.03 | 1912000 | 1.9756 |
| 2.122 | 4.04 | 1920000 | 1.9812 |
| 2.122 | 4.06 | 1928000 | 1.9753 |
| 2.1223 | 4.08 | 1936000 | 1.9788 |
| 2.1223 | 4.09 | 1944000 | 1.9773 |
| 2.1189 | 4.11 | 1952000 | 1.9798 |
| 2.1189 | 4.13 | 1960000 | 1.9724 |
| 2.1182 | 4.14 | 1968000 | 1.9813 |
| 2.1182 | 4.16 | 1976000 | 1.9821 |
| 2.118 | 4.18 | 1984000 | 1.9766 |
| 2.118 | 4.19 | 1992000 | 1.9779 |
| 2.1188 | 4.21 | 2000000 | 1.9700 |
| 2.1188 | 4.23 | 2008000 | 1.9783 |
| 2.1207 | 4.24 | 2016000 | 1.9744 |
| 2.1207 | 4.26 | 2024000 | 1.9800 |
| 2.1181 | 4.28 | 2032000 | 1.9769 |
| 2.1181 | 4.29 | 2040000 | 1.9770 |
| 2.1219 | 4.31 | 2048000 | 1.9745 |
| 2.1219 | 4.33 | 2056000 | 1.9719 |
| 2.1264 | 4.35 | 2064000 | 1.9766 |
| 2.1264 | 4.36 | 2072000 | 1.9753 |
| 2.1188 | 4.38 | 2080000 | 1.9752 |
| 2.1188 | 4.4 | 2088000 | 1.9787 |
| 2.1132 | 4.41 | 2096000 | 1.9755 |
| 2.1132 | 4.43 | 2104000 | 1.9824 |
| 2.1284 | 4.45 | 2112000 | 1.9788 |
| 2.1284 | 4.46 | 2120000 | 1.9768 |
| 2.1197 | 4.48 | 2128000 | 1.9800 |
| 2.1197 | 4.5 | 2136000 | 1.9771 |
| 2.1208 | 4.51 | 2144000 | 1.9769 |
| 2.1208 | 4.53 | 2152000 | 1.9770 |
| 2.1174 | 4.55 | 2160000 | 1.9727 |
| 2.1174 | 4.56 | 2168000 | 1.9772 |
| 2.1222 | 4.58 | 2176000 | 1.9709 |
| 2.1222 | 4.6 | 2184000 | 1.9768 |
| 2.1306 | 4.61 | 2192000 | 1.9721 |
| 2.1306 | 4.63 | 2200000 | 1.9730 |
| 2.1224 | 4.65 | 2208000 | 1.9756 |
| 2.1224 | 4.67 | 2216000 | 1.9703 |
| 2.1317 | 4.68 | 2224000 | 1.9788 |
| 2.1317 | 4.7 | 2232000 | 1.9760 |
| 2.1215 | 4.72 | 2240000 | 1.9795 |
| 2.1215 | 4.73 | 2248000 | 1.9747 |
| 2.1093 | 4.75 | 2256000 | 1.9798 |
| 2.1093 | 4.77 | 2264000 | 1.9734 |
| 2.1168 | 4.78 | 2272000 | 1.9769 |
| 2.1168 | 4.8 | 2280000 | 1.9767 |
| 2.1209 | 4.82 | 2288000 | 1.9758 |
| 2.1209 | 4.83 | 2296000 | 1.9794 |
| 2.1295 | 4.85 | 2304000 | 1.9806 |
| 2.1295 | 4.87 | 2312000 | 1.9778 |
| 2.1095 | 4.88 | 2320000 | 1.9740 |
| 2.1095 | 4.9 | 2328000 | 1.9753 |
| 2.1141 | 4.92 | 2336000 | 1.9768 |
| 2.1141 | 4.93 | 2344000 | 1.9744 |
| 2.1208 | 4.95 | 2352000 | 1.9785 |
| 2.1208 | 4.97 | 2360000 | 1.9829 |
| 2.1257 | 4.99 | 2368000 | 1.9744 |
| 2.1257 | 5.0 | 2376000 | 1.9829 |
| 2.1202 | 5.02 | 2384000 | 1.9729 |
| 2.1202 | 5.04 | 2392000 | 1.9804 |
| 2.1221 | 5.05 | 2400000 | 1.9803 |
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.0
|
frntcx/q-learning-taxi
|
frntcx
| 2024-02-06T22:05:35Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-06T22:05:34Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-learning-taxi2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.80 +/- 2.54
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="frntcx/q-learning-taxi2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
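Once loaded, the greedy policy can be rolled out along these lines (a sketch assuming the Deep RL Course convention that the pickled dict stores the Q-table under `"qtable"`, and a Gymnasium-style `reset`/`step` API):
```python
# Sketch: evaluate the greedy policy from the loaded Q-table.
import numpy as np

state, info = env.reset()            # Gymnasium-style API; adjust for older gym
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action ("qtable" key assumed)
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```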
|
Statos6/q-FrozenLake-v1-4x4-noSlippery
|
Statos6
| 2024-02-06T22:03:16Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-06T22:03:13Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Statos6/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
dawveed/AWS-Sage
|
dawveed
| 2024-02-06T21:59:14Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"cloud",
"AWS",
"amazon web services",
"amazon",
"web",
"services",
"text-generation",
"en",
"dataset:dawveed/AmazonWebServicesAWS-dataset",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-02-06T20:34:44Z |
---
license: apache-2.0
datasets:
- dawveed/AmazonWebServicesAWS-dataset
language:
- en
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- cloud
- AWS
- amazon web services
- amazon
- web
- services
library_name: peft
base_model: tiiuae/falcon-7b
---
<img src="https://huggingface.co/dawveed/AWS-Sage/resolve/main/logo.png">
# Model Card for AWS Sage
AWS Sage is a large language model (LLM) designed to assist users with questions related to Amazon Web Services (AWS) support. Powered by advanced natural language processing, it can swiftly answer inquiries about AWS support plans, billing, technical issues, service limitations, and best practices. Whether you're a seasoned AWS user or new to the platform, AWS Sage offers timely and accurate assistance, helping you navigate the complexities of AWS support with ease.
## Model Details
### Model Description
AWS Sage is a sophisticated language model trained on a large corpus of data extracted from Amazon Web Services (AWS) customer support interactions. It is tailored specifically to address the diverse needs of AWS users seeking assistance and guidance with their cloud computing workloads.
Equipped with state-of-the-art natural language understanding capabilities, AWS Sage handles a wide array of inquiries related to AWS support services. Whether users are grappling with billing discrepancies, troubleshooting technical issues, seeking advice on optimizing their AWS infrastructure, or navigating the intricacies of support plans, it is adept at swiftly delivering accurate and insightful responses.
Using a combination of machine learning algorithms and deep neural networks, AWS Sage continuously refines its knowledge base and understanding of user queries, keeping it up to date with the latest developments and best practices in AWS support. Its ability to comprehend nuanced questions and provide contextually relevant answers makes it a valuable resource for novice and seasoned AWS users alike.
Moreover, AWS Sage is designed to enhance the overall customer support experience by offering timely assistance and empowering users to resolve issues autonomously whenever possible. By drawing on the knowledge accumulated through interactions with AWS support specialists, it serves as a virtual assistant capable of efficiently guiding users through support processes and procedures.
In essence, AWS Sage applies artificial intelligence to deliver personalized, responsive, and effective assistance to AWS users. Whether they need quick solutions to technical queries or strategic advice to optimize their AWS deployments, AWS Sage stands ready to assist, ensuring a seamless experience in the AWS ecosystem.
- **Developed by:** David Lopez Oñate https://www.kinqo.com
- **License:** Apache 2.0
- **Finetuned from model:** tiiuae/falcon-7b
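Because this repository is a PEFT adapter on top of `tiiuae/falcon-7b`, loading it follows the standard PEFT pattern (a sketch, not code from this card):
```python
# Sketch: load the base Falcon-7B model and apply this PEFT adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b")
model = PeftModel.from_pretrained(base, "dawveed/AWS-Sage")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
```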
## Uses
AWS Sage is a language model designed to assist users with inquiries related to Amazon Web Services (AWS) support. The model can be utilized in various scenarios, including:
- **Technical Support:** Users can rely on AWS Sage to obtain assistance with technical issues encountered while using AWS services, including troubleshooting, debugging, and resolving configuration errors.
- **Service Guidance:** AWS Sage can provide guidance on the selection, deployment, and optimization of AWS services, helping users make informed decisions to meet their specific business requirements.
- **Billing and Account Management:** Users can seek clarification on billing inquiries, account management procedures, and guidance on optimizing costs within the AWS environment.
- **Support Plan Information:** AWS Sage can provide information on available AWS support plans, including features, benefits, and eligibility criteria, assisting users in selecting the most appropriate support plan for their needs.
- **Best Practices and Recommendations:** Users can leverage AWS Sage to access best practices, recommendations, and guidelines for optimizing their AWS infrastructure, enhancing performance, security, and reliability.
- **Policy and Compliance Assistance:** AWS Sage can offer guidance on AWS policies, compliance requirements, and security best practices, helping users ensure adherence to industry standards and regulatory frameworks.
- **Resource Documentation:** Users can access documentation, FAQs, and resources related to AWS services and support offerings through AWS Sage, facilitating self-service support and learning.
- **Training and Education:** AWS Sage can serve as a learning resource for users seeking to expand their knowledge of AWS services, support processes, and best practices through interactive Q&A sessions and educational content.
## Bias, Risks, and Limitations
- **Bias in Training Data:** The model may exhibit biases present in the training data, which could result in skewed or unfair responses to user inquiries, particularly if the data is not sufficiently diverse or representative.
- **Technical Limitations:** Despite its advanced capabilities, AWS Sage may face limitations in understanding complex or nuanced language, potentially leading to incomplete or inaccurate responses.
- **Dependency on Training Data Quality:** The model's effectiveness relies heavily on the quality and relevance of its training data; inaccurate or outdated data may undermine its ability to provide accurate and helpful support.
- **Risk of Misinterpretation:** AWS Sage may misinterpret the intent or context of user inquiries, especially ambiguous or colloquial language, which could result in incorrect or misleading responses.
- **Lack of Emotional Intelligence:** Unlike human support agents, AWS Sage may be unable to empathize with users or pick up on subtle emotional cues, potentially leading to impersonal interactions or dissatisfaction among users seeking emotional support.
- **Privacy Concerns:** User inquiries may contain sensitive or confidential information, raising data privacy and security concerns, especially if proper safeguards are not in place to protect user data.
- **Limited Domain Expertise:** While knowledgeable about AWS support topics, AWS Sage may lack expertise in certain specialized areas or industries, limiting its ability to provide comprehensive support in those domains.
- **Overreliance on Automation:** Users may become overly reliant on AWS Sage for support, overlooking the value of human interaction or alternative support channels.
- **Inability to Handle Unforeseen Scenarios:** AWS Sage may struggle with novel or unforeseen support scenarios not covered in its training data, potentially leading to inadequate responses in rapidly evolving situations.
- **Technical Failures or Errors:** Like any AI system, AWS Sage is susceptible to failures, errors, or malfunctions that could disrupt service delivery; regular monitoring and maintenance are essential to mitigate these risks.
|