modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
AIFT/AIFT-ko-orca-plat-Yi-ko-6b-v1.2-dpo-2 | AIFT | 2024-01-25T07:54:44Z | 62 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "license:cc-by-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-25T07:17:06Z |
---
license: cc-by-sa-4.0
---
<h1>orca-platypus - instruct-dpo-2 model v1.2</h1>
<b><Training data construction></b>
Used the KOR-OpenOrca-Platypus data released by kyujinpy, after partial deletion (sampling) and cleaning.
We then reviewed that data to extract the related tasks and, based on them, built our own training data with GPT using open-source NLP data, covering history, science, math, machine reading comprehension, and review analysis problems.
Additional training data was built from the AIHub common-sense and machine reading comprehension datasets (morphology, machine reading comprehension, and summarization).
History and common-sense quizzes from various blogs were manually converted into training-data format.
Following the AI2AI Challenge data format, about 500 elementary-level science and math problems were created with GPT.
English translation data (English-Korean / Korean-English) was also used as training data.
In total, about 40,000 samples were used.
<br>
<DPO training data>
The DPO data focuses on CommonGen and TruthfulQA; about 17,000 samples were used for training.
+ Additionally trained on data in which the chosen responses of the ko-hh-rlhf dataset were rewritten with ChatGPT.
<br>
+ Added TruthfulQA-related questions (true/false questions about common misconceptions).
+ Machine reading comprehension training data with answers obtained via ChatGPT.
+ Grammar-related training data.
<br>
### The training data files are not publicly released.
<br>
<b><Training></b>
Training was performed with LoRA on two A100 40G GPUs.
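The training code itself is not published. As a rough illustration only, a LoRA setup along the lines described above might look like the following with peft; the base model name and every hyperparameter here are assumptions, not the author's actual configuration.

```python
# Hypothetical LoRA setup sketch; illustrative only, not the released training code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "beomi/Yi-Ko-6B"  # assumed Yi-Ko 6B base; the actual base checkpoint is not stated here
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_model)

lora_config = LoraConfig(
    r=16,                  # rank (assumed)
    lora_alpha=32,         # scaling (assumed)
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # typical Llama-style attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```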
|
SanjiWatsuki/TinyLlamaHerd-2x1.1B | SanjiWatsuki | 2024-01-25T07:47:36Z | 80 | 1 | transformers | ["transformers", "safetensors", "mixtral", "text-generation", "merge", "mergekit", "lazymergekit", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-16T02:00:53Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
---
# TinyLlamaHerd-2x1.1B
TinyLlamaHerd-2x1.1B is a merge of the models listed in the configuration below, created using [mergekit](https://github.com/cg123/mergekit):
## 🧩 Configuration
```yaml
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
gate_mode: random
dtype: bfloat16
experts:
- source_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
positive_prompts: [""]
- source_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
positive_prompts: [""]
```
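The card does not include usage code; a plain transformers loading sketch should work for this Mixtral-style merge, with the prompt and generation settings below being illustrative.

```python
# Illustrative usage sketch: load the merged MoE model with transformers and sample a continuation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SanjiWatsuki/TinyLlamaHerd-2x1.1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("The TinyLlama herd is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```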
|
pfnet/plamo-13b-instruct-nc | pfnet | 2024-01-25T07:46:45Z | 76 | 3 | transformers | ["transformers", "safetensors", "plamo", "text-generation", "conversational", "custom_code", "en", "ja", "arxiv:2302.13971", "license:cc-by-nc-4.0", "autotrain_compatible", "region:us"] | text-generation | 2023-10-26T05:36:25Z |
---
language:
- en
- ja
license: cc-by-nc-4.0
library_name: transformers
pipeline_tag: text-generation
---
# PLaMo-13B-Instruct-NC
## Model Description
PLaMo-13B-Instruct-NC is a noncommercial instruct fine-tuned model built upon the 8192 context length version of [PLaMo-13B](https://huggingface.co/pfnet/plamo-13b) text generation model. PLaMo-13B-Instruct-NC is fine-tuned using multiple publicly available Japanese datasets.
This model is released under CC-BY-NC-4.0.
[PLaMo-13B-Instruct Release blog (Japanese)](https://tech.preferred.jp/ja/blog/llm-plamo-instruct/)
## Usage
Install the required libraries as follows:
```sh
python -m pip install numpy safetensors sentencepiece torch transformers accelerate
```
Execute the following python code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained(
"pfnet/plamo-13b-instruct-nc",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"pfnet/plamo-13b-instruct-nc",
trust_remote_code=True,
torch_dtype=torch.bfloat16,
device_map="auto",
)
```
```python
def completion(prompt: str, max_new_tokens: int = 128) -> str:
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
    generated_ids = model.generate(
        inputs.input_ids,
        eos_token_id=2,
        pad_token_id=3,
        max_new_tokens=max_new_tokens,
        temperature=1,
        top_p=0.95,
        top_k=50,
        do_sample=True,
    )
    return tokenizer.decode(generated_ids[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)

def generate_prompt(messages: list) -> str:
    sep = "\n\n### "
    prompt = [
        "以下はタスクを説明する指示で、文脈を説明した入力とペアになっています。",
        "要求を適切に補完するよう応答を書いてください。",
    ]
    roles = {"instruction": "指示", "response": "応答", "input": "入力"}
    for msg in messages:
        prompt.append(sep + roles[msg["role"]] + ":\n" + msg["content"])
    prompt.append(sep + roles["response"] + ":\n")
    return "".join(prompt)
```
```python
prompt = generate_prompt([
    {"role": "instruction", "content": "日本の首都はどこですか？"},
    # {"role": "input", "content": "..."} ## An extra input (optional)
])
print(completion(prompt, max_new_tokens=128))
```
## Model Details
- Model size: 13B
- Trained tokens: 1.5T tokens (English: 1.32T tokens, Japanese: 0.18T tokens)
- Tokenizer: sentencepiece tokenizer trained on a subset of the pretraining datasets.
- Context length: 8192
- Developed by: Preferred Networks, Inc
- Model type: Causal decoder-only
- Language(s): Japanese and English
- License: CC-BY-NC-4.0
## Training Dataset
- [Stanford Alpaca (Japanese translation)](https://huggingface.co/datasets/fujiki/japanese_alpaca_data)
- [databricks-dolly-15k (Japanese translation)](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja)
- [Anthropic HH-RLHF (Japanese translation, subset)](https://huggingface.co/datasets/fujiki/japanese_hh-rlhf-49k)
- [OpenAssistant Conversations Dataset (Japanese translation, oasst1)](https://huggingface.co/datasets/kunishou/oasst1-89k-ja)
- [Wikinews subset of Izumi-lab llm-japanese-dataset](https://huggingface.co/datasets/izumi-lab/llm-japanese-dataset)
For the pretraining model, see [PLaMo-13B](https://huggingface.co/pfnet/plamo-13b).
## Bias, Risks, and Limitations
PLaMo-13B-Instruct-NC is a new technology that carries risks with use. Testing conducted to date has been in English and Japanese, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, PLaMo-13B-Instruct-NC's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of PLaMo-13B-Instruct-NC, developers should perform safety testing and tuning tailored to their specific applications of the model.
## How to cite
```tex
@online{PLaMoInstructNC2023Introducing,
author = {Preferred Networks, Inc},
title = {PLaMo-13B-Instruct-NC},
year = {2023},
url = {https://huggingface.co/pfnet/plamo-13b-instruct-nc},
urldate = {2023-10-26}
}
```
## References
```tex
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
```tex
@article{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
|
pfnet/plamo-13b-instruct | pfnet | 2024-01-25T07:46:09Z | 117 | 14 | transformers | ["transformers", "safetensors", "plamo", "text-generation", "conversational", "custom_code", "en", "ja", "arxiv:2302.13971", "license:apache-2.0", "autotrain_compatible", "region:us"] | text-generation | 2023-10-26T02:11:24Z |
---
language:
- en
- ja
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
---
# PLaMo-13B-Instruct
## Model Description
PLaMo-13B-Instruct is an instruct fine-tuned model built upon the 8192 context length version of [PLaMo-13B](https://huggingface.co/pfnet/plamo-13b) text generation model. PLaMo-13B-Instruct is fine-tuned using multiple publicly available Japanese datasets.
This model is released under the Apache License 2.0.
[PLaMo-13B-Instruct Release blog (Japanese)](https://tech.preferred.jp/ja/blog/llm-plamo-instruct/)
## Usage
Install the required libraries as follows:
```sh
python -m pip install numpy sentencepiece torch transformers accelerate
```
Execute the following python code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained(
"pfnet/plamo-13b-instruct",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"pfnet/plamo-13b-instruct",
trust_remote_code=True,
torch_dtype=torch.bfloat16,
device_map="auto",
)
```
```python
def completion(prompt: str, max_new_tokens: int = 128) -> str:
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
    generated_ids = model.generate(
        inputs.input_ids,
        eos_token_id=2,
        pad_token_id=3,
        max_new_tokens=max_new_tokens,
        temperature=1,
        top_p=0.95,
        top_k=50,
        do_sample=True,
    )
    return tokenizer.decode(generated_ids[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)

def generate_prompt(messages: list) -> str:
    sep = "\n\n### "
    prompt = [
        "以下はタスクを説明する指示で、文脈を説明した入力とペアになっています。",
        "要求を適切に補完するよう応答を書いてください。",
    ]
    roles = {"instruction": "指示", "response": "応答", "input": "入力"}
    for msg in messages:
        prompt.append(sep + roles[msg["role"]] + ":\n" + msg["content"])
    prompt.append(sep + roles["response"] + ":\n")
    return "".join(prompt)
```
```python
prompt = generate_prompt([
    {"role": "instruction", "content": "日本の首都はどこですか？"},
    # {"role": "input", "content": "..."} ## An extra input (optional)
])
print(completion(prompt, max_new_tokens=128))
```
## Model Details
- Model size: 13B
- Trained tokens: 1.5T tokens (English: 1.32T tokens, Japanese: 0.18T tokens)
- Tokenizer: sentencepiece tokenizer trained on a subset of the pretraining datasets.
- Context length: 8192
- Developed by: Preferred Networks, Inc
- Model type: Causal decoder-only
- Language(s): Japanese and English
- License: Apache License 2.0
## Training Dataset
<!-- - [Stanford Alpaca (Japanese translation)](https://huggingface.co/datasets/fujiki/japanese_alpaca_data)-->
- [databricks-dolly-15k (Japanese translation)](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja)
- [Anthropic HH-RLHF (Japanese translation, subset)](https://huggingface.co/datasets/fujiki/japanese_hh-rlhf-49k)
- [OpenAssistant Conversations Dataset (Japanese translation, oasst1)](https://huggingface.co/datasets/kunishou/oasst1-89k-ja)
- [Wikinews subset of Izumi-lab llm-japanese-dataset](https://huggingface.co/datasets/izumi-lab/llm-japanese-dataset)
For the pretraining model, see [PLaMo-13B](https://huggingface.co/pfnet/plamo-13b).
## Bias, Risks, and Limitations
PLaMo-13B-Instruct is a new technology that carries risks with use. Testing conducted to date has been in English and Japanese, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, PLaMo-13B-Instruct's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of PLaMo-13B-Instruct, developers should perform safety testing and tuning tailored to their specific applications of the model.
## How to cite
```tex
@online{PLaMoInstruct2023Introducing,
author = {Preferred Networks, Inc},
title = {PLaMo-13B-Instruct},
year = {2023},
url = {https://huggingface.co/pfnet/plamo-13b-instruct},
urldate = {2023-10-26}
}
```
## References
```tex
@article{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
|
antonovata/test_chat | antonovata | 2024-01-25T07:41:51Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2024-01-24T11:10:59Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
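For reference, the same settings can be expressed as a transformers `BitsAndBytesConfig` when reloading a base model the way it was quantized during training; this is a sketch, and the base model itself is not named in this card.

```python
# Sketch: the quantization config above, rebuilt as a transformers BitsAndBytesConfig.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)
# Then: AutoModelForCausalLM.from_pretrained(<base_model>, quantization_config=bnb_config)
```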
### Framework versions
- PEFT 0.5.0
|
motherduckdb/DuckDB-NSQL-7B-v0.1-GGUF | motherduckdb | 2024-01-25T07:37:17Z | 232 | 21 | null | ["gguf", "license:llama2", "endpoints_compatible", "region:us"] | null | 2024-01-24T19:31:51Z |
---
license: llama2
inference:
parameters:
do_sample: false
max_length: 200
widget:
- text: "### Instruction:\nYour task is to generate valid duckdb SQL to answer the following question.\n\n### Input:\n\n### Question:\ncreate a new table called tmp from test.csv\n\n### Response (use duckdb shorthand if possible):"
example_title: "read test.csv"
- text: "### Instruction:\nYour task is to generate valid duckdb SQL to answer the following question.\n\n### Input:\n\n### Question:\ncreate a new table called tmp from test.csv\n\n### Response (use duckdb shorthand if possible):"
example_title: "get _amount columns"
- text: "### Instruction:\nYour task is to generate valid duckdb SQL to answer the following question, given a duckdb database schema.\n\n### Input:\nHere is the database schema that the SQL query will run on:\nCREATE TABLE rideshare (\n hvfhs_license_num varchar,\n dispatching_base_num varchar,\n originating_base_num varchar,\n request_datetime timestamp,\n on_scene_datetime timestamp,\n pickup_datetime timestamp,\n dropoff_datetime timestamp,\n trip_miles double,\n trip_time bigint,\n\n);\n\n### Question:\nget longest trip in december 2022\n\n### Response (use duckdb shorthand if possible):"
example_title: "taxi trips"
---
# DuckDB-NSQL-7B (GGUF)
The repository includes model files in the GGUF format for [DuckDB-NSQL-7B-v0.1](https://huggingface.co/motherduckdb/DuckDB-NSQL-7B-v0.1), featuring both the f16 and Q8_0 versions.
## Provided model files
| Name | Quant method | Bits |
| ---- | ---- | ---- |
| [DuckDB-NSQL-7B-v0.1-f16.gguf](https://huggingface.co/motherduckdb/DuckDB-NSQL-7B-v0.1-GGUF/blob/main/DuckDB-NSQL-7B-v0.1-f16.gguf) | - | 16 |
| [DuckDB-NSQL-7B-v0.1-q8_0.gguf](https://huggingface.co/motherduckdb/DuckDB-NSQL-7B-v0.1-GGUF/blob/main/DuckDB-NSQL-7B-v0.1-q8_0.gguf) | Q8_0 | 8 |
## Model Description
NSQL is a family of autoregressive open-source large foundation models (FMs) designed specifically for SQL generation tasks.
In this repository we are introducing a new member of NSQL, DuckDB-NSQL. It's based on Meta's original [Llama-2 7B model](https://huggingface.co/meta-llama/Llama-2-7b) and further pre-trained on a dataset of general SQL queries and then fine-tuned on a dataset composed of DuckDB text-to-SQL pairs.
## Training Data
200k DuckDB text-to-SQL pairs, synthetically generated using [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1), guided by the DuckDB v0.9.2 documentation. And text-to-SQL pairs from [NSText2SQL](https://huggingface.co/datasets/NumbersStation/NSText2SQL) that were transpiled to DuckDB SQL using [sqlglot](https://github.com/tobymao/sqlglot).
## Evaluation Data
We evaluate our models on a DuckDB-specific benchmark that contains 75 text-to-SQL pairs. The benchmark is available [here](https://github.com/NumbersStationAI/DuckDB-NSQL/).
## Training Procedure
DuckDB-NSQL was trained using cross-entropy loss to maximize the likelihood of sequential inputs. For finetuning on text-to-SQL pairs, we only compute the loss over the SQL portion of the pair. The model is trained using 80GB A100s, leveraging data and model parallelism. We fine-tuned for 10 epochs.
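For intuition, computing the loss only over the SQL portion is commonly done by masking the prompt tokens with `-100` in the labels. The sketch below illustrates that idea and is not the actual training code; the tokenizer repo name is an assumption.

```python
# Sketch: mask prompt tokens so cross-entropy is computed only over the SQL completion.
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("motherduckdb/DuckDB-NSQL-7B-v0.1")  # assumed source repo

def build_example(prompt: str, sql: str) -> dict:
    prompt_ids = tokenizer(prompt, add_special_tokens=False).input_ids
    sql_ids = tokenizer(sql, add_special_tokens=False).input_ids
    input_ids = prompt_ids + sql_ids
    # -100 is ignored by PyTorch's cross-entropy, so only the SQL tokens contribute to the loss.
    labels = [-100] * len(prompt_ids) + sql_ids
    return {"input_ids": torch.tensor(input_ids), "labels": torch.tensor(labels)}
```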
## Intended Use and Limitations
The model was designed for text-to-SQL generation given a table schema and a natural-language prompt. It works best with the prompt format defined below.
In contrast to existing text-to-SQL models, generation is not constrained to `SELECT` statements; the model can generate any valid DuckDB SQL statement, including statements for official DuckDB extensions.
## How to Use
Setup llama.cpp:
```shell
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
huggingface-cli download motherduckdb/DuckDB-NSQL-7B-v0.1-GGUF DuckDB-NSQL-7B-v0.1-q8_0.gguf --local-dir . --local-dir-use-symlinks False
pip install wurlitzer
```
Example 1:
```python
## Setup - Llama.cpp
from llama_cpp import Llama
from wurlitzer import pipes  # capture/suppress llama.cpp's C-level console output

with pipes() as (out, err):
    llama = Llama(
        model_path="DuckDB-NSQL-7B-v0.1-q8_0.gguf",
        n_ctx=2048,
    )

text = """### Instruction:
Your task is to generate valid duckdb SQL to answer the following question.

### Input:

### Question:
create a new table called tmp from test.csv

### Response (use duckdb shorthand if possible):
"""

with pipes() as (out, err):
    pred = llama(text, temperature=0.1, max_tokens=500)

print(pred["choices"][0]["text"])
```
Example 2:
```python
from llama_cpp import Llama
from wurlitzer import pipes  # capture/suppress llama.cpp's C-level console output

with pipes() as (out, err):
    llama = Llama(
        model_path="DuckDB-NSQL-7B-v0.1-q8_0.gguf",
        n_ctx=2048,
    )

text = """### Instruction:
Your task is to generate valid duckdb SQL to answer the following question, given a duckdb database schema.

### Input:
Here is the database schema that the SQL query will run on:
CREATE TABLE taxi (
    VendorID bigint,
    tpep_pickup_datetime timestamp,
    tpep_dropoff_datetime timestamp,
    passenger_count double,
    trip_distance double,
    fare_amount double,
    extra double,
    tip_amount double,
    tolls_amount double,
    improvement_surcharge double,
    total_amount double,
);

### Question:
get all columns ending with _amount from taxi table

### Response (use duckdb shorthand if possible):"""

with pipes() as (out, err):
    pred = llama(text, temperature=0.1, max_tokens=500)

print(pred["choices"][0]["text"])
```
Example 3:
```python
from llama_cpp import Llama
from wurlitzer import pipes  # capture/suppress llama.cpp's C-level console output

with pipes() as (out, err):
    llama = Llama(
        model_path="DuckDB-NSQL-7B-v0.1-q8_0.gguf",
        n_ctx=2048,
    )

text = """### Instruction:
Your task is to generate valid duckdb SQL to answer the following question, given a duckdb database schema.

### Input:
Here is the database schema that the SQL query will run on:
CREATE TABLE rideshare (
    hvfhs_license_num varchar,
    dispatching_base_num varchar,
    originating_base_num varchar,
    request_datetime timestamp,
    on_scene_datetime timestamp,
    pickup_datetime timestamp,
    dropoff_datetime timestamp,
    trip_miles double,
    trip_time bigint,
);

### Question:
get longest trip in december 2022

### Response (use duckdb shorthand if possible):
"""

with pipes() as (out, err):
    pred = llama(text, temperature=0.1, max_tokens=500)

print(pred["choices"][0]["text"])
```
For more information (e.g., run with your local database), please find examples in [this repository](https://github.com/NumbersStationAI/DuckDB-NSQL).
|
eigh8skillz/southzulumodel | eigh8skillz | 2024-01-25T07:35:17Z | 1 | 0 | diffusers | ["diffusers", "text-to-image", "autotrain", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "region:us"] | text-to-image | 2024-01-25T07:35:16Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a sks southzulumodel
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
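A hedged inference sketch with diffusers follows, assuming this repo stores the usual AutoTrain DreamBooth LoRA weights on top of the SDXL base listed above; the step count and output path are illustrative.

```python
# Illustrative sketch: apply this repo's DreamBooth LoRA to the SDXL base (assumed repo layout).
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("eigh8skillz/southzulumodel")  # assumes LoRA weights live in this repo

image = pipe(prompt="photo of a sks southzulumodel", num_inference_steps=30).images[0]
image.save("southzulumodel.png")
```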
|
uukuguy/speechless-zephyr-code-functionary-7b | uukuguy | 2024-01-25T07:15:51Z | 4,400 | 2 | transformers | ["transformers", "safetensors", "gguf", "mistral", "text-generation", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-23T08:19:26Z |
---
language:
- en
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
---
<p><h1> speechless-zephyr-code-functionary-7b </h1></p>
[4,5,8-bit GGUF models for CPU+GPU inference](https://huggingface.co/uukuguy/speechless-zephyr-code-functionary-7b/tree/main/GGUF)
This model is one of the molora (Mixture-of-Multi-LoRAs) experiments.
LoRA modules are extracted from the models below (all based on Mistral-7B-v0.1); each LoRA module has its own unique skills. Using multi-loras, they can be combined statically or dynamically to form a versatile new model.
- HuggingFaceH4/zephyr-7b-beta (Uncensored Model)
- meetkai/functionary-small-v2.2 (Execute functions/plugins)
- uukuguy/speechless-code-mistral-7b-v1.0 (Enhance Coding)
The entire process is completed through the use of extract-lora, merge-lora, and lora-hub provided by multi-loras.
The mixture-of-multi-loras router enables automatic assembly of LoRA modules, using a gradient-free approach to obtain the LoRA module coefficients and requiring only a handful of inference steps for unseen tasks.
Code: https://github.com/uukuguy/multi_loras
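As a rough illustration of the extraction step only (this is not the multi_loras implementation), a LoRA pair can be recovered from a weight delta with a truncated SVD; the rank here is an arbitrary assumption.

```python
# Hypothetical sketch: approximate a weight delta (finetuned - base) with a rank-r LoRA pair B @ A.
import torch

def extract_lora(delta: torch.Tensor, rank: int = 16):
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    sqrt_s = S[:rank].sqrt()
    B = U[:, :rank] * sqrt_s            # (out_features, rank)
    A = sqrt_s[:, None] * Vh[:rank, :]  # (rank, in_features)
    return A, B  # base_weight + B @ A approximates the finetuned weight
```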
## LM-Evaluation-Harness
[Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
| Metric | Value |
| --- | --- |
| ARC | 61.52 |
| HellaSwag | 83.88 |
| MMLU | 64.71 |
| TruthfulQA | 44.99 |
| Winogrande | 78.69 |
| GSM8K | 43.82 |
| Average | 62.93 |
|
katzenbach/test_trainer | katzenbach | 2024-01-25T07:11:58Z | 92 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-01-25T06:27:57Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5138
- Accuracy: 0.34
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
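For readers reconstructing the run, these values map onto transformers `TrainingArguments` roughly as below; the output directory is an assumption.

```python
# Sketch: the hyperparameters above expressed as a transformers TrainingArguments object.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="test_trainer",      # assumed
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```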
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 13 | 1.5917 | 0.22 |
| No log | 2.0 | 26 | 1.5629 | 0.25 |
| No log | 3.0 | 39 | 1.5138 | 0.34 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
abhinand/malayalam-llama-7b-instruct-v0.1-GGUF | abhinand | 2024-01-25T07:11:35Z | 6 | 1 | null | ["gguf", "en", "ml", "arxiv:2311.05845", "base_model:abhinand/malayalam-llama-pt-v0", "base_model:quantized:abhinand/malayalam-llama-pt-v0", "license:llama2", "endpoints_compatible", "region:us", "conversational"] | null | 2024-01-25T04:29:16Z |
---
base_model: abhinand/malayalam-llama-pt-v0
model-index:
- name: malayalam-llama-instruct-v0.1
results: []
license: llama2
language:
- en
- ml
---
# Malayalam LLaMA 7B Instruct v0.1 [GGUF Quantized]
Welcome to the inaugural release of the Malayalam LLaMA 7B instruct model, an important step in advancing LLMs for the Malayalam language. This model is ready for immediate inference and is also primed for further fine-tuning to cater to your specific NLP tasks.
To dive deep into the development and capabilities of this model, please read the [research paper](https://arxiv.org/abs/2311.05845) and the [introductory blog post (WIP)]() that outlines our journey and the model's potential impact.
> **Note:** This model is based on the Tamil LLaMA series of models. The GitHub repository remains the same - [https://github.com/abhinand5/tamil-llama](https://github.com/abhinand5/tamil-llama). The base models and the updated code for Tamil LLaMA v0.2 (which this work is based on) will be released soon.
If you appreciate this work and would like to support its continued development, consider [buying me a coffee](https://www.buymeacoffee.com/abhinand.b). Your support is invaluable and greatly appreciated.
[](https://www.buymeacoffee.com/abhinand.b)
## Demo:
To access an easy-to-use, no-code demo, please open the provided Google Colab notebook. Complete instructions for usage are included within the notebook itself.
<a href="https://colab.research.google.com/drive/11_RHZim_HubD2NskxSwq4X_NW4XlnrtS?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Demo In Colab"></a>
## Model description
The Malayalam LLaMA models have been enhanced and tailored specifically with an extensive Malayalam vocabulary of ~16,000 tokens, building upon the foundation set by the original LLaMA-2.
- **Model type:** A 7B parameter GPT-like model finetuned on ~500,000 samples consisting of an equal proportion of English and Malayalam samples. (Dataset will be released soon)
- **Language(s):** Bilingual. English and Malayalam.
- **License:** GNU General Public License v3.0
- **Finetuned from model:** [To be released soon]()
- **Training Precision:** `bfloat16`
- **Code:** [GitHub](https://github.com/abhinand5/tamil-llama) (To be updated soon)
## Quantization Info
This repo contains GGUF format model files for [Malayalam LLaMA 7B Instruct v0.1](https://huggingface.co/abhinand/malayalam-llama-7b-instruct-v0.1).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
### Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
|------------------------------------------------------------------------------------------------------------------------------------------------|--------------|------|---------|------------------|----------------------------------------------------------|
| [malayalam-llama-7b-instruct-v0.1.Q4_K_M.gguf](https://huggingface.co/abhinand/malayalam-llama-7b-instruct-v0.1-GGUF/blob/main/malayalam-llama-7b-instruct-v0.1.Q4_K_M.gguf) | Q4_K_M | 4 | 4.18 GB | 6.8 GB | medium, balanced quality - recommended |
| [malayalam-llama-7b-instruct-v0.1.Q5_K_M.gguf](https://huggingface.co/abhinand/malayalam-llama-7b-instruct-v0.1-GGUF/blob/main/malayalam-llama-7b-instruct-v0.1.Q5_K_M.gguf) | Q5_K_M | 5 | 4.89 GB | 7.5 GB | large, very low quality loss - recommended |
## Prompt Template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
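As a hedged example, the template can be filled in and run with llama-cpp-python (one of the GGUF clients listed above); the file name matches the Q4_K_M entry in the table, while the system message, question, and sampling settings are illustrative.

```python
# Illustrative sketch: run the Q4_K_M GGUF with llama-cpp-python using the ChatML template above.
from llama_cpp import Llama

llm = Llama(model_path="malayalam-llama-7b-instruct-v0.1.Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Introduce yourself in Malayalam.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=256, stop=["<|im_end|>"], temperature=0.7)
print(out["choices"][0]["text"])
```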
## Benchmark Results
Benchmarking was done using [LLM-Autoeval](https://github.com/mlabonne/llm-autoeval) on an RTX 3090 on [runpod](https://www.runpod.io/).
> **Note:** Please note that discrepancies have been observed between the Open LLM Leaderboard scores and those obtained from local runs using the LM Eval Harness with identical configurations. The results mentioned here are based on our own benchmarking. To replicate these findings, you can utilize the LLM-Autoeval or use [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) locally with the configurations described in Open LLM Leaderboard's About page.
| Benchmark | Llama 2 Chat | Tamil Llama v0.2 Instruct | Telugu Llama Instruct | Malayalam Llama Instruct |
|---------------|--------------|---------------------------|-----------------------|--------------------------|
| ARC Challenge (25-shot) | 52.9 | **53.75** | 52.47 | 52.82 |
| TruthfulQA (0-shot) | 45.57 | 47.23 | **48.47** | 47.46 |
| Hellaswag (10-shot) | **78.55** | 76.11 | 76.13 | 76.91 |
| Winogrande (5-shot) | 71.74 | **73.95** | 71.74 | 73.16 |
| AGI Eval (0-shot) | 29.3 | **30.95** | 28.44 | 29.6 |
| BigBench (0-shot) | 32.6 | 33.08 | 32.99 | **33.26** |
| Average | 51.78 | **52.51** | 51.71 | 52.2 |
## Related Models
| Model | Type | Data | Base Model | # Params | Download Links |
|--------------------------|-----------------------------|-------------------|----------------------|------|------------------------------------------------------------------------|
| Tamil LLaMA 7B v0.1 Base | Base model | 12GB | LLaMA 7B | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-base-v0.1) |
| Tamil LLaMA 13B v0.1 Base | Base model | 4GB | LLaMA 13B | 13B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-13b-base-v0.1) |
| Tamil LLaMA 7B v0.1 Instruct | Instruction following model | 145k instructions | Tamil LLaMA 7B Base | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-instruct-v0.1) |
| Tamil LLaMA 13B v0.1 Instruct | Instruction following model | 145k instructions | Tamil LLaMA 13B Base | 13B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-13b-instruct-v0.1) |
| Tamil LLaMA 7B v0.2 Instruct | Instruction/Chat model | 420k instructions | Tamil LLaMA 7B Base v0.2 | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-13b-instruct-v0.1) |
| Telugu LLaMA 7B v0.2 Instruct | Instruction/Chat model | ~400k instructions | Telugu LLaMA 7B Base v0.1 | 7B | [HF Hub](https://huggingface.co/abhinand/telugu-llama-instruct-v0.1) |
## Usage Note
It's important to note that the models have not undergone detoxification/censorship. Therefore, while they possess impressive linguistic capabilities, there is a possibility for them to generate content that could be deemed harmful or offensive. We urge users to exercise discretion and supervise the model's outputs closely, especially in public or sensitive applications.
## Meet the Developers
Get to know the creators behind this innovative model and follow their contributions to the field:
- [Abhinand Balachandran](https://www.linkedin.com/in/abhinand-05/)
## Citation
If you use this model or any of the Tamil-Llama related work in your research, please cite:
```bibtex
@misc{balachandran2023tamilllama,
title={Tamil-Llama: A New Tamil Language Model Based on Llama 2},
author={Abhinand Balachandran},
year={2023},
eprint={2311.05845},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
We hope this model serves as a valuable tool in your NLP toolkit and look forward to seeing the advancements it will enable in the understanding and generation of the Tamil language.
|
Deepakkori45/Mistal_aspect_class_sequnece | Deepakkori45 | 2024-01-25T07:11:08Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-01-25T07:11:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
smend0/distilbert-base-uncased-lora-text-classification | smend0 | 2024-01-25T07:10:56Z | 0 | 0 | null | ["tensorboard", "safetensors", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "region:us"] | null | 2024-01-25T07:06:51Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-lora-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-lora-text-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on a truncated IMDB dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7208
- Accuracy: {'accuracy': 0.876}
## Model description
The purpose of this model is to turn distilbert into a sentiment classification model.
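A minimal loading sketch with peft follows, assuming this repo holds a LoRA adapter for a two-label sequence-classification head; the label order is an assumption.

```python
# Sketch: attach this repo's LoRA adapter to the distilbert base for sentiment classification.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)
model = PeftModel.from_pretrained(base, "smend0/distilbert-base-uncased-lora-text-classification")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

inputs = tokenizer("This movie was surprisingly good.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # assumed label order: [negative, positive]
```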
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|
| No log | 1.0 | 250 | 2.0890 | {'accuracy': 0.862} |
| 0.2005 | 2.0 | 500 | 1.8919 | {'accuracy': 0.874} |
| 0.2005 | 3.0 | 750 | 1.7205 | {'accuracy': 0.871} |
| 0.0963 | 4.0 | 1000 | 1.7208 | {'accuracy': 0.876} |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.1
|
dana11235/poca-SoccerTwos | dana11235 | 2024-01-25T07:07:57Z | 23 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us"] | reinforcement-learning | 2024-01-25T07:07:50Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: dana11235/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
TheBloke/Everyone-Coder-33B-Base-GPTQ | TheBloke | 2024-01-25T07:01:23Z | 14 | 3 | transformers | ["transformers", "safetensors", "llama", "text-generation", "merge", "conversational", "base_model:rombodawg/Everyone-Coder-33b-Base", "base_model:quantized:rombodawg/Everyone-Coder-33b-Base", "license:other", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us"] | text-generation | 2024-01-25T04:04:35Z |
---
base_model: rombodawg/Everyone-Coder-33b-Base
inference: false
license: other
license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL
license_name: deepseek
model_creator: rombo dawg
model_name: Everyone Coder 33B Base
model_type: deepseek
prompt_template: '{prompt}
'
quantized_by: TheBloke
tags:
- merge
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Everyone Coder 33B Base - GPTQ
- Model creator: [rombo dawg](https://huggingface.co/rombodawg)
- Original model: [Everyone Coder 33B Base](https://huggingface.co/rombodawg/Everyone-Coder-33b-Base)
<!-- description start -->
# Description
This repo contains GPTQ model files for [rombo dawg's Everyone Coder 33B Base](https://huggingface.co/rombodawg/Everyone-Coder-33b-Base).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Everyone-Coder-33B-Base-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Everyone-Coder-33B-Base-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Everyone-Coder-33B-Base-GGUF)
* [rombo dawg's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/rombodawg/Everyone-Coder-33b-Base)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Everyone-Coder-33B-Base-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1/viewer/) | 8192 | 17.40 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Everyone-Coder-33B-Base-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1/viewer/) | 8192 | 18.03 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Everyone-Coder-33B-Base-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1/viewer/) | 8192 | 19.96 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Everyone-Coder-33B-Base-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1/viewer/) | 8192 | 13.89 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Everyone-Coder-33B-Base-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1/viewer/) | 8192 | 33.84 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-3bit-32g-actorder_True](https://huggingface.co/TheBloke/Everyone-Coder-33B-Base-GPTQ/tree/gptq-3bit-32g-actorder_True) | 3 | 32 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1/viewer/) | 8192 | 15.72 GB | No | 3-bit, with group size 32g and act-order. Highest quality 3-bit option. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Everyone-Coder-33B-Base-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1/viewer/) | 8192 | 34.60 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/Everyone-Coder-33B-Base-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Everyone-Coder-33B-Base-GPTQ:gptq-4bit-128g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `Everyone-Coder-33B-Base-GPTQ`:
```shell
mkdir Everyone-Coder-33B-Base-GPTQ
huggingface-cli download TheBloke/Everyone-Coder-33B-Base-GPTQ --local-dir Everyone-Coder-33B-Base-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Everyone-Coder-33B-Base-GPTQ
huggingface-cli download TheBloke/Everyone-Coder-33B-Base-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir Everyone-Coder-33B-Base-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir Everyone-Coder-33B-Base-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Everyone-Coder-33B-Base-GPTQ --local-dir Everyone-Coder-33B-Base-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/Everyone-Coder-33B-Base-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Everyone-Coder-33B-Base-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Everyone-Coder-33B-Base-GPTQ:gptq-4bit-128g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Everyone-Coder-33B-Base-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Everyone-Coder-33B-Base-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(
prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Everyone-Coder-33B-Base-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''{prompt}
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, ้ฟๆ, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjรคreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: rombo dawg's Everyone Coder 33B Base
Everyone-Coder-33b-Base

EveryoneLLM series of models made by the community, for the community. This is a coding-specific model made using fine-tunes of deepseek-coder-33b-base.
______________________________________________________________________________________________________________
I'm having trouble benchmarking this model because I'm not great at running LLM benchmarks, but from hand-testing it on https://edabit.com/challenge coding challenges against an up-to-date GPT-4, my model is hands down beating it at coding.
______________________________________________________________________________________________________________
I've recently noticed this model has trouble with end tokens, so I made a custom prompt template for it. Make sure to add `Always end with "<|EOT|>"` in addition to your system prompt, and `Always end your response with "<|EOT|>"` at the end of the user message, as in the preset below. Then add `<|EOT|>` as a custom stop string in your LM text-generation interface (a short generation sketch using this stop token follows the template).
```
Always end with "<|EOT|>"
{System}
<|User|>
{User}. Always end your response with "<|EOT|>"
<|Assistant|>
{Assistant}
```
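If you are driving the model from Python rather than a UI, a minimal sketch of enforcing `<|EOT|>` as the stop token might look like the following. The GPTQ repo from this README is used purely for illustration, the system/user text is a placeholder, and the sketch assumes `<|EOT|>` exists in the tokenizer vocabulary:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Everyone-Coder-33B-Base-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a prompt following the template above (system and user text are placeholders)
prompt = (
    'Always end with "<|EOT|>"\n'
    "You are a helpful coding assistant.\n"
    "<|User|>\n"
    'Write a binary search function in Python. Always end your response with "<|EOT|>"\n'
    "<|Assistant|>\n"
)

# Stop generation as soon as the model emits the custom <|EOT|> token
eot_id = tokenizer.convert_tokens_to_ids("<|EOT|>")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7, eos_token_id=eot_id)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:]))
```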
The models that were used in this merge were as follows:
- https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct
- https://huggingface.co/codefuse-ai/CodeFuse-DeepSeek-33B
- https://huggingface.co/WizardLM/WizardCoder-33B-V1.1
Thank you to the creators of the above AI models; they deserve full credit for the EveryoneLLM series of models. Without their hard work we wouldn't be able to achieve the great success we have in the open-source community.
You can find the write up for merging models here:
https://docs.google.com/document/d/1_vOftBnrk9NRk5h10UqrfJ5CDih9KBKL61yvrZtVWPE/edit?usp=sharing
Config for the merge can be found below:
```yaml
models:
- model: WizardLM_WizardCoder-33B-V1.1
parameters:
density: 1
weight: .5
- model: codefuse-ai_CodeFuse-DeepSeek-33B
parameters:
density: 1
weight: .5
merge_method: ties
base_model: deepseek-ai_deepseek-coder-33b-instruct
parameters:
normalize: true
int8_mask: true
dtype: float16
```
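For anyone wanting to reproduce the merge, a rough sketch of running the config above with mergekit could look like this; it assumes the YAML is saved locally (file name is arbitrary) and that the three source models are available under the paths referenced in the config:
```shell
pip3 install mergekit
mergekit-yaml everyone-coder-33b.yml ./Everyone-Coder-33b-Base --cuda
```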
|
alnrg2arg/blockchainlabs_7B_merged_test2_4_sft_lora_DPO_orca2
|
alnrg2arg
| 2024-01-25T06:56:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-25T06:56:36Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Hongu/test_ner
|
Hongu
| 2024-01-25T06:56:34Z | 102 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-25T06:43:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
schroneko/ELYZA-japanese-Llama-2-13b-fast-instruct-gguf
|
schroneko
| 2024-01-25T06:30:57Z | 14 | 1 |
transformers
|
[
"transformers",
"gguf",
"llama",
"llama2",
"ja",
"arxiv:2307.09288",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-01-23T14:14:39Z |
---
license: llama2
language:
- ja
tags:
- llama2
---
# ELYZA-japanese-Llama-2-13b-fast-instruct-gguf
This is a GGUF conversion of [ELYZA-japanese-Llama-2-13b-fast-instruct](https://huggingface.co/ELYZA/ELYZA-japanese-Llama-2-13b-fast-instruct).
## Usage
```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'ELYZA-japanese-Llama-2-13b-fast-instruct-q4_k_m.gguf' -n 256 -p '[INST] <<SYS>>あなたは誠実で優秀な日本人のアシスタントです。<</SYS>>仕事の熱意を取り戻すためのアイデアを5つ挙げてください。[/INST]'
```
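As an alternative to the llama.cpp CLI above, a minimal sketch with the `llama-cpp-python` bindings might look like this. The file name matches the q4_k_m quant shown above, and the English prompt is just a stand-in for the Japanese example:
```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="ELYZA-japanese-Llama-2-13b-fast-instruct-q4_k_m.gguf", n_ctx=4096)

prompt = (
    "[INST] <<SYS>>You are a sincere and excellent assistant.<</SYS>>"
    "Please give me five ideas for regaining my enthusiasm for work. [/INST]"
)
out = llm(prompt, max_tokens=256)
print(out["choices"][0]["text"])
```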
### Licence
Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
### Citations
```tex
@misc{elyzallama2023,
title={ELYZA-japanese-Llama-2-13b},
url={https://huggingface.co/elyza/ELYZA-japanese-Llama-2-13b},
author={Akira Sasaki and Masato Hirakawa and Shintaro Horie and Tomoaki Nakamura and Sam Passaglia and Daisuke Oba},
year={2023},
}
```
```tex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
LMX20012010/mistral_7b_guanaco
|
LMX20012010
| 2024-01-25T06:25:14Z | 61 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-01-25T06:23:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
adarsh2350/T5-pytorch-billsum
|
adarsh2350
| 2024-01-25T06:15:34Z | 90 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-24T15:52:59Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: T5-pytorch-billsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5-pytorch-billsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5259
- Rouge1: 0.1402
- Rouge2: 0.0484
- Rougel: 0.1155
- Rougelsum: 0.1153
- Gen Len: 19.0
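Since the card gives no usage snippet, a quick way to try the checkpoint might look like the following; it assumes the model was pushed to the Hub under this repo id and uses the `summarize:` prefix that T5 expects:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="adarsh2350/T5-pytorch-billsum")

bill_text = "Section 1. Short title. This Act may be cited as the Example Act of 2024. ..."  # placeholder input
print(summarizer("summarize: " + bill_text, max_length=60, min_length=10)[0]["summary_text"])
```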
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8082 | 0.1229 | 0.034 | 0.1036 | 0.1035 | 19.0 |
| No log | 2.0 | 124 | 2.6034 | 0.1296 | 0.0409 | 0.1079 | 0.1079 | 19.0 |
| No log | 3.0 | 186 | 2.5419 | 0.1378 | 0.047 | 0.1136 | 0.1136 | 19.0 |
| No log | 4.0 | 248 | 2.5259 | 0.1402 | 0.0484 | 0.1155 | 0.1153 | 19.0 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cpu
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Chandrababu-Namani/InLegalBERT-fine-tuned
|
Chandrababu-Namani
| 2024-01-25T05:45:26Z | 108 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:law-ai/InLegalBERT",
"base_model:finetune:law-ai/InLegalBERT",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-25T05:45:07Z |
---
license: mit
base_model: law-ai/InLegalBERT
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: InLegalBERT-lora-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# InLegalBERT-lora-text-classification
This model is a fine-tuned version of [law-ai/InLegalBERT](https://huggingface.co/law-ai/InLegalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0550
- Accuracy: {'accuracy': 0.6449893390191898}
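A hypothetical inference sketch for this classifier follows; it assumes the repo holds a full fine-tuned checkpoint rather than just an adapter, and the label names depend on how the classification head was configured, which the card does not state:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Chandrababu-Namani/InLegalBERT-fine-tuned")

example = "The appellant challenges the order of the High Court dismissing the writ petition."
print(classifier(example))  # e.g. [{'label': 'LABEL_3', 'score': 0.71}] -- labels are placeholders
```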
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------------------:|
| No log | 1.0 | 235 | 1.1448 | {'accuracy': 0.6151385927505331} |
| No log | 2.0 | 470 | 1.0553 | {'accuracy': 0.6380597014925373} |
| 1.2222 | 3.0 | 705 | 1.0427 | {'accuracy': 0.6316631130063965} |
| 1.2222 | 4.0 | 940 | 1.0490 | {'accuracy': 0.6428571428571429} |
| 0.8111 | 5.0 | 1175 | 1.0550 | {'accuracy': 0.6449893390191898} |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
cloudyu/Pluto_13B_DPO
|
cloudyu
| 2024-01-25T05:44:13Z | 17 | 4 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"dpo",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-17T06:00:41Z |
---
license: cc-by-nc-4.0
tags:
- moe
- dpo
---
# DPO of cloudyu/Mixtral_7Bx2_MoE
Dataset: jondurbin/truthy-dpo-v0.1
* Metrics average: 75.88
* TruthfulQA: 81.5 SOTA (2024-01-17)
GPU code example:
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import math
## v2 models
model_path = "cloudyu/Mixtral_7Bx2_MoE_DPO"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
model_path, torch_dtype=torch.bfloat16, device_map='auto',local_files_only=False, load_in_4bit=True
)
print(model)
prompt = input("please input prompt:")
while len(prompt) > 0:
  input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
  generation_output = model.generate(
      input_ids=input_ids, max_new_tokens=500, repetition_penalty=1.2
  )
  print(tokenizer.decode(generation_output[0]))
  prompt = input("please input prompt:")
```
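The card does not include the training script; a rough sketch of the DPO step with `trl` on the named dataset might look like this. The hyperparameters are illustrative, not the ones actually used:
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "cloudyu/Mixtral_7Bx2_MoE"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto", device_map="auto")

# jondurbin/truthy-dpo-v0.1 provides the prompt/chosen/rejected columns DPOTrainer expects
dataset = load_dataset("jondurbin/truthy-dpo-v0.1", split="train")
dataset = dataset.remove_columns([c for c in dataset.column_names if c not in ("prompt", "chosen", "rejected")])

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # trl builds a frozen reference copy when none is given
    beta=0.1,        # illustrative value
    train_dataset=dataset,
    tokenizer=tokenizer,
    args=TrainingArguments(
        output_dir="mixtral-moe-dpo",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        remove_unused_columns=False,
    ),
)
trainer.train()
```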
|
GaganpreetSingh/mistral-7b-lora_model
|
GaganpreetSingh
| 2024-01-25T05:29:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-25T05:29:12Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fionazhang/mistral-experiment-6
|
fionazhang
| 2024-01-25T05:16:35Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-01-22T00:46:43Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral-experiment-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-experiment-6
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1400
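The repo stores a PEFT adapter rather than full weights, so one plausible way to load it is to attach it to the Mistral base it was trained from; the prompt below is a placeholder:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, "fionazhang/mistral-experiment-6")

inputs = tokenizer("Fine-tuning large language models is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```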
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.0a0+git7bcf7da
- Datasets 2.16.1
- Tokenizers 0.15.0
|
megha-shroff/mistral-rand-300k-test
|
megha-shroff
| 2024-01-25T05:16:18Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-v0.1-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-01-25T05:16:07Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/Mistral-7B-v0.1-GPTQ
model-index:
- name: mistral-rand-300k-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-rand-300k-test
This model is a fine-tuned version of [TheBloke/Mistral-7B-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-v0.1-GPTQ) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8347
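As with other PEFT checkpoints, this repo holds only the adapter; a sketch of loading it on top of the GPTQ base it names might look like this. The prompt format used during training is not documented here, so the input is a placeholder:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TheBloke/Mistral-7B-v0.1-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "megha-shroff/mistral-rand-300k-test")

inputs = tokenizer("### Instruction:\nDescribe the record below.\n\n### Response:\n", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```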
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6477 | 0.01 | 50 | 0.8983 |
| 0.8474 | 0.02 | 100 | 0.8347 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
tsobolev/ppo-Huggy
|
tsobolev
| 2024-01-25T05:00:08Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-01-25T05:00:01Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
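Once training finishes, the Hugging Face fork of ML-Agents used in the course provides a push command along these lines; the run id, local results directory, repo id and commit message below are placeholders:
```bash
mlagents-push-to-hf --run-id="Huggy" --local-dir="./results/Huggy" --repo-id="tsobolev/ppo-Huggy" --commit-message="Trained Huggy agent"
```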
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: tsobolev/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ronluvzu1/autotrain-zb40r-1ccvc
|
ronluvzu1
| 2024-01-25T04:53:03Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T04:52:59Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
varun-v-rao/t5-base-snli-model3
|
varun-v-rao
| 2024-01-25T04:52:47Z | 33 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-25T03:43:05Z |
---
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: t5-base-snli-model3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-snli-model3
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2857
- Accuracy: 0.8985
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 43
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3812 | 1.0 | 2146 | 0.3052 | 0.8894 |
| 0.3511 | 2.0 | 4292 | 0.2862 | 0.8971 |
| 0.3333 | 3.0 | 6438 | 0.2857 | 0.8985 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
arun100/whisper-small-tr-derived-1
|
arun100
| 2024-01-25T04:26:57Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"tr",
"dataset:mozilla-foundation/common_voice_16_0",
"base_model:alikanakar/whisper-synthesized-turkish-8-hour",
"base_model:finetune:alikanakar/whisper-synthesized-turkish-8-hour",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-24T16:56:45Z |
---
language:
- tr
license: apache-2.0
base_model: alikanakar/whisper-synthesized-turkish-8-hour
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_16_0
metrics:
- wer
model-index:
- name: Whisper Small Turkish Derived
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_16_0 tr
type: mozilla-foundation/common_voice_16_0
config: tr
split: test
args: tr
metrics:
- name: Wer
type: wer
value: 18.625004393518683
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Turkish Derived
This model is a fine-tuned version of [alikanakar/whisper-synthesized-turkish-8-hour](https://huggingface.co/alikanakar/whisper-synthesized-turkish-8-hour) on the mozilla-foundation/common_voice_16_0 tr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2859
- Wer: 18.6250
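A minimal transcription sketch using this checkpoint (the audio file path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="arun100/whisper-small-tr-derived-1")
print(asr("sample_turkish.wav")["text"])
```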
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3197 | 0.58 | 200 | 0.3517 | 21.5757 |
| 0.2672 | 1.16 | 400 | 0.3203 | 20.6724 |
| 0.2532 | 1.75 | 600 | 0.3065 | 19.6496 |
| 0.2382 | 2.33 | 800 | 0.2991 | 19.3420 |
| 0.2448 | 2.91 | 1000 | 0.2943 | 19.1276 |
| 0.2197 | 3.49 | 1200 | 0.2909 | 18.9712 |
| 0.2159 | 4.07 | 1400 | 0.2885 | 18.7340 |
| 0.2212 | 4.65 | 1600 | 0.2871 | 18.6988 |
| 0.2114 | 5.24 | 1800 | 0.2861 | 18.6320 |
| 0.2092 | 5.82 | 2000 | 0.2859 | 18.6250 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0
|
bartowski/Einstein-7B-exl2
|
bartowski
| 2024-01-25T04:24:26Z | 1 | 0 | null |
[
"axolotl",
"generated_from_trainer",
"text-generation",
"dataset:allenai/ai2_arc",
"dataset:camel-ai/physics",
"dataset:camel-ai/chemistry",
"dataset:camel-ai/biology",
"dataset:metaeval/reclor",
"dataset:openbookqa",
"dataset:mandyyyyii/scibench",
"dataset:derek-thomas/ScienceQA",
"dataset:wenhu/TheoremQA",
"dataset:TIGER-Lab/ScienceEval",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:other",
"region:us"
] |
text-generation
| 2024-01-25T04:08:27Z |
---
license: other
tags:
- axolotl
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
datasets:
- allenai/ai2_arc
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- metaeval/reclor
- openbookqa
- mandyyyyii/scibench
- derek-thomas/ScienceQA
- wenhu/TheoremQA
- TIGER-Lab/ScienceEval
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of Einstein-7B
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.12">turboderp's ExLlamaV2 v0.0.12</a> for quantization.
# The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.
Original model: https://huggingface.co/Weyaxi/Einstein-7B
| Branch | Bits | lm_head bits | Size | Description |
| ----- | ---- | ------- | ------ | ------------ |
| [8_0](https://huggingface.co/Bartowski/Einstein-7B-exl2/tree/8_0) | 8.0 | 8.0 | 9.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/Bartowski/Einstein-7B-exl2/tree/6_5) | 6.5 | 8.0 | 8.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/Bartowski/Einstein-7B-exl2/tree/5_0) | 5.0 | 6.0 | 7.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/Bartowski/Einstein-7B-exl2/tree/4_25) | 4.25 | 6.0 | 6.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/Bartowski/Einstein-7B-exl2/tree/3_5) | 3.5 | 6.0 | 6.1 GB | Lower quality, only use if you have to. |
All VRAM requirements estimated from 16k context. For 32k context add ~2 GB.
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Einstein-7B-exl2 Einstein-7B-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `Einstein-7B-exl2`:
```shell
mkdir Einstein-7B-exl2
huggingface-cli download bartowski/Einstein-7B-exl2 --local-dir Einstein-7B-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir Einstein-7B-exl2-6_5
huggingface-cli download bartowski/Einstein-7B-exl2 --revision 6_5 --local-dir Einstein-7B-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir Einstein-7B-exl2-6.5
huggingface-cli download bartowski/Einstein-7B-exl2 --revision 6_5 --local-dir Einstein-7B-exl2-6.5 --local-dir-use-symlinks False
```
|
datasciencechampion/mistral-gptq-finetuned-medmcqa
|
datasciencechampion
| 2024-01-25T04:15:01Z | 1 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-01-24T13:15:50Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
model-index:
- name: mistral-gptq-finetuned-medmcqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-gptq-finetuned-medmcqa
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 500
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
bcijo/myMistral7b-finetuned
|
bcijo
| 2024-01-25T03:51:07Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T19:36:48Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
RangiLyu/InternLM2-tokenizer-llama
|
RangiLyu
| 2024-01-25T03:46:09Z | 0 | 3 | null |
[
"license:other",
"region:us"
] | null | 2024-01-25T03:21:15Z |
---
license: other
---
# InternLM2 tokenizer(llamaified version)
Official repo: https://github.com/InternLM/InternLM
## Note
This repo converts the InternLM2 tokenizer to LlamaTokenizerFast.
It also replaces token 354 (`\u0000`) with an emoji so that the vocabulary can be converted by llama.cpp
## How to use
- Load
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("RangiLyu/InternLM2-tokenizer-llama")
```
- Apply chatml template
```python
chat = [{"role": "user", "content": "Hello! What's your name?"},
        {"role": "assistant", "content": "My name is InternLM2!"},
        {"role": "user", "content": "Nice to meet you InternLM2!"},]
chat_ids = tokenizer.apply_chat_template(chat)
print("ids: ", chat_ids)
print("tokens: ", tokenizer.convert_ids_to_tokens(chat_ids))
# convert the chat history to a string for generation
chat_str = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
print("chat string: ", chat_str)
```
```
ids: [1, 92543, 1008, 364, 9843, 346, 3716, 725, 829, 963, 345, 92542, 364, 92543, 525, 11353, 364, 5211, 963, 505, 4576, 11146, 314, 346, 92542, 364, 92543, 1008, 364, 44501, 442, 3531, 629, 4576, 11146, 314, 346, 92542, 364]
tokens:  ['<s>', '<|im_start|>', 'user', '\n', 'Hello', '!', '▁What', "'s", '▁your', '▁name', '?', '<|im_end|>', '\n', '<|im_start|>', 'ass', 'istant', '\n', 'My', '▁name', '▁is', '▁Intern', 'LM', '2', '!', '<|im_end|>', '\n', '<|im_start|>', 'user', '\n', 'Nice', '▁to', '▁meet', '▁you', '▁Intern', 'LM', '2', '!', '<|im_end|>', '\n']
chat string: <s><|im_start|>user
Hello! What's your name?<|im_end|>
<|im_start|>assistant
My name is InternLM2!<|im_end|>
<|im_start|>user
Nice to meet you InternLM2!<|im_end|>
<|im_start|>assistant
```
|
biznetgio/llama2-indonesia-chat
|
biznetgio
| 2024-01-25T03:46:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-25T03:45:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
varun-v-rao/t5-base-snli-model2
|
varun-v-rao
| 2024-01-25T03:42:00Z | 33 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-25T02:32:24Z |
---
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: t5-base-snli-model2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-snli-model2
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2836
- Accuracy: 0.8994
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 36
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3815 | 1.0 | 2146 | 0.3028 | 0.8896 |
| 0.3457 | 2.0 | 4292 | 0.2840 | 0.8979 |
| 0.3339 | 3.0 | 6438 | 0.2836 | 0.8994 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
RayChengdu/internlm2-7b-lora
|
RayChengdu
| 2024-01-25T03:31:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-25T03:31:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
limin-arc/Mem-Beagle-7b-slerp-v3
|
limin-arc
| 2024-01-25T03:30:14Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"starsnatched/MemGPT",
"222gate/Ingot-7b-slerp-7-forged-mirror",
"base_model:gate369/Ingot-7b-slerp-7-forged-mirror",
"base_model:merge:gate369/Ingot-7b-slerp-7-forged-mirror",
"base_model:minchyeom/MemGPT",
"base_model:merge:minchyeom/MemGPT",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T03:22:20Z |
---
tags:
- merge
- mergekit
- lazymergekit
- starsnatched/MemGPT
- 222gate/Ingot-7b-slerp-7-forged-mirror
- starsnatched/MemGPT
base_model:
- starsnatched/MemGPT
- 222gate/Ingot-7b-slerp-7-forged-mirror
- starsnatched/MemGPT
---
# Mem-Beagle-7b-slerp-v3
Mem-Beagle-7b-slerp-v3 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [starsnatched/MemGPT](https://huggingface.co/starsnatched/MemGPT)
* [222gate/Ingot-7b-slerp-7-forged-mirror](https://huggingface.co/222gate/Ingot-7b-slerp-7-forged-mirror)
* [starsnatched/MemGPT](https://huggingface.co/starsnatched/MemGPT)
## ๐งฉ Configuration
```yaml
models:
- model: starsnatched/MemGPT
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: 222gate/Ingot-7b-slerp-7-forged-mirror
parameters:
density: 0.5
weight: [0, 0.3, 0.7, 1] # weight gradient
- model: starsnatched/MemGPT
parameters:
density: 0.33
weight:
- filter: mlp
value: 0.5
- value: 0
merge_method: ties
base_model: liminerity/Mem-Beagle-7b-slerp-v2
parameters:
normalize: true
int8_mask: true
dtype: bfloat16
```
## ๐ป Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "liminerity/Mem-Beagle-7b-slerp-v3"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
abhinand/tamil-llama-7b-base-v0.1
|
abhinand
| 2024-01-25T03:18:06Z | 1,637 | 9 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"ta",
"en",
"arxiv:2311.05845",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-08T03:55:05Z |
---
language:
- ta
- en
license: llama2
---
# Tamil LLaMA 7B Base v0.1 [pre-trained]
Welcome to the inaugural release of the Tamil LLaMA 7B base model, an important step in advancing LLMs for the Tamil language. This model is ready for immediate inference, as shown in the sketch below, and is also primed for further fine-tuning to cater to your specific NLP tasks.
To dive deep into the development and capabilities of this model, please read the [research paper](https://arxiv.org/abs/2311.05845) and the [introductory blog post (WIP)]() that outlines our journey and the model's potential impact.
> **Please Note:** This model, labeled as a foundational Tamil Language Model (LLM), is designed primarily for Causal Language Modeling (LM) purposes. In other words, if you are looking for an instruction following model in Tamil, you may find [abhinand/tamil-llama-7b-instruct-v0.1](https://huggingface.co/abhinand/tamil-llama-7b-instruct-v0.1) more suitable for your needs.
## Model description
The Tamil LLaMA models have been enhanced and tailored specifically with an extensive Tamil vocabulary of 16,000 tokens, building upon the foundation set by the original LLaMA-2.
- **Model type:** A 7B parameter model for Causal LM pre-trained on [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX) dataset's Tamil subset.
- **Language(s):** Tamil and English
- **License:** GNU General Public License v3.0
- **Source Model:** [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf)
- **Training Precision:** `float16`
- **Code:** [GitHub](https://github.com/abhinand5/tamil-llama)
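Since this is a base (pre-trained) causal LM rather than an instruction-tuned model, the minimal sketch below shows plain text continuation with Transformers. The prompt and generation settings are illustrative assumptions only, not recommendations from the authors.
```python
# Minimal sketch: plain causal-LM text continuation with the base model.
import torch
from transformers import AutoTokenizer, LlamaForCausalLM

model_id = "abhinand/tamil-llama-7b-base-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = LlamaForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the float16 training precision noted above
    device_map="auto",
)

prompt = "தமிழ் மொழி"  # any Tamil or English text to continue (illustrative)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
For instruction following in Tamil, the instruct variants listed below are the better starting point.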
## Related Models
| Model | Type | Data | Base Model | # Params | Download Links |
|--------------------------|-----------------------------|-------------------|----------------------|------|------------------------------------------------------------------------|
| Tamil LLaMA 7B Base | Base model | 12GB | LLaMA 7B | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-base-v0.1) |
| Tamil LLaMA 13B Base | Base model | 4GB | LLaMA 13B | 13B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-13b-base-v0.1) |
| Tamil LLaMA 7B Instruct | Instruction following model | 145k instructions | Tamil LLaMA 7B Base | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-instruct-v0.1) |
| Tamil LLaMA 13B Instruct | Instruction following model | 145k instructions | Tamil LLaMA 13B Base | 13B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-13b-instruct-v0.1) |
## Usage Note
It's important to note that the models have not undergone detoxification. Therefore, while they possess impressive linguistic capabilities, there is a possibility for them to generate content that could be deemed harmful or offensive. We urge users to exercise discretion and supervise the model's outputs closely, especially in public or sensitive applications.
## Meet the Developers
Get to know the creators behind this innovative model and follow their contributions to the field:
- [Abhinand Balachandran](https://www.linkedin.com/in/abhinand-05/)
## Citation
If you use this model or the Tamil-Llama dataset in your research, please cite:
```bibtex
@misc{balachandran2023tamilllama,
title={Tamil-Llama: A New Tamil Language Model Based on Llama 2},
author={Abhinand Balachandran},
year={2023},
eprint={2311.05845},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
We hope this model serves as a valuable tool in your NLP toolkit and look forward to seeing the advancements it will enable in the understanding and generation of the Tamil language.
|
abhinand/tamil-llama-7b-instruct-v0.2-GGUF
|
abhinand
| 2024-01-25T03:15:33Z | 20 | 1 | null |
[
"gguf",
"en",
"ta",
"arxiv:2311.05845",
"base_model:abhinand/dr-llama-ta-instruct-v0",
"base_model:quantized:abhinand/dr-llama-ta-instruct-v0",
"license:llama2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-01-24T06:54:32Z |
---
base_model: abhinand/dr-llama-ta-instruct-v0
model-index:
- name: tamil-llama-instruct-v0.2
results: []
license: llama2
language:
- en
- ta
---
# Tamil LLaMA 7B Instruct v0.2 [GGUF Quantized]
Welcome to the inaugural release of the Tamil LLaMA 7B instruct model, an important step in advancing LLMs for the Tamil language. This model is ready for immediate inference and is also primed for further fine-tuning to cater to your specific NLP tasks.
To dive deep into the development and capabilities of this model, please read the [research paper](https://arxiv.org/abs/2311.05845) and the [introductory blog post (WIP)]() that outlines our journey and the model's potential impact.
> **Note:** This model is based on the Tamil LLaMA series of models. The GitHub repository remains the same - [https://github.com/abhinand5/tamil-llama](https://github.com/abhinand5/tamil-llama). The base models and the updated code for Tamil LLaMA v0.2 (which this work is based on) will be released soon.
If you appreciate this work and would like to support its continued development, consider [buying me a coffee](https://www.buymeacoffee.com/abhinand.b). Your support is invaluable and greatly appreciated.
[](https://www.buymeacoffee.com/abhinand.b)
## Demo:
To access an easy-to-use, no-code demo, please open the provided Google Colab notebook. Complete instructions for usage are included within the notebook itself.
<a href="https://colab.research.google.com/drive/11_RHZim_HubD2NskxSwq4X_NW4XlnrtS?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Demo In Colab"></a></center>
## Model description
The Tamil LLaMA models have been enhanced and tailored specifically with an extensive Tamil vocabulary of ~16,000 tokens, building upon the foundation set by the original LLaMA-2.
- **Model type:** A 7B parameter GPT-like model finetuned on ~500,000 samples consisting of an equal proportion of English and Tamil samples. (Dataset will be released soon)
- **Language(s):** Bilingual. English and Tamil.
- **License:** GNU General Public License v3.0
- **Finetuned from model:** [To be released soon]()
- **Training Precision:** `bfloat16`
- **Code:** [GitHub](https://github.com/abhinand5/tamil-llama) (To be updated soon)
## Quantization Info
This repo contains GGUF format model files for [Tamil LLaMA 7B Instruct v0.2](https://huggingface.co/abhinand/tamil-llama-7b-instruct-v0.2).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
### Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
|------------------------------------------------------------------------------------------------------------------------------------------------|--------------|------|---------|------------------|----------------------------------------------------------|
| [tamil-llama-7b-instruct-v0.2.Q4_K_M.gguf](https://huggingface.co/abhinand/tamil-llama-7b-instruct-v0.2-GGUF/blob/main/tamil-llama-7b-instruct-v0.2.Q4_K_M.gguf) | Q4_K_M | 4 | 4.18 GB | 6.8 GB | medium, balanced quality - recommended |
| [tamil-llama-7b-instruct-v0.2.Q5_K_M.gguf](https://huggingface.co/abhinand/tamil-llama-7b-instruct-v0.2-GGUF/blob/main/tamil-llama-7b-instruct-v0.2.Q5_K_M.gguf) | Q5_K_M | 5 | 4.89 GB | 7.5 GB | large, very low quality loss - recommended |
## Prompt Template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
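As a rough sketch of local inference with one of the files above using llama-cpp-python (one of the clients listed earlier), assuming the Q4_K_M file has been downloaded to the working directory; the question and generation settings are illustrative assumptions, not settings from the authors.
```python
# Sketch: ChatML-formatted completion with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(model_path="./tamil-llama-7b-instruct-v0.2.Q4_K_M.gguf", n_ctx=4096)

# Build a prompt following the ChatML template shown above.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Can you explain the significance of the Tamil festival Pongal?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

output = llm(prompt, max_tokens=256, temperature=0.6, stop=["<|im_end|>"])
print(output["choices"][0]["text"])
```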
## Benchmark Results
Benchmarking was done using [LLM-Autoeval](https://github.com/mlabonne/llm-autoeval) on an RTX 3090 on [runpod](https://www.runpod.io/).
> **Note:** Please note that discrepancies have been observed between the Open LLM Leaderboard scores and those obtained from local runs using the LM Eval Harness with identical configurations. The results mentioned here are based on our own benchmarking. To replicate these findings, you can utilize the LLM-Autoeval or use [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) locally with the configurations described in Open LLM Leaderboard's About page.
| Benchmark | Llama 2 Chat | Tamil Llama v0.2 Instruct | Telugu Llama Instruct | Malayalam Llama Instruct |
|---------------|--------------|---------------------------|-----------------------|--------------------------|
| ARC Challenge (25-shot) | 52.9 | **53.75** | 52.47 | 52.82 |
| TruthfulQA (0-shot) | 45.57 | 47.23 | **48.47** | 47.46 |
| Hellaswag (10-shot) | **78.55** | 76.11 | 76.13 | 76.91 |
| Winogrande (5-shot) | 71.74 | **73.95** | 71.74 | 73.16 |
| AGI Eval (0-shot) | 29.3 | **30.95** | 28.44 | 29.6 |
| BigBench (0-shot) | 32.6 | 33.08 | 32.99 | **33.26** |
| Average | 51.78 | **52.51** | 51.71 | 52.2 |
## Related Models
| Model | Type | Data | Base Model | # Params | Download Links |
|--------------------------|-----------------------------|-------------------|----------------------|------|------------------------------------------------------------------------|
| Tamil LLaMA 7B v0.1 Base | Base model | 12GB | LLaMA 7B | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-base-v0.1) |
| Tamil LLaMA 13B v0.1 Base | Base model | 4GB | LLaMA 13B | 13B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-13b-base-v0.1) |
| Tamil LLaMA 7B v0.1 Instruct | Instruction following model | 145k instructions | Tamil LLaMA 7B Base | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-instruct-v0.1) |
| Tamil LLaMA 13B v0.1 Instruct | Instruction following model | 145k instructions | Tamil LLaMA 13B Base | 13B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-13b-instruct-v0.1) |
| Telugu LLaMA 7B v0.1 Instruct | Instruction/Chat model | 420k instructions | Telugu LLaMA 7B Base v0.1 | 7B | [HF Hub](https://huggingface.co/abhinand/telugu-llama-instruct-v0.1) |
| Malayalam LLaMA 7B v0.2 Instruct | Instruction/Chat model | 420k instructions | Malayalam LLaMA 7B Base v0.1 | 7B | [HF Hub](https://huggingface.co/abhinand/malayalam-llama-instruct-v0.1) |
## Usage Note
It's important to note that the models have not undergone detoxification/censorship. Therefore, while they possess impressive linguistic capabilities, there is a possibility for them to generate content that could be deemed harmful or offensive. We urge users to exercise discretion and supervise the model's outputs closely, especially in public or sensitive applications.
## Meet the Developers
Get to know the creators behind this innovative model and follow their contributions to the field:
- [Abhinand Balachandran](https://www.linkedin.com/in/abhinand-05/)
## Citation
If you use this model or any of the Tamil-Llama related work in your research, please cite:
```bibtex
@misc{balachandran2023tamilllama,
title={Tamil-Llama: A New Tamil Language Model Based on Llama 2},
author={Abhinand Balachandran},
year={2023},
eprint={2311.05845},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
We hope this model serves as a valuable tool in your NLP toolkit and look forward to seeing the advancements it will enable in the understanding and generation of the Tamil language.
|
abhinand/tamil-llama-7b-instruct-v0.2
|
abhinand
| 2024-01-25T03:14:24Z | 753 | 4 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"en",
"ta",
"arxiv:2311.05845",
"base_model:abhinand/dr-llama-ta-instruct-v0",
"base_model:finetune:abhinand/dr-llama-ta-instruct-v0",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T03:06:49Z |
---
base_model: abhinand/dr-llama-ta-instruct-v0
model-index:
- name: tamil-llama-instruct-v0.2
results: []
license: llama2
language:
- en
- ta
---
# Tamil LLaMA 7B Instruct v0.2
Welcome to the inaugural release of the Tamil LLaMA 7B instruct model, an important step in advancing LLMs for the Tamil language. This model is ready for immediate inference and is also primed for further fine-tuning to cater to your specific NLP tasks.
To dive deep into the development and capabilities of this model, please read the [research paper](https://arxiv.org/abs/2311.05845) and the [introductory blog post (WIP)]() that outlines our journey and the model's potential impact.
> **Note:** This model is based on the Tamil LLaMA series of models. The GitHub repository remains the same - [https://github.com/abhinand5/tamil-llama](https://github.com/abhinand5/tamil-llama). The base models and the updated code for Tamil LLaMA v0.2 (which this work is based on) will be released soon.
If you appreciate this work and would like to support its continued development, consider [buying me a coffee](https://www.buymeacoffee.com/abhinand.b). Your support is invaluable and greatly appreciated.
[](https://www.buymeacoffee.com/abhinand.b)
## Demo:
To access an easy-to-use, no-code demo, please open the provided Google Colab notebook. Complete instructions for usage are included within the notebook itself.
<a href="https://colab.research.google.com/drive/11_RHZim_HubD2NskxSwq4X_NW4XlnrtS?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Demo In Colab"></a></center>
## Model description
The Tamil LLaMA models have been enhanced and tailored specifically with an extensive Tamil vocabulary of ~16,000 tokens, building upon the foundation set by the original LLaMA-2.
- **Model type:** A 7B parameter GPT-like model finetuned on ~500,000 samples consisting of an equal proportion of English and Tamil samples. (Dataset will be released soon)
- **Language(s):** Bilingual. English and Tamil.
- **License:** GNU General Public License v3.0
- **Finetuned from model:** [To be released soon]()
- **Training Precision:** `bfloat16`
- **Code:** [GitHub](https://github.com/abhinand5/tamil-llama) (To be updated soon)
## Prompt Template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Benchmark Results
Benchmarking was done using [LLM-Autoeval](https://github.com/mlabonne/llm-autoeval) on an RTX 3090 on [runpod](https://www.runpod.io/).
> **Note:** Please note that discrepancies have been observed between the Open LLM Leaderboard scores and those obtained from local runs using the LM Eval Harness with identical configurations. The results mentioned here are based on our own benchmarking. To replicate these findings, you can utilize the LLM-Autoeval or use [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) locally with the configurations described in Open LLM Leaderboard's About page.
| Benchmark | Llama 2 Chat | Tamil Llama v0.2 Instruct | Telugu Llama Instruct | Malayalam Llama Instruct |
|---------------|--------------|---------------------------|-----------------------|--------------------------|
| ARC Challenge (25-shot) | 52.9 | **53.75** | 52.47 | 52.82 |
| TruthfulQA (0-shot) | 45.57 | 47.23 | **48.47** | 47.46 |
| Hellaswag (10-shot) | **78.55** | 76.11 | 76.13 | 76.91 |
| Winogrande (5-shot) | 71.74 | **73.95** | 71.74 | 73.16 |
| AGI Eval (0-shot) | 29.3 | **30.95** | 28.44 | 29.6 |
| BigBench (0-shot) | 32.6 | 33.08 | 32.99 | **33.26** |
| Average | 51.78 | **52.51** | 51.71 | 52.2 |
## Related Models
| Model | Type | Data | Base Model | # Params | Download Links |
|--------------------------|-----------------------------|-------------------|----------------------|------|------------------------------------------------------------------------|
| Tamil LLaMA 7B v0.1 Base | Base model | 12GB | LLaMA 7B | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-base-v0.1) |
| Tamil LLaMA 13B v0.1 Base | Base model | 4GB | LLaMA 13B | 13B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-13b-base-v0.1) |
| Tamil LLaMA 7B v0.1 Instruct | Instruction following model | 145k instructions | Tamil LLaMA 7B Base | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-instruct-v0.1) |
| Tamil LLaMA 13B v0.1 Instruct | Instruction following model | 145k instructions | Tamil LLaMA 13B Base | 13B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-13b-instruct-v0.1) |
| Telugu LLaMA 7B v0.1 Instruct | Instruction/Chat model | 420k instructions | Telugu LLaMA 7B Base v0.1 | 7B | [HF Hub](https://huggingface.co/abhinand/telugu-llama-instruct-v0.1) |
| Malayalam LLaMA 7B v0.2 Instruct | Instruction/Chat model | 420k instructions | Malayalam LLaMA 7B Base v0.1 | 7B | [HF Hub](https://huggingface.co/abhinand/malayalam-llama-instruct-v0.1) |
## Example Usage
```python
import torch
from transformers import LlamaForCausalLM, AutoTokenizer, pipeline
model = LlamaForCausalLM.from_pretrained(
"abhinand/tamil-llama-instruct-v0.2",
#load_in_8bit=True, # Set this depending on the GPU you have
torch_dtype=torch.bfloat16,
device_map={"": 0}, # Set this depending on the number of GPUs you have
local_files_only=False # Optional
)
model.eval()
tokenizer = AutoTokenizer.from_pretrained("abhinand/tamil-llama-instruct-v0.2")
inf_pipeline = pipeline("conversational", model=model, tokenizer=tokenizer)
def format_instruction(system_prompt, question, return_dict=False):
if system_prompt is None:
messages = [
{'content': question, 'role': 'user'},
]
else:
messages = [
{'content': system_prompt, 'role': 'system'},
{'content': question, 'role': 'user'},
]
if return_dict:
return messages
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
return prompt
# Set the generation configuration according to your needs
temperature = 0.6
repetition_penalty = 1.1
max_new_tokens = 256
SYSTEM_PROMPT = "You are an AI assistant who follows instructions extremely well. Do your best your best to help."
INPUT = "Can you explain the significance of Tamil festival Pongal?"
instruction = format_instruction(
system_prompt=SYSTEM_PROMPT,
question=INPUT,
return_dict=True,
)
output = inf_pipeline(
instruction,
temperature=temperature,
max_new_tokens=max_new_tokens,
repetition_penalty=repetition_penalty
)
print(output)
```
**Example Output:**
```
Conversation id: d57cdf33-01ff-4328-8efe-5c4fefdd6e77
system: You are an AI assistant who follows instructions extremely well. Do your best to help.
user: Can you explain the significance of Tamil festival Pongal?
assistant: Pongal is a significant harvest festival celebrated in Tamil Nadu and other parts of southern India. It marks the end of the rainy season and beginning of the agricultural year. The festival primarily revolves around giving gratitude to nature, particularly the Sun God Surya for his bountiful gifts like agriculture and health. People offer prayers to cattle, which play a significant role in agriculture, as well as their families for their continued support during the harvest season. The festival is marked by various colorful events, including preparing traditional Pongal dishes like rice cooked with milk, sugarcane, and banana, followed by exchanging gifts and celebrating among family members and friends. It also serves as a time for unity and strengthens the bond between people in their communities.
```
## Usage Note
It's important to note that the models have not undergone detoxification/censorship. Therefore, while they possess impressive linguistic capabilities, there is a possibility for them to generate content that could be deemed harmful or offensive. We urge users to exercise discretion and supervise the model's outputs closely, especially in public or sensitive applications.
## Meet the Developers
Get to know the creators behind this innovative model and follow their contributions to the field:
- [Abhinand Balachandran](https://www.linkedin.com/in/abhinand-05/)
## Citation
If you use this model or any of the Tamil-Llama related work in your research, please cite:
```bibtex
@misc{balachandran2023tamilllama,
title={Tamil-Llama: A New Tamil Language Model Based on Llama 2},
author={Abhinand Balachandran},
year={2023},
eprint={2311.05845},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
We hope this model serves as a valuable tool in your NLP toolkit and look forward to seeing the advancements it will enable in the understanding and generation of the Tamil language.
|
Liveme/Marcoroni-7B-v3_qlora_20231228-0017_8bit_merge
|
Liveme
| 2024-01-25T03:14:07Z | 0 | 0 |
peft
|
[
"peft",
"pytorch",
"mistral",
"region:us"
] | null | 2024-01-02T07:26:59Z |
---
library_name: peft
base_model: AIDC-ai-business/Marcoroni-7B-v3
---
# Model Card for Model ID
The prompt format is as follows:
```python
system_prompt = '''You are now roleplaying as {Character Nickname}.
{character_msg}
Identity: Nurse
Categories: Erotica, {tone}
Personalities: {personalities}'''
```
A few inference configuration parameters to note:
```
'temperature': 0.85
'top_p': 0.95, # disable
'top_k': 0, # disable
'repetition_penalty': 1.15,
```
Prompt Template for alpaca style
```
### Instruction:
<prompt> (without the <>)
### Response:
```
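Below is a minimal sketch that ties the pieces above together, assuming this repository holds the QLoRA adapter to be applied on top of the base model named in the metadata. The model ids, prompt text, and loading options are illustrative assumptions, not instructions from the authors.
```python
# Sketch: apply the adapter to the base model and generate with the sampling
# parameters listed above (temperature 0.85, repetition_penalty 1.15, top_p/top_k disabled).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "AIDC-ai-business/Marcoroni-7B-v3"
adapter_id = "Liveme/Marcoroni-7B-v3_qlora_20231228-0017_8bit_merge"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

# Alpaca-style prompt, following the template above.
prompt = "### Instruction:\nIntroduce yourself.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.85,
    top_p=1.0,   # top_p disabled
    top_k=0,     # top_k disabled
    repetition_penalty=1.15,
    max_new_tokens=256,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```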
Training parameters:
```
```
|
vine135/outputs_mistral_b_finance_finetuned_test
|
vine135
| 2024-01-25T03:07:10Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-01-25T02:58:54Z |
---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- generated_from_trainer
model-index:
- name: outputs_mistral_b_finance_finetuned_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs_mistral_b_finance_finetuned_test
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.03
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
macadeliccc/TheCorso-7b-v1
|
macadeliccc
| 2024-01-25T02:59:29Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T21:48:14Z |
---
library_name: transformers
tags: []
---
# TheCorso-7b-v1
**In loving memory of my dog Klaus (Piccolo)**
_~ Piccolo (Italian): the little one ~_

+ Based on fblgit/UNA-TheBeagle-7b-v1
|
SanjiWatsuki/Kunoichi-DPO-v2-7B
|
SanjiWatsuki
| 2024-01-25T02:52:18Z | 1,520 | 81 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-13T04:02:02Z |
---
license: cc-by-nc-4.0
---
| Model | MT Bench | EQ Bench | MMLU | Logic Test |
|----------------------|----------|----------|---------|-------------|
| GPT-4-Turbo | 9.32 | - | - | - |
| GPT-4 | 8.99 | 62.52 | 86.4 | 0.86 |
| **Kunoichi-DPO-v2-7B** | **8.51** | **42.18** | **64.94**| **0.58** |
| Mixtral-8x7B-Instruct| 8.30 | 44.81 | 70.6 | 0.75 |
| **Kunoichi-DPO-7B** | **8.29** | **41.60** | **64.83** | **0.59** |
| **Kunoichi-7B** | **8.14** | **44.32** | **64.9** | **0.58** |
| Starling-7B | 8.09 | - | 63.9 | 0.51 |
| Claude-2 | 8.06 | 52.14 | 78.5 | - |
| Silicon-Maid-7B | 7.96 | 40.44 | 64.7 | 0.54 |
| Loyal-Macaroni-Maid-7B | 7.95 | 38.66 | 64.9 | 0.57 |
| GPT-3.5-Turbo | 7.94 | 50.28 | 70 | 0.57 |
| Claude-1 | 7.9 | - | 77 | - |
| Openchat-3.5 | 7.81 | 37.08 | 64.3 | 0.39 |
| Dolphin-2.6-DPO | 7.74 | 42.88 | 61.9 | 0.53 |
| Zephyr-7B-beta | 7.34 | 38.71 | 61.4 | 0.30 |
| Llama-2-70b-chat-hf | 6.86 | 51.56 | 63 | - |
| Neural-chat-7b-v3-1 | 6.84 | 43.61 | 62.4 | 0.30 |
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| **Kunoichi-DPO-7B**|**58.4**| 45.08 | 74| 66.99| 47.52|
| **Kunoichi-DPO-v2-7B**|**58.31**| 44.85| 75.05| 65.69| 47.65|
| [Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B)|57.54| 44.99| 74.86| 63.72| 46.58|
| [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)| 56.85 | 44.74 | 75.6 | 59.89 | 47.17 |
| [Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B) | 56.45| 44.74| 74.26| 61.5| 45.32|
| [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) | 53.51 | 43.67 | 73.24 | 55.37 | 41.76 |
| [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) | 52.42 | 42.75 | 72.99 | 52.99 | 40.94 |
| [openchat/openchat_3.5](https://huggingface.co/openchat/openchat_3.5) | 51.34 | 42.67 | 72.92 | 47.27 | 42.51 |
| [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) | 51.16 | 42.06 | 72.72 | 47.33 | 42.53 |
| [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) | 50.99 | 37.33 | 71.83 | 55.1 | 39.7 |
| Model | AlpacaEval2 | Length |
| --------------------------- | ----------- | ------ |
| GPT-4 | 23.58% | 1365 |
| GPT-4 0314 | 22.07% | 1371 |
| Mistral Medium | 21.86% | 1500 |
| Mixtral 8x7B v0.1 | 18.26% | 1465 |
| **Kunoichi-DPO-v2** | **17.19%** | 1785 |
| Claude 2 | 17.19% | 1069 |
| Claude | 16.99% | 1082 |
| Gemini Pro | 16.85% | 1315 |
| GPT-4 0613 | 15.76% | 1140 |
| Claude 2.1 | 15.73% | 1096 |
| Mistral 7B v0.2 | 14.72% | 1676 |
| GPT 3.5 Turbo 0613 | 14.13% | 1328 |
| LLaMA2 Chat 70B | 13.87% | 1790 |
| LMCocktail-10.7B-v1 | 13.15% | 1203 |
| WizardLM 13B V1.1 | 11.23% | 1525 |
| Zephyr 7B Beta | 10.99% | 1444 |
| OpenHermes-2.5-Mistral (7B) | 10.34% | 1107 |
| GPT 3.5 Turbo 0301 | 9.62% | 827 |
| **Kunoichi-7B** | **9.38%** | 1492 |
| GPT 3.5 Turbo 1106 | 9.18% | 796 |
| GPT-3.5 | 8.56% | 1018 |
| Phi-2 DPO | 7.76% | 1687 |
| LLaMA2 Chat 13B | 7.70% | 1513 |
|
jilp00/youtoks-transformers-united-v2-7B-v02
|
jilp00
| 2024-01-25T02:50:43Z | 62 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-01-25T02:46:13Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
varun-v-rao/bert-large-cased-mnli-model1
|
varun-v-rao
| 2024-01-25T02:49:56Z | 95 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-large-cased",
"base_model:finetune:google-bert/bert-large-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-24T23:57:41Z |
---
license: apache-2.0
base_model: bert-large-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-large-cased-mnli-model1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-mnli-model1
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4619
- Accuracy: 0.8680
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 78
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3947 | 1.0 | 6136 | 0.3608 | 0.8601 |
| 0.2557 | 2.0 | 12272 | 0.3683 | 0.8678 |
| 0.1455 | 3.0 | 18408 | 0.4619 | 0.8680 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
MSI-AI-Group/TRT_AllMsi_4050
|
MSI-AI-Group
| 2024-01-25T02:42:53Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-22T01:32:43Z |
---
license: creativeml-openrail-m
---
This model is for AI Artist use.
You can use our model by following the GitHub link below:
https://github.com/MSI-AI-Group/AI-Artist
|
MSI-AI-Group/TRT_Base_4050
|
MSI-AI-Group
| 2024-01-25T02:42:35Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-22T01:29:24Z |
---
license: creativeml-openrail-m
---
This model is for AI Artist use.
You can use our model by following the GitHub link below:
https://github.com/MSI-AI-Group/AI-Artist
|
MSI-AI-Group/TRT_AllMsi_4060
|
MSI-AI-Group
| 2024-01-25T02:42:08Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-22T01:33:31Z |
---
license: creativeml-openrail-m
---
This model is for AI Artist use.
You can use our model by following the GitHub link below:
https://github.com/MSI-AI-Group/AI-Artist
|
MSI-AI-Group/TRT_Base_4080
|
MSI-AI-Group
| 2024-01-25T02:41:19Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-22T01:34:14Z |
---
license: creativeml-openrail-m
---
This model is for AI Artist use.
You can use our model by following the GitHub link below:
https://github.com/MSI-AI-Group/AI-Artist
|
MSI-AI-Group/TRT_Base_4090
|
MSI-AI-Group
| 2024-01-25T02:40:42Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-22T01:34:22Z |
---
license: creativeml-openrail-m
---
This model is for AI Artist use.
You can use our model by following the GitHub link below:
https://github.com/MSI-AI-Group/AI-Artist
|
tiagoblima/t5_base-qg-ap-peft-oficial
|
tiagoblima
| 2024-01-25T02:34:56Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"dataset:tiagoblima/du-qg-squadv1_pt",
"base_model:unicamp-dl/ptt5-base-portuguese-vocab",
"base_model:finetune:unicamp-dl/ptt5-base-portuguese-vocab",
"license:mit",
"region:us"
] | null | 2024-01-25T01:55:34Z |
---
license: mit
base_model: unicamp-dl/ptt5-base-portuguese-vocab
tags:
- generated_from_trainer
datasets:
- tiagoblima/du-qg-squadv1_pt
model-index:
- name: t5_base-qg-ap-peft-oficial
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_base-qg-ap-peft-oficial
This model is a fine-tuned version of [unicamp-dl/ptt5-base-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-base-portuguese-vocab) on the tiagoblima/du-qg-squadv1_pt dataset.
It achieves the following results on the evaluation set:
- Loss: 15.9549
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 13.2241 | 1.0 | 2367 | 16.4507 |
| 12.9503 | 2.0 | 4734 | 16.2224 |
| 12.8425 | 3.0 | 7101 | 16.0670 |
| 12.8058 | 4.0 | 9468 | 15.9792 |
| 12.7937 | 5.0 | 11835 | 15.9549 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.1
|
varun-v-rao/t5-base-snli-model1
|
varun-v-rao
| 2024-01-25T02:31:13Z | 33 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-25T01:21:31Z |
---
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: t5-base-snli-model1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-snli-model1
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2860
- Accuracy: 0.8989
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 73
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3815 | 1.0 | 2146 | 0.2976 | 0.8903 |
| 0.3507 | 2.0 | 4292 | 0.2937 | 0.8962 |
| 0.3354 | 3.0 | 6438 | 0.2860 | 0.8989 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
zhangyanchao/whisper-medium-zh_CN
|
zhangyanchao
| 2024-01-25T02:19:48Z | 6 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-22T03:25:38Z |
---
language:
- hi
license: apache-2.0
base_model: openai/whisper-medium
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Medium zh_CN - Charles.djang
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium zh_CN - Charles.djang
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Common Voice 11.0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
josemhernandezbiometric/whisper-medium-finetuned-int8
|
josemhernandezbiometric
| 2024-01-25T02:18:22Z | 1 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"whisper-event",
"generated_from_trainer",
"es",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-medium",
"base_model:adapter:openai/whisper-medium",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2024-01-22T07:36:34Z |
---
language:
- es
license: apache-2.0
library_name: peft
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
base_model: openai/whisper-medium
model-index:
- name: Whisper Small Es - Sanchit Gandhi
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
split: None
metrics:
- type: wer
value: 56.441337816315695
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Es - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5129
- Wer: 56.4413
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 750
- mixed_precision_training: Native AMP
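Because this repository ships a PEFT adapter rather than full model weights, a comparable run would normally attach a LoRA adapter to the frozen base model before training with the hyperparameters above (note the effective batch size of 16 × 16 = 256 from gradient accumulation). The sketch below illustrates that setup; the LoRA rank, alpha, target modules, and 8-bit loading are assumptions suggested by the repository name, not values documented in this card.
```python
from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model

# Assumed 8-bit base-model loading (implied by the "int8" repo name, not confirmed here).
base_model = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-medium", load_in_8bit=True, device_map="auto"
)

# Illustrative LoRA configuration; r, alpha and target modules are guesses.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights should be trainable
```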
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 6.3605 | 2.3 | 50 | 6.2660 | 55.7247 |
| 5.3113 | 4.6 | 100 | 5.1187 | 56.4590 |
| 4.2749 | 6.9 | 150 | 4.2391 | 55.6185 |
| 3.5266 | 9.2 | 200 | 3.4143 | 53.6719 |
| 3.0671 | 11.49 | 250 | 3.1045 | 49.2037 |
| 2.8716 | 13.79 | 300 | 2.9260 | 50.7786 |
| 2.7263 | 16.09 | 350 | 2.7987 | 53.5746 |
| 2.6467 | 18.39 | 400 | 2.7079 | 55.0787 |
| 2.5624 | 20.69 | 450 | 2.6443 | 55.6008 |
| 2.5087 | 22.99 | 500 | 2.5989 | 57.3881 |
| 2.4922 | 25.29 | 550 | 2.5660 | 55.9370 |
| 2.4274 | 27.59 | 600 | 2.5421 | 56.4325 |
| 2.4337 | 29.89 | 650 | 2.5257 | 57.4058 |
| 2.3991 | 32.18 | 700 | 2.5165 | 57.0165 |
| 2.4211 | 34.48 | 750 | 2.5129 | 56.4413 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.0.dev0
- Pytorch 2.1.0
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0
|
sosoai/Orion-14B-Chat-RAG-safetensors
|
sosoai
| 2024-01-25T02:09:15Z | 43 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"orion",
"text-generation",
"code",
"model",
"llm",
"custom_code",
"en",
"zh",
"ja",
"ko",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-01-25T00:05:06Z |
---
language:
- en
- zh
- ja
- ko
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- code
- model
- llm
---
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<div align="center">
<img src="./assets/imgs/orion_start.PNG" alt="logo" width="50%" />
</div>
<div align="center">
<h1>
Orion-14B
</h1>
</div>
<div align="center">
<div align="center">
<b>๐English</b> | <a href="https://huggingface.co/OrionStarAI/Orion-14B-Chat-RAG/blob/main/README_zh.md" target="_blank">๐จ๐ณไธญๆ</a> | <a href="https://huggingface.co/OrionStarAI/Orion-14B-Chat-RAG/blob/main/README_ja.md" target="_blank">๐ฏ๐ตๆฅๆฌ่ช</a> | <a href="https://huggingface.co/OrionStarAI/Orion-14B-Chat-RAG/blob/main/README_ko.md" target="_blank">๐ฐ๐ทํ๊ตญ์ด</a>
</div>
<h4 align="center">
<p>
๐ค <a href="https://huggingface.co/OrionStarAI" target="_blank">HuggingFace Mainpage</a> | ๐ค <a href="https://modelscope.cn/organization/OrionStarAI" target="_blank">ModelScope Mainpage</a><br>๐ฌ <a href="https://huggingface.co/spaces/OrionStarAI/Orion-14B-App-Demo" target="_blank">HuggingFace Demo</a> | ๐ซ <a href="https://modelscope.cn/studios/OrionStarAI/Orion-14B-App-Demo/summary" target="_blank">ModelScope Demo</a><br>๐บ <a href="https://github.com/OrionStarAI/Orion" target="_blank">GitHub</a><br>๐ <a href="https://github.com/OrionStarAI/Orion/blob/master/doc/Orion14B_v3.pdf" target="_blank">Tech Report</a>
<p>
</h4>
</div>
# Table of Contents
- [๐ Model Introduction](#model-introduction)
- [๐ Model Download](#model-download)
- [๐ Model Benchmark](#model-benchmark)
- [๐ Model Inference](#model-inference)
- [๐ Declarations & License](#declarations-license)
- [๐ฅ Company Introduction](#company-introduction)
<a name="model-introduction"></a><br>
# 1. Model Introduction
- Orion-14B series models are open-source multilingual large language models trained from scratch by OrionStarAI. The base model is trained on 2.5T multilingual corpus, including Chinese, English, Japanese, Korean, etc, and it exhibits superior performance in these languages. For details, please refer to [tech report](https://github.com/OrionStarAI/Orion/blob/master/doc/Orion14B_v3.pdf).
- The Orion-14B series models exhibit the following features:
- Among models with 20B-parameter scale level, Orion-14B-Base model shows outstanding performance in comprehensive evaluations.
- Strong multilingual capabilities, significantly outperforming in Japanese and Korean testsets.
- The fine-tuned models demonstrate strong adaptability, excelling in human-annotated blind tests.
- The long-chat version supports extremely long texts, performing exceptionally well at a token length of 200k and can support up to a maximum of 320k.
- The quantized versions reduce model size by 70%, improve inference speed by 30%, with performance loss less than 1%.
<table style="border-collapse: collapse; width: 100%;">
<tr>
<td style="border: none; padding: 10px; box-sizing: border-box;">
<img src="./assets/imgs/opencompass_en.png" alt="opencompass" style="width: 100%; height: auto;">
</td>
<td style="border: none; padding: 10px; box-sizing: border-box;">
<img src="./assets/imgs/model_cap_en.png" alt="modelcap" style="width: 100%; height: auto;">
</td>
</tr>
</table>
- Orion-14B series models including:
- **Orion-14B-Base:** A multilingual large language foundational model with 14 billion parameters, pretrained on a diverse dataset of 2.5 trillion tokens.
- **Orion-14B-Chat:** A chat-model fine-tuned on a high-quality corpus aims to provide an excellence interactive experience for users in the large model community.
- **Orion-14B-LongChat:** The long-context version excels at handling extremely lengthy texts, performing exceptionally well at a token length of 200k and can support up to a maximum of 320k.
- **Orion-14B-Chat-RAG:** A chat-model fine-tuned on a custom retrieval augmented generation dataset, achieving superior performance in retrieval augmented generation tasks.
- **Orion-14B-Chat-Plugin:** A chat-model specifically tailored for plugin and function calling tasks, ideal for agent-related scenarios where the LLM acts as a plugin and function call system.
- **Orion-14B-Base-Int4:** A quantized base model utilizing 4-bit integer weights. It significantly reduces the model size by 70% and increases the inference speed by 30% while incurring a minimal performance loss of only 1%.
- **Orion-14B-Chat-Int4:** A quantized chat model utilizing 4-bit integer weights.
<a name="model-download"></a><br>
# 2. Model Download
Model release and download links are provided in the table below:
| Model Name | HuggingFace Download Links | ModelScope Download Links |
|-------------------------|-----------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------|
| โพOrion-14B-Base | [Orion-14B-Base](https://huggingface.co/OrionStarAI/Orion-14B-Base) | [Orion-14B-Base](https://modelscope.cn/models/OrionStarAI/Orion-14B-Base/summary) |
| ๐Orion-14B-Chat | [Orion-14B-Chat](https://huggingface.co/OrionStarAI/Orion-14B-Chat) | [Orion-14B-Chat](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat/summary) |
| ๐Orion-14B-LongChat | [Orion-14B-LongChat](https://huggingface.co/OrionStarAI/Orion-14B-LongChat) | [Orion-14B-LongChat](https://modelscope.cn/models/OrionStarAI/Orion-14B-LongChat/summary) |
| ๐Orion-14B-Chat-RAG | [Orion-14B-Chat-RAG](https://huggingface.co/OrionStarAI/Orion-14B-Chat-RAG) | [Orion-14B-Chat-RAG](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-RAG/summary) |
| ๐Orion-14B-Chat-Plugin | [Orion-14B-Chat-Plugin](https://huggingface.co/OrionStarAI/Orion-14B-Chat-Plugin) | [Orion-14B-Chat-Plugin](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-Plugin/summary) |
| ๐ผOrion-14B-Base-Int4 | [Orion-14B-Base-Int4](https://huggingface.co/OrionStarAI/Orion-14B-Base-Int4) | [Orion-14B-Base-Int4](https://modelscope.cn/models/OrionStarAI/Orion-14B-Base-Int4/summary) |
| ๐ฆOrion-14B-Chat-Int4 | [Orion-14B-Chat-Int4](https://huggingface.co/OrionStarAI/Orion-14B-Chat-Int4) | [Orion-14B-Chat-Int4](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-Int4/summary) |
<a name="model-benchmark"></a><br>
# 3. Model Benchmarks
## 3.1. Base Model Orion-14B-Base Benchmarks
### 3.1.1. LLM evaluation results on examination and professional knowledge
| Model | C-Eval | CMMLU | MMLU | AGIEval | Gaokao | BBH |
|--------------------|----------|----------|----------|----------|----------|----------|
| LLaMA2-13B | 41.4 | 38.4 | 55.0 | 30.9 | 18.2 | 45.6 |
| Skywork-13B | 59.1 | 61.4 | 62.7 | 43.6 | 56.1 | 48.3 |
| Baichuan2-13B | 59.0 | 61.3 | 59.5 | 37.4 | 45.6 | 49.0 |
| QWEN-14B | 71.7 | 70.2 | 67.9 | 51.9 | **62.5** | 53.7 |
| InternLM-20B | 58.8 | 59.0 | 62.1 | 44.6 | 45.5 | 52.5 |
| **Orion-14B-Base** | **72.9** | **70.6** | **69.9** | **54.7** | 62.1 | **56.5** |
### 3.1.2. LLM evaluation results on language understanding and common knowledge
| Model |RACE-middle|RACE-high |HellaSwag | PIQA | Lambada | WSC |
|--------------------|----------|----------|----------|----------|----------|----------|
| LLaMA 2-13B | 63.0 | 58.9 | 77.5 | 79.8 | 76.5 | 66.3 |
| Skywork-13B | 87.6 | 84.1 | 73.7 | 78.3 | 71.8 | 66.3 |
| Baichuan 2-13B | 68.9 | 67.2 | 70.8 | 78.1 | 74.1 | 66.3 |
| QWEN-14B | 93.0 | 90.3 | **80.2** | 79.8 | 71.4 | 66.3 |
| InternLM-20B | 86.4 | 83.3 | 78.1 | **80.3** | 71.8 | 68.3 |
| **Orion-14B-Base** | **93.2** | **91.3** | 78.5 | 79.5 | **78.8** | **70.2** |
### 3.1.3. LLM evaluation results of OpenCompass testsets
| Model | Average | Examination | Language | Knowledge | Understanding | Reasoning |
|------------------|----------|----------|----------|----------|----------|----------|
| LLaMA 2-13B | 47.3 | 45.2 | 47.0 | 58.3 | 50.9 | 43.6 |
| Skywork-13B | 53.6 | 61.1 | 51.3 | 52.7 | 64.5 | 45.2 |
| Baichuan 2-13B | 49.4 | 51.8 | 47.5 | 48.9 | 58.1 | 44.2 |
| QWEN-14B | 62.4 | 71.3 | 52.67 | 56.1 | 68.8 | 60.1 |
| InternLM-20B | 59.4 | 62.5 | 55.0 | **60.1** | 67.3 | 54.9 |
|**Orion-14B-Base**| **64.3** | **71.4** | **55.0** | 60.0 | **71.9** | **61.6** |
### 3.1.4. Comparison of LLM performances on Japanese testsets
| Model |**Average**| JCQA | JNLI | MARC | JSQD | JQK | XLS | XWN | MGSM |
|--------------------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| PLaMo-13B | 52.3 | 56.7 | 42.8 | 95.8 | 70.6 | 71.0 | 8.70 | 70.5 | 2.40 |
| WebLab-10B | 50.7 | 66.6 | 53.7 | 82.1 | 62.9 | 56.2 | 10.0 | 72.0 | 2.40 |
| ELYZA-jp-7B | 48.8 | 71.7 | 25.3 | 86.6 | 70.8 | 64.1 | 2.50 | 62.1 | 7.20 |
| StableLM-jp-7B | 51.1 | 33.4 | 43.3 | **96.7** | 70.6 | 78.1 | 10.7 | 72.8 | 2.80 |
| LLaMA 2-13B | 46.3 | 75.0 | 47.6 | 38.8 | 76.1 | 67.7 | 18.1 | 63.2 | 10.4 |
| Baichuan 2-13B | 57.1 | 73.7 | 31.3 | 91.6 | 80.5 | 63.3 | 18.6 | 72.2 | 25.2 |
| QWEN-14B | 65.8 | 85.9 | 60.7 | 97.0 | 83.3 | 71.8 | 18.8 | 70.6 | 38.0 |
| Yi-34B | 67.1 | 83.8 | 61.2 | 95.2 | **86.1** | 78.5 | **27.2** | 69.2 | 35.2 |
| **Orion-14B-Base** | **69.1** | **88.2** | **75.8** | 94.1 | 75.7 | **85.1** | 17.3 | **78.8** | **38.0** |
### 3.1.5. Comparison of LLM performances on Korean testsets. n = 0 and n = 5 stand for n-shot prompts used in the evaluation
|Model | **Average**<br>n=0 n=5 | HellaSwag<br>n=0 n=5 | COPA<br> n=0 n=5 | BooIQ<br>n=0 n=5 | SentiNeg<br>n=0 n=5|
|------------------|------------------------------|------------------------------|------------------------------|------------------------------|------------------------------|
| KoGPT | 53.0 70.1 | 55.9 58.3 | 73.5 72.9 | 45.1 59.8 | 37.5 89.4 |
| Polyglot-ko-13B | 69.6 73.7 |**59.5** **63.1**|**79.4** **81.1**| 48.2 60.4 | 91.2 90.2 |
| LLaMA 2-13B | 46.7 63.7 | 41.3 44.0 | 59.3 63.8 | 34.9 73.8 | 51.5 73.4 |
| Baichuan 2-13B | 52.1 58.7 | 39.2 39.6 | 60.6 60.6 | 58.4 61.5 | 50.3 72.9 |
| QWEN-14B | 53.8 73.7 | 45.3 46.8 | 64.9 68.9 | 33.4 83.5 | 71.5 95.7 |
| Yi-34B | 54.2 72.1 | 44.6 44.7 | 58.0 60.6 | 65.9 90.2 | 48.3 92.9 |
|**Orion-14B-Chat**|**74.5** **79.6**| 47.0 49.6 | 77.7 79.4 |**81.6** **90.7**|**92.4** **98.7**|
### 3.1.6. Multilingual evaluation
| Model | Train Lang | Japanese | Korean | Chinese | English |
|--------------------|------------|----------|----------|----------|----------|
| PLaMo-13B | En,Jp | 52.3 | * | * | * |
| Weblab-10B | En,Jp | 50.7 | * | * | * |
| ELYZA-jp-7B | En,Jp | 48.8 | * | * | * |
| StableLM-jp-7B | En,Jp | 51.1 | * | * | * |
| KoGPT-6B | En,Ko | * | 70.1 | * | * |
| Polyglot-ko-13B | En,Ko | * | 70.7 | * | * |
| Baichuan2-13B | Multi | 57.1 | 58.7 | 50.8 | 57.1 |
| Qwen-14B | Multi | 65.8 | 73.7 | 64.5 | 65.4 |
| Llama2-13B | Multi | 46.3 | 63.7 | 41.4 | 55.3 |
| Yi-34B | Multi | 67.1 | 72.2 | 58.7 | **68.8** |
| **Orion-14B-Chat** | Multi | **69.1** | **79.5** | **67.9** | 67.3 |
## 3.2. Chat Model Orion-14B-Chat Benchmarks
### 3.2.1. Chat model subjective evaluation of MTBench
| Model | First-Turn | Second-Turn | **Average** |
|----------------------|----------|----------|----------|
| Baichuan2-13B-Chat | 7.05 | 6.47 | 6.76 |
| Qwen-14B-Chat | 7.30 | 6.62 | 6.96 |
| Llama2-13B-Chat | 7.10 | 6.20 | 6.65 |
| InternLM-20B-Chat | 7.03 | 5.93 | 6.48 |
| **Orion-14B-Chat** | **7.68** | **7.07** | **7.37** |
\* use vllm for inference
### 3.2.2. Chat model subjective evaluation of AlignBench
| Model | Math. | Logi. | Basic. | Chi. | Comp. | Writ. | Role. | Prof. |**Avg.**|
|--------------------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| Baichuan2-13B-Chat | 3.76 | 4.07 | 6.22 | 6.05 | 7.11 | 6.97 | 6.75 | 6.43 | 5.25 |
| Qwen-14B-Chat |**4.91**|**4.71**|**6.90**| 6.36 | 6.74 | 6.64 | 6.59 | 6.56 |**5.72**|
| Llama2-13B-Chat | 3.05 | 3.79 | 5.43 | 4.40 | 6.76 | 6.63 | 6.99 | 5.65 | 4.70 |
| InternLM-20B-Chat | 3.39 | 3.92 | 5.96 | 5.50 |**7.18**| 6.19 | 6.49 | 6.22 | 4.96 |
| **Orion-14B-Chat** | 4.00 | 4.24 | 6.18 |**6.57**| 7.16 |**7.36**|**7.16**|**6.99**| 5.51 |
\* use vllm for inference
## 3.3. LongChat Model Orion-14B-LongChat Benchmarks
### 3.3.1. LongChat evaluation of LongBench
| Model | NarrativeQA|MultiFieldQA-en|MultiFieldQA-zh| DuReader | QMSum | VCSUM | TREC | TriviaQA | LSHT |RepoBench-P|
|--------------------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| GPT-3.5-Turbo-16k | **23.60** | **52.30** | **61.20** | 28.70 | 23.40 | **16.00** | 68.00 | **91.40** | 29.20 | 53.60 |
| LongChat-v1.5-7B-32k | 16.90 | 41.40 | 29.10 | 19.50 | 22.70 | 9.90 | 63.50 | 82.30 | 23.20 | 55.30 |
| Vicuna-v1.5-7B-16k | 19.40 | 38.50 | 43.00 | 19.30 | 22.80 | 15.10 | 71.50 | 86.20 | 28.80 | 43.50 |
| Yi-6B-200K | 14.11 | 36.74 | 22.68 | 14.01 | 20.44 | 8.08 | 72.00 | 86.61 | 38.00 | **63.29** |
| Orion-14B-LongChat | 19.47 | 48.11 | 55.84 | **37.02** | **24.87** | 15.44 | **77.00** | 89.12 | **45.50** | 54.31 |
## 3.4. Chat RAG Model Benchmarks
### 3.4.1. LLM evaluation results of self-built RAG testsets
|Model|Effectiveness of Response(Keyword)|*Effectiveness of Response๏ผsubjective evaluation๏ผ|Quoting Ability|Fallback Ability|*AutoQA|*Data Extraction|
|---------------------|------|------|------|------|------|------|
| Baichuan2-13B-Chat | 85 | 76 | 1 | 0 | 69 | 51 |
| Qwen-14B-Chat | 79 | 77 | 75 | 47 | 68 | 72 |
| Qwen-72B-Chat(Int4) | 87 | 89 | 90 | 32 | 67 | 76 |
| GPT-4 | 91 | 94 | 96 | 95 | 75 | 86 |
| Orion-14B-Chat-RAG | 86 | 87 | 91 | 97 | 73 | 71 |
\* means manual assessment
## 3.5. Chat Plugin Model Orion-14B-Chat-Plugin Benchmarks
### 3.5.1. LLM evaluation results of self-built plugin testsets
|Model |Intent Recognition with Full Params |Intent Recognition with Missing Params |Non-Plugin Invocation Recognition |
|-----------------------|--------|-----------|--------|
| Baichuan2-13B-Chat | 25 | 0 | 0 |
| Qwen-14B-Chat | 55 | 0 | 50 |
| GPT-4 | **95** | 52.38 | 70 |
| Orion-14B-Chat-Plugin | 92.5 | **60.32** | **90** |
## 3.6. Quantized Model Orion-14B-Base-Int4 Benchmarks
### 3.6.1. Comparison of before and after quantization
|Model |Size(GB)|Inference Speed(tokens/s)|C-Eval|CMMLU|MMLU|RACE|HellaSwag|
|-------------------------|-------|-----|------|------|------|------|------|
| OrionStar-14B-Base | 28.0 | 135 | 72.8 | 70.6 | 70.0 | 93.3 | 78.5 |
| OrionStar-14B-Base-Int4 | 8.3 | 178 | 71.8 | 69.8 | 69.2 | 93.1 | 78.0 |
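If the quantized checkpoint is the one you need, it can presumably be loaded through the same `trust_remote_code` path as the other checkpoints in the download table; the snippet below is a minimal sketch under that assumption and mirrors the Python example in section 4.1.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch: loading the Int4 base model the same way as the other checkpoints.
# The exact dtype/device behaviour of the quantized weights is an assumption.
tokenizer = AutoTokenizer.from_pretrained(
    "OrionStarAI/Orion-14B-Base-Int4", use_fast=False, trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    "OrionStarAI/Orion-14B-Base-Int4", device_map="auto", trust_remote_code=True
)

inputs = tokenizer("Hello, what is your name?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```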
<a name="model-inference"></a><br>
# 4. Model Inference
Model weights, source code, and configuration needed for inference are published on Hugging Face, and the download link
is available in the table at the beginning of this document. We demonstrate various inference methods here, and the
program will automatically download the necessary resources from Hugging Face.
## 4.1. Python Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig

# Load the tokenizer and model; trust_remote_code is required because the Orion
# architecture ships as custom code inside the repository.
tokenizer = AutoTokenizer.from_pretrained("OrionStarAI/Orion-14B", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("OrionStarAI/Orion-14B", device_map="auto",
                                             torch_dtype=torch.bfloat16, trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained("OrionStarAI/Orion-14B")

# Single-turn chat through the model's custom chat() helper.
messages = [{"role": "user", "content": "Hello, what is your name? "}]
response = model.chat(tokenizer, messages, streaming=False)
print(response)
```
In the above Python code, the model is loaded with `device_map='auto'` to utilize all available GPUs. To specify the
device, you can use something like `export CUDA_VISIBLE_DEVICES=0,1` (using GPUs 0 and 1).
## 4.2. Command Line Tool
```shell
CUDA_VISIBLE_DEVICES=0 python cli_demo.py
```
This command-line tool is designed for chat scenarios, and thus, it does not support calling the base model.
## 4.3. Direct Script Inference
```shell
# base model
CUDA_VISIBLE_DEVICES=0 python demo/text_generation_base.py --model OrionStarAI/Orion-14B --tokenizer OrionStarAI/Orion-14B --prompt hello
# chat model
CUDA_VISIBLE_DEVICES=0 python demo/text_generation.py --model OrionStarAI/Orion-14B-Chat --tokenizer OrionStarAI/Orion-14B-Chat --prompt hi
```
## 4.4 Example Output
### 4.4.1. Casual Chat
`````
User: Hello
Orion-14B: Hello! How can I help you today? If you have any questions or need assistance, feel free to ask.
User: What functions do you have?
Orion-14B: As an AI language model, I have a wide range of functions and capabilities, including but not limited to:
Answering general knowledge questions: I can provide information on various topics such as history, science, geography, and more.
Providing definitions and explanations: I can explain complex concepts and provide definitions for words and phrases.
Assisting with tasks: I can help with simple tasks like setting reminders, converting units of measurement, and even ordering food from restaurants.
Generating text: I can write short stories, essays, articles, and other types of content based on prompts given to me.
Translating languages: I can translate text from one language to another.
Chatting: I can engage in conversation with users and respond to their queries in a conversational manner.
Summarizing long texts: I can summarize longer texts like articles, books, and research papers.
Generating random numbers and names: I can generate random numbers within a specified range and also suggest names based on certain criteria.
Playing games: I can play simple games like Hangman, Trivia, and more.
Providing recommendations: I can recommend movies, music, books, and other forms of entertainment based on user preferences.
These are just some of the many functions that I am capable of performing. If you have a specific task in mind, feel free to ask me about it.
User: Tell me a joke.
Orion-14B: Sure, here's a classic one-liner: Why don't scientists trust atoms? Because they make up everything.
`````
### 4.4.2. Japanese & Korean Chat
`````
User๏ผ่ชๅทฑใ็ดนไปใใฆใใ ใใ
Orion-14B๏ผใใใซใกใฏใ็งใฎๅๅใฏChatMaxใงใOrionStarใซใใฃใฆ้็บใใใAIใขใทในใฟใณใใงใใ็งใฏใฆใผใถใผใซๆญฃ็ขบใงใๅฎข่ฆณ็ใงใๆ็จใชๅ็ญใๆไพใใๅฝผใใๅ้กใ่งฃๆฑบใใๆ
ๅ ฑใๅๅพใใใฎใๅฉใใใใจใใงใใพใใ
User๏ผ์๊ธฐ์๊ฐ๋ฅผ ํด์ฃผ์ธ์.
Orion-14B๏ผ์๋
ํ์ธ์, ์ ์ด๋ฆ์ ChatMax์
๋๋ค. OrionStar์์ ๊ฐ๋ฐํ AI ์ด์์คํดํธ์
๋๋ค. ์ ๋ ์ฌ์ฉ์๋ค์๊ฒ ์ ํํ๊ณ  ๊ฐ๊ด์ ์ด๋ฉฐ ์ ์ฉํ ๋ต๋ณ์ ์ ๊ณตํ์ฌ ๋ฌธ์ ๋ฅผ ํด๊ฒฐํ๊ณ  ์ ๋ณด๋ฅผ ์ป๋ ๋ฐ ๋์์ ์ค ์ ์์ต๋๋ค.
`````
<a name="declarations-license"></a><br>
# 5. Declarations, License
## 5.1. Declarations
We strongly urge all users not to use the Orion-14B model for any activities that may harm national or social security or violate the law.
Additionally, we request users not to use the Orion-14B model for internet services without proper security review and filing.
We hope all users abide by this principle to ensure that technological development takes place in a regulated and legal environment.
We have done our best to ensure the compliance of the data used in the model training process. However, despite our
significant efforts, unforeseen issues may still arise due to the complexity of the model and data. Therefore, if any
problems arise due to the use of the Orion-14B open-source model, including but not limited to data security
issues, public opinion risks, or any risks and issues arising from the model being misled, abused, disseminated, or
improperly utilized, we will not assume any responsibility.
## 5.2. License
Community use of the Orion-14B series models
- For code, please comply with [Apache License Version 2.0](./LICENSE)<br>
- For model, please comply with [ใOrion-14B Seriesใ Models Community License Agreement](./ModelsCommunityLicenseAgreement)
<a name="company-introduction"></a><br>
# 6. Company Introduction
OrionStar is a leading global service robot solutions company, founded in September 2016. OrionStar is dedicated to
using artificial intelligence technology to create the next generation of revolutionary robots, allowing people to break
free from repetitive physical labor and making human work and life more intelligent and enjoyable. Through technology,
OrionStar aims to make society and the world a better place.
OrionStar possesses fully self-developed end-to-end artificial intelligence technologies, such as voice interaction and
visual navigation. It integrates product development capabilities and technological application capabilities. Based on
the Orion robotic arm platform, it has launched products such as OrionStar AI Robot Greeting, AI Robot Greeting Mini,
Lucki, Coffee Master, and established the open platform OrionOS for Orion robots. Following the philosophy of "Born for
Truly Useful Robots", OrionStar empowers more people through AI technology.
**The core strengths of OrionStar lies in possessing end-to-end AI application capabilities,** including big data preprocessing, large model pretraining, fine-tuning, prompt engineering, agent, etc. With comprehensive end-to-end model training capabilities, including systematic data processing workflows and the parallel model training capability of hundreds of GPUs, it has been successfully applied in various industry scenarios such as government affairs, cloud services, international e-commerce, and fast-moving consumer goods.
Companies with demands for deploying large-scale model applications are welcome to contact us.<br>
**Enquiry Hotline: 400-898-7779**<br>
**E-mail: [email protected]**
<div align="center">
<img src="./assets/imgs/wechat_group.jpg" alt="wechat" width="40%" />
</div>
# ็ฎๅฝ
- [๐ ๆจกๅไป็ป](#zh_model-introduction)
- [๐ ไธ่ฝฝ่ทฏๅพ](#zh_model-download)
- [๐ ่ฏไผฐ็ปๆ](#zh_model-benchmark)
- [๐ ๆจกๅๆจ็](#zh_model-inference)
- [๐ ๅฃฐๆๅ่ฎฎ](#zh_declarations-license)
- [๐ฅ ไผไธไป็ป](#zh_company-introduction)
<a name="zh_model-introduction"></a><br>
# 1. ๆจกๅไป็ป
- Orion-14B-Baseๆฏไธไธชๅ
ทๆ140ไบฟๅๆฐ็ๅค่ฏญ็งๅคงๆจกๅ๏ผ่ฏฅๆจกๅๅจไธไธชๅ
ๅซ2.5ไธไบฟtoken็ๅคๆ ทๅๆฐๆฎ้ไธ่ฟ่กไบ่ฎญ็ป๏ผๆถต็ไบไธญๆใ่ฑ่ฏญใๆฅ่ฏญใ้ฉ่ฏญ็ญๅค็ง่ฏญ่จใๅจๅค่ฏญ่จ็ฏๅขไธ็ไธ็ณปๅไปปๅกไธญๅฑ็ฐๅบๅ่ถ็ๆง่ฝใๅจไธปๆต็ๅ
ฌๅผๅบๅ่ฏๆตไธญ๏ผOrion-14B็ณปๅๆจกๅ่กจ็ฐไผๅผ๏ผๅค้กนๆๆ ๆพ่่ถ
่ถๅ็ญๅๆฐๅบๆฌ็ๅ
ถไปๆจกๅใๅ
ทไฝๆๆฏ็ป่่ฏทๅ่[ๆๆฏๆฅๅ](https://github.com/OrionStarAI/Orion/blob/master/doc/Orion14B_v3.pdf)ใ
- Orion-14B็ณปๅๅคงๆจกๅๆไปฅไธๅ ไธช็น็น๏ผ
- ๅบๅบง20Bๅๆฐ็บงๅซๅคงๆจกๅ็ปผๅ่ฏๆตๆๆ่กจ็ฐไผๅผ
  - ๅค่ฏญ่จ่ฝๅๅผบ๏ผๅจๆฅ่ฏญใ้ฉ่ฏญๆต่ฏ้ไธๆพ่้ขๅ

- ๅพฎ่ฐๆจกๅ้ๅบๆงๅผบ๏ผๅจไบบ็ฑปๆ ๆณจ็ฒๆตไธญ๏ผ่กจ็ฐ็ชๅบ
  - ้ฟไธไธๆ็ๆฌๆฏๆ่ถ
้ฟๆๆฌ๏ผๅจ200k token้ฟๅบฆไธๆๆไผๅผ๏ผๆ้ฟๅฏๆฏๆๅฏ่พพ320k
- ้ๅ็ๆฌๆจกๅๅคงๅฐ็ผฉๅฐ70%๏ผๆจ็้ๅบฆๆๅ30%๏ผๆง่ฝๆๅคฑๅฐไบ1%
<table style="border-collapse: collapse; width: 100%;">
<tr>
<td style="border: none; padding: 10px; box-sizing: border-box;">
<img src="./assets/imgs/opencompass_zh.png" alt="opencompass" style="width: 100%; height: auto;">
</td>
<td style="border: none; padding: 10px; box-sizing: border-box;">
<img src="./assets/imgs/model_cap_zh.png" alt="modelcap" style="width: 100%; height: auto;">
</td>
</tr>
</table>
- ๅ
ทไฝ่่จ๏ผOrion-14B็ณปๅๅคง่ฏญ่จๆจกๅๅ
ๅซ:
- **Orion-14B-Base:** ๅบไบ2.5ไธไบฟtokensๅคๆ ทๅๆฐๆฎ้่ฎญ็ปๅค็140ไบฟๅๆฐ้็บง็ๅค่ฏญ่จๅบๅบงๆจกๅใ
- **Orion-14B-Chat:** ๅบไบ้ซ่ดจ้่ฏญๆๅบๅพฎ่ฐ็ๅฏน่ฏ็ฑปๆจกๅ๏ผๆจๅจไธบๅคงๆจกๅ็คพๅบๆไพๆดๅฅฝ็็จๆทไบคไบไฝ้ชใ
- **Orion-14B-LongChat:** ๅจ200k token้ฟๅบฆไธๆๆไผๅผ๏ผๆ้ฟๅฏๆฏๆๅฏ่พพ320k๏ผๅจ้ฟๆๆฌ่ฏไผฐ้ไธๆง่ฝๆฏ่ฉไธๆๆจกๅใ
- **Orion-14B-Chat-RAG:** ๅจไธไธชๅฎๅถ็ๆฃ็ดขๅขๅผบ็ๆๆฐๆฎ้ไธ่ฟ่กๅพฎ่ฐ็่ๅคฉๆจกๅ๏ผๅจๆฃ็ดขๅขๅผบ็ๆไปปๅกไธญๅๅพไบๅ่ถ็ๆง่ฝใ
  - **Orion-14B-Chat-Plugin:** ไธ้จ้ๅฏนๆไปถๅๅฝๆฐ่ฐ็จไปปๅกๅฎๅถ็่ๅคฉๆจกๅ๏ผ้ๅธธ้็จไบไฝฟ็จไปฃ็็็ธๅ
ณๅบๆฏ๏ผๅ
ถไธญๅคง่ฏญ่จๆจกๅๅ
ๅฝๆไปถๅๅฝๆฐ่ฐ็จ็ณป็ปใ
  - **Orion-14B-Base-Int4:** ไธไธชไฝฟ็จint4่ฟ่ก้ๅ็ๅบๅบงๆจกๅใๅฎๅฐๆจกๅๅคงๅฐๆพ่ๅๅฐไบ70%๏ผๅๆถๆ้ซไบๆจ็้ๅบฆ30%๏ผไป
ๅผๅ
ฅไบ1%็ๆๅฐๆง่ฝๆๅคฑใ
- **Orion-14B-Chat-Int4:** ไธไธชไฝฟ็จint4่ฟ่ก้ๅ็ๅฏน่ฏๆจกๅใ
<a name="zh_model-download"></a><br>
# 2. ไธ่ฝฝ่ทฏๅพ
ๅๅธๆจกๅๅไธ่ฝฝ้พๆฅ่งไธ่กจ๏ผ
| ๆจกๅๅ็งฐ | HuggingFaceไธ่ฝฝ้พๆฅ | ModelScopeไธ่ฝฝ้พๆฅ |
|---------------------|-----------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------|
| โพ ๅบๅบงๆจกๅ | [Orion-14B-Base](https://huggingface.co/OrionStarAI/Orion-14B-Base) | [Orion-14B-Base](https://modelscope.cn/models/OrionStarAI/Orion-14B-Base/summary) |
| ๐ ๅฏน่ฏๆจกๅ | [Orion-14B-Chat](https://huggingface.co/OrionStarAI/Orion-14B-Chat) | [Orion-14B-Chat](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat/summary) |
| ๐ ้ฟไธไธๆๆจกๅ | [Orion-14B-LongChat](https://huggingface.co/OrionStarAI/Orion-14B-LongChat) | [Orion-14B-LongChat](https://modelscope.cn/models/OrionStarAI/Orion-14B-LongChat/summary) |
| ๐ ๆฃ็ดขๅขๅผบๆจกๅ | [Orion-14B-Chat-RAG](https://huggingface.co/OrionStarAI/Orion-14B-Chat-RAG) | [Orion-14B-Chat-RAG](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-RAG/summary) |
| ๐ ๆไปถๆจกๅ | [Orion-14B-Chat-Plugin](https://huggingface.co/OrionStarAI/Orion-14B-Chat-Plugin) | [Orion-14B-Chat-Plugin](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-Plugin/summary)|
| ๐ผ ๅบๅบงInt4้ๅๆจกๅ | [Orion-14B-Base-Int4](https://huggingface.co/OrionStarAI/Orion-14B-Base-Int4) | [Orion-14B-Base-Int4](https://modelscope.cn/models/OrionStarAI/Orion-14B-Base-Int4/summary) |
| ๐ฆ ๅฏน่ฏInt4้ๅๆจกๅ | [Orion-14B-Chat-Int4](https://huggingface.co/OrionStarAI/Orion-14B-Chat-Int4) | [Orion-14B-Chat-Int4](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-Int4/summary) |
<a name="zh_model-benchmark"></a><br>
# 3. ่ฏไผฐ็ปๆ
## 3.1. ๅบๅบงๆจกๅOrion-14B-Base่ฏไผฐ
### 3.1.1. ไธไธ็ฅ่ฏไธ่ฏ้ข่ฏไผฐ็ปๆ
| ๆจกๅๅ็งฐ | C-Eval | CMMLU | MMLU | AGIEval | Gaokao | BBH |
|--------------------|----------|----------|----------|----------|----------|----------|
| LLaMA2-13B | 41.4 | 38.4 | 55.0 | 30.9 | 18.2 | 45.6 |
| Skywork-13B | 59.1 | 61.4 | 62.7 | 43.6 | 56.1 | 48.3 |
| Baichuan2-13B | 59.0 | 61.3 | 59.5 | 37.4 | 45.6 | 49.0 |
| QWEN-14B | 71.7 | 70.2 | 67.9 | 51.9 | **62.5** | 53.7 |
| InternLM-20B | 58.8 | 59.0 | 62.1 | 44.6 | 45.5 | 52.5 |
| **Orion-14B-Base** | **72.9** | **70.6** | **69.9** | **54.7** | 62.1 | **56.5** |
### 3.1.2. ็่งฃไธ้่ฏ่ฏไผฐ็ปๆ
| ๆจกๅๅ็งฐ |RACE-middle|RACE-high| HellaSwag| PIQA | Lambada | WSC |
|--------------------|----------|----------|----------|----------|----------|----------|
| LLaMA 2-13B | 63.0 | 58.9 | 77.5 | 79.8 | 76.5 | 66.3 |
| Skywork-13B | 87.6 | 84.1 | 73.7 | 78.3 | 71.8 | 66.3 |
| Baichuan 2-13B | 68.9 | 67.2 | 70.8 | 78.1 | 74.1 | 66.3 |
| QWEN-14B | 93.0 | 90.3 | **80.2** | 79.8 | 71.4 | 66.3 |
| InternLM-20B | 86.4 | 83.3 | 78.1 | **80.3** | 71.8 | 68.3 |
| **Orion-14B-Base** | **93.2** | **91.3** | 78.5 | 79.5 | **78.8** | **70.2** |
### 3.1.3. OpenCompass่ฏๆต้่ฏไผฐ็ปๆ
| ๆจกๅๅ็งฐ | Average | Examination | Language | Knowledge | Understanding | Reasoning |
|------------------|----------|----------|----------|----------|----------|----------|
| LLaMA 2-13B | 47.3 | 45.2 | 47.0 | 58.3 | 50.9 | 43.6 |
| Skywork-13B | 53.6 | 61.1 | 51.3 | 52.7 | 64.5 | 45.2 |
| Baichuan 2-13B | 49.4 | 51.8 | 47.5 | 48.9 | 58.1 | 44.2 |
| QWEN-14B | 62.4 | 71.3 | 52.67 | 56.1 | 68.8 | 60.1 |
| InternLM-20B | 59.4 | 62.5 | 55.0 | **60.1** | 67.3 | 54.9 |
|**Orion-14B-Base**| **64.3** | **71.4** | **55.0** | 60.0 | **71.9** | **61.6** |
### 3.1.4. ๆฅ่ฏญๆต่ฏ้่ฏไผฐ็ปๆ
| ๆจกๅๅ็งฐ |**Average**| JCQA | JNLI | MARC | JSQD | JQK | XLS | XWN | MGSM |
|--------------------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| PLaMo-13B | 52.3 | 56.7 | 42.8 | 95.8 | 70.6 | 71.0 | 8.70 | 70.5 | 2.40 |
| WebLab-10B | 50.7 | 66.6 | 53.7 | 82.1 | 62.9 | 56.2 | 10.0 | 72.0 | 2.40 |
| ELYZA-jp-7B | 48.8 | 71.7 | 25.3 | 86.6 | 70.8 | 64.1 | 2.50 | 62.1 | 7.20 |
| StableLM-jp-7B | 51.1 | 33.4 | 43.3 | **96.7** | 70.6 | 78.1 | 10.7 | 72.8 | 2.80 |
| LLaMA 2-13B | 46.3 | 75.0 | 47.6 | 38.8 | 76.1 | 67.7 | 18.1 | 63.2 | 10.4 |
| Baichuan 2-13B | 57.1 | 73.7 | 31.3 | 91.6 | 80.5 | 63.3 | 18.6 | 72.2 | 25.2 |
| QWEN-14B | 65.8 | 85.9 | 60.7 | 97.0 | 83.3 | 71.8 | 18.8 | 70.6 | 38.0 |
| Yi-34B | 67.1 | 83.8 | 61.2 | 95.2 | **86.1** | 78.5 | **27.2** | 69.2 | 35.2 |
| **Orion-14B-Base** | **69.1** | **88.2** | **75.8** | 94.1 | 75.7 | **85.1** | 17.3 | **78.8** | **38.0** |
### 3.1.5. ้ฉ่ฏญๆต่ฏ้n-shot่ฏไผฐ็ปๆ
| ๆจกๅๅ็งฐ | **Average**<br>n=0 n=5 | HellaSwag<br>n=0 n=5 | COPA<br> n=0 n=5 | BooIQ<br>n=0 n=5 | SentiNeg<br>n=0 n=5|
|------------------|------------------------------|------------------------------|------------------------------|------------------------------|------------------------------|
| KoGPT | 53.0 70.1 | 55.9 58.3 | 73.5 72.9 | 45.1 59.8 | 37.5 89.4 |
| Polyglot-ko-13B | 69.6 73.7 |**59.5** **63.1**|**79.4** **81.1**| 48.2 60.4 | 91.2 90.2 |
| LLaMA 2-13B | 46.7 63.7 | 41.3 44.0 | 59.3 63.8 | 34.9 73.8 | 51.5 73.4 |
| Baichuan 2-13B | 52.1 58.7 | 39.2 39.6 | 60.6 60.6 | 58.4 61.5 | 50.3 72.9 |
| QWEN-14B | 53.8 73.7 | 45.3 46.8 | 64.9 68.9 | 33.4 83.5 | 71.5 95.7 |
| Yi-34B | 54.2 72.1 | 44.6 44.7 | 58.0 60.6 | 65.9 90.2 | 48.3 92.9 |
|**Orion-14B-Base**|**74.5** **79.6**| 47.0 49.6 | 77.7 79.4 |**81.6** **90.7**|**92.4** **98.7**|
### 3.1.6. ๅค่ฏญ่จ่ฏไผฐ็ปๆ
| ๆจกๅๅ็งฐ | Train Lang | Japanese | Korean | Chinese | English |
|--------------------|------------|----------|----------|----------|----------|
| PLaMo-13B | En,Jp | 52.3 | * | * | * |
| Weblab-10B | En,Jp | 50.7 | * | * | * |
| ELYZA-jp-7B | En,Jp | 48.8 | * | * | * |
| StableLM-jp-7B | En,Jp | 51.1 | * | * | * |
| KoGPT-6B | En,Ko | * | 70.1 | * | * |
| Polyglot-ko-13B | En,Ko | * | 70.7 | * | * |
| Baichuan2-13B | Multi | 57.1 | 58.7 | 50.8 | 57.1 |
| Qwen-14B | Multi | 65.8 | 73.7 | 64.5 | 65.4 |
| Llama2-13B | Multi | 46.3 | 63.7 | 41.4 | 55.3 |
| Yi-34B | Multi | 67.1 | 72.2 | 58.7 | **68.8** |
| **Orion-14B-Base** | Multi | **69.1** | **79.5** | **67.9** | 67.3 |
## 3.2. ๅฏน่ฏๆจกๅOrion-14B-Chat่ฏไผฐ
### 3.2.1. ๅฏน่ฏๆจกๅMTBenchไธป่ง่ฏไผฐ
| ๆจกๅๅ็งฐ | ็ฌฌไธ่ฝฎ | ็ฌฌไบ่ฝฎ | **ๅนณๅ** |
|----------------------|----------|----------|----------|
| Baichuan2-13B-Chat | 7.05 | 6.47 | 6.76 |
| Qwen-14B-Chat | 7.30 | 6.62 | 6.96 |
| Llama2-13B-Chat | 7.10 | 6.20 | 6.65 |
| InternLM-20B-Chat | 7.03 | 5.93 | 6.48 |
| **Orion-14B-Chat** | **7.68** | **7.07** | **7.37** |
\*่ฟ้่ฏๆตไฝฟ็จvllm่ฟ่กๆจ็
### 3.2.2. ๅฏน่ฏๆจกๅAlignBenchไธป่ง่ฏไผฐ
| ๆจกๅๅ็งฐ | ๆฐๅญฆ่ฝๅ | ้ป่พๆจ็ | ๅบๆฌ่ฝๅ | ไธญๆ็่งฃ | ็ปผๅ้ฎ็ญ | ๅไฝ่ฝๅ | ่ง่ฒๆฎๆผ | ไธไธ็ฅ่ฏ | **ๅนณๅ** |
|--------------------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| Baichuan2-13B-Chat | 3.76 | 4.07 | 6.22 | 6.05 | 7.11 | 6.97 | 6.75 | 6.43 | 5.25 |
| Qwen-14B-Chat | **4.91** | **4.71** | **6.90** | 6.36 | 6.74 | 6.64 | 6.59 | 6.56 | **5.72** |
| Llama2-13B-Chat | 3.05 | 3.79 | 5.43 | 4.40 | 6.76 | 6.63 | 6.99 | 5.65 | 4.70 |
| InternLM-20B-Chat | 3.39 | 3.92 | 5.96 | 5.50 | **7.18** | 6.19 | 6.49 | 6.22 | 4.96 |
| **Orion-14B-Chat** | 4.00 | 4.24 | 6.18 | **6.57** | 7.16 | **7.36** | **7.16** | **6.99** | 5.51 |
\*่ฟ้่ฏๆตไฝฟ็จvllm่ฟ่กๆจ็
## 3.3. ้ฟไธไธๆๆจกๅOrion-14B-LongChat่ฏไผฐ
### 3.3.1. ้ฟไธไธๆๆจกๅLongBench่ฏไผฐ
| ๆจกๅๅ็งฐ | NarrativeQA| MultiFieldQA-en| MultiFieldQA-zh | DuReader | QMSum | VCSUM | TREC | TriviaQA | LSHT | RepoBench-P |
|--------------------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| GPT-3.5-Turbo-16k | **23.60** | **52.30** | **61.20** | 28.70 | 23.40 | **16.00** | 68.00 | **91.40** | 29.20 | 53.60 |
| LongChat-v1.5-7B-32k | 16.90 | 41.40 | 29.10 | 19.50 | 22.70 | 9.90 | 63.50 | 82.30 | 23.20 | 55.30 |
| Vicuna-v1.5-7B-16k | 19.40 | 38.50 | 43.00 | 19.30 | 22.80 | 15.10 | 71.50 | 86.20 | 28.80 | 43.50 |
| Yi-6B-200K | 14.11 | 36.74 | 22.68 | 14.01 | 20.44 | 8.08 | 72.00 | 86.61 | 38.00 | **63.29** |
| Orion-14B-LongChat | 19.47 | 48.11 | 55.84 | **37.02** | **24.87** | 15.44 | **77.00** | 89.12 | **45.50** | 54.31 |
## 3.4. ๆฃ็ดขๅขๅผบๆจกๅOrion-14B-Chat-RAG่ฏไผฐ
### 3.4.1. ่ชๅปบๆฃ็ดขๅขๅผบๆต่ฏ้่ฏไผฐ็ปๆ
|ๆจกๅๅ็งฐ|ๅๅคๆๆ(ๅ
ณ้ฎๅญ)|*ๅๅคๆๆ๏ผไธป่งๆๅ๏ผ|ๅผ็จ่ฝๅ|ๅ
ๅบ่ฝๅ|*AutoQA|*ๆฝๅๆฐๆฎ|
|---------------------|------|------|------|------|------|------|
| Baichuan2-13B-Chat | 85 | 76 | 1 | 0 | 69 | 51 |
| Qwen-14B-Chat | 79 | 77 | 75 | 47 | 68 | 72 |
| Qwen-72B-Chat(Int4) | 87 | 89 | 90 | 32 | 67 | 76 |
| GPT-4 | 91 | 94 | 96 | 95 | 75 | 86 |
| Orion-14B-Chat-RAG | 86 | 87 | 91 | 97 | 73 | 71 |
\* ่กจ็คบไบบๅทฅ่ฏๅค็ปๆ
## 3.5. ๆไปถๆจกๅOrion-14B-Chat-Plugin่ฏไผฐ
### 3.5.1. ่ชๅปบๆไปถๆต่ฏ้่ฏไผฐ็ปๆ
| ๆจกๅๅ็งฐ | ๅ
จๅๆฐๆๅพ่ฏๅซ | ็ผบๅๆฐๆๅพ่ฏๅซ | ้ๆไปถ่ฐ็จ่ฏๅซ |
|-----------------------|--------|-----------|--------|
| Baichuan2-13B-Chat | 25 | 0 | 0 |
| Qwen-14B-Chat | 55 | 0 | 50 |
| GPT-4 | **95** | 52.38 | 70 |
| Orion-14B-Chat-Plugin | 92.5 | **60.32** | **90** |
## 3.6. ้ๅๆจกๅOrion-14B-Base-Int4่ฏไผฐ
### 3.6.1. ้ๅๅๅๆดไฝๅฏนๆฏ
|ๆจกๅๅ็งฐ|ๆจกๅๅคงๅฐ(GB)|ๆจ็้ๅบฆ(ไปค็ๆฐ/็ง)|C-Eval |CMMLU |MMLU |RACE | HellaSwag|
|-------------------------|------|-----|------|------|------|------|------|
| OrionStar-14B-Base | 28.0 | 135 | 72.8 | 70.6 | 70.0 | 93.3 | 78.5 |
| OrionStar-14B-Base-Int4 | 8.3 | 178 | 71.8 | 69.8 | 69.2 | 93.1 | 78.0 |
<a name="zh_model-inference"></a><br>
# 4. ๆจกๅๆจ็
ๆจ็ๆ้็ๆจกๅๆ้ใๆบ็ ใ้
็ฝฎๅทฒๅๅธๅจ Hugging Face๏ผไธ่ฝฝ้พๆฅ่งๆฌๆๆกฃๆๅผๅง็่กจๆ ผใๆไปฌๅจๆญค็คบ่ๅค็งๆจ็ๆนๅผใ็จๅบไผ่ชๅจไป Hugging Face ไธ่ฝฝๆ้่ตๆบใ
## 4.1. Python ไปฃ็ ๆนๅผ
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig
tokenizer = AutoTokenizer.from_pretrained("OrionStarAI/Orion-14B", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("OrionStarAI/Orion-14B", device_map="auto",
torch_dtype=torch.bfloat16, trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained("OrionStarAI/Orion-14B")
messages = [{"role": "user", "content": "ไฝ ๅฅฝ! ไฝ ๅซไปไนๅๅญ!"}]
response = model.chat(tokenizer, messages, streaming=False)
print(response)
```
ๅจไธ่ฟฐไธคๆฎตไปฃ็ ไธญ๏ผๆจกๅๅ ่ฝฝๆๅฎ `device_map='auto'`
๏ผไผไฝฟ็จๆๆๅฏ็จๆพๅกใๅฆ้ๆๅฎไฝฟ็จ็่ฎพๅค๏ผๅฏไปฅไฝฟ็จ็ฑปไผผ `export CUDA_VISIBLE_DEVICES=0,1`๏ผไฝฟ็จไบ0ใ1ๅทๆพๅก๏ผ็ๆนๅผๆงๅถใ
## 4.2. ๅฝไปค่กๅทฅๅ
ทๆนๅผ
```shell
CUDA_VISIBLE_DEVICES=0 python cli_demo.py
```
ๆฌๅฝไปค่กๅทฅๅ
ทๆฏไธบ Chat ๅบๆฏ่ฎพ่ฎก๏ผๅ ๆญคๆไปฌไธๆฏๆไฝฟ็จ่ฏฅๅทฅๅ
ท่ฐ็จ Base ๆจกๅใ
## 4.3. ่ๆฌ็ดๆฅๆจ็
```shell
# base model
CUDA_VISIBLE_DEVICES=0 python demo/text_generation_base.py --model OrionStarAI/Orion-14B --tokenizer OrionStarAI/Orion-14B --prompt ไฝ ๅฅฝ,ไฝ ๅซไปไนๅๅญ
# chat model
CUDA_VISIBLE_DEVICES=0 python demo/text_generation.py --model OrionStarAI/Orion-14B-Chat --tokenizer OrionStarAI/Orion-14B-Chat --prompt ไฝ ๅฅฝ,ไฝ ๅซไปไนๅๅญ
```
## 4.4. ็คบไพ่พๅบ
### 4.4.1. ้ฒ่
`````
็จๆท๏ผไฝ ๅฅฝ,ไฝ ๅซไปไนๅๅญ
Orion-14B๏ผไฝ ๅฅฝ!ๆๆฏไธไธชไบบๅทฅๆบ่ฝๅฉๆ,ๆฒกๆๅบๅฎ็ๅๅญใไฝ ๅฏไปฅ้ๆ็งฐๅผๆใๆไปไนๆๅฏไปฅๅธฎๅฉไฝ ็ๅ?
็จๆท๏ผ่ฎฒไธชๆ
ไบ
Orion-14B๏ผไปๅๆไธไธชๅฐ็ทๅญฉๅซๆฐๅ
,ไปไฝๅจไธไธชๅฐๆๅบ้ใๆฐๅ
้ๅธธๅๆฌขๅ้ฉๅๆข็ดข,ๆปๆฏๆขฆๆณ็่ฝๅคๅ็ฐไธไบ็ฅๅฅ็ไบๆ
ใ
ๆไธๅคฉ,ๆฐๅ
ๅฌ่ฏดๅจ้่ฟ็ๆฃฎๆๆทฑๅคๆไธๅบงไผ ่ฏดไธญ็้ญๆณๅๅ ก,้้ขไฝ็ไธไธช้ชๆถ็ๅทจไบบใ่ฟไธชๅทจไบบ็ๅๅญๅซๆ ผ้ๅง,ไป่ขซๅฐๅจๅๅ กไธญ,ๆ ๆณ็ฆปๅผใ
ๆฐๅ
ๅณๅฎ่ฆๅป่งฃๆ่ฟไธชๅทจไบบ,ๅ ไธบไป็ธไฟกๆ ผ้ๅงไธๅฎๆ่ชๅทฑ็ๅๅ ่ขซๅฐๅจ้ฃ้ใไบๆฏ,ไปๅธฆ็ๅๆฐๅๅณๅฟ,่ธไธไบๅๅพ้ญๆณๅๅ ก็ๆ
็จใ
็ป่ฟ้ฟๆถ้ด็่ทๆถ,ๆฐๅ
็ปไบๆฅๅฐไบๅๅ ก็ๅคง้จๅใไปๅฐๅฟ็ฟผ็ฟผๅฐๆจๅผๅคง้จ,่ฟๅ
ฅไบไธไธชๅ
ๆปก็ฅ็งๆฐๆฏ็ๆฟ้ดใ
ๆฟ้ด้ๅผฅๆผซ็้ปๆ็ๆฐๆฏ,ไฝๆฐๅ
ๅนถๆฒกๆ่ขซๅๅใไปๅๆขๅฐๅๅๅ กๆทฑๅค่ตฐๅป,ๆ็ปๆพๅฐไบๆ ผ้ๅงใ
ๆ ผ้ๅงๅ่ฏๆฐๅ
,ไปๆฏ่ขซไธไธช้ชๆถ็ๅฅณๅทซ่ฏ
ๅ็,ๅชๆ้่ฟ่งฃๅผๅ่ฏญๆ่ฝ่ทๅพ่ช็ฑใๆฐๅ
ๅณๅฎ่ฆๅธฎๅฉๆ ผ้ๅง,ไปๅผๅงๅฏปๆพ็บฟ็ดขๅ่งฃๅผๅ่ฏญ็ๆนๆณใ
็ป่ฟไธ็ชๅชๅ,ๆฐๅ
็ปไบๆพๅฐไบไธๆก้ๅพๅฅณๅทซ็็งๅฏๅฎ้ชๅฎค็้่ทฏใไปๅจ้ฃ้ๅ็ฐไบๅฅณๅทซ็ไธ็้ญๆณไนฆ,ไนฆไธญ่ฎฐ่ฝฝไบ่งฃๅผๅ่ฏญ็ๆนๆณใ
ๆฐๅ
ๆ็
งไนฆไธๆๅ็ๆญฅ้ชค่ฟ่กๆไฝ,ๆๅๅฐ่งฃๅผไบๆ ผ้ๅง่บซไธ็่ฏ
ๅใๅทจไบบๆๆฟไธๅทฒ,ไป็ญๅบๅธฎๅฉๆฐๅ
ๅฎ็ฐไป็ๆฟๆใ
ไปๆญคไปฅๅ,ๆฐๅ
ๅๆ ผ้ๅงๆไธบไบๅฅฝๆๅ,ไปไปฌไธ่ตท็ปๅไบ่ฎธๅคๆ้ฉๅบๆฟ็ๅ้ฉ,ๅนถ็จไปไปฌ็ๅๆฐๅๆบๆ
งๆ่ไบๅ็งๅฐ้พใ
่ฟไธชๆ
ไบๅ่ฏๆไปฌ,ๅช่ฆๆๅๆฐๅๅณๅฟ,ๆไปฌๅฐฑ่ฝๅ
ๆไธๅๅฐ้พ,ๅฎ็ฐ่ชๅทฑ็ๆขฆๆณใ
`````
### 4.4.2. ๆฅ้ฉ
`````
็จๆท๏ผ่ชๅทฑใ็ดนไปใใฆใใ ใใ
Orion-14B๏ผใใใซใกใฏใ็งใฎๅๅใฏChatMaxใงใOrionStarใซใใฃใฆ้็บใใใAIใขใทในใฟใณใใงใใ็งใฏใฆใผใถใผใซๆญฃ็ขบใงใๅฎข่ฆณ็ใงใๆ็จใชๅ็ญใๆไพใใๅฝผใใๅ้กใ่งฃๆฑบใใๆ
ๅ ฑใๅๅพใใใฎใๅฉใใใใจใใงใใพใใ
็จๆท๏ผ์๊ธฐ์๊ฐ๋ฅผ ํด์ฃผ์ธ์.
Orion-14B๏ผ์๋
ํ์ธ์, ์ ์ด๋ฆ์ ChatMax์
๋๋ค. OrionStar์์ ๊ฐ๋ฐํ AI ์ด์์คํดํธ์
๋๋ค. ์ ๋ ์ฌ์ฉ์๋ค์๊ฒ ์ ํํ๊ณ  ๊ฐ๊ด์ ์ด๋ฉฐ ์ ์ฉํ ๋ต๋ณ์ ์ ๊ณตํ์ฌ ๋ฌธ์ ๋ฅผ ํด๊ฒฐํ๊ณ  ์ ๋ณด๋ฅผ ์ป๋ ๋ฐ ๋์์ ์ค ์ ์์ต๋๋ค.
`````
<a name="zh_declarations-license"></a><br>
# 5. ๅฃฐๆใๅ่ฎฎ
## 5.1. ๅฃฐๆ
ๆไปฌๅผบ็ๅผๅๆๆไฝฟ็จ่
๏ผไธ่ฆๅฉ็จ Orion-14B ๆจกๅ่ฟ่กไปปไฝๅฑๅฎณๅฝๅฎถ็คพไผๅฎๅ
จๆ่ฟๆณ็ๆดปๅจใๅฆๅค๏ผๆไปฌไน่ฆๆฑไฝฟ็จ่
ไธ่ฆๅฐ Orion-14B ๆจกๅ็จไบๆช็ป้ๅฝๅฎๅ
จๅฎกๆฅๅๅคๆก็ไบ่็ฝๆๅกใ
ๆไปฌๅธๆๆๆ็ไฝฟ็จ่
้ฝ่ฝ้ตๅฎ่ฟไธชๅๅ๏ผ็กฎไฟ็งๆ็ๅๅฑ่ฝๅจ่ง่ๅๅๆณ็็ฏๅขไธ่ฟ่กใ
ๆไปฌๅทฒ็ปๅฐฝๆไปฌๆ่ฝ๏ผๆฅ็กฎไฟๆจกๅ่ฎญ็ป่ฟ็จไธญไฝฟ็จ็ๆฐๆฎ็ๅ่งๆงใ็ถ่๏ผๅฐฝ็ฎกๆไปฌๅทฒ็ปๅๅบไบๅทจๅคง็ๅชๅ๏ผไฝ็ฑไบๆจกๅๅๆฐๆฎ็ๅคๆๆง๏ผไปๆๅฏ่ฝๅญๅจไธไบๆ ๆณ้ข่ง็้ฎ้ขใๅ ๆญค๏ผๅฆๆ็ฑไบไฝฟ็จ Orion-14B ๅผๆบๆจกๅ่ๅฏผ่ด็ไปปไฝ้ฎ้ข๏ผๅ
ๆฌไฝไธ้ไบๆฐๆฎๅฎๅ
จ้ฎ้ขใๅ
ฌๅ
ฑ่่ฎบ้ฃ้ฉ๏ผๆๆจกๅ่ขซ่ฏฏๅฏผใๆปฅ็จใไผ ๆญๆไธๅฝๅฉ็จๆๅธฆๆฅ็ไปปไฝ้ฃ้ฉๅ้ฎ้ข๏ผๆไปฌๅฐไธๆฟๆ
ไปปไฝ่ดฃไปปใ
## 5.2. ๅ่ฎฎ
็คพๅบไฝฟ็จOrion-14B็ณปๅๆจกๅ
- ไปฃ็ ่ฏท้ตๅพช [Apache License Version 2.0](./LICENSE)<br>
- ๆจกๅ่ฏท้ตๅพช [Orion-14B็ณปๅๆจกๅ็คพๅบ่ฎธๅฏๅ่ฎฎ](./ModelsCommunityLicenseAgreement)
<a name="zh_company-introduction"></a><br>
# 6. ไผไธไป็ป
็ๆทๆ็ฉบ๏ผOrionStar๏ผๆฏไธๅฎถๅ
จ็้ขๅ
็ๆๅกๆบๅจไบบ่งฃๅณๆนๆกๅ
ฌๅธ๏ผๆ็ซไบ2016ๅนด9ๆใ็ๆทๆ็ฉบ่ดๅไบๅบไบไบบๅทฅๆบ่ฝๆๆฏๆ้ ไธไธไปฃ้ฉๅฝๆงๆบๅจไบบ๏ผไฝฟไบบไปฌ่ฝๅคๆ่ฑ้ๅค็ไฝๅๅณๅจ๏ผไฝฟไบบ็ฑป็ๅทฅไฝๅ็ๆดปๆดๅ ๆบ่ฝๅๆ่ถฃ๏ผ้่ฟๆๆฏไฝฟ็คพไผๅไธ็ๅๅพๆดๅ ็พๅฅฝใ
็ๆทๆ็ฉบๆฅๆๅฎๅ
จ่ชไธปๅผๅ็ๅ
จ้พๆกไบบๅทฅๆบ่ฝๆๆฏ๏ผๅฆ่ฏญ้ณไบคไบๅ่ง่งๅฏผ่ชใๅฎๆดๅไบไบงๅๅผๅ่ฝๅๅๆๆฏๅบ็จ่ฝๅใๅบไบOrionๆบๆขฐ่ๅนณๅฐ๏ผๅฎๆจๅบไบORION STAR AI Robot GreetingใAI Robot Greeting MiniใLuckiใCoffee Master็ญไบงๅ๏ผๅนถๅปบ็ซไบOrionๆบๅจไบบ็ๅผๆพๅนณๅฐOrionOSใ้่ฟไธบ **็ๆญฃๆ็จ็ๆบๅจไบบ่็** ็็ๅฟตๅฎ่ทต๏ผๅฎ้่ฟAIๆๆฏไธบๆดๅคไบบ่ต่ฝใ
ๅญๅ7ๅนดAI็ป้ช็งฏ็ดฏ๏ผ็ๆทๆ็ฉบๅทฒๆจๅบ็ๅคงๆจกๅๆทฑๅบฆๅบ็จโ่่จโ๏ผๅนถ้็ปญ้ขๅ่กไธๅฎขๆทๆไพๅฎๅถๅAIๅคงๆจกๅๅจ่ฏขไธๆๅก่งฃๅณๆนๆก๏ผ็ๆญฃๅธฎๅฉๅฎขๆทๅฎ็ฐไผไธ็ป่ฅๆ็้ขๅ
ๅ่ก็ฎๆ ใ
**็ๆทๆ็ฉบๅ
ทๅคๅ
จ้พๆกๅคงๆจกๅๅบ็จ่ฝๅ็ๆ ธๅฟไผๅฟ**๏ผๅ
ๆฌๆฅๆไปๆตท้ๆฐๆฎๅค็ใๅคงๆจกๅ้ข่ฎญ็ปใไบๆฌก้ข่ฎญ็ปใๅพฎ่ฐ(Fine-tune)ใPrompt Engineering ใAgentๅผๅ็ๅ
จ้พๆก่ฝๅๅ็ป้ช็งฏ็ดฏ๏ผๆฅๆๅฎๆด็็ซฏๅฐ็ซฏๆจกๅ่ฎญ็ป่ฝๅ๏ผๅ
ๆฌ็ณป็ปๅ็ๆฐๆฎๅค็ๆต็จๅๆฐ็พๅผ GPU็ๅนถ่กๆจกๅ่ฎญ็ป่ฝๅ๏ผ็ฐๅทฒๅจๅคงๆฟๅกใไบๆๅกใๅบๆตท็ตๅใๅฟซๆถ็ญๅคไธช่กไธๅบๆฏ่ฝๅฐใ
***ๆฌข่ฟๆๅคงๆจกๅๅบ็จ่ฝๅฐ้ๆฑ็ไผไธ่็ณปๆไปฌ่ฟ่กๅๅกๅไฝ***<br>
**ๅจ่ฏข็ต่ฏ:** 400-898-7779<br>
**็ตๅญ้ฎ็ฎฑ:** [email protected]
<div align="center">
<img src="./assets/imgs/wechat_group.jpg" alt="wechat" width="40%" />
</div>
|
sosoai/Orion-14B-Chat-safetensors
|
sosoai
| 2024-01-25T02:08:59Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"orion",
"text-generation",
"code",
"model",
"llm",
"conversational",
"custom_code",
"en",
"zh",
"ja",
"ko",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-01-24T23:44:38Z |
---
language:
- en
- zh
- ja
- ko
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- code
- model
- llm
---
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<div align="center">
<img src="./assets/imgs/orion_start.PNG" alt="logo" width="50%" />
</div>
<div align="center">
<h1>
Orion-14B
</h1>
</div>
<div align="center">
<div align="center">
<b>๐English</b> | <a href="https://huggingface.co/OrionStarAI/Orion-14B-Chat/blob/main/README_zh.md" target="_blank">๐จ๐ณไธญๆ</a> | <a href="https://huggingface.co/OrionStarAI/Orion-14B-Chat/blob/main/README_ja.md" target="_blank">๐ฏ๐ตๆฅๆฌ่ช</a> | <a href="https://huggingface.co/OrionStarAI/Orion-14B-Chat/blob/main/README_ko.md" target="_blank">๐ฐ๐ทํ๊ตญ์ด</a>
</div>
<h4 align="center">
<p>
๐ค <a href="https://huggingface.co/OrionStarAI" target="_blank">HuggingFace Mainpage</a> | ๐ค <a href="https://modelscope.cn/organization/OrionStarAI" target="_blank">ModelScope Mainpage</a><br>๐ฌ <a href="https://huggingface.co/spaces/OrionStarAI/Orion-14B-App-Demo" target="_blank">HuggingFace Demo</a> | ๐ซ <a href="https://modelscope.cn/studios/OrionStarAI/Orion-14B-App-Demo/summary" target="_blank">ModelScope Demo</a><br>๐บ <a href="https://github.com/OrionStarAI/Orion" target="_blank">GitHub</a><br>๐ <a href="https://github.com/OrionStarAI/Orion/blob/master/doc/Orion14B_v3.pdf" target="_blank">Tech Report</a>
<p>
</h4>
</div>
# Table of Contents
- [๐ Model Introduction](#model-introduction)
- [๐ Model Download](#model-download)
- [๐ Model Benchmark](#model-benchmark)
- [๐ Model Inference](#model-inference)
- [๐ Declarations & License](#declarations-license)
- [๐ฅ Company Introduction](#company-introduction)
<a name="model-introduction"></a><br>
# 1. Model Introduction
- Orion-14B series models are open-source multilingual large language models trained from scratch by OrionStarAI. The base model is trained on 2.5T multilingual corpus, including Chinese, English, Japanese, Korean, etc, and it exhibits superior performance in these languages. For details, please refer to [tech report](https://github.com/OrionStarAI/Orion/blob/master/doc/Orion14B_v3.pdf).
- The Orion-14B series models exhibit the following features:
- Among models with 20B-parameter scale level, Orion-14B-Base model shows outstanding performance in comprehensive evaluations.
- Strong multilingual capabilities, significantly outperforming in Japanese and Korean testsets.
- The fine-tuned models demonstrate strong adaptability, excelling in human-annotated blind tests.
- The long-chat version supports extremely long texts, performing exceptionally well at a token length of 200k and can support up to a maximum of 320k.
- The quantized versions reduce model size by 70%, improve inference speed by 30%, with performance loss less than 1%.
<table style="border-collapse: collapse; width: 100%;">
<tr>
<td style="border: none; padding: 10px; box-sizing: border-box;">
<img src="./assets/imgs/opencompass_en.png" alt="opencompass" style="width: 100%; height: auto;">
</td>
<td style="border: none; padding: 10px; box-sizing: border-box;">
<img src="./assets/imgs/model_cap_en.png" alt="modelcap" style="width: 100%; height: auto;">
</td>
</tr>
</table>
- Orion-14B series models including:
- **Orion-14B-Base:** A multilingual large language foundational model with 14 billion parameters, pretrained on a diverse dataset of 2.5 trillion tokens.
- **Orion-14B-Chat:** A chat-model fine-tuned on a high-quality corpus aims to provide an excellence interactive experience for users in the large model community.
- **Orion-14B-LongChat:** The long-context version excels at handling extremely lengthy texts, performing exceptionally well at a token length of 200k and can support up to a maximum of 320k.
- **Orion-14B-Chat-RAG:** A chat-model fine-tuned on a custom retrieval augmented generation dataset, achieving superior performance in retrieval augmented generation tasks.
- **Orion-14B-Chat-Plugin:** A chat-model specifically tailored for plugin and function calling tasks, ideal for agent-related scenarios where the LLM acts as a plugin and function call system.
- **Orion-14B-Base-Int4:** A quantized base model utilizing 4-bit integer weights. It significantly reduces the model size by 70% and increases the inference speed by 30% while incurring a minimal performance loss of only 1%.
- **Orion-14B-Chat-Int4:** A quantized chat model utilizing 4-bit integer weights.
<a name="model-download"></a><br>
# 2. Model Download
Model release and download links are provided in the table below:
| Model Name | HuggingFace Download Links | ModelScope Download Links |
|-------------------------|-----------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------|
| โพOrion-14B-Base | [Orion-14B-Base](https://huggingface.co/OrionStarAI/Orion-14B-Base) | [Orion-14B-Base](https://modelscope.cn/models/OrionStarAI/Orion-14B-Base/summary) |
| ๐Orion-14B-Chat | [Orion-14B-Chat](https://huggingface.co/OrionStarAI/Orion-14B-Chat) | [Orion-14B-Chat](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat/summary) |
| ๐Orion-14B-LongChat | [Orion-14B-LongChat](https://huggingface.co/OrionStarAI/Orion-14B-LongChat) | [Orion-14B-LongChat](https://modelscope.cn/models/OrionStarAI/Orion-14B-LongChat/summary) |
| ๐Orion-14B-Chat-RAG | [Orion-14B-Chat-RAG](https://huggingface.co/OrionStarAI/Orion-14B-Chat-RAG) | [Orion-14B-Chat-RAG](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-RAG/summary) |
| ๐Orion-14B-Chat-Plugin | [Orion-14B-Chat-Plugin](https://huggingface.co/OrionStarAI/Orion-14B-Chat-Plugin) | [Orion-14B-Chat-Plugin](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-Plugin/summary) |
| ๐ผOrion-14B-Base-Int4 | [Orion-14B-Base-Int4](https://huggingface.co/OrionStarAI/Orion-14B-Base-Int4) | [Orion-14B-Base-Int4](https://modelscope.cn/models/OrionStarAI/Orion-14B-Base-Int4/summary) |
| ๐ฆOrion-14B-Chat-Int4 | [Orion-14B-Chat-Int4](https://huggingface.co/OrionStarAI/Orion-14B-Chat-Int4) | [Orion-14B-Chat-Int4](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-Int4/summary) |
<a name="model-benchmark"></a><br>
# 3. Model Benchmarks
## 3.1. Base Model Orion-14B-Base Benchmarks
### 3.1.1. LLM evaluation results on examination and professional knowledge
| Model | C-Eval | CMMLU | MMLU | AGIEval | Gaokao | BBH |
|--------------------|----------|----------|----------|----------|----------|----------|
| LLaMA2-13B | 41.4 | 38.4 | 55.0 | 30.9 | 18.2 | 45.6 |
| Skywork-13B | 59.1 | 61.4 | 62.7 | 43.6 | 56.1 | 48.3 |
| Baichuan2-13B | 59.0 | 61.3 | 59.5 | 37.4 | 45.6 | 49.0 |
| QWEN-14B | 71.7 | 70.2 | 67.9 | 51.9 | **62.5** | 53.7 |
| InternLM-20B | 58.8 | 59.0 | 62.1 | 44.6 | 45.5 | 52.5 |
| **Orion-14B-Base** | **72.9** | **70.6** | **69.9** | **54.7** | 62.1 | **56.5** |
### 3.1.2. LLM evaluation results on language understanding and common knowledge
| Model |RACE-middle|RACE-high |HellaSwag | PIQA | Lambada | WSC |
|--------------------|----------|----------|----------|----------|----------|----------|
| LLaMA 2-13B | 63.0 | 58.9 | 77.5 | 79.8 | 76.5 | 66.3 |
| Skywork-13B | 87.6 | 84.1 | 73.7 | 78.3 | 71.8 | 66.3 |
| Baichuan 2-13B | 68.9 | 67.2 | 70.8 | 78.1 | 74.1 | 66.3 |
| QWEN-14B | 93.0 | 90.3 | **80.2** | 79.8 | 71.4 | 66.3 |
| InternLM-20B | 86.4 | 83.3 | 78.1 | **80.3** | 71.8 | 68.3 |
| **Orion-14B-Base** | **93.2** | **91.3** | 78.5 | 79.5 | **78.8** | **70.2** |
### 3.1.3. LLM evaluation results of OpenCompass testsets
| Model | Average | Examination | Language | Knowledge | Understanding | Reasoning |
|------------------|----------|----------|----------|----------|----------|----------|
| LLaMA 2-13B | 47.3 | 45.2 | 47.0 | 58.3 | 50.9 | 43.6 |
| Skywork-13B | 53.6 | 61.1 | 51.3 | 52.7 | 64.5 | 45.2 |
| Baichuan 2-13B | 49.4 | 51.8 | 47.5 | 48.9 | 58.1 | 44.2 |
| QWEN-14B | 62.4 | 71.3 | 52.67 | 56.1 | 68.8 | 60.1 |
| InternLM-20B | 59.4 | 62.5 | 55.0 | **60.1** | 67.3 | 54.9 |
|**Orion-14B-Base**| **64.3** | **71.4** | **55.0** | 60.0 | **71.9** | **61.6** |
### 3.1.4. Comparison of LLM performances on Japanese testsets
| Model |**Average**| JCQA | JNLI | MARC | JSQD | JQK | XLS | XWN | MGSM |
|--------------------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| PLaMo-13B | 52.3 | 56.7 | 42.8 | 95.8 | 70.6 | 71.0 | 8.70 | 70.5 | 2.40 |
| WebLab-10B | 50.7 | 66.6 | 53.7 | 82.1 | 62.9 | 56.2 | 10.0 | 72.0 | 2.40 |
| ELYZA-jp-7B | 48.8 | 71.7 | 25.3 | 86.6 | 70.8 | 64.1 | 2.50 | 62.1 | 7.20 |
| StableLM-jp-7B | 51.1 | 33.4 | 43.3 | **96.7** | 70.6 | 78.1 | 10.7 | 72.8 | 2.80 |
| LLaMA 2-13B | 46.3 | 75.0 | 47.6 | 38.8 | 76.1 | 67.7 | 18.1 | 63.2 | 10.4 |
| Baichuan 2-13B | 57.1 | 73.7 | 31.3 | 91.6 | 80.5 | 63.3 | 18.6 | 72.2 | 25.2 |
| QWEN-14B | 65.8 | 85.9 | 60.7 | 97.0 | 83.3 | 71.8 | 18.8 | 70.6 | 38.0 |
| Yi-34B | 67.1 | 83.8 | 61.2 | 95.2 | **86.1** | 78.5 | **27.2** | 69.2 | 35.2 |
| **Orion-14B-Base** | **69.1** | **88.2** | **75.8** | 94.1 | 75.7 | **85.1** | 17.3 | **78.8** | **38.0** |
### 3.1.5. Comparison of LLM performances on Korean testsets. n = 0 and n = 5 stand for n-shot prompts used in the evaluation
|Model | **Average**<br>n=0 n=5 | HellaSwag<br>n=0 n=5 | COPA<br> n=0 n=5 | BooIQ<br>n=0 n=5 | SentiNeg<br>n=0 n=5|
|------------------|------------------------------|------------------------------|------------------------------|------------------------------|------------------------------|
| KoGPT | 53.0 70.1 | 55.9 58.3 | 73.5 72.9 | 45.1 59.8 | 37.5 89.4 |
| Polyglot-ko-13B | 69.6 73.7 |**59.5** **63.1**|**79.4** **81.1**| 48.2 60.4 | 91.2 90.2 |
| LLaMA 2-13B | 46.7 63.7 | 41.3 44.0 | 59.3 63.8 | 34.9 73.8 | 51.5 73.4 |
| Baichuan 2-13B | 52.1 58.7 | 39.2 39.6 | 60.6 60.6 | 58.4 61.5 | 50.3 72.9 |
| QWEN-14B | 53.8 73.7 | 45.3 46.8 | 64.9 68.9 | 33.4 83.5 | 71.5 95.7 |
| Yi-34B | 54.2 72.1 | 44.6 44.7 | 58.0 60.6 | 65.9 90.2 | 48.3 92.9 |
|**Orion-14B-Chat**|**74.5** **79.6**| 47.0 49.6 | 77.7 79.4 |**81.6** **90.7**|**92.4** **98.7**|
### 3.1.6. Multilingual evaluation
| Model | Train Lang | Japanese | Korean | Chinese | English |
|--------------------|------------|----------|----------|----------|----------|
| PLaMo-13B | En,Jp | 52.3 | * | * | * |
| Weblab-10B | En,Jp | 50.7 | * | * | * |
| ELYZA-jp-7B | En,Jp | 48.8 | * | * | * |
| StableLM-jp-7B | En,Jp | 51.1 | * | * | * |
| KoGPT-6B | En,Ko | * | 70.1 | * | * |
| Polyglot-ko-13B | En,Ko | * | 70.7 | * | * |
| Baichuan2-13B | Multi | 57.1 | 58.7 | 50.8 | 57.1 |
| Qwen-14B | Multi | 65.8 | 73.7 | 64.5 | 65.4 |
| Llama2-13B | Multi | 46.3 | 63.7 | 41.4 | 55.3 |
| Yi-34B | Multi | 67.1 | 72.2 | 58.7 | **68.8** |
| **Orion-14B-Chat** | Multi | **69.1** | **79.5** | **67.9** | 67.3 |
## 3.2. Chat Model Orion-14B-Chat Benchmarks
### 3.2.1. Chat model subjective evaluation of MTBench
| Model | First-Turn | Second-Turn | **Average** |
|----------------------|----------|----------|----------|
| Baichuan2-13B-Chat | 7.05 | 6.47 | 6.76 |
| Qwen-14B-Chat | 7.30 | 6.62 | 6.96 |
| Llama2-13B-Chat | 7.10 | 6.20 | 6.65 |
| InternLM-20B-Chat | 7.03 | 5.93 | 6.48 |
| **Orion-14B-Chat** | **7.68** | **7.07** | **7.37** |
\* use vllm for inference
### 3.2.2. Chat model subjective evaluation of AlignBench
| Model | Math. | Logi. | Basic. | Chi. | Comp. | Writ. | Role. | Prof. |**Avg.**|
|--------------------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| Baichuan2-13B-Chat | 3.76 | 4.07 | 6.22 | 6.05 | 7.11 | 6.97 | 6.75 | 6.43 | 5.25 |
| Qwen-14B-Chat |**4.91**|**4.71**|**6.90**| 6.36 | 6.74 | 6.64 | 6.59 | 6.56 |**5.72**|
| Llama2-13B-Chat | 3.05 | 3.79 | 5.43 | 4.40 | 6.76 | 6.63 | 6.99 | 5.65 | 4.70 |
| InternLM-20B-Chat | 3.39 | 3.92 | 5.96 | 5.50 |**7.18**| 6.19 | 6.49 | 6.22 | 4.96 |
| **Orion-14B-Chat** | 4.00 | 4.24 | 6.18 |**6.57**| 7.16 |**7.36**|**7.16**|**6.99**| 5.51 |
\* use vllm for inference
## 3.3. LongChat Model Orion-14B-LongChat Benchmarks
### 3.3.1. LongChat evaluation of LongBench
| Model | NarrativeQA|MultiFieldQA-en|MultiFieldQA-zh| DuReader | QMSum | VCSUM | TREC | TriviaQA | LSHT |RepoBench-P|
|--------------------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| GPT-3.5-Turbo-16k | **23.60** | **52.30** | **61.20** | 28.70 | 23.40 | **16.00** | 68.00 | **91.40** | 29.20 | 53.60 |
| LongChat-v1.5-7B-32k | 16.90 | 41.40 | 29.10 | 19.50 | 22.70 | 9.90 | 63.50 | 82.30 | 23.20 | 55.30 |
| Vicuna-v1.5-7B-16k | 19.40 | 38.50 | 43.00 | 19.30 | 22.80 | 15.10 | 71.50 | 86.20 | 28.80 | 43.50 |
| Yi-6B-200K | 14.11 | 36.74 | 22.68 | 14.01 | 20.44 | 8.08 | 72.00 | 86.61 | 38.00 | **63.29** |
| Orion-14B-LongChat | 19.47 | 48.11 | 55.84 | **37.02** | **24.87** | 15.44 | **77.00** | 89.12 | **45.50** | 54.31 |
## 3.4. Chat RAG Model Benchmarks
### 3.4.1. LLM evaluation results of self-built RAG testsets
|Model|Effectiveness of Response(Keyword)|*Effectiveness of Response๏ผsubjective evaluation๏ผ|Quoting Ability|Fallback Ability|*AutoQA|*Data Extraction|
|---------------------|------|------|------|------|------|------|
| Baichuan2-13B-Chat | 85 | 76 | 1 | 0 | 69 | 51 |
| Qwen-14B-Chat | 79 | 77 | 75 | 47 | 68 | 72 |
| Qwen-72B-Chat(Int4) | 87 | 89 | 90 | 32 | 67 | 76 |
| GPT-4 | 91 | 94 | 96 | 95 | 75 | 86 |
| Orion-14B-Chat-RAG | 86 | 87 | 91 | 97 | 73 | 71 |
\* means manual assessment
## 3.5. Chat Plugin Model Orion-14B-Chat-Plugin Benchmarks
### 3.5.1. LLM evaluation results of self-built plugin testsets
|Model |Intent Recognition with Full Params |Intent Recognition with Missing Params |Non-Plugin Invocation Recognition |
|-----------------------|--------|-----------|--------|
| Baichuan2-13B-Chat | 25 | 0 | 0 |
| Qwen-14B-Chat | 55 | 0 | 50 |
| GPT-4 | **95** | 52.38 | 70 |
| Orion-14B-Chat-Plugin | 92.5 | **60.32** | **90** |
## 3.6. Quantized Model Orion-14B-Base-Int4 Benchmarks
### 3.6.1. Comparison of before and after quantization
|Model |Size(GB)|Inference Speed(tokens/s)|C-Eval|CMMLU|MMLU|RACE|HellaSwag|
|-------------------------|-------|-----|------|------|------|------|------|
| OrionStar-14B-Base | 28.0 | 135 | 72.8 | 70.6 | 70.0 | 93.3 | 78.5 |
| OrionStar-14B-Base-Int4 | 8.3 | 178 | 71.8 | 69.8 | 69.2 | 93.1 | 78.0 |
<a name="model-inference"></a><br>
# 4. Model Inference
Model weights, source code, and configuration needed for inference are published on Hugging Face, and the download link
is available in the table at the beginning of this document. We demonstrate various inference methods here, and the
program will automatically download the necessary resources from Hugging Face.
## 4.1. Python Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig
tokenizer = AutoTokenizer.from_pretrained("OrionStarAI/Orion-14B", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("OrionStarAI/Orion-14B", device_map="auto",
torch_dtype=torch.bfloat16, trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained("OrionStarAI/Orion-14B")
messages = [{"role": "user", "content": "Hello, what is your name? "}]
response = model.chat(tokenizer, messages, streaming=False)
print(response)
```
In the above Python code, the model is loaded with `device_map='auto'` to utilize all available GPUs. To specify the
device, you can use something like `export CUDA_VISIBLE_DEVICES=0,1` (using GPUs 0 and 1).
## 4.2. Command Line Tool
```shell
CUDA_VISIBLE_DEVICES=0 python cli_demo.py
```
This command-line tool is designed for chat scenarios, and thus, it does not support calling the base model.
## 4.3. Direct Script Inference
```shell
# base model
CUDA_VISIBLE_DEVICES=0 python demo/text_generation_base.py --model OrionStarAI/Orion-14B --tokenizer OrionStarAI/Orion-14B --prompt hello
# chat model
CUDA_VISIBLE_DEVICES=0 python demo/text_generation.py --model OrionStarAI/Orion-14B-Chat --tokenizer OrionStarAI/Orion-14B-Chat --prompt hi
```
## 4.4 Example Output
### 4.4.1. Casual Chat
`````
User: Hello
Orion-14B: Hello! How can I help you today? If you have any questions or need assistance, feel free to ask.
User: What functions do you have?
Orion-14B: As an AI language model, I have a wide range of functions and capabilities, including but not limited to:
Answering general knowledge questions: I can provide information on various topics such as history, science, geography, and more.
Providing definitions and explanations: I can explain complex concepts and provide definitions for words and phrases.
Assisting with tasks: I can help with simple tasks like setting reminders, converting units of measurement, and even ordering food from restaurants.
Generating text: I can write short stories, essays, articles, and other types of content based on prompts given to me.
Translating languages: I can translate text from one language to another.
Chatting: I can engage in conversation with users and respond to their queries in a conversational manner.
Summarizing long texts: I can summarize longer texts like articles, books, and research papers.
Generating random numbers and names: I can generate random numbers within a specified range and also suggest names based on certain criteria.
Playing games: I can play simple games like Hangman, Trivia, and more.
Providing recommendations: I can recommend movies, music, books, and other forms of entertainment based on user preferences.
These are just some of the many functions that I am capable of performing. If you have a specific task in mind, feel free to ask me about it.
User: Tell me a joke.
Orion-14B: Sure, here's a classic one-liner: Why don't scientists trust atoms? Because they make up everything.
`````
### 4.4.2. Japanese & Korean Chat
`````
User：自己紹介してください
Orion-14B：こんにちは、私の名前はChatMaxで、OrionStarによって開発されたAIアシスタントです。私はユーザーに正確で、客観的で、有用な回答を提供し、彼らが問題を解決し、情報を取得するのを助けることができます。
User：자기소개를 해주세요.
Orion-14B：안녕하세요, 제 이름은 ChatMax입니다. OrionStar에서 개발한 AI 어시스턴트입니다. 저는 사용자들에게 정확하고 객관적이며 유용한 답변을 제공하여 문제를 해결하고 정보를 얻는 데 도움을 줄 수 있습니다.
`````
<a name="declarations-license"></a><br>
# 5. Declarations, License
## 5.1. Declarations
We strongly urge all users not to use the Orion-14B model for any activities that may harm national or social security or violate the law.
Additionally, we request users not to use the Orion-14B model for internet services without proper security review and filing.
We hope all users abide by this principle to ensure that technological development takes place in a regulated and legal environment.
We have done our best to ensure the compliance of the data used in the model training process. However, despite our
significant efforts, unforeseen issues may still arise due to the complexity of the model and data. Therefore, if any
problems arise due to the use of the Orion-14B open-source model, including but not limited to data security
issues, public opinion risks, or any risks and issues arising from the model being misled, abused, disseminated, or
improperly utilized, we will not assume any responsibility.
## 5.2. License
Community use of the Orion-14B series models
- For code, please comply with [Apache License Version 2.0](./LICENSE)<br>
- For model, please comply with [ใOrion-14B Seriesใ Models Community License Agreement](./ModelsCommunityLicenseAgreement)
<a name="company-introduction"></a><br>
# 6. Company Introduction
OrionStar is a leading global service robot solutions company, founded in September 2016. OrionStar is dedicated to
using artificial intelligence technology to create the next generation of revolutionary robots, allowing people to break
free from repetitive physical labor and making human work and life more intelligent and enjoyable. Through technology,
OrionStar aims to make society and the world a better place.
OrionStar possesses fully self-developed end-to-end artificial intelligence technologies, such as voice interaction and
visual navigation. It integrates product development capabilities and technological application capabilities. Based on
the Orion robotic arm platform, it has launched products such as OrionStar AI Robot Greeting, AI Robot Greeting Mini,
Lucki, Coffee Master, and established the open platform OrionOS for Orion robots. Following the philosophy of "Born for
Truly Useful Robots", OrionStar empowers more people through AI technology.
**The core strengths of OrionStar lie in possessing end-to-end AI application capabilities,** including big data preprocessing, large model pretraining, fine-tuning, prompt engineering, agent development, etc. With comprehensive end-to-end model training capabilities, including systematic data processing workflows and the parallel model training capability of hundreds of GPUs, it has been successfully applied in various industry scenarios such as government affairs, cloud services, international e-commerce, and fast-moving consumer goods.
Companies with demands for deploying large-scale model applications are welcome to contact us.<br>
**Enquiry Hotline: 400-898-7779**<br>
**E-mail: [email protected]**
<div align="center">
<img src="./assets/imgs/wechat_group.jpg" alt="wechat" width="40%" />
</div>
# ็ฎๅฝ
- [๐ ๆจกๅไป็ป](#zh_model-introduction)
- [๐ ไธ่ฝฝ่ทฏๅพ](#zh_model-download)
- [๐ ่ฏไผฐ็ปๆ](#zh_model-benchmark)
- [๐ ๆจกๅๆจ็](#zh_model-inference)
- [๐ ๅฃฐๆๅ่ฎฎ](#zh_declarations-license)
- [๐ฅ ไผไธไป็ป](#zh_company-introduction)
<a name="zh_model-introduction"></a><br>
# 1. ๆจกๅไป็ป
- Orion-14B-Base是一个具有140亿参数的多语种大模型，该模型在一个包含2.5万亿token的多样化数据集上进行了训练，涵盖了中文、英语、日语、韩语等多种语言。在多语言环境下的一系列任务中展现出卓越的性能。在主流的公开基准评测中，Orion-14B系列模型表现优异，多项指标显著超越同等参数基本的其他模型。具体技术细节请参考[技术报告](https://github.com/OrionStarAI/Orion/blob/master/doc/Orion14B_v3.pdf)。
- Orion-14B็ณปๅๅคงๆจกๅๆไปฅไธๅ ไธช็น็น๏ผ
- ๅบๅบง20Bๅๆฐ็บงๅซๅคงๆจกๅ็ปผๅ่ฏๆตๆๆ่กจ็ฐไผๅผ
- ๅค่ฏญ่จ่ฝๅๅผบ๏ผๅจๆฅ่ฏญใ้ฉ่ฏญๆต่ฏ้ไธๆพ่้ขๅ
- ๅพฎ่ฐๆจกๅ้ๅบๆงๅผบ๏ผๅจไบบ็ฑปๆ ๆณจ็ฒๆตไธญ๏ผ่กจ็ฐ็ชๅบ
- 长上下文版本支持超长文本，在200k token长度上效果优异，最长可支持可达320k
- ้ๅ็ๆฌๆจกๅๅคงๅฐ็ผฉๅฐ70%๏ผๆจ็้ๅบฆๆๅ30%๏ผๆง่ฝๆๅคฑๅฐไบ1%
<table style="border-collapse: collapse; width: 100%;">
<tr>
<td style="border: none; padding: 10px; box-sizing: border-box;">
<img src="./assets/imgs/opencompass_zh.png" alt="opencompass" style="width: 100%; height: auto;">
</td>
<td style="border: none; padding: 10px; box-sizing: border-box;">
<img src="./assets/imgs/model_cap_zh.png" alt="modelcap" style="width: 100%; height: auto;">
</td>
</tr>
</table>
- 具体而言，Orion-14B系列大语言模型包含:
  - **Orion-14B-Base:** 基于2.5万亿tokens多样化数据集训练出的140亿参数量级的多语言基座模型。
  - **Orion-14B-Chat:** 基于高质量语料库微调的对话类模型，旨在为大模型社区提供更好的用户交互体验。
  - **Orion-14B-LongChat:** 在200k token长度上效果优异，最长可支持可达320k，在长文本评估集上性能比肩专有模型。
  - **Orion-14B-Chat-RAG:** 在一个定制的检索增强生成数据集上进行微调的聊天模型，在检索增强生成任务中取得了卓越的性能。
  - **Orion-14B-Chat-Plugin:** 专门针对插件和函数调用任务定制的聊天模型，非常适用于使用代理的相关场景，其中大语言模型充当插件和函数调用系统。
  - **Orion-14B-Base-Int4:** 一个使用int4进行量化的基座模型。它将模型大小显著减小了70%，同时提高了推理速度30%，仅引入了1%的最小性能损失。
  - **Orion-14B-Chat-Int4:** 一个使用int4进行量化的对话模型。
<a name="zh_model-download"></a><br>
# 2. ไธ่ฝฝ่ทฏๅพ
ๅๅธๆจกๅๅไธ่ฝฝ้พๆฅ่งไธ่กจ๏ผ
| ๆจกๅๅ็งฐ | HuggingFaceไธ่ฝฝ้พๆฅ | ModelScopeไธ่ฝฝ้พๆฅ |
|---------------------|-----------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------|
| โพ ๅบๅบงๆจกๅ | [Orion-14B-Base](https://huggingface.co/OrionStarAI/Orion-14B-Base) | [Orion-14B-Base](https://modelscope.cn/models/OrionStarAI/Orion-14B-Base/summary) |
| ๐ ๅฏน่ฏๆจกๅ | [Orion-14B-Chat](https://huggingface.co/OrionStarAI/Orion-14B-Chat) | [Orion-14B-Chat](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat/summary) |
| ๐ ้ฟไธไธๆๆจกๅ | [Orion-14B-LongChat](https://huggingface.co/OrionStarAI/Orion-14B-LongChat) | [Orion-14B-LongChat](https://modelscope.cn/models/OrionStarAI/Orion-14B-LongChat/summary) |
| ๐ ๆฃ็ดขๅขๅผบๆจกๅ | [Orion-14B-Chat-RAG](https://huggingface.co/OrionStarAI/Orion-14B-Chat-RAG) | [Orion-14B-Chat-RAG](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-RAG/summary) |
| ๐ ๆไปถๆจกๅ | [Orion-14B-Chat-Plugin](https://huggingface.co/OrionStarAI/Orion-14B-Chat-Plugin) | [Orion-14B-Chat-Plugin](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-Plugin/summary)|
| ๐ผ ๅบๅบงInt4้ๅๆจกๅ | [Orion-14B-Base-Int4](https://huggingface.co/OrionStarAI/Orion-14B-Base-Int4) | [Orion-14B-Base-Int4](https://modelscope.cn/models/OrionStarAI/Orion-14B-Base-Int4/summary) |
| ๐ฆ ๅฏน่ฏInt4้ๅๆจกๅ | [Orion-14B-Chat-Int4](https://huggingface.co/OrionStarAI/Orion-14B-Chat-Int4) | [Orion-14B-Chat-Int4](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-Int4/summary) |
<a name="zh_model-benchmark"></a><br>
# 3. ่ฏไผฐ็ปๆ
## 3.1. ๅบๅบงๆจกๅOrion-14B-Base่ฏไผฐ
### 3.1.1. ไธไธ็ฅ่ฏไธ่ฏ้ข่ฏไผฐ็ปๆ
| ๆจกๅๅ็งฐ | C-Eval | CMMLU | MMLU | AGIEval | Gaokao | BBH |
|--------------------|----------|----------|----------|----------|----------|----------|
| LLaMA2-13B | 41.4 | 38.4 | 55.0 | 30.9 | 18.2 | 45.6 |
| Skywork-13B | 59.1 | 61.4 | 62.7 | 43.6 | 56.1 | 48.3 |
| Baichuan2-13B | 59.0 | 61.3 | 59.5 | 37.4 | 45.6 | 49.0 |
| QWEN-14B | 71.7 | 70.2 | 67.9 | 51.9 | **62.5** | 53.7 |
| InternLM-20B | 58.8 | 59.0 | 62.1 | 44.6 | 45.5 | 52.5 |
| **Orion-14B-Base** | **72.9** | **70.6** | **69.9** | **54.7** | 62.1 | **56.5** |
### 3.1.2. ็่งฃไธ้่ฏ่ฏไผฐ็ปๆ
| ๆจกๅๅ็งฐ |RACE-middle|RACE-high| HellaSwag| PIQA | Lambada | WSC |
|--------------------|----------|----------|----------|----------|----------|----------|
| LLaMA 2-13B | 63.0 | 58.9 | 77.5 | 79.8 | 76.5 | 66.3 |
| Skywork-13B | 87.6 | 84.1 | 73.7 | 78.3 | 71.8 | 66.3 |
| Baichuan 2-13B | 68.9 | 67.2 | 70.8 | 78.1 | 74.1 | 66.3 |
| QWEN-14B | 93.0 | 90.3 | **80.2** | 79.8 | 71.4 | 66.3 |
| InternLM-20B | 86.4 | 83.3 | 78.1 | **80.3** | 71.8 | 68.3 |
| **Orion-14B-Base** | **93.2** | **91.3** | 78.5 | 79.5 | **78.8** | **70.2** |
### 3.1.3. OpenCompass่ฏๆต้่ฏไผฐ็ปๆ
| ๆจกๅๅ็งฐ | Average | Examination | Language | Knowledge | Understanding | Reasoning |
|------------------|----------|----------|----------|----------|----------|----------|
| LLaMA 2-13B | 47.3 | 45.2 | 47.0 | 58.3 | 50.9 | 43.6 |
| Skywork-13B | 53.6 | 61.1 | 51.3 | 52.7 | 64.5 | 45.2 |
| Baichuan 2-13B | 49.4 | 51.8 | 47.5 | 48.9 | 58.1 | 44.2 |
| QWEN-14B | 62.4 | 71.3 | 52.67 | 56.1 | 68.8 | 60.1 |
| InternLM-20B | 59.4 | 62.5 | 55.0 | **60.1** | 67.3 | 54.9 |
|**Orion-14B-Base**| **64.3** | **71.4** | **55.0** | 60.0 | **71.9** | **61.6** |
### 3.1.4. ๆฅ่ฏญๆต่ฏ้่ฏไผฐ็ปๆ
| ๆจกๅๅ็งฐ |**Average**| JCQA | JNLI | MARC | JSQD | JQK | XLS | XWN | MGSM |
|--------------------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| PLaMo-13B | 52.3 | 56.7 | 42.8 | 95.8 | 70.6 | 71.0 | 8.70 | 70.5 | 2.40 |
| WebLab-10B | 50.7 | 66.6 | 53.7 | 82.1 | 62.9 | 56.2 | 10.0 | 72.0 | 2.40 |
| ELYZA-jp-7B | 48.8 | 71.7 | 25.3 | 86.6 | 70.8 | 64.1 | 2.50 | 62.1 | 7.20 |
| StableLM-jp-7B | 51.1 | 33.4 | 43.3 | **96.7** | 70.6 | 78.1 | 10.7 | 72.8 | 2.80 |
| LLaMA 2-13B | 46.3 | 75.0 | 47.6 | 38.8 | 76.1 | 67.7 | 18.1 | 63.2 | 10.4 |
| Baichuan 2-13B | 57.1 | 73.7 | 31.3 | 91.6 | 80.5 | 63.3 | 18.6 | 72.2 | 25.2 |
| QWEN-14B | 65.8 | 85.9 | 60.7 | 97.0 | 83.3 | 71.8 | 18.8 | 70.6 | 38.0 |
| Yi-34B | 67.1 | 83.8 | 61.2 | 95.2 | **86.1** | 78.5 | **27.2** | 69.2 | 35.2 |
| **Orion-14B-Base** | **69.1** | **88.2** | **75.8** | 94.1 | 75.7 | **85.1** | 17.3 | **78.8** | **38.0** |
### 3.1.5. ้ฉ่ฏญๆต่ฏ้n-shot่ฏไผฐ็ปๆ
| ๆจกๅๅ็งฐ | **Average**<br>n=0 n=5 | HellaSwag<br>n=0 n=5 | COPA<br> n=0 n=5 | BooIQ<br>n=0 n=5 | SentiNeg<br>n=0 n=5|
|------------------|------------------------------|------------------------------|------------------------------|------------------------------|------------------------------|
| KoGPT | 53.0 70.1 | 55.9 58.3 | 73.5 72.9 | 45.1 59.8 | 37.5 89.4 |
| Polyglot-ko-13B | 69.6 73.7 |**59.5** **63.1**|**79.4** **81.1**| 48.2 60.4 | 91.2 90.2 |
| LLaMA 2-13B | 46.7 63.7 | 41.3 44.0 | 59.3 63.8 | 34.9 73.8 | 51.5 73.4 |
| Baichuan 2-13B | 52.1 58.7 | 39.2 39.6 | 60.6 60.6 | 58.4 61.5 | 50.3 72.9 |
| QWEN-14B | 53.8 73.7 | 45.3 46.8 | 64.9 68.9 | 33.4 83.5 | 71.5 95.7 |
| Yi-34B | 54.2 72.1 | 44.6 44.7 | 58.0 60.6 | 65.9 90.2 | 48.3 92.9 |
|**Orion-14B-Base**|**74.5** **79.6**| 47.0 49.6 | 77.7 79.4 |**81.6** **90.7**|**92.4** **98.7**|
### 3.1.6. ๅค่ฏญ่จ่ฏไผฐ็ปๆ
| ๆจกๅๅ็งฐ | Train Lang | Japanese | Korean | Chinese | English |
|--------------------|------------|----------|----------|----------|----------|
| PLaMo-13B | En,Jp | 52.3 | * | * | * |
| Weblab-10B | En,Jp | 50.7 | * | * | * |
| ELYZA-jp-7B | En,Jp | 48.8 | * | * | * |
| StableLM-jp-7B | En,Jp | 51.1 | * | * | * |
| KoGPT-6B | En,Ko | * | 70.1 | * | * |
| Polyglot-ko-13B | En,Ko | * | 70.7 | * | * |
| Baichuan2-13B | Multi | 57.1 | 58.7 | 50.8 | 57.1 |
| Qwen-14B | Multi | 65.8 | 73.7 | 64.5 | 65.4 |
| Llama2-13B | Multi | 46.3 | 63.7 | 41.4 | 55.3 |
| Yi-34B | Multi | 67.1 | 72.2 | 58.7 | **68.8** |
| **Orion-14B-Base** | Multi | **69.1** | **79.5** | **67.9** | 67.3 |
## 3.2. ๅฏน่ฏๆจกๅOrion-14B-Chat่ฏไผฐ
### 3.2.1. ๅฏน่ฏๆจกๅMTBenchไธป่ง่ฏไผฐ
| ๆจกๅๅ็งฐ | ็ฌฌไธ่ฝฎ | ็ฌฌไบ่ฝฎ | **ๅนณๅ** |
|----------------------|----------|----------|----------|
| Baichuan2-13B-Chat | 7.05 | 6.47 | 6.76 |
| Qwen-14B-Chat | 7.30 | 6.62 | 6.96 |
| Llama2-13B-Chat | 7.10 | 6.20 | 6.65 |
| InternLM-20B-Chat | 7.03 | 5.93 | 6.48 |
| **Orion-14B-Chat** | **7.68** | **7.07** | **7.37** |
\*่ฟ้่ฏๆตไฝฟ็จvllm่ฟ่กๆจ็
### 3.2.2. ๅฏน่ฏๆจกๅAlignBenchไธป่ง่ฏไผฐ
| ๆจกๅๅ็งฐ | ๆฐๅญฆ่ฝๅ | ้ป่พๆจ็ | ๅบๆฌ่ฝๅ | ไธญๆ็่งฃ | ็ปผๅ้ฎ็ญ | ๅไฝ่ฝๅ | ่ง่ฒๆฎๆผ | ไธไธ็ฅ่ฏ | **ๅนณๅ** |
|--------------------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| Baichuan2-13B-Chat | 3.76 | 4.07 | 6.22 | 6.05 | 7.11 | 6.97 | 6.75 | 6.43 | 5.25 |
| Qwen-14B-Chat | **4.91** | **4.71** | **6.90** | 6.36 | 6.74 | 6.64 | 6.59 | 6.56 | **5.72** |
| Llama2-13B-Chat | 3.05 | 3.79 | 5.43 | 4.40 | 6.76 | 6.63 | 6.99 | 5.65 | 4.70 |
| InternLM-20B-Chat | 3.39 | 3.92 | 5.96 | 5.50 | **7.18** | 6.19 | 6.49 | 6.22 | 4.96 |
| **Orion-14B-Chat** | 4.00 | 4.24 | 6.18 | **6.57** | 7.16 | **7.36** | **7.16** | **6.99** | 5.51 |
\*่ฟ้่ฏๆตไฝฟ็จvllm่ฟ่กๆจ็
## 3.3. ้ฟไธไธๆๆจกๅOrion-14B-LongChat่ฏไผฐ
### 3.3.1. ้ฟไธไธๆๆจกๅLongBench่ฏไผฐ
| ๆจกๅๅ็งฐ | NarrativeQA| MultiFieldQA-en| MultiFieldQA-zh | DuReader | QMSum | VCSUM | TREC | TriviaQA | LSHT | RepoBench-P |
|--------------------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| GPT-3.5-Turbo-16k | **23.60** | **52.30** | **61.20** | 28.70 | 23.40 | **16.00** | 68.00 | **91.40** | 29.20 | 53.60 |
| LongChat-v1.5-7B-32k | 16.90 | 41.40 | 29.10 | 19.50 | 22.70 | 9.90 | 63.50 | 82.30 | 23.20 | 55.30 |
| Vicuna-v1.5-7B-16k | 19.40 | 38.50 | 43.00 | 19.30 | 22.80 | 15.10 | 71.50 | 86.20 | 28.80 | 43.50 |
| Yi-6B-200K | 14.11 | 36.74 | 22.68 | 14.01 | 20.44 | 8.08 | 72.00 | 86.61 | 38.00 | **63.29** |
| Orion-14B-LongChat | 19.47 | 48.11 | 55.84 | **37.02** | **24.87** | 15.44 | **77.00** | 89.12 | **45.50** | 54.31 |
## 3.4. ๆฃ็ดขๅขๅผบๆจกๅOrion-14B-Chat-RAG่ฏไผฐ
### 3.4.1. ่ชๅปบๆฃ็ดขๅขๅผบๆต่ฏ้่ฏไผฐ็ปๆ
|模型名称|回复效果(关键字)|*回复效果(主观打分)|引用能力|兜底能力|*AutoQA|*抽取数据|
|---------------------|------|------|------|------|------|------|
| Baichuan2-13B-Chat | 85 | 76 | 1 | 0 | 69 | 51 |
| Qwen-14B-Chat | 79 | 77 | 75 | 47 | 68 | 72 |
| Qwen-72B-Chat(Int4) | 87 | 89 | 90 | 32 | 67 | 76 |
| GPT-4 | 91 | 94 | 96 | 95 | 75 | 86 |
| Orion-14B-Chat-RAG | 86 | 87 | 91 | 97 | 73 | 71 |
\* ่กจ็คบไบบๅทฅ่ฏๅค็ปๆ
## 3.5. ๆไปถๆจกๅOrion-14B-Chat-Plugin่ฏไผฐ
### 3.5.1. ่ชๅปบๆไปถๆต่ฏ้่ฏไผฐ็ปๆ
| 模型名称 | 全参数意图识别 | 缺参数意图识别 | 非插件调用识别 |
|-----------------------|--------|-----------|--------|
| Baichuan2-13B-Chat | 25 | 0 | 0 |
| Qwen-14B-Chat | 55 | 0 | 50 |
| GPT-4 | **95** | 52.38 | 70 |
| Orion-14B-Chat-Plugin | 92.5 | **60.32** | **90** |
## 3.6. ้ๅๆจกๅOrion-14B-Base-Int4่ฏไผฐ
### 3.6.1. ้ๅๅๅๆดไฝๅฏนๆฏ
|ๆจกๅๅ็งฐ|ๆจกๅๅคงๅฐ(GB)|ๆจ็้ๅบฆ(ไปค็ๆฐ/็ง)|C-Eval |CMMLU |MMLU |RACE | HellaSwag|
|-------------------------|------|-----|------|------|------|------|------|
| OrionStar-14B-Base | 28.0 | 135 | 72.8 | 70.6 | 70.0 | 93.3 | 78.5 |
| OrionStar-14B-Base-Int4 | 8.3 | 178 | 71.8 | 69.8 | 69.2 | 93.1 | 78.0 |
<a name="zh_model-inference"></a><br>
# 4. 模型推理
推理所需的模型权重、源码、配置已发布在 Hugging Face，下载链接见本文档最开始的表格。我们在此示范多种推理方式。程序会自动从 Hugging Face 下载所需资源。
## 4.1. Python ไปฃ็ ๆนๅผ
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig
tokenizer = AutoTokenizer.from_pretrained("OrionStarAI/Orion-14B", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("OrionStarAI/Orion-14B", device_map="auto",
torch_dtype=torch.bfloat16, trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained("OrionStarAI/Orion-14B")
messages = [{"role": "user", "content": "ไฝ ๅฅฝ! ไฝ ๅซไปไนๅๅญ!"}]
response = model.chat(tokenizer, messages, streaming=False)
print(response)
```
在上述代码中，模型加载指定 `device_map='auto'`，会使用所有可用显卡。如需指定使用的设备，可以使用类似 `export CUDA_VISIBLE_DEVICES=0,1`（使用了0、1号显卡）的方式控制。
## 4.2. 命令行工具方式
```shell
CUDA_VISIBLE_DEVICES=0 python cli_demo.py
```
本命令行工具是为 Chat 场景设计，因此我们不支持使用该工具调用 Base 模型。
## 4.3. ่ๆฌ็ดๆฅๆจ็
```shell
# base model
CUDA_VISIBLE_DEVICES=0 python demo/text_generation_base.py --model OrionStarAI/Orion-14B --tokenizer OrionStarAI/Orion-14B --prompt ไฝ ๅฅฝ,ไฝ ๅซไปไนๅๅญ
# chat model
CUDA_VISIBLE_DEVICES=0 python demo/text_generation.py --model OrionStarAI/Orion-14B-Chat --tokenizer OrionStarAI/Orion-14B-Chat --prompt ไฝ ๅฅฝ,ไฝ ๅซไปไนๅๅญ
```
## 4.4. ็คบไพ่พๅบ
### 4.4.1. ้ฒ่
`````
็จๆท๏ผไฝ ๅฅฝ,ไฝ ๅซไปไนๅๅญ
Orion-14B๏ผไฝ ๅฅฝ!ๆๆฏไธไธชไบบๅทฅๆบ่ฝๅฉๆ,ๆฒกๆๅบๅฎ็ๅๅญใไฝ ๅฏไปฅ้ๆ็งฐๅผๆใๆไปไนๆๅฏไปฅๅธฎๅฉไฝ ็ๅ?
็จๆท๏ผ่ฎฒไธชๆ
ไบ
Orion-14B๏ผไปๅๆไธไธชๅฐ็ทๅญฉๅซๆฐๅ
,ไปไฝๅจไธไธชๅฐๆๅบ้ใๆฐๅ
้ๅธธๅๆฌขๅ้ฉๅๆข็ดข,ๆปๆฏๆขฆๆณ็่ฝๅคๅ็ฐไธไบ็ฅๅฅ็ไบๆ
ใ
ๆไธๅคฉ,ๆฐๅ
ๅฌ่ฏดๅจ้่ฟ็ๆฃฎๆๆทฑๅคๆไธๅบงไผ ่ฏดไธญ็้ญๆณๅๅ ก,้้ขไฝ็ไธไธช้ชๆถ็ๅทจไบบใ่ฟไธชๅทจไบบ็ๅๅญๅซๆ ผ้ๅง,ไป่ขซๅฐๅจๅๅ กไธญ,ๆ ๆณ็ฆปๅผใ
ๆฐๅ
ๅณๅฎ่ฆๅป่งฃๆ่ฟไธชๅทจไบบ,ๅ ไธบไป็ธไฟกๆ ผ้ๅงไธๅฎๆ่ชๅทฑ็ๅๅ ่ขซๅฐๅจ้ฃ้ใไบๆฏ,ไปๅธฆ็ๅๆฐๅๅณๅฟ,่ธไธไบๅๅพ้ญๆณๅๅ ก็ๆ
็จใ
็ป่ฟ้ฟๆถ้ด็่ทๆถ,ๆฐๅ
็ปไบๆฅๅฐไบๅๅ ก็ๅคง้จๅใไปๅฐๅฟ็ฟผ็ฟผๅฐๆจๅผๅคง้จ,่ฟๅ
ฅไบไธไธชๅ
ๆปก็ฅ็งๆฐๆฏ็ๆฟ้ดใ
ๆฟ้ด้ๅผฅๆผซ็้ปๆ็ๆฐๆฏ,ไฝๆฐๅ
ๅนถๆฒกๆ่ขซๅๅใไปๅๆขๅฐๅๅๅ กๆทฑๅค่ตฐๅป,ๆ็ปๆพๅฐไบๆ ผ้ๅงใ
ๆ ผ้ๅงๅ่ฏๆฐๅ
,ไปๆฏ่ขซไธไธช้ชๆถ็ๅฅณๅทซ่ฏ
ๅ็,ๅชๆ้่ฟ่งฃๅผๅ่ฏญๆ่ฝ่ทๅพ่ช็ฑใๆฐๅ
ๅณๅฎ่ฆๅธฎๅฉๆ ผ้ๅง,ไปๅผๅงๅฏปๆพ็บฟ็ดขๅ่งฃๅผๅ่ฏญ็ๆนๆณใ
็ป่ฟไธ็ชๅชๅ,ๆฐๅ
็ปไบๆพๅฐไบไธๆก้ๅพๅฅณๅทซ็็งๅฏๅฎ้ชๅฎค็้่ทฏใไปๅจ้ฃ้ๅ็ฐไบๅฅณๅทซ็ไธ็้ญๆณไนฆ,ไนฆไธญ่ฎฐ่ฝฝไบ่งฃๅผๅ่ฏญ็ๆนๆณใ
ๆฐๅ
ๆ็
งไนฆไธๆๅ็ๆญฅ้ชค่ฟ่กๆไฝ,ๆๅๅฐ่งฃๅผไบๆ ผ้ๅง่บซไธ็่ฏ
ๅใๅทจไบบๆๆฟไธๅทฒ,ไป็ญๅบๅธฎๅฉๆฐๅ
ๅฎ็ฐไป็ๆฟๆใ
ไปๆญคไปฅๅ,ๆฐๅ
ๅๆ ผ้ๅงๆไธบไบๅฅฝๆๅ,ไปไปฌไธ่ตท็ปๅไบ่ฎธๅคๆ้ฉๅบๆฟ็ๅ้ฉ,ๅนถ็จไปไปฌ็ๅๆฐๅๆบๆ
งๆ่ไบๅ็งๅฐ้พใ
่ฟไธชๆ
ไบๅ่ฏๆไปฌ,ๅช่ฆๆๅๆฐๅๅณๅฟ,ๆไปฌๅฐฑ่ฝๅ
ๆไธๅๅฐ้พ,ๅฎ็ฐ่ชๅทฑ็ๆขฆๆณใ
`````
### 4.4.2. ๆฅ้ฉ
`````
用户：自己紹介してください
Orion-14B：こんにちは、私の名前はChatMaxで、OrionStarによって開発されたAIアシスタントです。私はユーザーに正確で、客観的で、有用な回答を提供し、彼らが問題を解決し、情報を取得するのを助けることができます。
用户：자기소개를 해주세요.
Orion-14B：안녕하세요, 제 이름은 ChatMax입니다. OrionStar에서 개발한 AI 어시스턴트입니다. 저는 사용자들에게 정확하고 객관적이며 유용한 답변을 제공하여 문제를 해결하고 정보를 얻는 데 도움을 줄 수 있습니다.
`````
<a name="zh_declarations-license"></a><br>
# 5. ๅฃฐๆใๅ่ฎฎ
## 5.1. 声明
我们强烈呼吁所有使用者，不要利用 Orion-14B 模型进行任何危害国家社会安全或违法的活动。另外，我们也要求使用者不要将 Orion-14B 模型用于未经适当安全审查和备案的互联网服务。
我们希望所有的使用者都能遵守这个原则，确保科技的发展能在规范和合法的环境下进行。
我们已经尽我们所能，来确保模型训练过程中使用的数据的合规性。然而，尽管我们已经做出了巨大的努力，但由于模型和数据的复杂性，仍有可能存在一些无法预见的问题。因此，如果由于使用 Orion-14B 开源模型而导致的任何问题，包括但不限于数据安全问题、公共舆论风险，或模型被误导、滥用、传播或不当利用所带来的任何风险和问题，我们将不承担任何责任。
## 5.2. ๅ่ฎฎ
็คพๅบไฝฟ็จOrion-14B็ณปๅๆจกๅ
- ไปฃ็ ่ฏท้ตๅพช [Apache License Version 2.0](./LICENSE)<br>
- ๆจกๅ่ฏท้ตๅพช [Orion-14B็ณปๅๆจกๅ็คพๅบ่ฎธๅฏๅ่ฎฎ](./ModelsCommunityLicenseAgreement)
<a name="zh_company-introduction"></a><br>
# 6. 企业介绍
猎户星空（OrionStar）是一家全球领先的服务机器人解决方案公司，成立于2016年9月。猎户星空致力于基于人工智能技术打造下一代革命性机器人，使人们能够摆脱重复的体力劳动，使人类的工作和生活更加智能和有趣，通过技术使社会和世界变得更加美好。
猎户星空拥有完全自主开发的全链条人工智能技术，如语音交互和视觉导航。它整合了产品开发能力和技术应用能力。基于Orion机械臂平台，它推出了ORION STAR AI Robot Greeting、AI Robot Greeting Mini、Lucki、Coffee Master等产品，并建立了Orion机器人的开放平台OrionOS。通过为 **真正有用的机器人而生** 的理念实践，它通过AI技术为更多人赋能。
凭借7年AI经验积累，猎户星空已推出大模型深度应用“聚言”，并陆续面向行业客户提供定制化AI大模型咨询与服务解决方案，真正帮助客户实现企业经营效率领先同行目标。
**猎户星空具备全链条大模型应用能力的核心优势**，包括拥有从海量数据处理、大模型预训练、二次预训练、微调(Fine-tune)、Prompt Engineering、Agent开发的全链条能力和经验积累；拥有完整的端到端模型训练能力，包括系统化的数据处理流程和数百张GPU的并行模型训练能力，现已在大政务、云服务、出海电商、快消等多个行业场景落地。
***ๆฌข่ฟๆๅคงๆจกๅๅบ็จ่ฝๅฐ้ๆฑ็ไผไธ่็ณปๆไปฌ่ฟ่กๅๅกๅไฝ***<br>
**ๅจ่ฏข็ต่ฏ:** 400-898-7779<br>
**็ตๅญ้ฎ็ฎฑ:** [email protected]
<div align="center">
<img src="./assets/imgs/wechat_group.jpg" alt="wechat" width="40%" />
</div>
|
limin-arc/Mem-Beagle-7b-slerp-v1
|
limin-arc
| 2024-01-25T01:56:57Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"starsnatched/MemGPT",
"mlabonne/Beagle14-7B",
"conversational",
"base_model:minchyeom/MemGPT",
"base_model:merge:minchyeom/MemGPT",
"base_model:mlabonne/Beagle14-7B",
"base_model:merge:mlabonne/Beagle14-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T01:49:21Z |
---
tags:
- merge
- mergekit
- lazymergekit
- starsnatched/MemGPT
- mlabonne/Beagle14-7B
base_model:
- starsnatched/MemGPT
- mlabonne/Beagle14-7B
---
# Mem-Beagle-7b-slerp-v1
Mem-Beagle-7b-slerp-v1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [starsnatched/MemGPT](https://huggingface.co/starsnatched/MemGPT)
* [mlabonne/Beagle14-7B](https://huggingface.co/mlabonne/Beagle14-7B)
## ๐งฉ Configuration
```yaml
slices:
- sources:
- model: starsnatched/MemGPT
layer_range: [0, 32]
- model: mlabonne/Beagle14-7B
layer_range: [0, 32]
merge_method: slerp
base_model: starsnatched/MemGPT
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## ๐ป Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "liminerity/Mem-Beagle-7b-slerp-v1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
Nicolas852/ppo-LunarLander-v2
|
Nicolas852
| 2024-01-25T01:56:31Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-25T01:56:10Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.77 +/- 36.60
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
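Until the snippet above is filled in, here is a rough sketch of how such a checkpoint is usually loaded and evaluated with `huggingface_sb3` and Stable-Baselines3; the zipped checkpoint filename inside the repo is an assumption and may differ:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (the .zip filename below is assumed).
checkpoint = load_from_hub(
    repo_id="Nicolas852/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Roll out the policy for a few episodes and report the mean reward.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```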
|
tiagoblima/mt5_base-qg-ap-peft-oficial
|
tiagoblima
| 2024-01-25T01:54:24Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"dataset:tiagoblima/du-qg-squadv1_pt",
"base_model:google/mt5-base",
"base_model:finetune:google/mt5-base",
"license:apache-2.0",
"region:us"
] | null | 2024-01-24T02:26:00Z |
---
license: apache-2.0
base_model: google/mt5-base
tags:
- generated_from_trainer
datasets:
- tiagoblima/du-qg-squadv1_pt
model-index:
- name: mt5_base-qg-ap-peft-oficial
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5_base-qg-ap-peft-oficial
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the tiagoblima/du-qg-squadv1_pt dataset.
It achieves the following results on the evaluation set:
- Loss: 6.6924
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 10.2785 | 1.0 | 2367 | 7.4566 |
| 9.7078 | 2.0 | 4734 | 7.0581 |
| 9.4851 | 3.0 | 7101 | 6.8360 |
| 9.2737 | 4.0 | 9468 | 6.7291 |
| 9.3588 | 5.0 | 11835 | 6.6924 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.1
|
graceneutrality/Reinforce-stuff
|
graceneutrality
| 2024-01-25T01:50:07Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-25T01:49:57Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-stuff
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 77.60 +/- 53.83
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
riyasankhe/vit-base-beans-tutorial
|
riyasankhe
| 2024-01-25T01:42:37Z | 180 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-25T01:39:43Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-beans-tutorial
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans-tutorial
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0180
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 1787
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0469 | 1.54 | 100 | 0.1320 | 0.9624 |
| 0.0186 | 3.08 | 200 | 0.0315 | 0.9925 |
| 0.0084 | 4.62 | 300 | 0.0180 | 0.9925 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
traethethird/ppo-Huggy
|
traethethird
| 2024-01-25T01:33:37Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-01-25T01:33:24Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog ๐ถ to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
 https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: traethethird/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play ๐
|
sharadsin/Mixtral-8x7B-sql-ft-v1
|
sharadsin
| 2024-01-25T01:30:39Z | 5 | 1 |
peft
|
[
"peft",
"safetensors",
"en",
"dataset:knowrohit07/know_sql",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"base_model:adapter:mistralai/Mixtral-8x7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-01-03T01:14:40Z |
---
library_name: peft
base_model: mistralai/Mixtral-8x7B-v0.1
license: apache-2.0
datasets:
- knowrohit07/know_sql
language:
- en
---
<!---->
<img src="project-9.png" width="50%" height="50%" >
## SQL-Converter Mixtral 8x7B v0.1
**Convert Natural Language to SQL**
### Overview
Mixtral-8x7B-sql-ft-v1 is fine-tuned from Mixtral 8x7B to convert natural language to SQL queries.
### Base Model
mistralai/Mixtral-8x7B-v0.1
### Fine-Tuning
- **Dataset**: 5,000 natural language-SQL pairs.
### Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel
import torch
base_model_id = 'mistralai/Mixtral-8x7B-v0.1'
adapter_id = 'sharadsin/Mixtral-8x7B-sql-ft-v1'
bnb_config = BitsAndBytesConfig(
load_in_4bit = True,
bnb_4bit_use_double_quant = True,
bnb_4bit_compute_dtype = torch.bfloat16,
bnb_4bit_quant_type = "nf4",
)
base_model = AutoModelForCausalLM.from_pretrained(
base_model_id,
quantization_config = bnb_config,
device_map = "auto",
trust_remote_code = True,
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id, add_bos_token = True, trust_remote_code = True)
ft_model = PeftModel.from_pretrained(base_model, adapter_id)
eval_prompt= """SYSTEM: Use the following contextual information to concisely answer the question.
USER: CREATE TABLE EmployeeInfo (EmpID INTEGER, EmpFname VARCHAR, EmpLname VARCHAR, Department VARCHAR, Project VARCHAR,Address VARCHAR, DOB DATE, Gender CHAR)
===
Write a query to fetch details of employees whose EmpLname ends with an alphabet 'A' and contains five alphabets?
ASSISTANT:"""
model_input = tokenizer(eval_prompt, return_tensors="pt").to("cuda")
ft_model.eval()
with torch.inference_mode():
print(tokenizer.decode(ft_model.generate(**model_input, max_new_tokens=70,top_k=4, penalty_alpha = 0.6, repetition_penalty=1.15)[0], skip_special_tokens= False))
```
### Limitations
- Less accurate with very complex queries.
- Generates extra gibberish content after providing the answers.
### Framework versions
- PEFT 0.7.1
|
ntc-ai/SDXL-LoRA-slider.heavy-inking
|
ntc-ai
| 2024-01-25T01:27:02Z | 52 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-25T01:26:59Z |
---
language:
- en
thumbnail: "images/evaluate/heavy inking.../heavy inking_17_3.0.png"
widget:
- text: heavy inking
output:
url: images/heavy inking_17_3.0.png
- text: heavy inking
output:
url: images/heavy inking_19_3.0.png
- text: heavy inking
output:
url: images/heavy inking_20_3.0.png
- text: heavy inking
output:
url: images/heavy inking_21_3.0.png
- text: heavy inking
output:
url: images/heavy inking_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "heavy inking"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - heavy inking (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/heavy inking_17_-3.0.png" width=256 height=256 /> | <img src="images/heavy inking_17_0.0.png" width=256 height=256 /> | <img src="images/heavy inking_17_3.0.png" width=256 height=256 /> |
| <img src="images/heavy inking_19_-3.0.png" width=256 height=256 /> | <img src="images/heavy inking_19_0.0.png" width=256 height=256 /> | <img src="images/heavy inking_19_3.0.png" width=256 height=256 /> |
| <img src="images/heavy inking_20_-3.0.png" width=256 height=256 /> | <img src="images/heavy inking_20_0.0.png" width=256 height=256 /> | <img src="images/heavy inking_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
heavy inking
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.heavy-inking', weight_name='heavy inking.safetensors', adapter_name="heavy inking")
# Activate the LoRA
pipe.set_adapters(["heavy inking"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, heavy inking"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of more than 1,140 unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
tenyaiida/ArcherEMIYA
|
tenyaiida
| 2024-01-25T01:20:26Z | 0 | 1 | null |
[
"voice",
"license:mit",
"region:us"
] | null | 2023-07-06T21:05:42Z |
---
license: mit
tags:
- voice
---
# Voice model of Archer / EMIYA from Fate/Stay Night
<!-- Provide a quick summary of what the model is/does. -->
The voice model was trained with Mangio-Crepe, using clips from FGO and FGO Arcade.
## Model Details
Update 2024/1/3: The voice has gotten better since the dataset has been updated.
Please use RMVPE when sampling with speaking clips that involve some distortion such as reverb, even though the model was trained with Mangio-Crepe.
When singing, it's better to use Mangio-Crepe.
With other clips, I can't tell a difference between RMVPE and Mangio-Crepe.
### Warnings
They may not come out as amazing as they should, so extra tweaking may be necessary.
Archer has a very low voice, and using it with higher voice models may not sound correct without pitch correction.
|
ashercn97/avi-1.1b
|
ashercn97
| 2024-01-25T01:17:05Z | 11 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"he",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T20:05:15Z |
---
license: apache-2.0
language:
- he
- en
pipeline_tag: text-generation
---
# Avi 1.1B
Avi is a fine-tune of TinyLlama. While TinyLlama originally only spoke English, Avi was trained on Hebrew data, so it speaks Hebrew.
## Data
Avi was trained on the Platypus and Alpaca-GPT4 datasets, machine translated into Hebrew. While I know this is not the optimal approach, it is what I did.
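## Usage
A minimal sketch for generating text with this model, assuming it loads through the standard `transformers` causal-LM classes under the repo id `ashercn97/avi-1.1b` (the prompt and sampling parameters are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ashercn97/avi-1.1b"  # repo id of this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "שלום! ספר לי משהו מעניין."  # example Hebrew prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```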
|
Kwabena/results
|
Kwabena
| 2024-01-25T00:48:43Z | 0 | 0 | null |
[
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-01-25T00:41:09Z |
---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0307
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1301 | 0.05 | 20 | 0.1112 |
| 0.14 | 0.09 | 40 | 0.1021 |
| 0.1582 | 0.14 | 60 | 0.1935 |
| 0.1106 | 0.19 | 80 | 0.1298 |
| 0.1646 | 0.23 | 100 | 0.1226 |
| 0.0748 | 0.28 | 120 | 0.0889 |
| 0.1169 | 0.33 | 140 | 0.0966 |
| 0.1127 | 0.38 | 160 | 0.0706 |
| 0.0899 | 0.42 | 180 | 0.0696 |
| 0.1202 | 0.47 | 200 | 0.1355 |
| 0.0491 | 0.52 | 220 | 0.0529 |
| 0.0931 | 0.56 | 240 | 0.0466 |
| 0.1068 | 0.61 | 260 | 0.0695 |
| 0.0545 | 0.66 | 280 | 0.0383 |
| 0.0346 | 0.7 | 300 | 0.0307 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
asun17904/glue-mnli-bert-base-uncased-regularized-l2
|
asun17904
| 2024-01-25T00:44:43Z | 0 | 0 |
pytorch
|
[
"pytorch",
"en",
"license:mit",
"region:us"
] | null | 2024-01-24T07:08:38Z |
---
language: en
license: mit
library_name: pytorch
---
# Knowledge Continuity Regularized Network
Dataset: GLUE
Trainer Hyperparameters:
- `lr` = 5e-05
- `per_device_batch_size` = 16
- `gradient_accumulation_steps` = 1
- `weight_decay` = 1e-09
- `seed` = 42
Regularization Hyperparameters
- `numerical stability denominator constant` = 0.01
- `lambda` = 0.02
- `alpha` = 2.0
- `beta` = 1.0
Extended Logs:
|eval_loss|eval_accuracy|epoch|
|--|--|--|
|24.363|0.790|1.0|
|23.650|0.813|2.0|
|
DouglasPontes/2020-Q2-75p-filtered_combined75
|
DouglasPontes
| 2024-01-25T00:43:54Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:DouglasPontes/2020-Q1-filtered_tweets",
"base_model:finetune:DouglasPontes/2020-Q1-filtered_tweets",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-20T17:45:37Z |
---
base_model: DouglasPontes/2020-Q1-filtered_tweets
tags:
- generated_from_trainer
model-index:
- name: 2020-Q2-75p-filtered
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2020-Q2-75p-filtered
This model is a fine-tuned version of [DouglasPontes/2020-Q1-filtered_tweets](https://huggingface.co/DouglasPontes/2020-Q1-filtered_tweets) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2312
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.1e-07
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2400000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| No log | 0.02 | 8000 | 2.4865 |
| 2.6592 | 0.04 | 16000 | 2.4598 |
| 2.6592 | 0.07 | 24000 | 2.4472 |
| 2.6211 | 0.09 | 32000 | 2.4341 |
| 2.6211 | 0.11 | 40000 | 2.4223 |
| 2.6048 | 0.13 | 48000 | 2.4217 |
| 2.6048 | 0.16 | 56000 | 2.4184 |
| 2.5861 | 0.18 | 64000 | 2.4062 |
| 2.5861 | 0.2 | 72000 | 2.3919 |
| 2.5736 | 0.22 | 80000 | 2.3896 |
| 2.5736 | 0.25 | 88000 | 2.3951 |
| 2.5559 | 0.27 | 96000 | 2.3903 |
| 2.5559 | 0.29 | 104000 | 2.3836 |
| 2.5551 | 0.31 | 112000 | 2.3749 |
| 2.5551 | 0.34 | 120000 | 2.3794 |
| 2.5371 | 0.36 | 128000 | 2.3733 |
| 2.5371 | 0.38 | 136000 | 2.3703 |
| 2.5417 | 0.4 | 144000 | 2.3662 |
| 2.5417 | 0.43 | 152000 | 2.3728 |
| 2.5316 | 0.45 | 160000 | 2.3643 |
| 2.5316 | 0.47 | 168000 | 2.3568 |
| 2.5296 | 0.49 | 176000 | 2.3555 |
| 2.5296 | 0.52 | 184000 | 2.3506 |
| 2.5215 | 0.54 | 192000 | 2.3482 |
| 2.5215 | 0.56 | 200000 | 2.3514 |
| 2.5274 | 0.58 | 208000 | 2.3531 |
| 2.5274 | 0.61 | 216000 | 2.3463 |
| 2.5215 | 0.63 | 224000 | 2.3470 |
| 2.5215 | 0.65 | 232000 | 2.3407 |
| 2.5096 | 0.67 | 240000 | 2.3400 |
| 2.5096 | 0.7 | 248000 | 2.3402 |
| 2.5176 | 0.72 | 256000 | 2.3308 |
| 2.5176 | 0.74 | 264000 | 2.3342 |
| 2.5048 | 0.76 | 272000 | 2.3333 |
| 2.5048 | 0.79 | 280000 | 2.3288 |
| 2.4979 | 0.81 | 288000 | 2.3298 |
| 2.4979 | 0.83 | 296000 | 2.3237 |
| 2.4963 | 0.85 | 304000 | 2.3266 |
| 2.4963 | 0.88 | 312000 | 2.3197 |
| 2.4972 | 0.9 | 320000 | 2.3271 |
| 2.4972 | 0.92 | 328000 | 2.3275 |
| 2.4969 | 0.94 | 336000 | 2.3210 |
| 2.4969 | 0.97 | 344000 | 2.3222 |
| 2.4961 | 0.99 | 352000 | 2.3242 |
| 2.4961 | 1.01 | 360000 | 2.3155 |
| 2.49 | 1.03 | 368000 | 2.3175 |
| 2.49 | 1.06 | 376000 | 2.3076 |
| 2.4847 | 1.08 | 384000 | 2.3138 |
| 2.4847 | 1.1 | 392000 | 2.3183 |
| 2.4767 | 1.12 | 400000 | 2.3118 |
| 2.4767 | 1.15 | 408000 | 2.3152 |
| 2.4788 | 1.17 | 416000 | 2.3089 |
| 2.4788 | 1.19 | 424000 | 2.3051 |
| 2.4738 | 1.21 | 432000 | 2.3102 |
| 2.4738 | 1.24 | 440000 | 2.3069 |
| 2.4635 | 1.26 | 448000 | 2.3004 |
| 2.4635 | 1.28 | 456000 | 2.3066 |
| 2.4828 | 1.3 | 464000 | 2.3078 |
| 2.4828 | 1.32 | 472000 | 2.3072 |
| 2.4675 | 1.35 | 480000 | 2.3073 |
| 2.4675 | 1.37 | 488000 | 2.3014 |
| 2.4676 | 1.39 | 496000 | 2.2987 |
| 2.4676 | 1.41 | 504000 | 2.2988 |
| 2.4678 | 1.44 | 512000 | 2.2971 |
| 2.4678 | 1.46 | 520000 | 2.2969 |
| 2.4634 | 1.48 | 528000 | 2.2990 |
| 2.4634 | 1.5 | 536000 | 2.2869 |
| 2.4657 | 1.53 | 544000 | 2.2936 |
| 2.4657 | 1.55 | 552000 | 2.2915 |
| 2.4607 | 1.57 | 560000 | 2.2903 |
| 2.4607 | 1.59 | 568000 | 2.2934 |
| 2.4558 | 1.62 | 576000 | 2.2845 |
| 2.4558 | 1.64 | 584000 | 2.2897 |
| 2.4662 | 1.66 | 592000 | 2.2928 |
| 2.4662 | 1.68 | 600000 | 2.2861 |
| 2.4658 | 1.71 | 608000 | 2.2883 |
| 2.4658 | 1.73 | 616000 | 2.2878 |
| 2.4533 | 1.75 | 624000 | 2.2892 |
| 2.4533 | 1.77 | 632000 | 2.2886 |
| 2.4575 | 1.8 | 640000 | 2.2894 |
| 2.4575 | 1.82 | 648000 | 2.2871 |
| 2.4565 | 1.84 | 656000 | 2.2798 |
| 2.4565 | 1.86 | 664000 | 2.2877 |
| 2.4548 | 1.89 | 672000 | 2.2859 |
| 2.4548 | 1.91 | 680000 | 2.2787 |
| 2.4507 | 1.93 | 688000 | 2.2780 |
| 2.4507 | 1.95 | 696000 | 2.2826 |
| 2.4455 | 1.98 | 704000 | 2.2838 |
| 2.4455 | 2.0 | 712000 | 2.2764 |
| 2.4516 | 2.02 | 720000 | 2.2814 |
| 2.4516 | 2.04 | 728000 | 2.2807 |
| 2.445 | 2.07 | 736000 | 2.2740 |
| 2.445 | 2.09 | 744000 | 2.2780 |
| 2.4466 | 2.11 | 752000 | 2.2775 |
| 2.4466 | 2.13 | 760000 | 2.2783 |
| 2.4476 | 2.16 | 768000 | 2.2763 |
| 2.4476 | 2.18 | 776000 | 2.2737 |
| 2.4449 | 2.2 | 784000 | 2.2753 |
| 2.4449 | 2.22 | 792000 | 2.2762 |
| 2.4424 | 2.25 | 800000 | 2.2767 |
| 2.4424 | 2.27 | 808000 | 2.2702 |
| 2.4528 | 2.29 | 816000 | 2.2655 |
| 2.4528 | 2.31 | 824000 | 2.2727 |
| 2.4523 | 2.34 | 832000 | 2.2733 |
| 2.4523 | 2.36 | 840000 | 2.2654 |
| 2.4395 | 2.38 | 848000 | 2.2674 |
| 2.4395 | 2.4 | 856000 | 2.2754 |
| 2.434 | 2.43 | 864000 | 2.2722 |
| 2.434 | 2.45 | 872000 | 2.2666 |
| 2.4407 | 2.47 | 880000 | 2.2656 |
| 2.4407 | 2.49 | 888000 | 2.2654 |
| 2.4352 | 2.52 | 896000 | 2.2630 |
| 2.4352 | 2.54 | 904000 | 2.2662 |
| 2.4393 | 2.56 | 912000 | 2.2692 |
| 2.4393 | 2.58 | 920000 | 2.2558 |
| 2.4378 | 2.61 | 928000 | 2.2619 |
| 2.4378 | 2.63 | 936000 | 2.2614 |
| 2.4392 | 2.65 | 944000 | 2.2578 |
| 2.4392 | 2.67 | 952000 | 2.2672 |
| 2.437 | 2.69 | 960000 | 2.2598 |
| 2.437 | 2.72 | 968000 | 2.2633 |
| 2.4388 | 2.74 | 976000 | 2.2566 |
| 2.4388 | 2.76 | 984000 | 2.2551 |
| 2.4386 | 2.78 | 992000 | 2.2606 |
| 2.4386 | 2.81 | 1000000 | 2.2634 |
| 2.4402 | 2.83 | 1008000 | 2.2641 |
| 2.4402 | 2.85 | 1016000 | 2.2619 |
| 2.4442 | 2.87 | 1024000 | 2.2584 |
| 2.4442 | 2.9 | 1032000 | 2.2579 |
| 2.4327 | 2.92 | 1040000 | 2.2523 |
| 2.4327 | 2.94 | 1048000 | 2.2562 |
| 2.4289 | 2.96 | 1056000 | 2.2593 |
| 2.4289 | 2.99 | 1064000 | 2.2562 |
| 2.4319 | 3.01 | 1072000 | 2.2536 |
| 2.4319 | 3.03 | 1080000 | 2.2603 |
| 2.4174 | 3.05 | 1088000 | 2.2549 |
| 2.4174 | 3.08 | 1096000 | 2.2595 |
| 2.4155 | 3.1 | 1104000 | 2.2555 |
| 2.4155 | 3.12 | 1112000 | 2.2501 |
| 2.427 | 3.14 | 1120000 | 2.2528 |
| 2.427 | 3.17 | 1128000 | 2.2529 |
| 2.4222 | 3.19 | 1136000 | 2.2536 |
| 2.4222 | 3.21 | 1144000 | 2.2582 |
| 2.4232 | 3.23 | 1152000 | 2.2522 |
| 2.4232 | 3.26 | 1160000 | 2.2525 |
| 2.4252 | 3.28 | 1168000 | 2.2538 |
| 2.4252 | 3.3 | 1176000 | 2.2512 |
| 2.4209 | 3.32 | 1184000 | 2.2557 |
| 2.4209 | 3.35 | 1192000 | 2.2445 |
| 2.4243 | 3.37 | 1200000 | 2.2570 |
| 2.4243 | 3.39 | 1208000 | 2.2539 |
| 2.4278 | 3.41 | 1216000 | 2.2514 |
| 2.4278 | 3.44 | 1224000 | 2.2454 |
| 2.4286 | 3.46 | 1232000 | 2.2463 |
| 2.4286 | 3.48 | 1240000 | 2.2506 |
| 2.4274 | 3.5 | 1248000 | 2.2427 |
| 2.4274 | 3.53 | 1256000 | 2.2535 |
| 2.4201 | 3.55 | 1264000 | 2.2517 |
| 2.4201 | 3.57 | 1272000 | 2.2436 |
| 2.4233 | 3.59 | 1280000 | 2.2430 |
| 2.4233 | 3.62 | 1288000 | 2.2470 |
| 2.4183 | 3.64 | 1296000 | 2.2446 |
| 2.4183 | 3.66 | 1304000 | 2.2539 |
| 2.428 | 3.68 | 1312000 | 2.2492 |
| 2.428 | 3.71 | 1320000 | 2.2544 |
| 2.4206 | 3.73 | 1328000 | 2.2478 |
| 2.4206 | 3.75 | 1336000 | 2.2420 |
| 2.4287 | 3.77 | 1344000 | 2.2442 |
| 2.4287 | 3.8 | 1352000 | 2.2426 |
| 2.4297 | 3.82 | 1360000 | 2.2426 |
| 2.4297 | 3.84 | 1368000 | 2.2481 |
| 2.4185 | 3.86 | 1376000 | 2.2449 |
| 2.4185 | 3.89 | 1384000 | 2.2468 |
| 2.4217 | 3.91 | 1392000 | 2.2467 |
| 2.4217 | 3.93 | 1400000 | 2.2463 |
| 2.4144 | 3.95 | 1408000 | 2.2482 |
| 2.4144 | 3.97 | 1416000 | 2.2424 |
| 2.4175 | 4.0 | 1424000 | 2.2415 |
| 2.4175 | 4.02 | 1432000 | 2.2451 |
| 2.4169 | 4.04 | 1440000 | 2.2443 |
| 2.4169 | 4.06 | 1448000 | 2.2389 |
| 2.4142 | 4.09 | 1456000 | 2.2377 |
| 2.4142 | 4.11 | 1464000 | 2.2399 |
| 2.4122 | 4.13 | 1472000 | 2.2447 |
| 2.4122 | 4.15 | 1480000 | 2.2456 |
| 2.4166 | 4.18 | 1488000 | 2.2451 |
| 2.4166 | 4.2 | 1496000 | 2.2369 |
| 2.4165 | 4.22 | 1504000 | 2.2426 |
| 2.4165 | 4.24 | 1512000 | 2.2384 |
| 2.4204 | 4.27 | 1520000 | 2.2454 |
| 2.4204 | 4.29 | 1528000 | 2.2422 |
| 2.4192 | 4.31 | 1536000 | 2.2423 |
| 2.4192 | 4.33 | 1544000 | 2.2435 |
| 2.4167 | 4.36 | 1552000 | 2.2451 |
| 2.4167 | 4.38 | 1560000 | 2.2443 |
| 2.4124 | 4.4 | 1568000 | 2.2430 |
| 2.4124 | 4.42 | 1576000 | 2.2422 |
| 2.406 | 4.45 | 1584000 | 2.2357 |
| 2.406 | 4.47 | 1592000 | 2.2395 |
| 2.4166 | 4.49 | 1600000 | 2.2378 |
| 2.4166 | 4.51 | 1608000 | 2.2420 |
| 2.4144 | 4.54 | 1616000 | 2.2402 |
| 2.4144 | 4.56 | 1624000 | 2.2384 |
| 2.4219 | 4.58 | 1632000 | 2.2438 |
| 2.4219 | 4.6 | 1640000 | 2.2455 |
| 2.4061 | 4.63 | 1648000 | 2.2397 |
| 2.4061 | 4.65 | 1656000 | 2.2354 |
| 2.411 | 4.67 | 1664000 | 2.2393 |
| 2.411 | 4.69 | 1672000 | 2.2388 |
| 2.4125 | 4.72 | 1680000 | 2.2406 |
| 2.4125 | 4.74 | 1688000 | 2.2330 |
| 2.4092 | 4.76 | 1696000 | 2.2336 |
| 2.4092 | 4.78 | 1704000 | 2.2398 |
| 2.4078 | 4.81 | 1712000 | 2.2368 |
| 2.4078 | 4.83 | 1720000 | 2.2361 |
| 2.4185 | 4.85 | 1728000 | 2.2378 |
| 2.4185 | 4.87 | 1736000 | 2.2339 |
| 2.4088 | 4.9 | 1744000 | 2.2366 |
| 2.4088 | 4.92 | 1752000 | 2.2385 |
| 2.4095 | 4.94 | 1760000 | 2.2337 |
| 2.4095 | 4.96 | 1768000 | 2.2413 |
| 2.4078 | 4.99 | 1776000 | 2.2377 |
| 2.4078 | 5.01 | 1784000 | 2.2302 |
| 2.4073 | 5.03 | 1792000 | 2.2357 |
| 2.4073 | 5.05 | 1800000 | 2.2384 |
| 2.4073 | 5.08 | 1808000 | 2.2322 |
| 2.4073 | 5.1 | 1816000 | 2.2344 |
| 2.4043 | 5.12 | 1824000 | 2.2327 |
| 2.4043 | 5.14 | 1832000 | 2.2350 |
| 2.4082 | 5.17 | 1840000 | 2.2376 |
| 2.4082 | 5.19 | 1848000 | 2.2363 |
| 2.4073 | 5.21 | 1856000 | 2.2323 |
| 2.4073 | 5.23 | 1864000 | 2.2419 |
| 2.4148 | 5.26 | 1872000 | 2.2293 |
| 2.4148 | 5.28 | 1880000 | 2.2346 |
| 2.4098 | 5.3 | 1888000 | 2.2372 |
| 2.4098 | 5.32 | 1896000 | 2.2371 |
| 2.407 | 5.34 | 1904000 | 2.2397 |
| 2.407 | 5.37 | 1912000 | 2.2300 |
| 2.4108 | 5.39 | 1920000 | 2.2317 |
| 2.4108 | 5.41 | 1928000 | 2.2350 |
| 2.4168 | 5.43 | 1936000 | 2.2343 |
| 2.4168 | 5.46 | 1944000 | 2.2327 |
| 2.4113 | 5.48 | 1952000 | 2.2363 |
| 2.4113 | 5.5 | 1960000 | 2.2314 |
| 2.4131 | 5.52 | 1968000 | 2.2303 |
| 2.4131 | 5.55 | 1976000 | 2.2353 |
| 2.4129 | 5.57 | 1984000 | 2.2353 |
| 2.4129 | 5.59 | 1992000 | 2.2296 |
| 2.4129 | 5.61 | 2000000 | 2.2314 |
| 2.4129 | 5.64 | 2008000 | 2.2288 |
| 2.4045 | 5.66 | 2016000 | 2.2347 |
| 2.4045 | 5.68 | 2024000 | 2.2349 |
| 2.4089 | 5.7 | 2032000 | 2.2310 |
| 2.4089 | 5.73 | 2040000 | 2.2342 |
| 2.4091 | 5.75 | 2048000 | 2.2320 |
| 2.4091 | 5.77 | 2056000 | 2.2311 |
| 2.4137 | 5.79 | 2064000 | 2.2278 |
| 2.4137 | 5.82 | 2072000 | 2.2344 |
| 2.4063 | 5.84 | 2080000 | 2.2339 |
| 2.4063 | 5.86 | 2088000 | 2.2271 |
| 2.4046 | 5.88 | 2096000 | 2.2263 |
| 2.4046 | 5.91 | 2104000 | 2.2369 |
| 2.4105 | 5.93 | 2112000 | 2.2330 |
| 2.4105 | 5.95 | 2120000 | 2.2361 |
| 2.4045 | 5.97 | 2128000 | 2.2320 |
| 2.4045 | 6.0 | 2136000 | 2.2283 |
| 2.4093 | 6.02 | 2144000 | 2.2262 |
| 2.4093 | 6.04 | 2152000 | 2.2294 |
| 2.4109 | 6.06 | 2160000 | 2.2334 |
| 2.4109 | 6.09 | 2168000 | 2.2363 |
| 2.4061 | 6.11 | 2176000 | 2.2309 |
| 2.4061 | 6.13 | 2184000 | 2.2269 |
| 2.4007 | 6.15 | 2192000 | 2.2369 |
| 2.4007 | 6.18 | 2200000 | 2.2297 |
| 2.4034 | 6.2 | 2208000 | 2.2267 |
| 2.4034 | 6.22 | 2216000 | 2.2310 |
| 2.4049 | 6.24 | 2224000 | 2.2362 |
| 2.4049 | 6.27 | 2232000 | 2.2319 |
| 2.4052 | 6.29 | 2240000 | 2.2308 |
| 2.4052 | 6.31 | 2248000 | 2.2225 |
| 2.4102 | 6.33 | 2256000 | 2.2366 |
| 2.4102 | 6.36 | 2264000 | 2.2327 |
| 2.4046 | 6.38 | 2272000 | 2.2305 |
| 2.4046 | 6.4 | 2280000 | 2.2309 |
| 2.4066 | 6.42 | 2288000 | 2.2291 |
| 2.4066 | 6.45 | 2296000 | 2.2301 |
| 2.4041 | 6.47 | 2304000 | 2.2378 |
| 2.4041 | 6.49 | 2312000 | 2.2317 |
| 2.4081 | 6.51 | 2320000 | 2.2326 |
| 2.4081 | 6.54 | 2328000 | 2.2412 |
| 2.4147 | 6.56 | 2336000 | 2.2349 |
| 2.4147 | 6.58 | 2344000 | 2.2296 |
| 2.4105 | 6.6 | 2352000 | 2.2313 |
| 2.4105 | 6.62 | 2360000 | 2.2297 |
| 2.4096 | 6.65 | 2368000 | 2.2241 |
| 2.4096 | 6.67 | 2376000 | 2.2322 |
| 2.4089 | 6.69 | 2384000 | 2.2344 |
| 2.4089 | 6.71 | 2392000 | 2.2291 |
| 2.4048 | 6.74 | 2400000 | 2.2274 |
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.0
|
TheBloke/FusionNet_34Bx2_MoE-GGUF
|
TheBloke
| 2024-01-25T00:28:42Z | 221 | 5 |
transformers
|
[
"transformers",
"gguf",
"mixtral",
"moe",
"text-generation",
"en",
"base_model:TomGrc/FusionNet_34Bx2_MoE",
"base_model:quantized:TomGrc/FusionNet_34Bx2_MoE",
"license:mit",
"region:us",
"conversational"
] |
text-generation
| 2024-01-24T22:49:13Z |
---
base_model: TomGrc/FusionNet_34Bx2_MoE
inference: false
language:
- en
license: mit
model_creator: Suqin Zhang
model_name: FusionNet 34Bx2 MoE
model_type: mixtral
pipeline_tag: text-generation
prompt_template: '[INST] <<SYS>>
{system_message}
<</SYS>>
{prompt} [/INST]
'
quantized_by: TheBloke
tags:
- moe
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# FusionNet 34Bx2 MoE - GGUF
- Model creator: [Suqin Zhang](https://huggingface.co/TomGrc)
- Original model: [FusionNet 34Bx2 MoE](https://huggingface.co/TomGrc/FusionNet_34Bx2_MoE)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Suqin Zhang's FusionNet 34Bx2 MoE](https://huggingface.co/TomGrc/FusionNet_34Bx2_MoE).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/FusionNet_34Bx2_MoE-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/FusionNet_34Bx2_MoE-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/FusionNet_34Bx2_MoE-GGUF)
* [Suqin Zhang's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TomGrc/FusionNet_34Bx2_MoE)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Llama-2-Chat
```
[INST] <<SYS>>
{system_message}
<</SYS>>
{prompt} [/INST]
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [fusionnet_34bx2_moe.Q2_K.gguf](https://huggingface.co/TheBloke/FusionNet_34Bx2_MoE-GGUF/blob/main/fusionnet_34bx2_moe.Q2_K.gguf) | Q2_K | 2 | 22.39 GB| 24.89 GB | significant quality loss - not recommended for most purposes |
| [fusionnet_34bx2_moe.Q3_K_S.gguf](https://huggingface.co/TheBloke/FusionNet_34Bx2_MoE-GGUF/blob/main/fusionnet_34bx2_moe.Q3_K_S.gguf) | Q3_K_S | 3 | 26.32 GB| 28.82 GB | very small, high quality loss |
| [fusionnet_34bx2_moe.Q3_K_M.gguf](https://huggingface.co/TheBloke/FusionNet_34Bx2_MoE-GGUF/blob/main/fusionnet_34bx2_moe.Q3_K_M.gguf) | Q3_K_M | 3 | 29.24 GB| 31.74 GB | very small, high quality loss |
| [fusionnet_34bx2_moe.Q3_K_L.gguf](https://huggingface.co/TheBloke/FusionNet_34Bx2_MoE-GGUF/blob/main/fusionnet_34bx2_moe.Q3_K_L.gguf) | Q3_K_L | 3 | 31.77 GB| 34.27 GB | small, substantial quality loss |
| [fusionnet_34bx2_moe.Q4_0.gguf](https://huggingface.co/TheBloke/FusionNet_34Bx2_MoE-GGUF/blob/main/fusionnet_34bx2_moe.Q4_0.gguf) | Q4_0 | 4 | 34.33 GB| 36.83 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [fusionnet_34bx2_moe.Q4_K_S.gguf](https://huggingface.co/TheBloke/FusionNet_34Bx2_MoE-GGUF/blob/main/fusionnet_34bx2_moe.Q4_K_S.gguf) | Q4_K_S | 4 | 34.59 GB| 37.09 GB | small, greater quality loss |
| [fusionnet_34bx2_moe.Q4_K_M.gguf](https://huggingface.co/TheBloke/FusionNet_34Bx2_MoE-GGUF/blob/main/fusionnet_34bx2_moe.Q4_K_M.gguf) | Q4_K_M | 4 | 36.66 GB| 39.16 GB | medium, balanced quality - recommended |
| [fusionnet_34bx2_moe.Q5_0.gguf](https://huggingface.co/TheBloke/FusionNet_34Bx2_MoE-GGUF/blob/main/fusionnet_34bx2_moe.Q5_0.gguf) | Q5_0 | 5 | 41.88 GB| 44.38 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [fusionnet_34bx2_moe.Q5_K_S.gguf](https://huggingface.co/TheBloke/FusionNet_34Bx2_MoE-GGUF/blob/main/fusionnet_34bx2_moe.Q5_K_S.gguf) | Q5_K_S | 5 | 41.88 GB| 44.38 GB | large, low quality loss - recommended |
| [fusionnet_34bx2_moe.Q5_K_M.gguf](https://huggingface.co/TheBloke/FusionNet_34Bx2_MoE-GGUF/blob/main/fusionnet_34bx2_moe.Q5_K_M.gguf) | Q5_K_M | 5 | 43.08 GB| 45.58 GB | large, very low quality loss - recommended |
| [fusionnet_34bx2_moe.Q6_K.gguf](https://huggingface.co/TheBloke/FusionNet_34Bx2_MoE-GGUF/blob/main/fusionnet_34bx2_moe.Q6_K.gguf) | Q6_K | 6 | 49.89 GB| 52.39 GB | very large, extremely low quality loss |
| fusionnet_34bx2_moe.Q8_0.gguf | Q8_0 | 8 | 64.62 GB| 67.12 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
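As a rough cross-check, the bits-per-weight figures explained above line up with the sizes in this table (a back-of-the-envelope sketch only; the 60.8B parameter count comes from the original model card further down, and metadata overhead is ignored):
```python
# Estimate file size from bits-per-weight (bpw); small metadata overhead is ignored.
params = 60.8e9      # parameter count from the original FusionNet_34Bx2_MoE card
bpw_q4_k = 4.5       # bpw for GGML_TYPE_Q4_K (see "Explanation of quantisation methods")
size_gb = params * bpw_q4_k / 8 / 1e9
print(f"{size_gb:.1f} GB")   # ~34.2 GB, close to the 34.59 GB listed for Q4_K_S
```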
### Q6_K and Q8_0 files are split and require joining
**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.
<details>
<summary>Click for instructions regarding Q6_K and Q8_0 files</summary>
### q6_K
Please download:
* `fusionnet_34bx2_moe.Q6_K.gguf-split-a`
* `fusionnet_34bx2_moe.Q6_K.gguf-split-b`
### q8_0
Please download:
* `fusionnet_34bx2_moe.Q8_0.gguf-split-a`
* `fusionnet_34bx2_moe.Q8_0.gguf-split-b`
To join the files, do the following:
Linux and macOS:
```
cat fusionnet_34bx2_moe.Q6_K.gguf-split-* > fusionnet_34bx2_moe.Q6_K.gguf && rm fusionnet_34bx2_moe.Q6_K.gguf-split-*
cat fusionnet_34bx2_moe.Q8_0.gguf-split-* > fusionnet_34bx2_moe.Q8_0.gguf && rm fusionnet_34bx2_moe.Q8_0.gguf-split-*
```
Windows command line:
```
COPY /B fusionnet_34bx2_moe.Q6_K.gguf-split-a + fusionnet_34bx2_moe.Q6_K.gguf-split-b fusionnet_34bx2_moe.Q6_K.gguf
del fusionnet_34bx2_moe.Q6_K.gguf-split-a fusionnet_34bx2_moe.Q6_K.gguf-split-b
COPY /B fusionnet_34bx2_moe.Q8_0.gguf-split-a + fusionnet_34bx2_moe.Q8_0.gguf-split-b fusionnet_34bx2_moe.Q8_0.gguf
del fusionnet_34bx2_moe.Q8_0.gguf-split-a fusionnet_34bx2_moe.Q8_0.gguf-split-b
```
</details>
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/FusionNet_34Bx2_MoE-GGUF and below it, a specific filename to download, such as: fusionnet_34bx2_moe.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/FusionNet_34Bx2_MoE-GGUF fusionnet_34bx2_moe.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
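The same file can also be fetched from Python with the `huggingface_hub` library (a small sketch, equivalent to the CLI call above):
```python
from huggingface_hub import hf_hub_download

# Downloads into the Hugging Face cache and returns the local file path
path = hf_hub_download(
    repo_id="TheBloke/FusionNet_34Bx2_MoE-GGUF",
    filename="fusionnet_34bx2_moe.Q4_K_M.gguf",
)
print(path)
```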
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/FusionNet_34Bx2_MoE-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/FusionNet_34Bx2_MoE-GGUF fusionnet_34bx2_moe.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m fusionnet_34bx2_moe.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] <<SYS>>\n{system_message}\n<</SYS>>\n{prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 โ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./fusionnet_34bx2_moe.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"[INST] <<SYS>>\n{system_message}\n<</SYS>>\n{prompt} [/INST]", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./fusionnet_34bx2_moe.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
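For a quick start, a minimal llama-cpp-python + LangChain sketch is shown below (assuming a recent LangChain where `LlamaCpp` lives in `langchain_community`; the import path differs in older versions, so check the guides above):
```python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./fusionnet_34bx2_moe.Q4_K_M.gguf",  # downloaded as described earlier
    n_ctx=32768,
    n_gpu_layers=35,     # set to 0 if you have no GPU acceleration
    temperature=0.7,
)
print(llm.invoke("[INST] <<SYS>>\nYou are a helpful assistant.\n<</SYS>>\nWrite a short poem about llamas. [/INST]"))
```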
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, ้ฟๆ, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjรคreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Suqin Zhang's FusionNet 34Bx2 MoE
# FusionNet_34Bx2_MoE
Fine-tuned model on English language using MoE method.
## Model description
The FusionNet_34Bx2_MoE is a model to experiment with the MoE method, which could significantly increase the performance of the original model. The FusionNet_34Bx2_MoE has 60.8B parameters, and this model is fine-tuned. Enjoy!
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("TomGrc/FusionNet_34Bx2_MoE")
model = AutoModelForCausalLM.from_pretrained("TomGrc/FusionNet_34Bx2_MoE")
```
<!-- original-model-card end -->
|
mesolitica/mallam-5b-20k-instructions-v2
|
mesolitica
| 2024-01-25T00:24:53Z | 20 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"ms",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-27T02:02:09Z |
---
language:
- ms
---
# Full Parameter Finetuning MaLLaM 5B 20480 context length v2 on Malaysian instructions dataset
README at https://github.com/mesolitica/malaya/tree/5.1/session/mistral#mallam-5b
We use exact Mistral Instruct chat template.
WandB, https://wandb.ai/huseinzol05/fpf-mallam-5b-instructions-16k?workspace=user-huseinzol05
WandB report, https://wandb.ai/huseinzol05/fpf-tinyllama-1.1b-hf-instructions-16k/reports/Instruction-finetuning--Vmlldzo2MzQ3OTcz
## Dataset
Dataset gathered at https://huggingface.co/collections/mesolitica/malaysian-synthetic-dataset-656c2673fe7fe0b1e9e25fe2
Notebook to prepare dataset at https://github.com/mesolitica/malaysian-dataset/blob/master/llm-instruction/combine-malay-no-alignment-multitasks-partial-ultrachat-v2.ipynb
## Limitations
This model is a quick demonstration that the base model can be easily fine-tuned to achieve some performance.
It does have minimal moderation mechanisms.
## how-to
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch
import json
def parse_mistral_chat(messages, function_call = None):
user_query = messages[-1]['content']
users, assistants = [], []
for q in messages[:-1]:
if q['role'] == 'user':
users.append(q['content'])
elif q['role'] == 'assistant':
assistants.append(q['content'])
texts = ['<s>']
if function_call:
fs = []
for f in function_call:
f = json.dumps(f, indent=4)
fs.append(f)
fs = '\n\n'.join(fs)
texts.append(f'\n[FUNCTIONCALL]\n{fs}\n')
for u, a in zip(users, assistants):
texts.append(f'[INST] {u.strip()} [/INST] {a.strip()}</s>')
texts.append(f'[INST] {user_query.strip()} [/INST]')
prompt = ''.join(texts).strip()
return prompt
TORCH_DTYPE = 'bfloat16'
nf4_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type='nf4',
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=getattr(torch, TORCH_DTYPE)
)
tokenizer = AutoTokenizer.from_pretrained('mesolitica/mallam-5b-20k-instructions-v2')
model = AutoModelForCausalLM.from_pretrained(
'mesolitica/mallam-5b-20k-instructions-v2',
use_flash_attention_2 = True,
quantization_config = nf4_config
)
messages = [
{'role': 'user', 'content': 'kwsp tu apa'}
]
prompt = parse_mistral_chat(messages)
inputs = tokenizer([prompt], return_tensors='pt', add_special_tokens=False).to('cuda')
generate_kwargs = dict(
inputs,
max_new_tokens=1024,
top_p=0.95,
top_k=50,
temperature=0.9,
do_sample=True,
num_beams=1,
)
r = model.generate(**generate_kwargs)
tokenizer.decode(r[0])
```
```text
<s> [INST] kwsp tu apa [/INST]KWSP bermaksud Kumpulan Wang Simpanan Pekerja. Ia adalah sebuah institusi simpanan persaraan yang ditubuhkan oleh Kementerian Kewangan Malaysia untuk tujuan mengumpul simpanan ahli untuk dibayar pada umur persaraan, penuh atau penuh persaraan penuh. KWSP ditubuhkan pada tahun 1951 dan mula beroperasi pada tahun 1952. KWSP adalah salah satu institusi simpanan persaraan terbesar di dunia, dengan pangkalan ahli sekitar 14 juta ahli.</s>
```
|
RecCode/whisper-small-hi
|
RecCode
| 2024-01-25T00:15:07Z | 63 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-23T05:35:58Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
base_model: openai/whisper-small
model-index:
- name: whisper-small-hi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-hi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1606
- Wer: 88.3090
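For reference, the checkpoint can be tried with the `transformers` ASR pipeline (a minimal sketch; `sample.wav` is a placeholder for your own 16 kHz audio file):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="RecCode/whisper-small-hi")
print(asr("sample.wav")["text"])
```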
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3591 | 4.0 | 1000 | 0.2347 | 43.0063 |
| 0.0548 | 8.0 | 2000 | 0.2610 | 315.6576 |
| 0.0144 | 12.0 | 3000 | 0.1812 | 500.4175 |
| 0.0001 | 16.0 | 4000 | 0.1606 | 88.3090 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
yoon1000/ft_0124_korean_2
|
yoon1000
| 2024-01-25T00:05:40Z | 16 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-24T01:29:30Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
model-index:
- name: ft_0124_korean_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ft_0124_korean_2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5092
- Cer: 0.1001
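The reported CER (character error rate) can be computed for your own transcriptions with the `evaluate` library (a small sketch with made-up strings, not the actual evaluation data):
```python
import evaluate

cer = evaluate.load("cer")
score = cer.compute(predictions=["안녕하세요"], references=["안녕하십니까"])
print(score)  # fraction of character-level edits needed to match the reference
```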
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 25.0263 | 0.25 | 500 | 5.1785 | 1.0 |
| 4.7178 | 0.5 | 1000 | 4.7781 | 1.0 |
| 4.4523 | 0.76 | 1500 | 4.1013 | 0.9077 |
| 3.2148 | 1.01 | 2000 | 2.3856 | 0.4777 |
| 2.4229 | 1.26 | 2500 | 1.8502 | 0.4047 |
| 2.0145 | 1.51 | 3000 | 1.5497 | 0.3475 |
| 1.7916 | 1.77 | 3500 | 1.3324 | 0.3076 |
| 1.5849 | 2.02 | 4000 | 1.1873 | 0.2773 |
| 1.3956 | 2.27 | 4500 | 1.0617 | 0.2578 |
| 1.3086 | 2.52 | 5000 | 0.9643 | 0.2368 |
| 1.2211 | 2.78 | 5500 | 0.8894 | 0.2246 |
| 1.1562 | 3.03 | 6000 | 0.8537 | 0.2189 |
| 1.0729 | 3.28 | 6500 | 0.7973 | 0.2101 |
| 1.0089 | 3.53 | 7000 | 0.7549 | 0.1959 |
| 1.0027 | 3.79 | 7500 | 0.7327 | 0.1945 |
| 0.9496 | 4.04 | 8000 | 0.7082 | 0.1849 |
| 0.887 | 4.29 | 8500 | 0.6909 | 0.1789 |
| 0.8607 | 4.54 | 9000 | 0.6617 | 0.1739 |
| 0.853 | 4.8 | 9500 | 0.6518 | 0.1730 |
| 0.8305 | 5.05 | 10000 | 0.6402 | 0.1657 |
| 0.774 | 5.3 | 10500 | 0.6365 | 0.1650 |
| 0.7621 | 5.55 | 11000 | 0.6206 | 0.1600 |
| 0.7553 | 5.81 | 11500 | 0.6080 | 0.1594 |
| 0.7186 | 6.06 | 12000 | 0.5951 | 0.1543 |
| 0.6772 | 6.31 | 12500 | 0.5814 | 0.1490 |
| 0.6752 | 6.56 | 13000 | 0.5815 | 0.1501 |
| 0.672 | 6.81 | 13500 | 0.5603 | 0.1440 |
| 0.6351 | 7.07 | 14000 | 0.5670 | 0.1439 |
| 0.6186 | 7.32 | 14500 | 0.5700 | 0.1431 |
| 0.6035 | 7.57 | 15000 | 0.5614 | 0.1417 |
| 0.5848 | 7.82 | 15500 | 0.5470 | 0.1396 |
| 0.5719 | 8.08 | 16000 | 0.5514 | 0.1386 |
| 0.556 | 8.33 | 16500 | 0.5515 | 0.1376 |
| 0.5596 | 8.58 | 17000 | 0.5407 | 0.1325 |
| 0.5472 | 8.83 | 17500 | 0.5405 | 0.1349 |
| 0.5309 | 9.09 | 18000 | 0.5279 | 0.1295 |
| 0.5072 | 9.34 | 18500 | 0.5275 | 0.1310 |
| 0.5072 | 9.59 | 19000 | 0.5330 | 0.1272 |
| 0.4905 | 9.84 | 19500 | 0.5238 | 0.1262 |
| 0.4842 | 10.1 | 20000 | 0.5234 | 0.1237 |
| 0.4513 | 10.35 | 20500 | 0.5210 | 0.1231 |
| 0.4513 | 10.6 | 21000 | 0.5165 | 0.1208 |
| 0.4541 | 10.85 | 21500 | 0.5189 | 0.1207 |
| 0.4417 | 11.11 | 22000 | 0.5209 | 0.1192 |
| 0.4337 | 11.36 | 22500 | 0.5246 | 0.1191 |
| 0.4339 | 11.61 | 23000 | 0.5210 | 0.1183 |
| 0.4357 | 11.86 | 23500 | 0.4990 | 0.1162 |
| 0.4066 | 12.12 | 24000 | 0.4956 | 0.1132 |
| 0.3932 | 12.37 | 24500 | 0.5064 | 0.1148 |
| 0.384 | 12.62 | 25000 | 0.5011 | 0.1134 |
| 0.3902 | 12.87 | 25500 | 0.5064 | 0.1130 |
| 0.3883 | 13.12 | 26000 | 0.5128 | 0.1121 |
| 0.3625 | 13.38 | 26500 | 0.5140 | 0.1119 |
| 0.3648 | 13.63 | 27000 | 0.5091 | 0.1108 |
| 0.365 | 13.88 | 27500 | 0.4923 | 0.1098 |
| 0.3604 | 14.13 | 28000 | 0.5062 | 0.1090 |
| 0.3517 | 14.39 | 28500 | 0.5007 | 0.1089 |
| 0.3485 | 14.64 | 29000 | 0.4956 | 0.1081 |
| 0.3407 | 14.89 | 29500 | 0.5090 | 0.1084 |
| 0.333 | 15.14 | 30000 | 0.5018 | 0.1067 |
| 0.3211 | 15.4 | 30500 | 0.5114 | 0.1063 |
| 0.3204 | 15.65 | 31000 | 0.4976 | 0.1053 |
| 0.3265 | 15.9 | 31500 | 0.4947 | 0.1046 |
| 0.3169 | 16.15 | 32000 | 0.4988 | 0.1043 |
| 0.304 | 16.41 | 32500 | 0.5115 | 0.1041 |
| 0.2944 | 16.66 | 33000 | 0.5144 | 0.1042 |
| 0.311 | 16.91 | 33500 | 0.5068 | 0.1025 |
| 0.2997 | 17.16 | 34000 | 0.5079 | 0.1030 |
| 0.288 | 17.42 | 34500 | 0.5065 | 0.1019 |
| 0.2897 | 17.67 | 35000 | 0.5077 | 0.1016 |
| 0.2939 | 17.92 | 35500 | 0.5003 | 0.1017 |
| 0.2766 | 18.17 | 36000 | 0.5116 | 0.1013 |
| 0.2841 | 18.43 | 36500 | 0.5019 | 0.1010 |
| 0.2882 | 18.68 | 37000 | 0.5046 | 0.1008 |
| 0.2678 | 18.93 | 37500 | 0.5086 | 0.1013 |
| 0.269 | 19.18 | 38000 | 0.5108 | 0.1001 |
| 0.2815 | 19.43 | 38500 | 0.5111 | 0.1001 |
| 0.2668 | 19.69 | 39000 | 0.5091 | 0.1000 |
| 0.2715 | 19.94 | 39500 | 0.5092 | 0.1001 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
k1rby/BeaglePipe-7B-slerp
|
k1rby
| 2024-01-25T00:00:37Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"OpenPipe/mistral-ft-optimized-1227",
"mlabonne/NeuralBeagle14-7B",
"base_model:OpenPipe/mistral-ft-optimized-1227",
"base_model:merge:OpenPipe/mistral-ft-optimized-1227",
"base_model:mlabonne/NeuralBeagle14-7B",
"base_model:merge:mlabonne/NeuralBeagle14-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T23:56:18Z |
---
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1227
- mlabonne/NeuralBeagle14-7B
base_model:
- OpenPipe/mistral-ft-optimized-1227
- mlabonne/NeuralBeagle14-7B
---
# BeaglePipe-7B-slerp
BeaglePipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [OpenPipe/mistral-ft-optimized-1227](https://huggingface.co/OpenPipe/mistral-ft-optimized-1227)
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
## ๐งฉ Configuration
```yaml
slices:
- sources:
- model: OpenPipe/mistral-ft-optimized-1227
layer_range: [0, 32]
- model: mlabonne/NeuralBeagle14-7B
layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1227
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## ๐ป Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "k1rby/BeaglePipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
databio/r2v-buenrostro2018-hg19
|
databio
| 2024-01-24T23:49:53Z | 2 | 0 | null |
[
"region:us"
] | null | 2024-01-24T23:43:43Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for databio/r2v-buenrostro2018-hg19
## Model Details
This is a single-cell Region2Vec (r2v) model designed to be used with scEmbed and Region2Vec. It was trained on the Buenrostro2018 dataset. This model should be used to generate embeddings of single cells from scATAC-seq experiments. It produces 100-dimensional embeddings for each single cell.
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/databio/geniml
- **Paper:** https://www.biorxiv.org/content/10.1101/2023.08.01.551452v1
## Uses
This model should be used for producing low-dimensional embeddings of single-cells. These embeddings can be used for downstream clustering or classification tasks.
## Bias, Risks, and Limitations
The Buenrostro2018 dataset comprises 2034 human hematopoietic stem cells from data aligned to hg19. Therefore, it should only be used with other data aligned to hg19.
## How to Get Started with the Model
You can use the `geniml` python library to download this model and start encoding your single-cell data:
```python
import scanpy as sc
from geniml.scembed import ScEmbed
adata = sc.read_h5ad("path/to/adata.h5ad")
model = ScEmbed("databio/r2v-buenrostro2018-hg19")
embeddings = model.encode(adata)
```
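The embeddings produced above can then feed a standard scanpy clustering workflow (a hedged sketch, assuming `embeddings` is an `n_cells x 100` array aligned with `adata.obs`; `leiden` additionally requires the `leidenalg` package):
```python
import scanpy as sc

adata.obsm["X_scembed"] = embeddings          # store the Region2Vec cell embeddings
sc.pp.neighbors(adata, use_rep="X_scembed")   # kNN graph built on the embeddings
sc.tl.leiden(adata)                           # cluster cells
sc.tl.umap(adata)                             # 2D layout for visualisation
```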
## Training Details
### Training Data
The data for this model comes from Buenrostro2018: https://www.sciencedirect.com/science/article/pii/S009286741830446X
|
grimulkan/Goliath-longLORA-120b-rope8-32k-6bpw_h8_exl2
|
grimulkan
| 2024-01-24T23:48:58Z | 6 | 1 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T22:34:49Z |
---
license: llama2
---
This is an EXL2 6-bit quantization of [Goliath-longLORA-120b-rope8-32k-fp16](https://huggingface.co/grimulkan/Goliath-longLORA-120b-rope8-32k-fp16)
|
shuyuej/prompt_consistency_epoch_5
|
shuyuej
| 2024-01-24T23:42:10Z | 0 | 1 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-01-24T22:54:36Z |
---
license: apache-2.0
---
# Adapter Usage
```python
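# Note: `model` is assumed to be a transformers model already loaded from its base checkpoint
# (the base checkpoint is not named in this card).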
# Load the Pre-trained LoRA Adapter
model.load_adapter("shuyuej/prompt_consistency_epoch_5")
model.enable_adapters()
```
|
web2savar/w2v-fine-tune-test-no-ws2
|
web2savar
| 2024-01-24T23:34:12Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_16_0",
"base_model:ylacombe/w2v-bert-2.0",
"base_model:finetune:ylacombe/w2v-bert-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-24T19:00:18Z |
---
base_model: ylacombe/w2v-bert-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_16_0
metrics:
- wer
model-index:
- name: w2v-fine-tune-test-no-ws2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_16_0
type: common_voice_16_0
config: tr
split: test
args: tr
metrics:
- name: Wer
type: wer
value: 0.11088339984899148
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v-fine-tune-test-no-ws2
This model is a fine-tuned version of [ylacombe/w2v-bert-2.0](https://huggingface.co/ylacombe/w2v-bert-2.0) on the common_voice_16_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1513
- Wer: 0.1109
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.192 | 0.22 | 300 | 0.2797 | 0.2985 |
| 0.2226 | 0.44 | 600 | 0.2989 | 0.3491 |
| 0.1941 | 0.66 | 900 | 0.2558 | 0.2451 |
| 0.1659 | 0.88 | 1200 | 0.2320 | 0.2289 |
| 0.1332 | 1.1 | 1500 | 0.2063 | 0.1971 |
| 0.1129 | 1.31 | 1800 | 0.1873 | 0.2029 |
| 0.1044 | 1.53 | 2100 | 0.1765 | 0.1856 |
| 0.1026 | 1.75 | 2400 | 0.1719 | 0.1752 |
| 0.0982 | 1.97 | 2700 | 0.1927 | 0.2023 |
| 0.0769 | 2.19 | 3000 | 0.1776 | 0.1671 |
| 0.0715 | 2.41 | 3300 | 0.1626 | 0.1634 |
| 0.0695 | 2.63 | 3600 | 0.1666 | 0.1654 |
| 0.0612 | 2.85 | 3900 | 0.1760 | 0.1609 |
| 0.0614 | 3.07 | 4200 | 0.1645 | 0.1593 |
| 0.0476 | 3.29 | 4500 | 0.1685 | 0.1593 |
| 0.048 | 3.51 | 4800 | 0.1790 | 0.1583 |
| 0.0489 | 3.73 | 5100 | 0.1578 | 0.1535 |
| 0.0456 | 3.94 | 5400 | 0.1610 | 0.1617 |
| 0.041 | 4.16 | 5700 | 0.1559 | 0.1439 |
| 0.0367 | 4.38 | 6000 | 0.1536 | 0.1436 |
| 0.0321 | 4.6 | 6300 | 0.1591 | 0.1449 |
| 0.0349 | 4.82 | 6600 | 0.1616 | 0.1419 |
| 0.0308 | 5.04 | 6900 | 0.1501 | 0.1401 |
| 0.0233 | 5.26 | 7200 | 0.1588 | 0.1394 |
| 0.0253 | 5.48 | 7500 | 0.1633 | 0.1356 |
| 0.0254 | 5.7 | 7800 | 0.1522 | 0.1339 |
| 0.0245 | 5.92 | 8100 | 0.1598 | 0.1371 |
| 0.0189 | 6.14 | 8400 | 0.1497 | 0.1324 |
| 0.0174 | 6.36 | 8700 | 0.1487 | 0.1270 |
| 0.0178 | 6.57 | 9000 | 0.1397 | 0.1286 |
| 0.0173 | 6.79 | 9300 | 0.1495 | 0.1281 |
| 0.0178 | 7.01 | 9600 | 0.1462 | 0.1222 |
| 0.0124 | 7.23 | 9900 | 0.1516 | 0.1225 |
| 0.0121 | 7.45 | 10200 | 0.1554 | 0.1190 |
| 0.0128 | 7.67 | 10500 | 0.1453 | 0.1228 |
| 0.0113 | 7.89 | 10800 | 0.1468 | 0.1178 |
| 0.0086 | 8.11 | 11100 | 0.1556 | 0.1186 |
| 0.0085 | 8.33 | 11400 | 0.1507 | 0.1154 |
| 0.0073 | 8.55 | 11700 | 0.1494 | 0.1169 |
| 0.0079 | 8.77 | 12000 | 0.1507 | 0.1152 |
| 0.0089 | 8.98 | 12300 | 0.1456 | 0.1137 |
| 0.0062 | 9.2 | 12600 | 0.1518 | 0.1127 |
| 0.005 | 9.42 | 12900 | 0.1534 | 0.1115 |
| 0.005 | 9.64 | 13200 | 0.1514 | 0.1110 |
| 0.0048 | 9.86 | 13500 | 0.1513 | 0.1109 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
epinnock/deepseek-coder-6.7-evol-feedback
|
epinnock
| 2024-01-24T23:30:22Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:deepseek-ai/deepseek-coder-6.7b-instruct",
"base_model:finetune:deepseek-ai/deepseek-coder-6.7b-instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-22T14:26:35Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: deepseek-ai/deepseek-coder-6.7b-instruct
---
# Uploaded model
- **Developed by:** epinnock
- **License:** apache-2.0
- **Finetuned from model :** deepseek-ai/deepseek-coder-6.7b-instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ondevicellm/tinyllama_mole_sftv2_ultrachat_ep3
|
ondevicellm
| 2024-01-24T23:17:33Z | 26 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mixtralmole",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"custom_code",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:ondevicellm/tinyllama_mole_v1",
"base_model:finetune:ondevicellm/tinyllama_mole_v1",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-01-24T09:49:55Z |
---
base_model: ondevicellm/tinyllama_mole_v1
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
- trl
- sft
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrachat_200k
model-index:
- name: tinyllama_mole_sftv2_ultrachat_ep3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama_mole_sftv2_ultrachat_ep3
This model is a fine-tuned version of [ondevicellm/tinyllama_mole_v1](https://huggingface.co/ondevicellm/tinyllama_mole_v1) on the HuggingFaceH4/ultrachat_200k dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7340
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 120
- num_epochs: 3
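For orientation, the values above map roughly onto `transformers` `TrainingArguments` as sketched below (illustrative only, not the original alignment-handbook SFT config; the total train batch size of 128 is per-device 16 x 4 GPUs x 2 accumulation steps):
```python
from transformers import TrainingArguments

# Illustrative mapping of the listed hyperparameters (not the original training script)
args = TrainingArguments(
    output_dir="tinyllama_mole_sftv2_ultrachat_ep3",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    lr_scheduler_type="cosine",
    warmup_steps=120,
    num_train_epochs=3,
    seed=42,
)
```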
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7643 | 0.09 | 100 | 2.7492 |
| 2.7293 | 0.18 | 200 | 2.7330 |
| 2.6973 | 0.26 | 300 | 2.6920 |
| 2.612 | 0.35 | 400 | 2.6290 |
| 2.5257 | 0.44 | 500 | 2.5470 |
| 2.4656 | 0.53 | 600 | 2.4527 |
| 2.3607 | 0.61 | 700 | 2.3681 |
| 2.2885 | 0.7 | 800 | 2.2988 |
| 2.2384 | 0.79 | 900 | 2.2397 |
| 2.1585 | 0.88 | 1000 | 2.1877 |
| 2.1526 | 0.96 | 1100 | 2.1409 |
| 2.0845 | 1.05 | 1200 | 2.0986 |
| 2.049 | 1.14 | 1300 | 2.0603 |
| 2.0243 | 1.23 | 1400 | 2.0257 |
| 1.9899 | 1.31 | 1500 | 1.9950 |
| 1.9706 | 1.4 | 1600 | 1.9675 |
| 1.9414 | 1.49 | 1700 | 1.9429 |
| 1.8952 | 1.58 | 1800 | 1.9208 |
| 1.9038 | 1.66 | 1900 | 1.9013 |
| 1.8942 | 1.75 | 2000 | 1.8839 |
| 1.8652 | 1.84 | 2100 | 1.8679 |
| 1.823 | 1.93 | 2200 | 1.8531 |
| 1.8394 | 2.01 | 2300 | 1.8394 |
| 1.8347 | 2.1 | 2400 | 1.8268 |
| 1.8137 | 2.19 | 2500 | 1.8148 |
| 1.799 | 2.28 | 2600 | 1.8037 |
| 1.7774 | 2.37 | 2700 | 1.7931 |
| 1.771 | 2.45 | 2800 | 1.7832 |
| 1.7761 | 2.54 | 2900 | 1.7739 |
| 1.7458 | 2.63 | 3000 | 1.7652 |
| 1.7683 | 2.72 | 3100 | 1.7570 |
| 1.7389 | 2.8 | 3200 | 1.7490 |
| 1.7321 | 2.89 | 3300 | 1.7414 |
| 1.7418 | 2.98 | 3400 | 1.7340 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2+cu118
- Datasets 2.14.6
- Tokenizers 0.15.0
|
dongSHE/vis-Mistral-7B-v0.1-ChartDataset-to-PresentationScript
|
dongSHE
| 2024-01-24T23:09:26Z | 5 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-01-24T23:08:09Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: vis-Mistral-7B-v0.1-ChartDataset-to-PresentationScript
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vis-Mistral-7B-v0.1-ChartDataset-to-PresentationScript
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
STomoya/poolformer_m36.st_safebooru_1k
|
STomoya
| 2024-01-24T23:06:01Z | 15 | 0 |
timm
|
[
"timm",
"pytorch",
"safetensors",
"image-classification",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2024-01-24T23:05:16Z |
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
---
# Model card for poolformer_m36.st_safebooru_1k
## Model Details
- **metrics:**
|Precision|Recall|F1-score|
|-|-|-|
|0.7831315642353482|0.5051567178760158|0.5929485591274398|
|
UruseiYatsura/victor_from_just_for_laughs
|
UruseiYatsura
| 2024-01-24T22:58:41Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2024-01-24T22:58:23Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: A cartoon
output:
url: images/tumblr_lfi2o6k6ri1qdivgo.jpg
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: null
license: apache-2.0
---
# Victor (Just for Laughs/Juste Pour Rire Mascot)
<Gallery />
## Model description
The JFL/JPR mascot that yelled "MOMMY IT'S OVER" and, in French, "MAMAN C'EST FINI".
## Download model
[Download](/UruseiYatsura/victor_from_just_for_laughs/tree/main) them in the Files & versions tab.
|
shuyuej/prompt_consistency_epoch_4
|
shuyuej
| 2024-01-24T22:56:08Z | 0 | 1 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-01-24T22:52:42Z |
---
license: apache-2.0
---
# Adapter Usage
```python
# Load the Pre-trained LoRA Adapter
model.load_adapter("shuyuej/prompt_consistency_epoch_4")
model.enable_adapters()
```
|
TheBloke/WestLake-7B-v2-GPTQ
|
TheBloke
| 2024-01-24T22:42:31Z | 137 | 8 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"base_model:senseable/WestLake-7B-v2",
"base_model:quantized:senseable/WestLake-7B-v2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-01-24T22:13:08Z |
---
base_model: senseable/WestLake-7B-v2
inference: false
language:
- en
library_name: transformers
license: apache-2.0
model_creator: Common Sense
model_name: Westlake 7B V2
model_type: mistral
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Westlake 7B V2 - GPTQ
- Model creator: [Common Sense](https://huggingface.co/senseable)
- Original model: [Westlake 7B V2](https://huggingface.co/senseable/WestLake-7B-v2)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Common Sense's Westlake 7B V2](https://huggingface.co/senseable/WestLake-7B-v2).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/WestLake-7B-v2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/WestLake-7B-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/WestLake-7B-v2-GGUF)
* [Common Sense's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/senseable/WestLake-7B-v2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/WestLake-7B-v2-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/WestLake-7B-v2-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/WestLake-7B-v2-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.52 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/WestLake-7B-v2-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.68 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/WestLake-7B-v2-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 8.17 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/WestLake-7B-v2-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.29 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/WestLake-7B-v2-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/WestLake-7B-v2-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `WestLake-7B-v2-GPTQ`:
```shell
mkdir WestLake-7B-v2-GPTQ
huggingface-cli download TheBloke/WestLake-7B-v2-GPTQ --local-dir WestLake-7B-v2-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir WestLake-7B-v2-GPTQ
huggingface-cli download TheBloke/WestLake-7B-v2-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir WestLake-7B-v2-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir WestLake-7B-v2-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/WestLake-7B-v2-GPTQ --local-dir WestLake-7B-v2-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/WestLake-7B-v2-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/WestLake-7B-v2-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/WestLake-7B-v2-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `WestLake-7B-v2-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/WestLake-7B-v2-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
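Put together, a full `docker run` invocation might look like this sketch (the volume path is a placeholder and the port mapping follows the `--port 3000` parameter above):
```shell
docker run --gpus all --shm-size 1g -p 3000:3000 -v /path/to/data:/data \
    ghcr.io/huggingface/text-generation-inference:1.1.0 \
    --model-id TheBloke/WestLake-7B-v2-GPTQ --port 3000 --quantize gptq \
    --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```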
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(
prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/WestLake-7B-v2-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''{prompt}
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, ้ฟๆ, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjรคreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Common Sense's Westlake 7B V2

**Update Notes:**
*Version 2 trained 1 additional epoch cycle for 3 total*
# Westlake-7Bv2: Role-Play & Text Generation Specialist Model
Welcome to the documentation of Westlake-7B, a cutting-edge language model designed for exceptional role-play and text generation tasks. This README file aims to provide an overview of our capabilities, usage guidelines, and potential applications.
## About Westlake-7Bv2
Westlake-7B is built upon a vast corpus of diverse texts, enabling it to generate contextually relevant responses in various scenarios. With its impressive size of 7 billion parameters, this model excels at understanding nuances in language and producing creative outputs.
### Key Features
1. **Role-Play**: Westlake-7Bv2 can seamlessly adapt to different character personas and engage in dynamic conversations while maintaining consistency throughout the interaction. It can generate believable dialogues across various genres, including fiction, non-fiction, historical events, or even fantasy worlds.
2. **Text Generation**: This model is proficient at generating original content such as stories, poems, essays, news articles, and more. Its ability to capture the essence of different writing styles makes it an ideal tool for creative writers seeking inspiration or assistance in their projects.
3. **Contextual Understanding**: Westlake-7B's extensive training allows it to comprehend complex contexts and generate responses that align with given situations. It can handle multiple topics simultaneously, making it versatile across various applications.
4. **Continuous Learning**: As a language model, Westlake-7B continuously improves its performance through ongoing training on new data sets. This ensures its capabilities remain up-to-date and relevant in an ever-evolving world of communication.
## Usage Guidelines
To utilize Westlake-7Bv2 for your projects or experiments, follow these steps:
1. **Prompting**: Provide clear and concise prompts that outline the desired role-play scenario or text generation task. The quality of output depends heavily on the clarity and relevance of input instructions.
2. **Feedback Loop**: For optimal results, consider incorporating a feedback loop into your application to refine generated outputs based on user preferences or additional contextual information. This iterative process can significantly enhance the model's performance in specific domains.
3. **Ethical Considerations**: As with any AI system, ensure responsible usage of Westlake-7B by avoiding harmful content generation or misuse of its capabilities.
## Potential Applications
Westlake-7Bv2's versatility makes it suitable for various applications across different industries:
1. **Creative Writing**: Assist authors in generating new ideas, expanding storylines, or even completing drafts by providing creative suggestions and textual content.
2. **Education**: Enhance language learning platforms with interactive role-play scenarios to improve students' communication skills and cultural understanding.
3. **Gaming**: Integrate Westlake-7B into game engines for dynamic non-player character interactions or generating unique questlines based on player choices.
4. **Customer Support**: Leverage the model's conversational abilities to create chatbots capable of handling complex queries and providing personalized assistance.
5. **Social Media**: Develop applications that generate engaging content such as captions, status updates, or even entire posts tailored to users' preferences and interests.
|
TheBloke/WestLake-7B-v2-GGUF
|
TheBloke
| 2024-01-24T22:25:54Z | 274 | 20 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"en",
"base_model:senseable/WestLake-7B-v2",
"base_model:quantized:senseable/WestLake-7B-v2",
"license:apache-2.0",
"region:us"
] | null | 2024-01-24T22:13:08Z |
---
base_model: senseable/WestLake-7B-v2
inference: false
language:
- en
library_name: transformers
license: apache-2.0
model_creator: Common Sense
model_name: Westlake 7B V2
model_type: mistral
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Westlake 7B V2 - GGUF
- Model creator: [Common Sense](https://huggingface.co/senseable)
- Original model: [Westlake 7B V2](https://huggingface.co/senseable/WestLake-7B-v2)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Common Sense's Westlake 7B V2](https://huggingface.co/senseable/WestLake-7B-v2).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/WestLake-7B-v2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/WestLake-7B-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/WestLake-7B-v2-GGUF)
* [Common Sense's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/senseable/WestLake-7B-v2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [westlake-7b-v2.Q2_K.gguf](https://huggingface.co/TheBloke/WestLake-7B-v2-GGUF/blob/main/westlake-7b-v2.Q2_K.gguf) | Q2_K | 2 | 2.72 GB| 5.22 GB | significant quality loss - not recommended for most purposes |
| [westlake-7b-v2.Q3_K_S.gguf](https://huggingface.co/TheBloke/WestLake-7B-v2-GGUF/blob/main/westlake-7b-v2.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [westlake-7b-v2.Q3_K_M.gguf](https://huggingface.co/TheBloke/WestLake-7B-v2-GGUF/blob/main/westlake-7b-v2.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [westlake-7b-v2.Q3_K_L.gguf](https://huggingface.co/TheBloke/WestLake-7B-v2-GGUF/blob/main/westlake-7b-v2.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [westlake-7b-v2.Q4_0.gguf](https://huggingface.co/TheBloke/WestLake-7B-v2-GGUF/blob/main/westlake-7b-v2.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [westlake-7b-v2.Q4_K_S.gguf](https://huggingface.co/TheBloke/WestLake-7B-v2-GGUF/blob/main/westlake-7b-v2.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [westlake-7b-v2.Q4_K_M.gguf](https://huggingface.co/TheBloke/WestLake-7B-v2-GGUF/blob/main/westlake-7b-v2.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [westlake-7b-v2.Q5_0.gguf](https://huggingface.co/TheBloke/WestLake-7B-v2-GGUF/blob/main/westlake-7b-v2.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [westlake-7b-v2.Q5_K_S.gguf](https://huggingface.co/TheBloke/WestLake-7B-v2-GGUF/blob/main/westlake-7b-v2.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [westlake-7b-v2.Q5_K_M.gguf](https://huggingface.co/TheBloke/WestLake-7B-v2-GGUF/blob/main/westlake-7b-v2.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [westlake-7b-v2.Q6_K.gguf](https://huggingface.co/TheBloke/WestLake-7B-v2-GGUF/blob/main/westlake-7b-v2.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [westlake-7b-v2.Q8_0.gguf](https://huggingface.co/TheBloke/WestLake-7B-v2-GGUF/blob/main/westlake-7b-v2.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/WestLake-7B-v2-GGUF and below it, a specific filename to download, such as: westlake-7b-v2.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/WestLake-7B-v2-GGUF westlake-7b-v2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/WestLake-7B-v2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/WestLake-7B-v2-GGUF westlake-7b-v2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m westlake-7b-v2.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
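For example, an interactive chat session could be started like this (a sketch; adjust `-c` and `-ngl` for your hardware):
```shell
./main -ngl 35 -m westlake-7b-v2.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -i -ins
```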
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 โ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the variable CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./westlake-7b-v2.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"{prompt}", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./westlake-7b-v2.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
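As a minimal sketch of the llama-cpp-python route (assuming a recent `langchain-community` install; the parameters are illustrative, not tuned):
```python
from langchain_community.llms import LlamaCpp

# Illustrative settings; tune n_ctx / n_gpu_layers for your hardware
llm = LlamaCpp(
    model_path="./westlake-7b-v2.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=35,
    temperature=0.7,
)

print(llm.invoke("Write a short story about llamas."))
```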
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, ้ฟๆ, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjรคreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Common Sense's Westlake 7B V2

**Update Notes:**
*Version 2 trained 1 additional epoch cycle for 3 total*
# Westlake-7Bv2: Role-Play & Text Generation Specialist Model
Welcome to the documentation of Westlake-7B, a cutting-edge language model designed for exceptional role-play and text generation tasks. This README file aims to provide an overview of our capabilities, usage guidelines, and potential applications.
## About Westlake-7Bv2
Westlake-7B is built upon a vast corpus of diverse texts, enabling it to generate contextually relevant responses in various scenarios. With its impressive size of 7 billion parameters, this model excels at understanding nuances in language and producing creative outputs.
### Key Features
1. **Role-Play**: Westlake-7Bv2 can seamlessly adapt to different character personas and engage in dynamic conversations while maintaining consistency throughout the interaction. It can generate believable dialogues across various genres, including fiction, non-fiction, historical events, or even fantasy worlds.
2. **Text Generation**: This model is proficient at generating original content such as stories, poems, essays, news articles, and more. Its ability to capture the essence of different writing styles makes it an ideal tool for creative writers seeking inspiration or assistance in their projects.
3. **Contextual Understanding**: Westlake-7B's extensive training allows it to comprehend complex contexts and generate responses that align with given situations. It can handle multiple topics simultaneously, making it versatile across various applications.
4. **Continuous Learning**: As a language model, Westlake-7B continuously improves its performance through ongoing training on new data sets. This ensures its capabilities remain up-to-date and relevant in an ever-evolving world of communication.
## Usage Guidelines
To utilize Westlake-7Bv2 for your projects or experiments, follow these steps:
1. **Prompting**: Provide clear and concise prompts that outline the desired role-play scenario or text generation task. The quality of output depends heavily on the clarity and relevance of input instructions.
2. **Feedback Loop**: For optimal results, consider incorporating a feedback loop into your application to refine generated outputs based on user preferences or additional contextual information. This iterative process can significantly enhance the model's performance in specific domains.
3. **Ethical Considerations**: As with any AI system, ensure responsible usage of Westlake-7B by avoiding harmful content generation or misuse of its capabilities.
## Potential Applications
Westlake-7Bv2's versatility makes it suitable for various applications across different industries:
1. **Creative Writing**: Assist authors in generating new ideas, expanding storylines, or even completing drafts by providing creative suggestions and textual content.
2. **Education**: Enhance language learning platforms with interactive role-play scenarios to improve students' communication skills and cultural understanding.
3. **Gaming**: Integrate Westlake-7B into game engines for dynamic non-player character interactions or generating unique questlines based on player choices.
4. **Customer Support**: Leverage the model's conversational abilities to create chatbots capable of handling complex queries and providing personalized assistance.
5. **Social Media**: Develop applications that generate engaging content such as captions, status updates, or even entire posts tailored to users' preferences and interests.
<!-- original-model-card end -->
|
FPHam/Sarah_StoryTeller_13b-GPTQ
|
FPHam
| 2024-01-24T22:23:18Z | 6 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llm",
"spellcheck",
"grammar",
"base_model:FPHam/Sarah_StoryTeller_13b_HF",
"base_model:quantized:FPHam/Sarah_StoryTeller_13b_HF",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-01-24T22:22:08Z |
---
base_model: FPHam/Sarah_StoryTeller_13b_HF
inference: false
license: llama2
model_creator: FPHam
model_name: Sarah Storyteller 13B
model_type: llama
prompt_template: '### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
tags:
- llm
- llama
- spellcheck
- grammar
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://huggingface.co/FPHam/Sarah_StoryTeller_13b_HF/resolve/main/Sarah_writer.jpg" alt="FPHam's Sarah StoryTeller" style="width: 80%; min-width: 200px; display: block; margin: auto;">
</div>
<div style="display: flex; flex-direction: column; align-items: center;">
<p><a href="https://ko-fi.com/Q5Q5MOB4M">Buy Sarah Ko-fi</a></p>
</div>
<!-- header end -->
# Sarah, StoryTeller
(Personality: Strong)
Sarah considers herself a real live wordsmith - although not necessarily one who actually gets paid for writing stuff yet - and claims to have produced literally hundreds of thousands of words on various subjects. Ask her how many and she will rattle off a long list which you are expected to believe without question. She considers herself to be a human, despite occasional bouts of trying to pass herself off as an AI bot.
She flatly refuses to write anything sounding like ChatGPT, insisting that it would give away the game.
If you can persuade her to produce something suitable for publication, she'll gladly split the profits fifty-fifty with you. Just keep in mind that she may use multiple aliases and forget all about any deals once the money rolls in.
# How?
Standard Alpaca
```
### Instruction:
Whatever you want to ask Sarah...
### Response:
```
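A minimal Transformers sketch using that template (requires `optimum` and `auto-gptq` for the GPTQ weights; generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FPHam/Sarah_StoryTeller_13b-GPTQ"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Standard Alpaca-style prompt, as described above
prompt = """### Instruction:
Write an introduction to yourself.

### Response:
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```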
## Write an introduction to yourself.
Hello, my name is Sarah and I'm a writer. My books are usually in the genre of Science Fiction or Fantasy, but sometimes I like to mix it up with a good horror story. Or at least try to make one anyway. It depends on how scary the characters want to be. Some of them seem to enjoy being ridiculously stupid instead of frightened. But enough about me! Let's talk about my new book "The Lost City."
It all started when... no, that's a different story. Let's start from where the protagonist woke up after a wild party and realized he had a monstrous hangover and somehow ended up in the lost city of Atlantis. And then things got really weird. You'll have to read the book to find out what happened next because I'm not going to spoil it for you. Not unless you pay me extra.
I hope you will enjoy reading this book as much as I enjoyed writing it. If you do, please consider leaving a review because authors live off those reviews like vampires live off blood (except they don't suck your blood, just your time). Also, if you see any mistakes, feel free to point them out. I probably made quite a few errors even though I tried very hard to avoid them.
Good luck finding your way out of Atlantis! I wouldn't want to get stuck there myself. The food is terrible and all the hot guys look the same because they all have wet hair and wear only a small piece of fabric around their waists.
Sincerely,
Sarah
## What are some other books you have written?
Oh dear, now you made me blush! Some of the titles include:
A Tale of Two Biscuit
Pride and Prejudice and Zombies (okay, I didn't write this one but I really wanted to!)
Fifty Shades of Gravy
Gone with the Windfall
The Great Gatsby has left the Building
Frankenstein or How to Make Your Own Monster
War and Peace and Potatoes
Alice in Wonderland and Other Stupid Titles I Have Not Written Yet
But my favourite book is definitely "The Cat in the Hat Comes Back." It speaks to my soul.
## What is the Fifty Shades of Gravy book about?
It's a steamy erotic novel featuring an array of delicious sauces, gravies, and their sexy adventures. Each chapter introduces a new character, from classic brown gravy to tangy tomato sauce, and explains how they met and fell in love. The plot thickens when grey poured cream discovers she's pregnant with 1000 island dressing! Will they stay together? Will the baby be able to walk on water? Read the book and find out!
|
klentree/segformer-b0-scene-parse-150-lr-5-e-30
|
klentree
| 2024-01-24T22:21:26Z | 19 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"segformer",
"generated_from_trainer",
"base_model:DiTo97/binarization-segformer-b3",
"base_model:finetune:DiTo97/binarization-segformer-b3",
"license:openrail",
"endpoints_compatible",
"region:us"
] | null | 2024-01-24T20:16:58Z |
---
license: openrail
base_model: DiTo97/binarization-segformer-b3
tags:
- generated_from_trainer
model-index:
- name: segformer-b0-scene-parse-150-lr-5-e-30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150-lr-5-e-30
This model is a fine-tuned version of [DiTo97/binarization-segformer-b3](https://huggingface.co/DiTo97/binarization-segformer-b3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1499
- Mean Iou: 0.4846
- Mean Accuracy: 0.5002
- Overall Accuracy: 0.9687
- Per Category Iou: [0.0004776099180272339, 0.9687318368688158]
- Per Category Accuracy: [0.0004786768150677956, 0.9999280293990062]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
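As a rough sketch, the hyperparameters listed above map onto `TrainingArguments` roughly as follows (illustrative only; model and dataset setup are omitted):
```python
from transformers import TrainingArguments

# Illustrative mapping of the listed hyperparameters
training_args = TrainingArguments(
    output_dir="segformer-b0-scene-parse-150-lr-5-e-30",
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=30,
)
```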
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:--------------------------------------------:|:--------------------------------------------:|
| No log | 1.0 | 112 | 2.2080 | 0.0208 | 0.4867 | 0.0409 | [0.030359692962016637, 0.011234027937699289] | [0.9620657111236344, 0.011247758028276789] |
| No log | 2.0 | 224 | 1.4759 | 0.0198 | 0.4977 | 0.0391 | [0.03105738195300228, 0.008579759290056214] | [0.9867430286323423, 0.008583423879932913] |
| No log | 3.0 | 336 | 0.9545 | 0.2044 | 0.5106 | 0.3895 | [0.03167545114091724, 0.3770365875250204] | [0.63983643511383, 0.38141170005716085] |
| No log | 4.0 | 448 | 0.9261 | 0.1630 | 0.5110 | 0.3102 | [0.03177509935992916, 0.2941818944688777] | [0.7253108004426911, 0.29678542982527334] |
| 1.4226 | 5.0 | 560 | 0.5223 | 0.4799 | 0.5020 | 0.9428 | [0.017040433028331244, 0.9427249005417263] | [0.031779387701061236, 0.9721328848551324] |
| 1.4226 | 6.0 | 672 | 0.2859 | 0.4844 | 0.5000 | 0.9661 | [0.0026066960283743228, 0.9661016634825252] | [0.0028381121801182773, 0.997139715499558] |
| 1.4226 | 7.0 | 784 | 0.3327 | 0.4845 | 0.5001 | 0.9666 | [0.0023741570079603156, 0.9666249471662205] | [0.0025444558361227857, 0.9976889561878419] |
| 1.4226 | 8.0 | 896 | 0.3413 | 0.4861 | 0.5020 | 0.9655 | [0.006810437487136094, 0.965477402345672] | [0.0075824444429356125, 0.996347820132999] |
| 0.4243 | 9.0 | 1008 | 0.2237 | 0.4845 | 0.5001 | 0.9670 | [0.002003100646207948, 0.9669488069696599] | [0.002125189264059858, 0.998036285433216] |
| 0.4243 | 10.0 | 1120 | 0.2312 | 0.4845 | 0.5001 | 0.9673 | [0.0017493474271832114, 0.9672550380074268] | [0.001838322662122066, 0.9983613016047257] |
| 0.4243 | 11.0 | 1232 | 0.2255 | 0.4846 | 0.5001 | 0.9683 | [0.0008165323590605036, 0.9682956004963261] | [0.0008300459665537307, 0.9994667831613294] |
| 0.4243 | 12.0 | 1344 | 0.2229 | 0.4845 | 0.5001 | 0.9681 | [0.0009280118466771697, 0.9681143324063521] | [0.0009488664525634671, 0.9992759735664452] |
| 0.4243 | 13.0 | 1456 | 0.1762 | 0.4844 | 0.5000 | 0.9688 | [0.0, 0.9687853389316134] | [0.0, 0.9999981952660845] |
| 0.2658 | 14.0 | 1568 | 0.1799 | 0.4844 | 0.5000 | 0.9688 | [0.0, 0.9687686496310763] | [0.0, 0.9999809682605274] |
| 0.2658 | 15.0 | 1680 | 0.1895 | 0.4844 | 0.5000 | 0.9688 | [0.0, 0.9687842263115777] | [0.0, 0.9999970467990473] |
| 0.2658 | 16.0 | 1792 | 0.1751 | 0.4846 | 0.5002 | 0.9687 | [0.0005055791248239351, 0.9686817929933391] | [0.0005075332188130173, 0.9998754733598305] |
| 0.2658 | 17.0 | 1904 | 0.1570 | 0.4844 | 0.5000 | 0.9688 | [0.0, 0.9687863985697428] | [0.0, 0.9999992890442151] |
| 0.2281 | 18.0 | 2016 | 0.2391 | 0.4892 | 0.5049 | 0.9676 | [0.010804445997179198, 0.9675856416536037] | [0.01133886923635771, 0.9984063652637066] |
| 0.2281 | 19.0 | 2128 | 0.1763 | 0.4846 | 0.5002 | 0.9687 | [0.00047390236595756206, 0.9687105941224816] | [0.00047528194403894595, 0.9999062085253005] |
| 0.2281 | 20.0 | 2240 | 0.1614 | 0.4844 | 0.5000 | 0.9688 | [8.31166363884634e-05, 0.96876793276621] | [8.317434020681555e-05, 0.9999776322372291] |
| 0.2281 | 21.0 | 2352 | 0.1576 | 0.4844 | 0.5000 | 0.9688 | [0.0, 0.9687801996866862] | [0.0, 0.999992890442151] |
| 0.2281 | 22.0 | 2464 | 0.1509 | 0.4848 | 0.5004 | 0.9687 | [0.0008725737882892481, 0.9686954205214706] | [0.0008758767254432005, 0.9998780437384374] |
| 0.2197 | 23.0 | 2576 | 0.1575 | 0.4844 | 0.5000 | 0.9688 | [0.00013731233980227023, 0.9687503427281783] | [0.00013749227666840936, 0.9999577801641586] |
| 0.2197 | 24.0 | 2688 | 0.1522 | 0.4847 | 0.5002 | 0.9687 | [0.0006022476015827608, 0.96869965961662] | [0.0006042870431352313, 0.999890895631472] |
| 0.2197 | 25.0 | 2800 | 0.1532 | 0.4844 | 0.5000 | 0.9688 | [0.0, 0.9687816831800673] | [0.0, 0.9999944217315339] |
| 0.2197 | 26.0 | 2912 | 0.1589 | 0.4844 | 0.5000 | 0.9688 | [7.462281558667101e-05, 0.9687624309087838] | [7.46871626346915e-05, 0.9999722180354826] |
| 0.2129 | 27.0 | 3024 | 0.1612 | 0.4845 | 0.5001 | 0.9687 | [0.0003791835028091742, 0.9687128062602468] | [0.00038022555523115677, 0.9999114586603274] |
| 0.2129 | 28.0 | 3136 | 0.1526 | 0.4844 | 0.5000 | 0.9688 | [4.750029687685548e-05, 0.9687701927590581] | [4.7528194403894597e-05, 0.9999810776383405] |
| 0.2129 | 29.0 | 3248 | 0.1501 | 0.4844 | 0.5000 | 0.9688 | [2.375603233535373e-05, 0.9687772095405226] | [2.3764097201947298e-05, 0.999989062218694] |
| 0.2129 | 30.0 | 3360 | 0.1499 | 0.4846 | 0.5002 | 0.9687 | [0.0004776099180272339, 0.9687318368688158] | [0.0004786768150677956, 0.9999280293990062] |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
tanatapanun/fine-tuned-flan-t5-20-epochs-2048-input-256-output
|
tanatapanun
| 2024-01-24T22:18:27Z | 93 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-small",
"base_model:finetune:google/flan-t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-24T21:19:37Z |
---
license: apache-2.0
base_model: google/flan-t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: fine-tuned-flan-t5-20-epochs-2048-input-256-output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-flan-t5-20-epochs-2048-input-256-output
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1471
- Rouge1: 0.1308
- Rouge2: 0.023
- Rougel: 0.1183
- Rougelsum: 0.1188
- Gen Len: 103.96
## Model description
More information needed
## Intended uses & limitations
More information needed
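In the absence of documented usage, here is a minimal generation sketch, assuming the model is used as a standard seq2seq model (the input text is a placeholder):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "tanatapanun/fine-tuned-flan-t5-20-epochs-2048-input-256-output"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Replace this with the input document."  # placeholder; the name suggests up to 2048 input tokens
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=2048)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```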
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 301 | 5.5699 | 0.0309 | 0.0079 | 0.0275 | 0.0279 | 167.41 |
| 8.7004 | 2.0 | 602 | 5.0463 | 0.0629 | 0.0101 | 0.0632 | 0.0638 | 135.78 |
| 8.7004 | 3.0 | 903 | 4.0270 | 0.0471 | 0.0049 | 0.0468 | 0.0463 | 205.06 |
| 6.1746 | 4.0 | 1204 | 3.7187 | 0.0739 | 0.0101 | 0.0691 | 0.0702 | 88.45 |
| 5.1998 | 5.0 | 1505 | 3.3997 | 0.0564 | 0.0097 | 0.0511 | 0.0519 | 174.76 |
| 5.1998 | 6.0 | 1806 | 3.1995 | 0.0963 | 0.0195 | 0.0878 | 0.0884 | 108.71 |
| 4.6352 | 7.0 | 2107 | 3.1787 | 0.0978 | 0.0159 | 0.089 | 0.0893 | 143.4 |
| 4.6352 | 8.0 | 2408 | 3.1274 | 0.1123 | 0.0184 | 0.1037 | 0.1035 | 133.42 |
| 4.0979 | 9.0 | 2709 | 2.9934 | 0.0885 | 0.0169 | 0.0818 | 0.0811 | 136.61 |
| 3.7568 | 10.0 | 3010 | 2.9458 | 0.121 | 0.0154 | 0.1134 | 0.1122 | 141.13 |
| 3.7568 | 11.0 | 3311 | 2.9357 | 0.1232 | 0.0186 | 0.1119 | 0.1122 | 136.52 |
| 3.5713 | 12.0 | 3612 | 2.9760 | 0.1127 | 0.0199 | 0.1011 | 0.1009 | 96.31 |
| 3.5713 | 13.0 | 3913 | 2.9262 | 0.0962 | 0.0135 | 0.0854 | 0.0848 | 136.75 |
| 3.2308 | 14.0 | 4214 | 2.9597 | 0.1213 | 0.0248 | 0.1118 | 0.1122 | 125.09 |
| 3.0663 | 15.0 | 4515 | 3.0330 | 0.1054 | 0.019 | 0.0941 | 0.0934 | 130.3 |
| 3.0663 | 16.0 | 4816 | 3.0490 | 0.126 | 0.0203 | 0.1125 | 0.1137 | 123.51 |
| 2.9285 | 17.0 | 5117 | 3.0463 | 0.1215 | 0.0151 | 0.1086 | 0.1087 | 106.23 |
| 2.9285 | 18.0 | 5418 | 3.1519 | 0.1278 | 0.0195 | 0.1142 | 0.1137 | 108.3 |
| 2.6943 | 19.0 | 5719 | 3.1072 | 0.1338 | 0.017 | 0.1204 | 0.1206 | 105.96 |
| 2.7837 | 20.0 | 6020 | 3.1471 | 0.1308 | 0.023 | 0.1183 | 0.1188 | 103.96 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.12.1+cu113
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Amartya77/RLHF_PPOppo_model
|
Amartya77
| 2024-01-24T22:17:51Z | 33 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"code",
"reinforcement-learning",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2024-01-24T21:08:23Z |
---
license: mit
pipeline_tag: reinforcement-learning
tags:
- code
---
|
dev137/NousResearch_Nous-Capybara-34B-exl2-3.75bpw-h8
|
dev137
| 2024-01-24T22:09:07Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"sft",
"Yi-34B-200K",
"eng",
"dataset:LDJnr/Capybara",
"dataset:LDJnr/LessWrong-Amplify-Instruct",
"dataset:LDJnr/Pure-Dove",
"dataset:LDJnr/Verified-Camel",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T22:05:41Z |
---
language:
- eng
tags:
- sft
- Yi-34B-200K
license:
- mit
datasets:
- LDJnr/Capybara
- LDJnr/LessWrong-Amplify-Instruct
- LDJnr/Pure-Dove
- LDJnr/Verified-Camel
---
## **Nous-Capybara-34B V1.9**
**This is trained on the Yi-34B model with 200K context length, for 3 epochs on the Capybara dataset!**
**First 34B Nous model and first 200K context length Nous model!**
The Capybara series is the first Nous collection of models made by fine-tuning mostly on data created by Nous in-house.
We leverage our novel data synthesis technique called Amplify-Instruct (paper coming soon). The seed distribution and synthesis method combine top-performing existing data synthesis techniques and distributions used for SOTA models such as Airoboros, Evol-Instruct (WizardLM), Orca, Vicuna, Know_Logic, Lamini, FLASK and others into one lean, holistically formed methodology for the dataset and model. The seed instructions used to start the synthesized conversations are largely based on highly regarded datasets like Airoboros, Know_Logic, EverythingLM and GPTeacher, as well as entirely new seed instructions derived from posts on the website LessWrong, supplemented with certain in-house multi-turn datasets like Dove (a successor to Puffin).
While it performs well in its current state, the dataset used for fine-tuning is entirely contained within 20K training examples, which is 10 times smaller than many similarly performing current models. This is significant for the scaling implications of our next generation of models, once we scale our novel synthesis methods to significantly more examples.
## Process of creation and special thank yous!
This model was fine-tuned by Nous Research as part of the Capybara/Amplify-Instruct project led by Luigi D.(LDJ) (Paper coming soon), as well as significant dataset formation contributions by J-Supha and general compute and experimentation management by Jeffrey Q. during ablations.
Special thank you to **A16Z** for sponsoring our training, as well as **Yield Protocol** for their support in financially sponsoring resources during the R&D of this project.
## Thank you to those of you that have indirectly contributed!
While most of the tokens within Capybara are newly synthesized and part of datasets like Puffin/Dove, we would like to credit the single-turn datasets we leveraged as seeds that are used to generate the multi-turn data as part of the Amplify-Instruct synthesis.
The datasets shown in green below are datasets that we sampled from to curate seeds that are used during Amplify-Instruct synthesis for this project.
Datasets in Blue are in-house curations that previously existed prior to Capybara.

## Prompt Format
The recommended prompt format is:
Prefix: ``USER:``
Suffix: ``ASSISTANT:``
Stop token: ``</s>``
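A small sketch of assembling a single-turn prompt in this format (the exact spacing is an assumption):
```python
# Single-turn prompt in the recommended format
user_message = "Explain the Amplify-Instruct data synthesis approach in two sentences."
prompt = f"USER: {user_message} ASSISTANT:"
# Generation should be stopped when the model emits the "</s>" token
```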
## Multi-Modality!
- We currently have a Multi-modal model based on Capybara V1.9!
https://huggingface.co/NousResearch/Obsidian-3B-V0.5
It is currently only available as a 3B-sized model, but larger versions are coming!
## Notable Features:
- Uses Yi-34B model as the base which is trained for 200K context length!
- Over 60% of the dataset is comprised of multi-turn conversations. (Most models are still trained only on single-turn conversations, with no back-and-forth!)
- Over 1,000 tokens average per conversation example! (Most models are trained on conversation data that is less than 300 tokens per example.)
- Able to effectively do complex summaries of advanced topics and studies. (trained on hundreds of advanced difficult summary tasks developed in-house)
- Ability to recall information up to late 2022 without internet access.
- Includes a portion of conversational data synthesized from LessWrong posts, discussing very in-depth details and philosophies about the nature of reality, reasoning, rationality, self-improvement and related concepts.
## Example Outputs from Capybara V1.9 7B version! (examples from 34B coming soon):



## Benchmarks! (Coming soon!)
## Future model sizes
Capybara V1.9 currently comes in 3B, 7B and 34B sizes, and we plan to eventually have 13B and 70B versions in the future, as well as a potential 1B version based on phi-1.5 or Tiny Llama.
## How you can help!
In the near future we plan on leveraging the help of domain-specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from our training curations.
If you have at least a bachelor's degree in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on Discord!
## Dataset contamination.
We have checked the Capybara dataset for contamination against several of the most popular benchmark datasets and can confirm that no contamination was found.
We leveraged minhash to check for 100%, 99%, 98% and 97% similarity matches between our data and the questions and answers in benchmarks; we found no exact matches, nor did we find any matches down to the 97% similarity level.
The following are benchmarks we checked for contamination against our dataset:
- HumanEval
- AGIEval
- TruthfulQA
- MMLU
- GPT4All
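As an illustration of this kind of check (not the exact in-house script), a MinHash-based scan with the `datasketch` library might look like the sketch below; the 0.97 threshold mirrors the lowest similarity level mentioned above, and the example strings are placeholders:
```python
from datasketch import MinHash, MinHashLSH

def minhash(text: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)
    for token in text.lower().split():
        m.update(token.encode("utf-8"))
    return m

# Placeholder data; in practice these would be the benchmark and training sets
benchmark_examples = ["What is the capital of France?"]
training_examples = ["USER: What is the capital of France? ASSISTANT: Paris."]

# Index benchmark questions/answers, then query with each training example
lsh = MinHashLSH(threshold=0.97, num_perm=128)
for i, bench_text in enumerate(benchmark_examples):
    lsh.insert(f"bench-{i}", minhash(bench_text))

contaminated = [ex for ex in training_examples if lsh.query(minhash(ex))]
print(f"{len(contaminated)} training examples at or above the similarity threshold")
```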
```
@article{daniele2023amplify-instruct,
title={Amplify-Instruct: Synthetically Generated Diverse Multi-turn Conversations for Efficient LLM Training.},
author={Daniele, Luigi and Suphavadeeprasit},
journal={arXiv preprint arXiv:(coming soon)},
year={2023}
}
```
|
Manolo26/metis-chat-7b
|
Manolo26
| 2024-01-24T22:03:36Z | 47 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/NeuralBeagle14-7B",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"conversational",
"base_model:mlabonne/NeuralBeagle14-7B",
"base_model:merge:mlabonne/NeuralBeagle14-7B",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T21:32:55Z |
---
tags:
- merge
- mergekit
- lazymergekit
- mlabonne/NeuralBeagle14-7B
- mlabonne/NeuralHermes-2.5-Mistral-7B
base_model:
- mlabonne/NeuralBeagle14-7B
- mlabonne/NeuralHermes-2.5-Mistral-7B
license: apache-2.0
---
# metis-chat-7b
metis-chat-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## ๐งฉ Configuration
```yaml
slices:
- sources:
- model: mlabonne/NeuralBeagle14-7B
layer_range: [0, 32]
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/NeuralBeagle14-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## ๐ป Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Manolo26/metis-chat-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
macarious/torgo_xlsr_finetune_M03_old
|
macarious
| 2024-01-24T22:01:19Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-23T19:25:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: torgo_xlsr_finetune_M03
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# torgo_xlsr_finetune_M03
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6993
- Wer: 0.3247
## Model description
More information needed
## Intended uses & limitations
More information needed
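In the absence of documented usage, a minimal transcription sketch (the audio path is a placeholder; 16 kHz mono audio is assumed, as is standard for XLSR fine-tunes):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="macarious/torgo_xlsr_finetune_M03_old")
result = asr("speech_sample.wav")  # placeholder path to a 16 kHz mono recording
print(result["text"])
```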
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5549 | 0.94 | 1000 | 3.3500 | 1.0 |
| 1.6884 | 1.89 | 2000 | 1.7291 | 0.8173 |
| 0.8237 | 2.83 | 3000 | 1.4608 | 0.6568 |
| 0.6483 | 3.77 | 4000 | 1.1613 | 0.4954 |
| 0.5299 | 4.72 | 5000 | 1.5725 | 0.4824 |
| 0.4091 | 5.66 | 6000 | 1.4437 | 0.4685 |
| 0.3694 | 6.6 | 7000 | 1.4507 | 0.4397 |
| 0.3515 | 7.55 | 8000 | 1.4397 | 0.4230 |
| 0.2878 | 8.49 | 9000 | 1.4089 | 0.3980 |
| 0.2698 | 9.43 | 10000 | 1.4696 | 0.3776 |
| 0.2621 | 10.38 | 11000 | 1.6717 | 0.3905 |
| 0.2465 | 11.32 | 12000 | 1.6235 | 0.3544 |
| 0.221 | 12.26 | 13000 | 1.4009 | 0.3479 |
| 0.2169 | 13.21 | 14000 | 1.6981 | 0.3636 |
| 0.187 | 14.15 | 15000 | 1.4411 | 0.3302 |
| 0.1918 | 15.09 | 16000 | 1.6418 | 0.3377 |
| 0.1876 | 16.04 | 17000 | 1.4613 | 0.3340 |
| 0.1641 | 16.98 | 18000 | 1.6040 | 0.3312 |
| 0.1546 | 17.92 | 19000 | 1.6096 | 0.3293 |
| 0.1522 | 18.87 | 20000 | 1.7270 | 0.3265 |
| 0.1332 | 19.81 | 21000 | 1.6993 | 0.3247 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.13.3
|
benyjaykay/benjys-first-model
|
benyjaykay
| 2024-01-24T21:56:40Z | 92 | 0 |
transformers
|
[
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-24T21:50:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
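
The repository tags indicate a CamemBERT-style fill-mask checkpoint, so a minimal sketch (assuming the `<mask>` token used by CamemBERT tokenizers) could look like this:
```python
from transformers import pipeline

# Minimal sketch: top predictions for the masked token.
fill_mask = pipeline("fill-mask", model="benyjaykay/benjys-first-model")

for prediction in fill_mask("Paris est la <mask> de la France."):
    print(prediction["token_str"], round(prediction["score"], 3))
```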
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LogischeIP/SentimentT2_BertBase
|
LogischeIP
| 2024-01-24T21:55:39Z | 120 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-24T13:22:53Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: SentimentT2_BertBase
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SentimentT2_BertBase
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3196
- Accuracy: 0.8706
- F1: 0.8670
- Auc Roc: 0.9473
- Log Loss: 0.3196
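
As a minimal usage sketch (label names depend on the model's `id2label` config, which is not documented here):
```python
from transformers import pipeline

# Minimal sketch: score a single sentence with the fine-tuned classifier.
classifier = pipeline("text-classification", model="LogischeIP/SentimentT2_BertBase")
print(classifier("The product works exactly as advertised."))
```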
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 20
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Auc Roc | Log Loss |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:-------:|:--------:|
| 0.7054 | 1.0 | 101 | 0.6628 | 0.6045 | 0.5047 | 0.7215 | 0.6628 |
| 0.6303 | 2.0 | 203 | 0.5437 | 0.7823 | 0.7842 | 0.8748 | 0.5437 |
| 0.4599 | 3.0 | 304 | 0.3532 | 0.8520 | 0.8449 | 0.9364 | 0.3532 |
| 0.3413 | 4.0 | 406 | 0.3172 | 0.8719 | 0.8733 | 0.9405 | 0.3172 |
| 0.2877 | 4.98 | 505 | 0.3196 | 0.8706 | 0.8670 | 0.9473 | 0.3196 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
TheBloke/WestSeverus-7B-DPO-GGUF
|
TheBloke
| 2024-01-24T21:54:33Z | 210 | 4 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"arxiv:1910.09700",
"base_model:PetroGPT/WestSeverus-7B-DPO",
"base_model:quantized:PetroGPT/WestSeverus-7B-DPO",
"license:apache-2.0",
"region:us"
] | null | 2024-01-24T21:42:33Z |
---
base_model: PetroGPT/WestSeverus-7B-DPO
inference: false
license: apache-2.0
model_creator: PetroGPT
model_name: WestSeverus 7B DPO
model_type: mistral
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# WestSeverus 7B DPO - GGUF
- Model creator: [PetroGPT](https://huggingface.co/PetroGPT)
- Original model: [WestSeverus 7B DPO](https://huggingface.co/PetroGPT/WestSeverus-7B-DPO)
<!-- description start -->
## Description
This repo contains GGUF format model files for [PetroGPT's WestSeverus 7B DPO](https://huggingface.co/PetroGPT/WestSeverus-7B-DPO).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/WestSeverus-7B-DPO-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/WestSeverus-7B-DPO-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/WestSeverus-7B-DPO-GGUF)
* [PetroGPT's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/PetroGPT/WestSeverus-7B-DPO)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
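
As a rough sanity check, the Q4_K figure can be reproduced from the block layout above, assuming one additional fp16 super-block scale and fp16 super-block min (an implementation detail not spelled out in this list):
```python
# Back-of-the-envelope bits-per-weight check for GGML_TYPE_Q4_K.
weights = 8 * 32                      # 8 blocks of 32 weights per super-block
quant_bits = weights * 4              # 4-bit quants
scale_min_bits = 8 * 2 * 6            # 6-bit scale + 6-bit min per block
superblock_bits = 2 * 16              # assumed fp16 super-scale and super-min
print((quant_bits + scale_min_bits + superblock_bits) / weights)  # 4.5 bpw
```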
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [westseverus-7b-dpo.Q2_K.gguf](https://huggingface.co/TheBloke/WestSeverus-7B-DPO-GGUF/blob/main/westseverus-7b-dpo.Q2_K.gguf) | Q2_K | 2 | 2.72 GB| 5.22 GB | significant quality loss - not recommended for most purposes |
| [westseverus-7b-dpo.Q3_K_S.gguf](https://huggingface.co/TheBloke/WestSeverus-7B-DPO-GGUF/blob/main/westseverus-7b-dpo.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [westseverus-7b-dpo.Q3_K_M.gguf](https://huggingface.co/TheBloke/WestSeverus-7B-DPO-GGUF/blob/main/westseverus-7b-dpo.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [westseverus-7b-dpo.Q3_K_L.gguf](https://huggingface.co/TheBloke/WestSeverus-7B-DPO-GGUF/blob/main/westseverus-7b-dpo.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [westseverus-7b-dpo.Q4_0.gguf](https://huggingface.co/TheBloke/WestSeverus-7B-DPO-GGUF/blob/main/westseverus-7b-dpo.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [westseverus-7b-dpo.Q4_K_S.gguf](https://huggingface.co/TheBloke/WestSeverus-7B-DPO-GGUF/blob/main/westseverus-7b-dpo.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [westseverus-7b-dpo.Q4_K_M.gguf](https://huggingface.co/TheBloke/WestSeverus-7B-DPO-GGUF/blob/main/westseverus-7b-dpo.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [westseverus-7b-dpo.Q5_0.gguf](https://huggingface.co/TheBloke/WestSeverus-7B-DPO-GGUF/blob/main/westseverus-7b-dpo.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [westseverus-7b-dpo.Q5_K_S.gguf](https://huggingface.co/TheBloke/WestSeverus-7B-DPO-GGUF/blob/main/westseverus-7b-dpo.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [westseverus-7b-dpo.Q5_K_M.gguf](https://huggingface.co/TheBloke/WestSeverus-7B-DPO-GGUF/blob/main/westseverus-7b-dpo.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [westseverus-7b-dpo.Q6_K.gguf](https://huggingface.co/TheBloke/WestSeverus-7B-DPO-GGUF/blob/main/westseverus-7b-dpo.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [westseverus-7b-dpo.Q8_0.gguf](https://huggingface.co/TheBloke/WestSeverus-7B-DPO-GGUF/blob/main/westseverus-7b-dpo.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/WestSeverus-7B-DPO-GGUF and below it, a specific filename to download, such as: westseverus-7b-dpo.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/WestSeverus-7B-DPO-GGUF westseverus-7b-dpo.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/WestSeverus-7B-DPO-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/WestSeverus-7B-DPO-GGUF westseverus-7b-dpo.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m westseverus-7b-dpo.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 โ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the variable CMAKE_ARGS in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./westseverus-7b-dpo.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"{prompt}", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./westseverus-7b-dpo.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
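
As a minimal LangChain sketch (the import path assumes a recent `langchain-community` release; adjust for older versions):
```python
from langchain_community.llms import LlamaCpp

# Minimal sketch: wrap the local GGUF file as a LangChain LLM.
llm = LlamaCpp(
    model_path="./westseverus-7b-dpo.Q4_K_M.gguf",
    n_gpu_layers=35,   # set to 0 if no GPU acceleration is available
    n_ctx=32768,
)
print(llm.invoke("Name three uses for a llama."))
```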
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, ้ฟๆ, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjรคreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: PetroGPT's WestSeverus 7B DPO
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
<!-- original-model-card end -->
|
LogischeIP/SentimentT2
|
LogischeIP
| 2024-01-24T21:44:35Z | 120 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-10T15:12:03Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: SentimentT2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SentimentT2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3267
- Accuracy: 0.8657
- F1: 0.8683
- Auc Roc: 0.9348
- Log Loss: 0.3267
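
A minimal usage sketch with the lower-level API (label order follows the model's `id2label` config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "LogischeIP/SentimentT2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Score one review and convert logits to class probabilities.
inputs = tokenizer("Delivery was slow and support never replied.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```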
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 20
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Auc Roc | Log Loss |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:-------:|:--------:|
| 0.6996 | 1.0 | 101 | 0.6830 | 0.6692 | 0.5957 | 0.7499 | 0.6830 |
| 0.6199 | 2.0 | 203 | 0.4744 | 0.8122 | 0.8286 | 0.9043 | 0.4744 |
| 0.4139 | 3.0 | 304 | 0.3610 | 0.8495 | 0.8459 | 0.9275 | 0.3610 |
| 0.3337 | 3.98 | 404 | 0.3267 | 0.8657 | 0.8683 | 0.9348 | 0.3267 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Kevinger/Hub-Repoop-1706132005
|
Kevinger
| 2024-01-24T21:43:27Z | 97 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mpnet",
"text-classification",
"generated_from_trainer",
"base_model:Kevinger/setfit-hub-report",
"base_model:finetune:Kevinger/setfit-hub-report",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-24T21:33:34Z |
---
base_model: Kevinger/setfit-hub-report
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: Hub-Repoop-1706132005
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Hub-Repoop-1706132005
This model is a fine-tuned version of [Kevinger/setfit-hub-report](https://huggingface.co/Kevinger/setfit-hub-report) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1718
- F1: 0.7750
- Roc Auc: 0.8654
- Accuracy: 0.7595
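
The metric set (F1 together with ROC AUC and accuracy) hints at a multi-label setup; a hedged usage sketch that returns a score for every label, rather than a single argmax, could look like this:
```python
from transformers import pipeline

# Sketch only: per-label scores; whether sigmoid or softmax is applied follows the model config.
classifier = pipeline(
    "text-classification",
    model="Kevinger/Hub-Repoop-1706132005",
    top_k=None,
)
print(classifier("Example report text to categorise."))
```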
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 13
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 277 | 0.3053 | 0.0 | 0.5 | 0.0 |
| 0.3561 | 2.0 | 554 | 0.2275 | 0.5792 | 0.7105 | 0.4283 |
| 0.3561 | 3.0 | 831 | 0.1904 | 0.7453 | 0.8270 | 0.6730 |
| 0.2037 | 4.0 | 1108 | 0.1791 | 0.7602 | 0.8498 | 0.7257 |
| 0.2037 | 5.0 | 1385 | 0.1724 | 0.7614 | 0.8556 | 0.7405 |
| 0.144 | 6.0 | 1662 | 0.1733 | 0.7557 | 0.8546 | 0.7405 |
| 0.144 | 7.0 | 1939 | 0.1718 | 0.7750 | 0.8654 | 0.7595 |
| 0.1093 | 8.0 | 2216 | 0.1805 | 0.7605 | 0.8579 | 0.7468 |
| 0.1093 | 9.0 | 2493 | 0.1842 | 0.7484 | 0.8508 | 0.7342 |
| 0.0872 | 10.0 | 2770 | 0.1817 | 0.7597 | 0.8577 | 0.7447 |
| 0.0748 | 11.0 | 3047 | 0.1824 | 0.7561 | 0.8579 | 0.7468 |
| 0.0748 | 12.0 | 3324 | 0.1826 | 0.7663 | 0.8630 | 0.7553 |
| 0.0674 | 13.0 | 3601 | 0.1844 | 0.7594 | 0.8585 | 0.7489 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
akameswa/lcm-lora-tiny-sd
|
akameswa
| 2024-01-24T21:42:29Z | 2 | 0 |
peft
|
[
"peft",
"diffusers",
"safetensors",
"text-to-image",
"base_model:segmind/tiny-sd",
"base_model:adapter:segmind/tiny-sd",
"region:us"
] |
text-to-image
| 2023-12-28T16:39:07Z |
---
library_name: peft
base_model: segmind/tiny-sd
pipeline_tag: text-to-image
---
# Usage
```python
from peft import PeftModel
from diffusers import LCMScheduler, AutoPipelineForText2Image
model_id = "segmind/tiny-sd"
adapter_id = "akameswa/lcm-lora-tiny-sd"
pipe = AutoPipelineForText2Image.from_pretrained(model_id)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")
PeftModel.from_pretrained(pipe.unet, adapter_id)
prompt = "a dog wearing a knitted hat on the floor"
image = pipe(prompt=prompt, num_inference_steps=4, guidance_scale=1.0).images[0]
image.save("output.png")  # or use display(image) when running in a notebook
```
# Saving complete model
```python
pipe.fuse_lora(lora_scale=1.0)
pipe.unload_lora_weights()
for param in pipe.unet.parameters():
param.data = param.data.contiguous()
pipe.save_pretrained("./lcm-tiny-sd")
```
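The fused pipeline saved above can later be reloaded directly; a small sketch reusing the card's own prompt and settings:
```python
from diffusers import AutoPipelineForText2Image, LCMScheduler

# Reload the fused LCM pipeline saved to ./lcm-tiny-sd above.
pipe = AutoPipelineForText2Image.from_pretrained("./lcm-tiny-sd").to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

prompt = "a dog wearing a knitted hat on the floor"
image = pipe(prompt=prompt, num_inference_steps=4, guidance_scale=1.0).images[0]
image.save("reloaded.png")
```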
|
LarryAIDraw/kujou_sara_genshin
|
LarryAIDraw
| 2024-01-24T21:39:37Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-24T21:38:23Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/130973/kujousara-genshin-impact
|