Commit · b135f25
Parent(s): d82a6a4
Update README.md
README.md CHANGED
@@ -39,8 +39,9 @@ tags:
GPTQ quants require a good dataset for calibration, and the default C4 dataset is not capable.

**llama.cpp GGUF models**
- GPT2Tokenizer fixed by [Kerfuffle](https://github.com/KerfuffleV2) on [https://github.com/ggerganov/llama.cpp/pull/3743](https://github.com/ggerganov/llama.cpp/pull/3743), new models
+ GPT2Tokenizer fixed by [Kerfuffle](https://github.com/KerfuffleV2) in [https://github.com/ggerganov/llama.cpp/pull/3743](https://github.com/ggerganov/llama.cpp/pull/3743); new models have been reuploaded.

+ Thanks to TheBloke for the GGUF quants: [https://huggingface.co/TheBloke/CausalLM-7B-GGUF](https://huggingface.co/TheBloke/CausalLM-7B-GGUF)

## Read Me:

@@ -94,6 +95,8 @@ Hard acc:48.03
**llama.cpp GGUF models**
GPT2Tokenizer support fixed by [Kerfuffle](https://github.com/KerfuffleV2) in [https://github.com/ggerganov/llama.cpp/pull/3743](https://github.com/ggerganov/llama.cpp/pull/3743); new models will be uploaded later.

+ Thanks to TheBloke for producing the GGUF quantized models: [https://huggingface.co/TheBloke/CausalLM-7B-GGUF](https://huggingface.co/TheBloke/CausalLM-7B-GGUF)
+
## Read Me:

See also the [14B version](https://huggingface.co/CausalLM/14B)
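The calibration note in the first hunk is easiest to see in code. Below is a minimal sketch, assuming Hugging Face transformers with optimum and auto-gptq installed; the `CausalLM/7B` model id, the output directory, and the sample calibration texts are illustrative assumptions, not part of the commit.

```python
# Hypothetical GPTQ quantization using a custom calibration set instead of the
# default C4 split; model id and texts below are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "CausalLM/7B"  # assumed source checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)

# GPTQConfig accepts a list of strings as the calibration dataset; text close
# to the model's intended domain tends to calibrate better than generic C4.
calibration_texts = [
    "A representative passage from the target domain...",
    "Another calibration sample covering typical prompts and answers...",
]

gptq_config = GPTQConfig(bits=4, dataset=calibration_texts, tokenizer=tokenizer)

quantized = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=gptq_config,  # quantization runs while loading
)
quantized.save_pretrained("CausalLM-7B-GPTQ-4bit")
```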
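The GGUF repository linked in this commit can be used directly from llama.cpp's Python bindings. Below is a minimal sketch, assuming llama-cpp-python is installed; the quant file name is an assumption, so substitute an actual `.gguf` file downloaded from TheBloke/CausalLM-7B-GGUF.

```python
# Hypothetical local inference with a GGUF quant from the repo linked above;
# the file name is illustrative, not taken from the commit.
from llama_cpp import Llama

llm = Llama(
    model_path="causallm_7b.Q4_K_M.gguf",  # a file from TheBloke/CausalLM-7B-GGUF
    n_ctx=2048,                            # context window
)

output = llm(
    "Q: Why does GPTQ need a good calibration dataset?\nA:",
    max_tokens=128,
    stop=["Q:"],
)
print(output["choices"][0]["text"])
```

This assumes a quant produced after the PR 3743 GPT2Tokenizer fix mentioned in the diff; GGUF conversions of this model made before that fix should be replaced with the reuploaded files.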