JosephusCheung committed on

Commit d82a6a4 · 1 Parent(s): 96379e3

Update README.md

Files changed (1): README.md (+6 -5)

README.md CHANGED
```diff
@@ -34,13 +34,13 @@ tags:
 
 *Image drawn by GPT-4 DALL·E 3* TL;DR: Perhaps this 7B model, better than all existing models <= 33B, in most quantitative evaluations...
 
-**Some problems with llama.cpp on GPT2Tokenizer, gotta fix soon...**
-
 # Please Stop Using WRONG unofficial quant models unless you know what you're doing
 
-GPTQ quants require a good dataset for calibration, and the default C4 dataset is not capable - [see the releated issue](https://huggingface.co/CausalLM/14B/discussions/3)
+GPTQ quants require a good dataset for calibration, and the default C4 dataset is not capable.
+
+**llama.cpp GGUF models**
+GPT2Tokenizer fixed by [Kerfuffle](https://github.com/KerfuffleV2) on [https://github.com/ggerganov/llama.cpp/pull/3743](https://github.com/ggerganov/llama.cpp/pull/3743), new models to be reuploaded.
 
-**Some problems with llama.cpp on GPT2Tokenizer, gotta fix soon...**
 
 ## Read Me:
 
@@ -91,7 +91,8 @@ Hard acc:48.03
 **Zero-shot ACC 0.5921152388172858** (Outperforms WizardMath-7B and Qwen-7B)
 
 
-**GPT2Tokenizer 上的 llama.cpp 存在一些问题,会尽快修复...**
+**llama.cpp GGUF models**
+GPT2Tokenizer 支持由 [Kerfuffle](https://github.com/KerfuffleV2) 修复于 [https://github.com/ggerganov/llama.cpp/pull/3743](https://github.com/ggerganov/llama.cpp/pull/3743),新模型稍后上传。
 
 ## 请读我:
 
```
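The substantive change in this commit is the claim that GPTQ quantization quality depends on the calibration dataset, with the default C4 samples being insufficient. A minimal sketch of the idea follows; the helper below is hypothetical (it is not from this repo's tooling) and a toy character-level tokenizer stands in for a real one so the snippet runs without downloading a model. In practice you would tokenize domain-matched text with the model's own tokenizer and pass the resulting `input_ids`/`attention_mask` examples to your quantizer's calibration step.

```python
# Hedged sketch: building a domain-matched GPTQ calibration set instead of
# relying on default C4 samples. Names and shapes here are illustrative
# assumptions, not this repository's actual quantization script.

def build_calibration_examples(texts, tokenize, max_len=512):
    """Chunk raw domain texts into fixed-length token examples.

    Each example is a dict with "input_ids" and "attention_mask", the
    common input shape for GPTQ-style calibration loops.
    """
    examples = []
    for text in texts:
        ids = tokenize(text)
        for start in range(0, len(ids), max_len):
            chunk = ids[start:start + max_len]
            if len(chunk) >= 8:  # drop tiny fragments that add noise
                examples.append({
                    "input_ids": chunk,
                    "attention_mask": [1] * len(chunk),
                })
    return examples

# Stand-in "tokenizer" (one token per character) so this runs standalone;
# replace with the model's real tokenizer for actual calibration.
toy_tokenize = lambda s: [ord(c) for c in s]

samples = build_calibration_examples(
    ["calibration text one", "short"], toy_tokenize, max_len=8
)
```

The point of chunking and filtering is that calibration statistics should reflect the token distributions the quantized model will actually see; a handful of representative, full-length chunks beats a large volume of mismatched generic text.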