JosephusCheung committed
Commit 3f4f76e · 1 Parent(s): e4a0d4d

Update README.md

Files changed (1):
  1. README.md +4 -0
README.md CHANGED
@@ -37,6 +37,8 @@ tags:
  # CausalLM 7B - Fully Compatible with Meta LLaMA 2
  Use the transformers library, which does not require remote/external code, to load the model: AutoModelForCausalLM and AutoTokenizer (or manually specify LlamaForCausalLM to load the LM and GPT2Tokenizer to load the tokenizer). Model quantization is fully compatible with GGUF (llama.cpp), GPTQ, and AWQ.
 
+ # Recent Updates: [DPO-α Version](https://huggingface.co/CausalLM/7B-DPO-alpha) outperforms Zephyr-β in MT-Bench
+
  **llama.cpp GGUF models**
  GPT2Tokenizer fixed by [Kerfuffle](https://github.com/KerfuffleV2) in [https://github.com/ggerganov/llama.cpp/pull/3743](https://github.com/ggerganov/llama.cpp/pull/3743); new models have been re-uploaded.
 
@@ -105,6 +107,8 @@ Hard acc:48.03
 
  # Causal Language Model 7B - Fully Compatible with Meta LLaMA 2
  Use the transformers library, which does not require remote/external code, to load the model: AutoModelForCausalLM and AutoTokenizer (or manually specify LlamaForCausalLM to load the LM and GPT2Tokenizer to load the tokenizer). Model quantization is fully compatible with GGUF (llama.cpp), GPTQ, and AWQ.
 
+ # Recent Updates: [DPO-α Version](https://huggingface.co/CausalLM/7B-DPO-alpha) outperforms Zephyr-β in MT-Bench
+
  **llama.cpp GGUF models**
  GPT2Tokenizer support fixed by [Kerfuffle](https://github.com/KerfuffleV2) in [https://github.com/ggerganov/llama.cpp/pull/3743](https://github.com/ggerganov/llama.cpp/pull/3743); new models will be uploaded shortly.
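
The README text above says the model loads with stock transformers classes, no `trust_remote_code` needed. A minimal sketch of that loading path, assuming the model id `CausalLM/7B` (inferred from the repository name, not stated in the diff) and an installed `transformers`:

```python
# Minimal loading sketch for CausalLM 7B, per the README: plain Auto classes
# suffice; the model id "CausalLM/7B" is an assumption based on the repo name.
from transformers import AutoModelForCausalLM, AutoTokenizer

def load_causallm(model_id: str = "CausalLM/7B"):
    # Per the README, AutoTokenizer resolves to GPT2Tokenizer and
    # AutoModelForCausalLM to LlamaForCausalLM for this model, so no
    # remote/external code is required.
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    return model, tokenizer

if __name__ == "__main__":
    model, tokenizer = load_causallm()
```

The explicit equivalents the README mentions would be `LlamaForCausalLM.from_pretrained(model_id)` and `GPT2Tokenizer.from_pretrained(model_id)`.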