cmh committed
Commit 32d0d2e · verified · 1 Parent(s): ceac8ea

Update README.md

Files changed (1):
  1. README.md (+69 −9)

README.md CHANGED
@@ -12,14 +12,14 @@ pipeline_tag: text-generation
 
 [ExLlamaV2 is an inference library for running local LLMs on modern consumer GPUs.](https://github.com/turboderp-org/exllamav2)
 
-| Filename | Quant type | File Size |
-| -------- | ---------- | --------- |
-| [phi-4_hb8_3bpw](https://huggingface.co/cmh/phi-4_exl2/tree/hb8_3bpw) | 3.00 bits per weight | 6.66 GB |
-| [phi-4_hb8_4bpw](https://huggingface.co/cmh/phi-4_exl2/tree/hb8_4bpw) | 4.00 bits per weight | 8.36 GB |
-| [phi-4_hb8_5bpw](https://huggingface.co/cmh/phi-4_exl2/tree/hb8_5bpw) | 5.00 bits per weight | 10.1 GB |
-| [phi-4_hb8_6bpw](https://huggingface.co/cmh/phi-4_exl2/tree/hb8_6bpw) | 6.00 bits per weight | 11.8 GB |
-| [phi-4_hb8_7bpw](https://huggingface.co/cmh/phi-4_exl2/tree/hb8_7bpw) | 7.00 bits per weight | 13.5 GB |
-| [phi-4_hb8_8bpw](https://huggingface.co/cmh/phi-4_exl2/tree/hb8_8bpw) | 8.00 bits per weight | 15.2 GB |
+| Filename | Quant type | File Size | VRAM at 16K context |
+| -------- | ---------- | --------- | ------------------- |
+| [phi-4_hb8_3bpw](https://huggingface.co/cmh/phi-4_exl2/tree/hb8_3bpw) | 3.00 bits per weight | 6.66 GB | 10.3 GB |
+| [phi-4_hb8_4bpw](https://huggingface.co/cmh/phi-4_exl2/tree/hb8_4bpw) | 4.00 bits per weight | 8.36 GB | 11.9 GB |
+| [phi-4_hb8_5bpw](https://huggingface.co/cmh/phi-4_exl2/tree/hb8_5bpw) | 5.00 bits per weight | 10.1 GB | 13.5 GB |
+| [phi-4_hb8_6bpw](https://huggingface.co/cmh/phi-4_exl2/tree/hb8_6bpw) | 6.00 bits per weight | 11.8 GB | 15.1 GB |
+| [phi-4_hb8_7bpw](https://huggingface.co/cmh/phi-4_exl2/tree/hb8_7bpw) | 7.00 bits per weight | 13.5 GB | 16.7 GB |
+| [phi-4_hb8_8bpw](https://huggingface.co/cmh/phi-4_exl2/tree/hb8_8bpw) | 8.00 bits per weight | 15.2 GB | 18.2 GB |
 
 # Phi-4 Model Card
 
@@ -32,8 +32,7 @@ pipeline_tag: text-generation
 | **Developers** | Microsoft Research |
 | **Description** | `phi-4` is a state-of-the-art open model built upon a blend of synthetic datasets, data from filtered public domain websites, and acquired academic books and Q&A datasets. The goal of this approach was to ensure that small capable models were trained with data focused on high quality and advanced reasoning.<br><br>`phi-4` underwent a rigorous enhancement and alignment process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures |
 | **Architecture** | 14B parameters, dense decoder-only Transformer model |
-| **Inputs** | Text, best suited for prompts in the chat format |
-| **Context length** | 16K tokens |
+| **Context length** | 16384 tokens |
 
 ## Usage
 
@@ -47,4 +46,65 @@ You are a medieval knight and must provide explanations to modern people.<|im_end|>
 <|im_start|>user<|im_sep|>
 How should I explain the Internet?<|im_end|>
 <|im_start|>assistant<|im_sep|>
+```
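The chat template above can be reproduced in a few lines of Python; a minimal sketch (the `build_phi4_prompt` helper name is illustrative, not part of the model card or ExLlamaV2):

```python
def build_phi4_prompt(user_message, system_message=None):
    """Assemble a phi-4 chat prompt following the template above.

    phi-4 opens each turn with <|im_start|>{role}<|im_sep|> and closes it
    with <|im_end|>; the prompt ends with an open assistant turn so the
    model generates the reply.
    """
    text = ""
    if system_message:
        text += f"<|im_start|>system<|im_sep|>\n{system_message}<|im_end|>\n"
    text += f"<|im_start|>user<|im_sep|>\n{user_message}<|im_end|>\n"
    text += "<|im_start|>assistant<|im_sep|>\n"
    return text

prompt = build_phi4_prompt(
    "How should I explain the Internet?",
    system_message="You are a medieval knight and must provide explanations to modern people.",
)
```

The returned string matches the fenced example in the Usage section, with the assistant turn left open for generation.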
+
+### With ExUI:
+
+Edit `exui/backend/prompts.py`:
+
+```python
+class PromptFormat_phi4(PromptFormat):
+
+    description = "Phi-4 format"
+
+    def __init__(self):
+        super().__init__()
+        pass
+
+    def is_instruct(self):
+        return True
+
+    def stop_conditions(self, tokenizer, settings):
+        return \
+            [tokenizer.eos_token_id,
+             """<|im_end|>"""]
+
+    def format(self, prompt, response, system_prompt, settings):
+        text = ""
+        if system_prompt and system_prompt.strip() != "":
+            text += "<|im_start|>system\n"
+            text += system_prompt
+            text += "\n<|im_end|>\n"
+        text += "<|im_start|>user\n"
+        text += prompt
+        text += "<|im_end|>\n"
+        text += "<|im_start|>assistant\n"
+        if response:
+            text += response
+            text += "<|im_end|>\n"
+        return text
+
+    def context_bos(self):
+        return True
+
+prompt_formats = \
+{
+    "Chat-RP": PromptFormat_raw,
+    "Llama-chat": PromptFormat_llama,
+    "Llama3-instruct": PromptFormat_llama3,
+    "ChatML": PromptFormat_chatml,
+    "TinyLlama-chat": PromptFormat_tinyllama,
+    "MistralLite": PromptFormat_mistrallite,
+    "Phind-CodeLlama": PromptFormat_phind_codellama,
+    "Deepseek-chat": PromptFormat_deepseek_chat,
+    "Deepseek-instruct": PromptFormat_deepseek_instruct,
+    "OpenChat": PromptFormat_openchat,
+    "Gemma": PromptFormat_gemma,
+    "Cohere": PromptFormat_cohere,
+    "Phi3-instruct": PromptFormat_phi3,
+    "Phi4": PromptFormat_phi4,
+    "Granite": PromptFormat_granite,
+    "Mistral V1": PromptFormat_mistralv1,
+    "Mistral V2/V3": PromptFormat_mistralv2v3,
+    "Mistral V3 (Tekken)": PromptFormat_mistralTekken,
+}
 ```
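As a rough cross-check on the quant table above, the weight portion of an EXL2 file can be estimated from the parameter count and the average bits per weight. A back-of-the-envelope sketch; it consistently understates the listed sizes, which is expected since these quants likely keep the output head at 8 bits (the `hb8` in the branch names) and the files also carry embeddings and metadata:

```python
def approx_exl2_size_gb(n_params, bpw):
    """Approximate quantized-weight size in decimal gigabytes:
    parameters * (bits per weight) / 8 bits per byte / 1e9 bytes per GB."""
    return n_params * bpw / 8 / 1e9

# phi-4 has roughly 14 billion parameters (see the model card above).
# e.g. 4.00 bpw -> 7.0 GB estimated, vs 8.36 GB listed in the table.
for bpw in (3.0, 4.0, 5.0, 6.0, 7.0, 8.0):
    print(f"{bpw:.2f} bpw ~ {approx_exl2_size_gb(14e9, bpw):.2f} GB")
```

The gap between these estimates and the listed file sizes is roughly constant across quants, consistent with fixed higher-precision components shared by all of them.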