Commit 4455327
Parent(s): fa14aff
Update README.md

README.md CHANGED
---
license: apache-2.0
---

4-bit GPTQ quantized version of https://huggingface.co/tiiuae/falcon-40b-instruct

Make sure to run it with FlashAttention, as in https://github.com/huggingface/text-generation-inference
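As a rough sketch (not tested against this repo), a text-generation-inference deployment with GPTQ support might be launched along these lines; the image tag, port mapping, shard count, and volume path are assumptions to adapt to your setup:

```shell
# Sketch: serve this repo with text-generation-inference, loading the GPTQ weights.
# Image tag, shard count, port mapping, and volume path are placeholders.
model=AxisMind/falcon-40b-instruct-gptq
volume=$PWD/data  # downloaded weights are cached here

docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data \
    ghcr.io/huggingface/text-generation-inference:latest \
    --model-id $model \
    --quantize gptq \
    --trust-remote-code \
    --num-shard 4
```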
Also note that the 4-bit GPTQ quantized version seems to run about 2x slower than the 8-bit bitsandbytes version within text-generation-inference: we typically saw about 600-800 ms latency for token generation with 8-bit bitsandbytes, versus about 1.2-1.7 s with the 4-bit GPTQ version.
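To get a rough timing of your own deployment, the server's `/generate` endpoint can be timed directly; the host, port, prompt, and token count below are illustrative:

```shell
# Rough latency spot check against a running text-generation-inference server.
# Dividing the reported wall time by max_new_tokens gives an approximate per-token figure.
time curl -s http://127.0.0.1:8080/generate \
    -X POST \
    -H 'Content-Type: application/json' \
    -d '{"inputs": "Write a short poem about falcons.", "parameters": {"max_new_tokens": 64}}'
```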
This was quantized using:

`text-generation-server quantize tiiuae/falcon-40b-instruct /tmp/falcon40instructgptq --upload-to-model-id AxisMind/falcon-40b-instruct-gptq --trust-remote-code --act-order`
Hugging Face's GPTQ implementation can be found here: https://github.com/huggingface/text-generation-inference/blob/main/server/text_generation_server/utils/gptq/quantize.py
We have not evaluated quality degradation thoroughly, but for our use cases we did not notice any significant degradation, which is in line with the claims of the GPTQ paper relative to other low-bit quantization methods.
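As a hypothetical starting point for your own spot checks, the same prompt can be sent to a GPTQ deployment and an 8-bit bitsandbytes deployment and the generations compared by hand; the hostnames and prompt below are placeholders:

```shell
# Hypothetical side-by-side spot check: query both deployments with one prompt
# and compare the outputs manually for obvious quality differences.
payload='{"inputs": "Explain the difference between nuclear fission and fusion.", "parameters": {"max_new_tokens": 128}}'
curl -s http://gptq-host:8080/generate -X POST -H 'Content-Type: application/json' -d "$payload"
curl -s http://bnb-host:8080/generate  -X POST -H 'Content-Type: application/json' -d "$payload"
```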