Update README.md
README.md (CHANGED)
@@ -14,6 +14,6 @@ been negatively affected by the quantization process.
 
 The command run to quantize the model was:
 
-`ct2-transformers-converter --model ./models-hf/WizardLM/WizardCoder-Python-13B-V1.0 --quantization
+`ct2-transformers-converter --model ./models-hf/WizardLM/WizardCoder-Python-13B-V1.0 --quantization float16 --output_dir ./models-ct/WizardLM/WizardCoder-Python-13B-V1.0-ct2-float16`
 
 The quantization was run on a 'high-mem', CPU only (8 core, 51GB) colab instance and took approximately 10 minutes.
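
As a rough sketch of how a model converted this way can then be loaded, CTranslate2 exposes a `Generator` API for decoder-only models. The sketch below assumes the `--output_dir` path from the command above and the upstream `WizardLM/WizardCoder-Python-13B-V1.0` tokenizer from Hugging Face; the prompt text and sampling settings are only illustrative.

```python
import ctranslate2
import transformers

# Assumed path: the --output_dir used in the conversion command above.
model_dir = "./models-ct/WizardLM/WizardCoder-Python-13B-V1.0-ct2-float16"

# Load the converted weights; use device="cpu" if no GPU is available.
generator = ctranslate2.Generator(model_dir, device="cuda", compute_type="float16")

# The tokenizer still comes from the original Hugging Face model.
tokenizer = transformers.AutoTokenizer.from_pretrained("WizardLM/WizardCoder-Python-13B-V1.0")

prompt = "Write a Python function that reverses a string."

# CTranslate2 generators expect string tokens, not token ids.
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))

# Greedy decoding (sampling_topk=1); settings here are illustrative only.
results = generator.generate_batch([tokens], max_length=256, sampling_topk=1)

print(tokenizer.decode(results[0].sequences_ids[0]))
```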