Update README.md
README.md
@@ -1,3 +1,8 @@
+---
+license: llama2
+tags:
+- code
+---

This is a quantized version of **WizardLM/WizardCoder-Python-13B-V1.0**, quantized using [ctranslate2](https://github.com/OpenNMT/CTranslate2) (see inference instructions there).

@@ -10,4 +15,4 @@ The command run to quantize the model was:

`ct2-transformers-converter --model ./models-hf/WizardLM/WizardCoder-Python-13B-V1.0 --quantization int8_float16 --output_dir ./models-ct/WizardLM/WizardCoder-Python-13B-V1.0-ct2-int8_float16`

-The quantization was run on a 'high-mem', CPU only (8 core, 51GB) colab instance and took approximately 10 minutes.
+The quantization was run on a 'high-mem', CPU-only (8-core, 51 GB) Colab instance and took approximately 10 minutes.
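For inference, the CTranslate2 repository linked above is the authoritative reference. As a quick orientation, here is a minimal sketch of loading the converted model through the `ctranslate2` Python API; the model directory matches the `--output_dir` from the conversion command above, while the prompt and sampling settings are illustrative assumptions, not part of this repo.

```python
# Minimal inference sketch (assumptions: the --output_dir from the conversion
# command above is available locally; prompt and sampling settings are
# illustrative, not prescribed by this repo).
import ctranslate2
import transformers

model_dir = "./models-ct/WizardLM/WizardCoder-Python-13B-V1.0-ct2-int8_float16"

# CTranslate2 stores only the converted weights; the tokenizer still comes
# from the original Hugging Face repository.
tokenizer = transformers.AutoTokenizer.from_pretrained(
    "WizardLM/WizardCoder-Python-13B-V1.0"
)
generator = ctranslate2.Generator(model_dir, device="cpu")  # or device="cuda"

prompt = "Write a Python function that reverses a string."
# generate_batch expects token strings rather than token ids.
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))

results = generator.generate_batch(
    [tokens],
    max_length=256,
    sampling_temperature=0.2,
    include_prompt_in_result=False,
)
print(tokenizer.decode(results[0].sequences_ids[0]))
```

Note that `int8_float16` is primarily a GPU compute type; on a CPU-only host, CTranslate2 should fall back automatically to a supported int8 variant.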