---
license: apache-2.0
tags:
- code
- mistral
---
## Exllama v2 Quantization of Mistral-7B-codealpaca-lora
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.6">turboderp's ExLlamaV2 v0.0.6</a> for quantization.
Conversion was done using evol-codealpaca-v1.parquet as the calibration dataset.
Original model: https://huggingface.co/Nondzu/Mistral-7B-codealpaca-lora
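The conversion step above can be sketched as follows. This is a minimal, hedged example assuming ExLlamaV2 v0.0.6's `convert.py` script; all paths are placeholders, and the exact flags may differ slightly between releases:

```shell
# Sketch of an ExLlamaV2 quantization run; paths are placeholders.
# -i: directory with the original FP16 model
# -o: working/output directory
# -c: calibration dataset (parquet)
# -b: target bits per weight
python convert.py \
  -i /path/to/Mistral-7B-codealpaca-lora \
  -o /path/to/working_dir \
  -c evol-codealpaca-v1.parquet \
  -b 6.0
```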
- <a href="https://huggingface.co/bartowski/Mistral-7B-codealpaca-lora-exl2/tree/8.0">8.0 bits per weight</a>
- <a href="https://huggingface.co/bartowski/Mistral-7B-codealpaca-lora-exl2/tree/6.0">6.0 bits per weight</a>
- <a href="https://huggingface.co/bartowski/Mistral-7B-codealpaca-lora-exl2/tree/4.0">4.0 bits per weight</a>
- <a href="https://huggingface.co/bartowski/Mistral-7B-codealpaca-lora-exl2/tree/3.5">3.5 bits per weight</a>
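Each bit width lives in its own branch, so a single quantization can be fetched by revision. A hedged sketch using `huggingface-cli` (the `6.0` revision name is taken from the branch links above; the local directory name is a placeholder):

```shell
# Download only the 6.0 bpw branch of this repo.
# --revision selects the branch; --local-dir is a placeholder path.
huggingface-cli download bartowski/Mistral-7B-codealpaca-lora-exl2 \
  --revision 6.0 \
  --local-dir Mistral-7B-codealpaca-lora-exl2-6.0
```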