---
license: llama2
library_name: transformers
tags:
- code
model-index:
- name: Code Millenials
  results:
  - task:
      type: text-generation
    dataset:
      type: openai_humaneval
      name: HumanEval
    metrics:
    - name: pass@1
      type: pass@1
      value: 0.7621
      verified: false
quantized_by: bartowski
pipeline_tag: text-generation
---

## Exllama v2 Quantizations of code-millenials-13b

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.
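These quants load with ExLlamaV2 directly or with any frontend that bundles it (for example text-generation-webui). If you want the bare library, a minimal install sketch, assuming a recent Python environment with a CUDA build of torch already present:

```shell
pip install exllamav2
```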

# The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)

Each branch contains a different bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.
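If you want a bitrate not listed below, the measurement.json can be reused to skip the measurement pass when converting the original model yourself. A sketch using exllamav2's convert.py, run from an exllamav2 checkout; all paths and the 5.0 bpw target are placeholders:

```shell
# Reuse the precomputed measurement.json (from the main branch) to quantize
# the original fp16 model to a custom bitrate without redoing measurement.
python convert.py \
  -i /path/to/code-millenials-13b \
  -o /path/to/working_dir \
  -cf /path/to/code-millenials-13b-exl2-custom \
  -m /path/to/measurement.json \
  -b 5.0
```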

Original model: https://huggingface.co/budecosystem/code-millenials-13b

This model has no GQA (grouped-query attention), so VRAM requirements will be higher, especially at long context.

| Branch | Bits | lm_head bits | Size (4k) | Size (16k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------------ |
| [6_5](https://huggingface.co/Bartowski/code-millenials-13b-exl2/tree/6_5) | 6.5  | 8.0 | 14.4 GB | 24.0 GB | Near unquantized performance at vastly reduced size, **recommended**. |
| [5_0](https://huggingface.co/Bartowski/code-millenials-13b-exl2/tree/5_0) | 5.0  | 6.0 | 12.1 GB | 21.7 GB | Slightly lower quality vs 6.5; can fit in a 12 GB card at reduced context. |
| [4_25](https://huggingface.co/Bartowski/code-millenials-13b-exl2/tree/4_25) | 4.25 | 6.0 | 10.9 GB | 20.5 GB | GPTQ equivalent bits per weight. |
| [3_75](https://huggingface.co/Bartowski/code-millenials-13b-exl2/tree/3_75) | 3.75  | 6.0 | 10.1 GB | 19.7 GB | Lower quality but still generally usable. |
| [3_0](https://huggingface.co/Bartowski/code-millenials-13b-exl2/tree/3_0) | 3.0  | 6.0 |  9.1 GB | 18.7 GB | Very low quality; not recommended unless you have no other option. |

VRAM requirements are listed for both 4k and 16k context since, without GQA, the difference between the two is massive (9.6 GB).
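As a rough way to check what fits on your card, you can load a quant with a capped context length using exllamav2's bundled test script. A sketch, assuming v0.0.11-era flag names (verify with `python test_inference.py -h`):

```shell
# Load the 6.5 bpw quant with context capped at 4k to stay within the 4k VRAM
# figures above; raise -l toward 16k only if you have the headroom.
python test_inference.py \
  -m /path/to/code-millenials-13b-exl2-6_5 \
  -l 4096 \
  -p "def quicksort(arr):" \
  -t 256
```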

## Download instructions

With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/code-millenials-13b-exl2 code-millenials-13b-exl2-6_5
```

With the `huggingface-hub` CLI (credit to TheBloke for the instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` branch (only useful if you just want the measurement.json) to a folder called `code-millenials-13b-exl2`:

```shell
mkdir code-millenials-13b-exl2
huggingface-cli download bartowski/code-millenials-13b-exl2 --local-dir code-millenials-13b-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir code-millenials-13b-exl2-6_5
huggingface-cli download bartowski/code-millenials-13b-exl2 --revision 6_5 --local-dir code-millenials-13b-exl2-6_5 --local-dir-use-symlinks False
```