---
license: apache-2.0
---
**Undi95-style frankenstein merge of TinyLlama 1.1B**
https://github.com/jzhang38/TinyLlama
https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0
**GGUF custom quants included**
The secret sauce:
```yaml
slices:
  - sources:
      - model: "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
        layer_range: [0, 14]
  - sources:
      - model: "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
        layer_range: [8, 22]
merge_method: passthrough
dtype: bfloat16
```
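
If you want to reproduce the merge yourself, a minimal sketch using mergekit follows; the config filename and output directory are placeholders, not files shipped in this repo:

```bash
# Sketch: reproduce the passthrough merge with mergekit.
# Save the YAML above as tinyfrank.yml (name is illustrative).
pip install mergekit
mergekit-yaml tinyfrank.yml ./tinyfrank-1.4B --copy-tokenizer
```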
How to run the GGUF with llama.cpp:
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make -j
wget https://huggingface.co/SkunkworksAI/tinyfrank-1.4B/resolve/main/tinyfrank-q6L.gguf
./server -m tinyfrank-q6L.gguf --host "my.internal.ip.or.my.cloud.host.name.goes.here.com" -c 512
```
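
Once the server is up you can hit llama.cpp's `/completion` endpoint. A sketch, assuming the default port 8080 and TinyLlama-Chat's Zephyr-style prompt format; swap in whatever host you passed to `./server`:

```bash
# Sketch: query the llama.cpp server's /completion endpoint.
# Host is whatever you started ./server with; 8080 is the default port.
curl http://my.internal.ip.or.my.cloud.host.name.goes.here.com:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "<|system|>\nYou are a helpful assistant.</s>\n<|user|>\nWhat is a frankenmerge?</s>\n<|assistant|>\n", "n_predict": 128}'
```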