|
---
license: apache-2.0
---
|
|
|
**An Undi95-style frankenmerge of TinyLlama 1.1B**
|
https://github.com/jzhang38/TinyLlama |
|
https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0 |
|
|
|
**Custom GGUF quants included**
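For reference, a standard llama.cpp convert-and-quantize flow looks like the sketch below. The exact recipe behind the custom "q6L" mix isn't documented here, and the file names are placeholders:

```bash
# convert the merged HF model to GGUF at f16, then quantize
# (placeholder paths; run from the llama.cpp checkout)
python convert.py ./tinyfrank --outfile tinyfrank-f16.gguf
./quantize tinyfrank-f16.gguf tinyfrank-q6_k.gguf Q6_K
```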
|
|
|
The secret sauce: |
|
|
|
```yaml
slices:
  - sources:
      - model: "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
        layer_range: [0, 14]
  - sources:
      - model: "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
        layer_range: [8, 22]
merge_method: passthrough
dtype: bfloat16
```
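The passthrough method simply stacks the listed slices: layers 0–13 followed by layers 8–21 (mergekit layer ranges are half-open), so the middle layers 8–13 appear twice and the 22-layer base grows to 28 layers, i.e. roughly 1.1B × 28/22 ≈ 1.4B parameters. To reproduce the merge, a minimal mergekit invocation looks like this (assuming mergekit is installed; the config and output paths are placeholders):

```bash
pip install mergekit
# save the YAML above as config.yml, then build the merged model
mergekit-yaml config.yml ./tinyfrank
```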
|
|
|
How to run as GGUF:
|
|
|
```bash
# build llama.cpp
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make -j

# download the quantized model
wget https://huggingface.co/SkunkworksAI/tinyfrank-1.4B/resolve/main/tinyfrank-q6L.gguf

# serve it (default port 8080) with a 512-token context
./server -m tinyfrank-q6L.gguf --host "my.internal.ip.or.my.cloud.host.name.goes.here.com" -c 512
```
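Once the server is up, you can hit its completion endpoint directly. A sketch using the TinyLlama chat template from the base model card (host and prompt are placeholders):

```bash
curl "http://my.internal.ip.or.my.cloud.host.name.goes.here.com:8080/completion" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "<|system|>\nYou are a helpful assistant.</s>\n<|user|>\nWhat is a frankenmerge?</s>\n<|assistant|>\n",
    "n_predict": 128
  }'
```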