apepkuss79 committed (verified)
Commit 089ab12 · Parent: 96ac88c

Update README.md

Files changed (1): README.md (+4 −4)
README.md CHANGED
@@ -16,7 +16,7 @@ quantized_by: Second State Inc.
 <!-- header start -->
 <!-- 200823 -->
 <div style="width: auto; margin-left: auto; margin-right: auto">
-<img src="https://github.com/second-state/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
 </div>
 <hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
 <!-- header end -->
@@ -29,7 +29,7 @@ quantized_by: Second State Inc.
 
 ## Run with LlamaEdge
 
-- LlamaEdge version: [v0.2.8](https://github.com/second-state/LlamaEdge/releases/tag/0.2.8) and above
+- LlamaEdge version: [v0.2.8](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.2.8) and above
 
 - Prompt template
 
@@ -51,13 +51,13 @@ quantized_by: Second State Inc.
 - Run as LlamaEdge service
 
   ```bash
-  wasmedge --dir .:. --nn-preload default:GGML:AUTO:WizardCoder-Python-7B-V1.0-ggml-model-q4_0.gguf llama-api-server.wasm -p wizard-coder
+  wasmedge --dir .:. --nn-preload default:GGML:AUTO:WizardCoder-Python-7B-V1.0-Q5_K_M.gguf llama-api-server.wasm -p wizard-coder
   ```
 
 - Run as LlamaEdge command app
 
   ```bash
-  wasmedge --dir .:. --nn-preload default:GGML:AUTO:WizardCoder-Python-7B-V1.0-ggml-model-q4_0.gguf llama-chat.wasm -p wizard-coder -s 'Below is an instruction that describes a task. Write a response that appropriately completes the request.'
+  wasmedge --dir .:. --nn-preload default:GGML:AUTO:WizardCoder-Python-7B-V1.0-Q5_K_M.gguf llama-chat.wasm -p wizard-coder -s 'Below is an instruction that describes a task. Write a response that appropriately completes the request.'
   ```
 
 ## Quantized GGUF Models