fix local setup instructions in README.md (#1575)
* fix example .env.local in README.md
* remove unnecessary accessToken placeholder from README.md
---------
Co-authored-by: Nathan Sarrazin <[email protected]>
README.md CHANGED
@@ -57,7 +57,22 @@ llama-server --hf-repo microsoft/Phi-3-mini-4k-instruct-gguf --hf-file Phi-3-min
 
 A local LLaMA.cpp HTTP Server will start on `http://localhost:8080`. Read more [here](https://huggingface.co/docs/chat-ui/configuration/models/providers/llamacpp).
 
-**Step 3 (tell chat-ui to use local llama.cpp server):**
+**Step 3 (make sure you have MongoDb running locally):**
+
+```bash
+docker run -d -p 27017:27017 --name mongo-chatui mongo:latest
+```
+
+Read more [here](#database).
+
+**Step 4 (clone chat-ui):**
+
+```bash
+git clone https://github.com/huggingface/chat-ui
+cd chat-ui
+```
+
+**Step 5 (tell chat-ui to use local llama.cpp server):**
 
 Add the following to your `.env.local`:
 
@@ -65,7 +80,7 @@ Add the following to your `.env.local`:
 MODELS=`[
   {
     "name": "Local microsoft/Phi-3-mini-4k-instruct-gguf",
-    "tokenizer": "microsoft/Phi-3-mini-4k-instruct
+    "tokenizer": "microsoft/Phi-3-mini-4k-instruct",
     "preprompt": "",
     "parameters": {
       "stop": ["<|end|>", "<|endoftext|>", "<|assistant|>"],
@@ -85,19 +100,9 @@ The `tokenizer` field will be used to find the appropriate chat template for the
 
 Read more [here](https://huggingface.co/docs/chat-ui/configuration/models/providers/llamacpp).
 
-**Step 3 (make sure you have MongoDb running locally):**
-
-```bash
-docker run -d -p 27017:27017 --name mongo-chatui mongo:latest
-```
-
-Read more [here](#database).
-
-**Step 4 (start chat-ui):**
+**Step 6 (start chat-ui):**
 
 ```bash
-git clone https://github.com/huggingface/chat-ui
-cd chat-ui
 npm install
 npm run dev -- --open
 ```
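For context on the `.env.local` hunk: the `tokenizer` field is what chat-ui uses to locate the model's chat template (see the last hunk's header line), so it has to name a repo that ships a tokenizer config; the fix points it at `microsoft/Phi-3-mini-4k-instruct`. A complete `.env.local` combining Steps 3 and 5 might look like the sketch below. The `MONGODB_URL` line, the `endpoints` block, and the parameter values other than `stop` are drawn from chat-ui's llama.cpp provider documentation rather than from this diff, so treat them as assumptions.

```env
# Step 3: local MongoDB on the default port (assumed; not part of this diff)
MONGODB_URL=mongodb://localhost:27017

# Step 5: model served by the local llama.cpp server from Step 2
MODELS=`[
  {
    "name": "Local microsoft/Phi-3-mini-4k-instruct-gguf",
    "tokenizer": "microsoft/Phi-3-mini-4k-instruct",
    "preprompt": "",
    "parameters": {
      "stop": ["<|end|>", "<|endoftext|>", "<|assistant|>"],
      "temperature": 0.7,
      "max_new_tokens": 1024,
      "truncate": 3071
    },
    "endpoints": [{
      "type": "llamacpp",
      "baseURL": "http://localhost:8080"
    }]
  }
]`
```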
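Step 3's container is plain MongoDB, so ordinary Docker commands are enough to check on it if chat-ui cannot reach the database; nothing below is chat-ui specific:

```bash
# Is the container up, with port 27017 published?
docker ps --filter name=mongo-chatui

# Inspect startup output if connections are refused
docker logs mongo-chatui

# Bring an existing (stopped) container back after a reboot
docker start mongo-chatui
```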
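Taken together, the reordered Steps 3–6 reduce the local setup to four commands plus the `.env.local` edit. A one-shot sketch, assuming Docker and Node are installed and the llama.cpp server from Step 2 is already running:

```bash
docker run -d -p 27017:27017 --name mongo-chatui mongo:latest  # Step 3
git clone https://github.com/huggingface/chat-ui               # Step 4
cd chat-ui
# Step 5: create .env.local with the MODELS block from the diff above
npm install                                                    # Step 6
npm run dev -- --open
```

The trailing `--` in `npm run dev -- --open` forwards `--open` past npm to the dev server (Vite, in chat-ui's case), so the browser opens once the app is listening.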