feat: update docs
README.md CHANGED
@@ -19,6 +19,18 @@ A chat interface using open source models, eg OpenAssistant or Llama. It is a Sv
 
 ## Quickstart
 
+### Docker image
+
+You can deploy a chat-ui instance in a single command using the Docker image. Get your Hugging Face token from [here](https://huggingface.co/settings/tokens).
+
+```bash
+docker run -p 3000:3000 -e HF_TOKEN=hf_*** -v db:/data ghcr.io/huggingface/chat-ui-db:latest
+```
+
+Take a look at the [`.env` file](https://github.com/huggingface/chat-ui/blob/main/.env) and the readme to see all the environment variables that you can set. We have endpoint support for all OpenAI API-compatible local services, as well as many other providers like Anthropic, Cloudflare, Google Vertex AI, etc.
+
+### Local setup
+
 You can quickly start a locally running chat-ui & LLM text-generation server thanks to chat-ui's [llama.cpp server support](https://huggingface.co/docs/chat-ui/configuration/models/providers/llamacpp).
 
 **Step 1 (Start llama.cpp server):**
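
Review note: the Docker one-liner added above works as-is, but the flags are easier to audit one per line. Below is a minimal sketch of the same command with one illustrative extra variable; the `PUBLIC_ORIGIN` name is taken from the linked `.env` file as an assumption and is not part of this diff.

```bash
# Sketch of the quickstart command from the diff, one flag per line.
# PUBLIC_ORIGIN is an illustrative extra (an assumption, not part of this
# PR); any variable documented in the linked .env file can be passed the
# same way with -e.
docker run \
  -p 3000:3000 \
  -e HF_TOKEN=hf_*** \
  -e PUBLIC_ORIGIN=http://localhost:3000 \
  -v db:/data \
  ghcr.io/huggingface/chat-ui-db:latest
```

The named volume `db` persists the bundled MongoDB between container restarts, so conversations survive a `docker run` redo.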
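
The hunk cuts off at the **Step 1** heading, so the actual llama.cpp command is not shown in this diff. As a hedged sketch of what the linked llama.cpp server support refers to, one way to start an OpenAI-compatible `llama-server` locally is shown below; the model repo and file are placeholder choices, and the flags assume a recent llama.cpp build.

```bash
# A sketch, not the PR's actual Step 1: start llama.cpp's HTTP server with
# a small instruct model pulled from the Hugging Face Hub. The repo/file
# names are placeholders; -c sets the context size, --port the listen port.
llama-server \
  --hf-repo microsoft/Phi-3-mini-4k-instruct-gguf \
  --hf-file Phi-3-mini-4k-instruct-q4.gguf \
  -c 4096 \
  --port 8080
```

chat-ui would then be pointed at this local endpoint as described in the linked provider docs; the exact model and flags the README recommends may differ.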