Benjamin Consolvo committed
Commit 1c41095 · 1 Parent(s): 37ee722

readme edits

Files changed (1): README.md (+15 -12)

README.md CHANGED
@@ -8,17 +8,17 @@ sdk_version: 1.45.1
 app_file: app.py
 pinned: false
 license: apache-2.0
-short_description: 'LLM Chatbot on Denvr Dataworks and Intel Gaudi'
+short_description: 'LLM Chatbot with Intel® Gaudi®'
 ---
 
 # LLM Chatbot
-Similar to ChatGPT, this application provides a user-friendly Streamlit interface to interact with various LLM models hosted on Denvr Dataworks, powered by Intel Gaudi accelerators. The chatbot supports streaming responses and offers a selection of different language models, including Llama, DeepSeek, and Qwen models. Try it yourself with the models available in the left drop-down menu.
+Similar to ChatGPT, this application provides a user-friendly interface to chat with various LLMs. It uses fully open-source models, all hosted with the inference endpoint [Intel® AI for Enterprise Inference](https://github.com/opea-project/Enterprise-Inference), powered by Intel® Gaudi® AI accelerators. The chatbot supports streaming responses and offers a selection of different LLMs, including Llama, DeepSeek, and Qwen.
 
 [![llmchatbot](images/llmchatbot.png)](https://huggingface.co/spaces/Intel/intel-ai-enterprise-inference)
 
 ## Setup
 
-If you want to hose the application locally with Streamlit, you can follow the steps below. If you want to host the application on Hugging Face Spaces, the easiest way is to duplicate the space as per the screenshot, and set up your own API secrets as detailed below. Just like any GitHub repository, you can use the same Git actions with the Hugging Face Space to clone, add, push, and commit your changes.
+If you want to host the application locally with Streamlit, you can follow the steps below. If you want to host the application on Hugging Face Spaces, the easiest way is to duplicate the Hugging Face Space (see the screenshot below) and set up your own API secrets as detailed below. Just like any GitHub repository, you can use the same Git actions with the Hugging Face Space to clone, add, push, and commit your changes.
 
 [![hf_dup](images/hf_dup.png)](https://huggingface.co/spaces/Intel/intel-ai-enterprise-inference)
 
@@ -35,9 +35,9 @@ pip install -r requirements.txt
 
 ### Secrets Management
 
-This application requires API credentials to be set up in Streamlit's secrets management. You need an OpenAI-compatible API key. In the case of this application, it is using an API key from [Denvr Dataworks](https://www.denvrdata.com/intel).
+This application requires API credentials to be set up in Streamlit's secrets management. You need an OpenAI-compatible API key and model endpoint URL. This application uses an API key from [Denvr Dataworks](https://www.denvrdata.com/intel).
 
-1. On Hugging Face Spaces:
+1. If hosting on Hugging Face Spaces:
    - Add your OpenAI-compatible API key under "Secrets" in the HF settings as `openai_apikey`
    - Add the base URL for your model endpoint under "Variables" as `base_url`
 
@@ -45,7 +45,7 @@ This application requires API credentials to be set up in Streamlit's secrets ma
 ```toml
 openai_apikey = "your-api-key-here"
 ```
-Set the `base_url` environment variable to point to your OpenAI-compliant model endpoint with hosted models.
+Set the `base_url` environment variable to point to your OpenAI-compliant model endpoint with hosted models. For example:
 ```bash
 export base_url="https://api.inference.denvrdata.com/v1/"
 ```
@@ -55,15 +55,18 @@ Run the Streamlit application locally:
 streamlit run app.py
 ```
 
-## Follow Up
-
-Connect to LLMs on Intel® Gaudi® accelerators with just an endpoint and an OpenAI-compatible API key, courtesy of cloud-provider Denvr Dataworks: https://www.denvrdata.com/intel
-
-Chat with 6K+ fellow developers on the Intel DevHub Discord: https://discord.gg/kfJ3NKEw5t
-
-Connect with me on LinkedIn: https://linkedin.com/in/bconsolvo
-
-## License
-
-This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.
+Enjoy using the speedy LLM chat application for any number of language-based tasks, such as essay writing, summarizing text, or code generation.
+
+## License
+
+This project is licensed under the Apache License 2.0.
+
+## Follow Up
+
+Connect to LLMs on Intel Gaudi AI accelerators with just an endpoint and an OpenAI-compatible API key, using the inference endpoint [Intel® AI for Enterprise Inference](https://github.com/opea-project/Enterprise-Inference), powered by OPEA. At the time of writing, the endpoint is available on cloud provider [Denvr Dataworks](https://www.denvrdata.com/intel).
+
+Chat with 6K+ fellow developers on the [Intel DevHub Discord](https://discord.gg/kfJ3NKEw5t).
+
+Follow [Intel Software on LinkedIn](https://www.linkedin.com/showcase/intel-software/).
+
+For more Intel AI developer resources, see [developer.intel.com/ai](https://developer.intel.com/ai).