LlamaFinetuneGGUF committed
Commit f692b6f · unverified · 1 Parent(s): c31972d

Update README.md

Files changed (1)
  1. README.md +0 -9
README.md CHANGED

````diff
@@ -198,15 +198,6 @@ sudo npm install -g pnpm
 ```bash
 pnpm run dev
 ```
-
-## Adding New LLMs:
-
-To make new LLMs available to use in this version of Bolt.new, head on over to `app/utils/constants.ts` and find the constant MODEL_LIST. Each element in this array is an object that has the model ID for the name (get this from the provider's API documentation), a label for the frontend model dropdown, and the provider.
-
-By default, Anthropic, OpenAI, Groq, and Ollama are implemented as providers, but the YouTube video for this repo covers how to extend this to work with more providers if you wish!
-
-When you add a new model to the MODEL_LIST array, it will immediately be available to use when you run the app locally or reload it. For Ollama models, make sure you have the model installed already before trying to use it here!
-
 ## Available Scripts

 - `pnpm run dev`: Starts the development server.
````
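For context, the removed section described `MODEL_LIST` in `app/utils/constants.ts` as an array of objects holding a model ID, a dropdown label, and a provider. A minimal sketch of such an entry might look like the following; the field names `name`, `label`, and `provider` are assumptions based on the removed description, not confirmed against the actual file:

```typescript
// Hypothetical shape of a MODEL_LIST entry, inferred from the removed
// README text; the real interface in app/utils/constants.ts may differ.
interface ModelInfo {
  name: string;     // model ID, taken from the provider's API documentation
  label: string;    // text shown in the frontend model dropdown
  provider: string; // e.g. "Anthropic", "OpenAI", "Groq", or "Ollama"
}

const MODEL_LIST: ModelInfo[] = [
  {
    name: "claude-3-5-sonnet-20240620",
    label: "Claude 3.5 Sonnet",
    provider: "Anthropic",
  },
];

export { MODEL_LIST };
```

Per the removed text, appending an entry to this array made the model available immediately on the next run or reload, with the caveat that Ollama models must already be installed locally.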
 