LlamaFinetuneGGUF committed on
Commit 651a4f8 · 1 Parent(s): eb76765

Updated README Headings and Ollama Section


Updated README.md Headings
Removed Ollama Section
Added Ollama env's info

Files changed (1)
  1. README.md +10 -28
README.md CHANGED

@@ -4,11 +4,11 @@
 
 This fork of Bolt.new (oTToDev) allows you to choose the LLM that you use for each prompt! Currently, you can use OpenAI, Anthropic, Ollama, OpenRouter, Gemini, LMStudio, Mistral, xAI, HuggingFace, DeepSeek, or Groq models - and it is easily extended to use any other model supported by the Vercel AI SDK! See the instructions below for running this locally and extending it to include more models.
 
-Join the community for oTToDev!
+## Join the community for oTToDev!
 
 https://thinktank.ottomator.ai
 
-# Requested Additions to this Fork - Feel Free to Contribute!!
+## Requested Additions to this Fork - Feel Free to Contribute!!
 
 - ✅ OpenRouter Integration (@coleam00)
 - ✅ Gemini Integration (@jonathands)
@@ -49,7 +49,7 @@ https://thinktank.ottomator.ai
 - ⬜ Upload documents for knowledge - UI design templates, a code base to reference coding style, etc.
 - ⬜ Voice prompting
 
-# Bolt.new: AI-Powered Full-Stack Web Development in the Browser
+## Bolt.new: AI-Powered Full-Stack Web Development in the Browser
 
 Bolt.new is an AI-powered web development agent that allows you to prompt, run, edit, and deploy full-stack applications directly from your browser—no local setup required. If you're here to build your own AI-powered web dev agent using the Bolt open source codebase, [click here to get started!](./CONTRIBUTING.md)
 
@@ -124,6 +124,13 @@ Optionally, you can set the debug level:
 VITE_LOG_LEVEL=debug
 ```
 
+And if using Ollama, set DEFAULT_NUM_CTX; the example below uses an 8K context with Ollama running on localhost port 11434:
+
+```
+OLLAMA_API_BASE_URL=http://localhost:11434
+DEFAULT_NUM_CTX=8192
+```
+
 **Important**: Never commit your `.env.local` file to version control. It's already included in .gitignore.
 
 ## Run with Docker
@@ -192,31 +199,6 @@ sudo npm install -g pnpm
 pnpm run dev
 ```
 
-## Super Important Note on Running Ollama Models
-
-Ollama models by default only have 2048 tokens for their context window, even for large models that can easily handle more.
-This is not a large enough window to handle the Bolt.new/oTToDev prompt! You have to create a version of any model you want
-to use where you specify a larger context window. Luckily it's super easy to do that.
-
-All you have to do is:
-
-- Create a file called "Modelfile" (no file extension) anywhere on your computer
-- Put in the two lines:
-
-```
-FROM [Ollama model ID such as qwen2.5-coder:7b]
-PARAMETER num_ctx 32768
-```
-
-- Run the command:
-
-```
-ollama create -f Modelfile [your new model ID, can be whatever you want (example: qwen2.5-coder-extra-ctx:7b)]
-```
-
-Now you have a new Ollama model that isn't limited to the default context length.
-You'll see this new model in the list of Ollama models along with all the others you pulled!
-
 ## Adding New LLMs:
 
 To make new LLMs available to use in this version of Bolt.new, head on over to `app/utils/constants.ts` and find the constant MODEL_LIST. Each element in this array is an object that has the model ID for the name (get this from the provider's API documentation), a label for the frontend model dropdown, and the provider.
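An entry shaped as that paragraph describes might look like the sketch below. The interface and field names here are assumptions inferred from the README's description (model ID as the name, a dropdown label, and a provider), not taken from the repository's actual `constants.ts`:

```typescript
// Hypothetical shape of a MODEL_LIST entry, inferred from the README:
// the provider's model ID, a label for the frontend dropdown, and the provider.
interface ModelInfo {
  name: string;     // model ID exactly as the provider's API expects it
  label: string;    // human-readable text shown in the model dropdown
  provider: string; // which backend serves this model (e.g. "Ollama", "OpenAI")
}

const MODEL_LIST: ModelInfo[] = [
  { name: "qwen2.5-coder:7b", label: "Qwen 2.5 Coder 7B", provider: "Ollama" },
  { name: "gpt-4o", label: "GPT-4o", provider: "OpenAI" },
];
```

Adding a model is then just appending another object literal to the array, with the `name` copied from the provider's API documentation.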